WorldWideScience

Sample records for greatly improved accuracy

  1. Improving shuffler assay accuracy

    International Nuclear Information System (INIS)

    Rinard, P.M.

    1995-01-01

Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracies of assays from a shuffler are affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport, and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But for those cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on the distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving a set of overdetermined linear equations. Other approaches that were studied to determine the distributions are described briefly. Implementation of this correction on an existing shuffler is anticipated next year.
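The distribution-determination step described above amounts to solving an overdetermined linear system that relates zone masses to detector-bank count rates. A minimal NumPy sketch, with a purely illustrative response matrix (a real matrix would come from the shuffler's calibration, and measured rates would be noisy):

```python
import numpy as np

# Hypothetical response matrix: rows are 6 detector banks, columns are 4
# spatial zones in the drum; entry (i, j) is the count rate in bank i per
# unit uranium mass in zone j (illustrative values only).
R = np.array([
    [0.9, 0.4, 0.2, 0.1],
    [0.4, 0.9, 0.4, 0.2],
    [0.2, 0.4, 0.9, 0.4],
    [0.1, 0.2, 0.4, 0.9],
    [0.5, 0.5, 0.3, 0.2],
    [0.2, 0.3, 0.5, 0.5],
])

true_masses = np.array([10.0, 0.0, 5.0, 2.0])  # non-uniform distribution
counts = R @ true_masses                       # idealized, noise-free rates

# Least-squares solution of the overdetermined system R @ m ~= counts
masses, *_ = np.linalg.lstsq(R, counts, rcond=None)
print(np.round(masses, 6))
```

With six equations in four unknowns and a full-column-rank response matrix, the least-squares solution recovers the assumed distribution exactly in this noise-free setting.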

  2. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    International Nuclear Information System (INIS)

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki; Ota, Keishin; Matsuda, Tsuyoshi; Tonomura, Akira

    2012-01-01

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5. -- Highlights: ► A modified phase-shifting electron holography was proposed. ► The time variation of mean intensity and contrast of holograms were corrected. ► These corrections lead to a great improvement of the resultant phase accuracy. ► A phase accuracy of about 1/4000 rad was achieved from experimental results.

  3. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Ota, Keishin, E-mail: ota@microphase.co.jp [Microphase Co., Ltd., Onigakubo 1147-9, Tsukuba, Ibaragi 300-2651 (Japan); Matsuda, Tsuyoshi [Japan Science and Technology Agency, Kawaguchi-shi, Saitama 332-0012 (Japan); Tonomura, Akira [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Okinawa Institute of Science and Technology, Graduate University, Kunigami, Okinawa 904-0495 (Japan); Central Research Laboratory, Hitachi, Ltd., Hatoyama, Saitama 350-0395 (Japan)

    2012-07-15

We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5. -- Highlights: ► A modified phase-shifting electron holography was proposed. ► The time variation of mean intensity and contrast of holograms were corrected. ► These corrections lead to a great improvement of the resultant phase accuracy. ► A phase accuracy of about 1/4000 rad was achieved from experimental results.

  4. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates to the absolute position in a specific coordinate system and to the relation with neighborhood features. With the growth of spatial technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will distort the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, more accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly suitable for positional accuracy improvement of legacy spatial datasets.
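As a simplified illustration of fitting a legacy dataset to a higher-accuracy benchmark by least squares (the paper's angular-based LSA observation equations are more elaborate and are not reproduced here), the sketch below recovers a 2-D similarity transform from hypothetical legacy and GNSS coordinate pairs:

```python
import numpy as np

# Hypothetical legacy cadastral coordinates (metres); the transform
# parameters (a, b, tx, ty) below are assumed for the demonstration.
legacy = np.array([[100.0, 200.0], [300.0, 220.0],
                   [310.0, 400.0], [120.0, 380.0]])
a, b, tx, ty = 1.001, 0.002, 5.0, -3.0
gnss = np.column_stack([
    a * legacy[:, 0] - b * legacy[:, 1] + tx,   # X = a*x - b*y + tx
    b * legacy[:, 0] + a * legacy[:, 1] + ty,   # Y = b*x + a*y + ty
])

# Build the linear system for the four unknown parameters and solve it
# in least-squares sense (two equations per control point).
rows, obs = [], []
for (x, y), (X, Y) in zip(legacy, gnss):
    rows += [[x, -y, 1, 0], [y, x, 0, 1]]
    obs += [X, Y]
params, *_ = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)
print(np.round(params, 4))   # recovers (a, b, tx, ty)
```

Once the transform is estimated, it can be applied to every legacy feature; the residuals at the control points then indicate how well the legacy geometry fits the benchmark.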

  5. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    Directory of Open Access Journals (Sweden)

    Mingjun Deng

    2017-12-01

Full Text Available
The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3's geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving the absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method.

  6. Optical vector network analyzer with improved accuracy based on polarization modulation and polarization pulling.

    Science.gov (United States)

    Li, Wei; Liu, Jian Guo; Zhu, Ning Hua

    2015-04-15

    We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.

  7. Accuracy Improvement of Boron Meter Adopting New Fitting Function and Multi-Detector

    Directory of Open Access Journals (Sweden)

    Chidong Kong

    2016-12-01

Full Text Available
This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in the reactor coolant. However, it is difficult to use the boron meters in practice because the measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. For the multi-detector study, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and the multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.
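The noise study described above, repeating a contaminated measurement many times and reporting the average and standard deviation of the concentration error, can be sketched as follows. The exponential count-rate model and its constants are assumptions for illustration, not the paper's actual fitting function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) detector model: the count rate falls off
# exponentially with boron concentration C in ppm.
c0, k = 1.0e5, 8.0e-4
true_C = 1200.0
mean_counts = c0 * np.exp(-k * true_C)

# Repeat the measurement 1,000 times with Poisson counting noise, as the
# abstract's noise study does, inverting the model for C each time.
n_trials = 1000
counts = rng.poisson(mean_counts, size=n_trials).astype(float)
est_C = -np.log(counts / c0) / k
errors = est_C - true_C

# Average and standard deviation of the concentration error (ppm)
print(round(float(abs(errors.mean())), 2), round(float(errors.std()), 2))
```

Under this model the error standard deviation follows from counting statistics alone; a real boron meter would also see electronic and matrix-dependent noise.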

  8. Accuracy improvement of boron meter adopting new fitting function and multi-detector

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Chidong; Lee, Hyun Suk; Tak, Tae Woo; Lee, Deok Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); KIm, Si Hwan; Lyou, Seok Jean [Users Incorporated Company, Hansin S-MECA, Daejeon (Korea, Republic of)

    2016-12-15

This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in the reactor coolant. However, it is difficult to use the boron meters in practice because the measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. For the multi-detector study, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and the multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.

  9. Improving Accuracy of Processing Through Active Control

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

Full Text Available
An important task of modern mathematical statistics, with its methods based on probability theory, is the scientific estimation of measurement results. Control always incurs certain costs, and when control is ineffective and a customer receives defective products, these costs are significantly higher because of parts recalls. When machining parts, under the influence of errors the scatter range of part dimensions is offset towards the tolerance limit. Improving processing accuracy and avoiding defective products involves reducing the error components in machining, i.e., improving the accuracy of the machine and tool, tool life, rigidity of the system, and accuracy of the adjustment. After a given time it is also necessary to readjust the machine. To improve accuracy and machining rate, various in-process gaging devices and controlled machining using adaptive control systems for process monitoring are currently becoming widely popular. Improving the accuracy in this case means compensating for the majority of technological errors. In-cycle measuring sensors (sensors of active control) allow processing accuracy to be improved by one or two quality classes and provide a capability for simultaneous operation of several machines. Efficient use of in-cycle measuring sensors requires the development of methods to control accuracy by making the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they include data on the change in the last few measured values of the parameter under control.

  10. Geometric Positioning Accuracy Improvement of ZY-3 Satellite Imagery Based on Statistical Learning Theory

    Directory of Open Access Journals (Sweden)

    Niangang Jiao

    2018-05-01

Full Text Available
With the increasing demand for high-resolution remote sensing images for mapping and monitoring the Earth's environment, geometric positioning accuracy improvement plays a significant role in the image preprocessing step. Based on statistical learning theory, we propose a new method to improve the geometric positioning accuracy without ground control points (GCPs). Multi-temporal images from the ZY-3 satellite are tested, and the bias-compensated rational function model (RFM) is applied as the block adjustment model in our experiment. An easy and stable weighting strategy and the fast iterative shrinkage-thresholding algorithm (FISTA), which is widely used in the field of compressive sensing, are improved and utilized to define the normal equation matrix and solve it. Then, the residual errors after traditional block adjustment are acquired and tested with the newly proposed inherent error compensation model based on statistical learning theory. The final results indicate that the geometric positioning accuracy of ZY-3 satellite imagery can be improved greatly with our proposed method.
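The FISTA solver mentioned above is a standard accelerated proximal-gradient method. A textbook sketch for an l1-regularized least-squares problem is given below; the paper's adapted variant and its weighting strategy are not reproduced, and the small test problem is purely illustrative:

```python
import numpy as np

def fista(A, b, lam=0.1, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (standard form)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L                   # gradient step
        # soft-thresholding (proximal operator of the l1 term)
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Small demonstration with a sparse ground truth
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [3.0, -2.0]
b = A @ x_true
x_hat = fista(A, b, lam=0.05)
print(np.round(x_hat, 2))
```

The momentum step is what distinguishes FISTA from plain proximal gradient descent and gives its faster O(1/k²) convergence rate.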

  11. THE THIRD GRAVITATIONAL LENSING ACCURACY TESTING (GREAT3) CHALLENGE HANDBOOK

    International Nuclear Information System (INIS)

    Mandelbaum, Rachel; Kannawadi, Arun; Simet, Melanie; Rowe, Barnaby; Kacprzak, Tomasz; Bosch, James; Miyatake, Hironao; Chang, Chihway; Gill, Mandeep; Courbin, Frederic; Jarvis, Mike; Armstrong, Bob; Lackner, Claire; Leauthaud, Alexie; Nakajima, Reiko; Rhodes, Jason; Zuntz, Joe; Bridle, Sarah; Coupon, Jean; Dietrich, Jörg P.

    2014-01-01

The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is the third in a series of image analysis challenges, with a goal of testing and facilitating the development of methods for analyzing astronomical images that will be used to measure weak gravitational lensing. This measurement requires extremely precise estimation of very small galaxy shape distortions, in the presence of far larger intrinsic galaxy shapes and distortions due to the blurring kernel caused by the atmosphere, telescope optics, and instrumental effects. The GREAT3 challenge is posed to the astronomy, machine learning, and statistics communities, and includes tests of three specific effects that are of immediate relevance to upcoming weak lensing surveys, two of which have never been tested in a community challenge before. These effects include many novel aspects including realistically complex galaxy models based on high-resolution imaging from space; a spatially varying, physically motivated blurring kernel; and a combination of multiple different exposures. To facilitate entry by people new to the field, and for use as a diagnostic tool, the simulation software for the challenge is publicly available, though the exact parameters used for the challenge are blinded. Sample scripts to analyze the challenge data using existing methods will also be provided. See http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/ for more information.

  12. Audiovisual biofeedback improves motion prediction accuracy.

    Science.gov (United States)

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients' respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR images was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26%, indicating that AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion management techniques affected by system latencies used in radiotherapy.
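The RMSE-based evaluation described above can be sketched with toy signals; the traces below are illustrative sine waves with an added error component, not study data:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error between two equal-length signals."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Toy respiratory traces: a 0.25 Hz breathing signal, with prediction
# errors modeled as a higher-frequency component that is smaller when
# breathing is guided (the 2x difference is purely illustrative).
t = np.linspace(0.0, 10.0, 300)
real = np.sin(2 * np.pi * 0.25 * t)
pred_free = real + 0.20 * np.sin(2 * np.pi * 1.3 * t)  # unguided breathing
pred_av = real + 0.10 * np.sin(2 * np.pi * 1.3 * t)    # AV-guided breathing

reduction = 1.0 - rmse(real, pred_av) / rmse(real, pred_free)
print(round(reduction * 100))   # percent RMSE reduction from guidance
```

Because the guided error component here is exactly half the unguided one, the sketch reports a 50% RMSE reduction; the study's reported 26% average reduction came from measured respiratory data.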

  13. Assessing the Accuracy of MODIS-NDVI Derived Land-Cover Across the Great Lakes Basin

    Science.gov (United States)

    This research describes the accuracy assessment process for a land-cover dataset developed for the Great Lakes Basin (GLB). This land-cover dataset was developed from the 2007 MODIS Normalized Difference Vegetation Index (NDVI) 16-day composite (MOD13Q) 250 m time-series data. Tr...

  14. Improving coding accuracy in an academic practice.

    Science.gov (United States)

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study population: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small-group case review, and large-group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the two intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small-group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  15. An efficient optimization method to improve the measuring accuracy of oxygen saturation by using triangular wave optical signal

    Science.gov (United States)

    Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling

    2017-09-01

Oxygen saturation is one of the important parameters for evaluating human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement, which employs an optical frequency-division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method significantly reduced the measured RMSE to 0.0965. It is notable that the accuracy of oxygen saturation measurement has been improved significantly. The method can simplify the circuit and reduce the number of components required. Furthermore, it provides a useful reference for improving the signal-to-noise ratio of other physiological signals.

  16. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

Full Text Available
Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed-pixel information. A linear mixture simulated model experiment was generated to simulate real-world data with known endmember spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0 using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real-world data set. The atmospheric scattering error was computed for each pixel value for three generated images with SPOT data. Each pixel is either correctly classified or misclassified. The results showed a great improvement in classification accuracy; for example, in image 1, 41% of pixels were misclassified due to atmospheric noise. After degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that classification accuracy can be improved by degradation of atmospheric noise.
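The simulation pipeline described above (random class proportions normalized to sum to 1, linear mixing of known endmembers, added noise, then classification) can be sketched as follows. The endmember spectra and the noise level are illustrative assumptions, and classification here is simply the dominant estimated class per pixel:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative endmember spectra: 3 classes x 4 bands (assumed values)
endmembers = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.75, 0.10, 0.05],
    [0.05, 0.10, 0.80, 0.10],
])

# Class cover proportions: random, normalized so each pixel sums to 1.0
n_pixels = 500
ccp = rng.random((n_pixels, 3))
ccp /= ccp.sum(axis=1, keepdims=True)

pixels = ccp @ endmembers                             # linear mixture model
noisy = pixels + rng.normal(0.0, 0.05, pixels.shape)  # simulated noise

def dominant_class(img):
    """Unmix by least squares against the known endmembers, then take
    the class with the largest estimated proportion for each pixel."""
    props = np.linalg.lstsq(endmembers.T, img.T, rcond=None)[0].T
    return props.argmax(axis=1)

truth = ccp.argmax(axis=1)
err_clean = float(np.mean(dominant_class(pixels) != truth))  # noise removed
err_noisy = float(np.mean(dominant_class(noisy) != truth))   # noise present
print(err_clean, round(err_noisy, 3))
```

With the noise degraded (removed) the dominant class is recovered for every pixel, while the noisy image misclassifies a clear fraction of pixels, mirroring the qualitative effect reported in the abstract.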

  17. Improving Accuracy of Processing by Adaptive Control Techniques

    Directory of Open Access Journals (Sweden)

    N. N. Barbashov

    2016-01-01

Full Text Available
When machining work-pieces, the scatter range of the work-piece dimensions is displaced towards the tolerance limit in response to the errors. To improve the accuracy of machining and prevent defective products it is necessary to diminish the machining error components, i.e., to improve the accuracy of the machine tool, tool life, rigidity of the system, and accuracy of adjustment. It is also necessary to provide on-machine adjustment after a certain time. However, an increasing number of readjustments reduces performance, and high machine and tool requirements lead to a significant increase in machining cost. To improve accuracy and machining rate, various devices of active control (in-process gaging devices), as well as controlled machining through adaptive systems for technological process control, are now becoming widely used. The accuracy improvement in this case is reached by compensating for a majority of technological errors. The sensors of active control can provide an improvement in processing accuracy of one or two quality classes, and simultaneous operation of several machines. For efficient use of sensors of active control it is necessary to develop accuracy control methods that introduce the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they contain information on the change in the last several measured values of the parameter under control. In the proposed method the first three members of the sequence of deviations remain unchanged, i.e., x'_1 = x_1, x'_2 = x_2, x'_3 = x_3. Each subsequent i-th member is then calculated as x'_i = x_i − k·x̄_i, where x̄_i is the average of the three previous corrected members: x̄_i = (x'_{i−1} + x'_{i−2} + x'_{i−3})/3. As a criterion for the estimate of the control …
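The moving-average correction described in this abstract can be sketched as below. The gain k and the exact sign of the update rule are assumptions reconstructed from the text, not the authors' verified formula:

```python
def moving_average_control(x, k=0.5):
    """Correct a sequence of measured deviations x using a moving average.

    The first three members are kept unchanged; each later member is
    adjusted by k times the mean of the three previous corrected values
    (assumed form: x'_i = x_i - k * mean(x'_{i-1}, x'_{i-2}, x'_{i-3})).
    """
    xc = list(x[:3])                       # x'_1 = x_1, x'_2 = x_2, x'_3 = x_3
    for i in range(3, len(x)):
        avg = (xc[i - 1] + xc[i - 2] + xc[i - 3]) / 3.0
        xc.append(x[i] - k * avg)          # subtract the running trend
    return xc

# Illustrative drifting tool-wear deviations (mm)
deviations = [0.2, 0.3, 0.25, 0.4, 0.45, 0.5]
print([round(v, 3) for v in moving_average_control(deviations)])
```

Because each correction uses only the last three corrected values, the scheme reacts to a systematic drift (e.g., tool wear) while smoothing out isolated random deviations.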

  18. Study on the Accuracy Improvement of the Second-Kind Fredholm Integral Equations by Using the Buffa-Christiansen Functions with MLFMA

    Directory of Open Access Journals (Sweden)

    Yue-Qian Wu

    2016-01-01

Full Text Available
Former works show that the accuracy of the second-kind integral equations can be improved dramatically by using the rotated Buffa-Christiansen (BC) functions as the testing functions, and sometimes their accuracy can be even better than that of the first-kind integral equations. When the rotated BC functions are used as the testing functions, the discretization error of the identity operators involved in the second-kind integral equations can be suppressed significantly. However, the spherical objects that were analyzed are relatively small in size, and the numerical capability of the method of moments (MoM) for solving integral equations with the rotated BC functions is severely limited. Hence, the performance of BC functions for accuracy improvement of electrically large objects has not been studied. In this paper, the multilevel fast multipole algorithm (MLFMA) is employed to accelerate the iterative solution of the magnetic-field integral equation (MFIE). A series of numerical experiments is then performed to study the accuracy improvement of the MFIE in perfect electric conductor (PEC) cases with the rotated BC functions as testing functions. Numerical results show that the effect of accuracy improvement from using the rotated BC functions as the testing functions differs greatly between curvilinear and planar triangular elements, and falls off when the size of the object is large.

  19. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    Science.gov (United States)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclei. In order to meet this requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  20. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

MIHAELA BRATU (SIMIONESCU)

    2012-12-01

Full Text Available
In this study, some alternative forecasts for the unemployment rate of the USA made by four institutions (International Monetary Fund (IMF), Organization for Economic Co-operation and Development (OECD), Congressional Budget Office (CBO), and Blue Chips (BC)) are evaluated regarding accuracy and bias. The most accurate predictions on the forecasting horizon 201-2011 were provided by the IMF, followed by the OECD, CBO, and BC. These results were obtained using Theil's U1 statistic and a new method that has not been used before in the literature in this context. Multi-criteria ranking was applied to make a hierarchy of the institutions regarding accuracy, and five important accuracy measures were taken into account at the same time: mean error, mean squared error, root mean squared error, and the U1 and U2 statistics of Theil. The IMF, OECD, and CBO predictions are unbiased. Combined forecasts of the institutions' predictions are a suitable strategy to improve the accuracy of the IMF and OECD forecasts when all combination schemes are used, but the INV scheme is the best. The filtered and smoothed original predictions, based on the Hodrick-Prescott filter and the Holt-Winters technique respectively, are a good strategy for improving only the BC expectations. The proposed strategies to improve accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to improving the quality of the decision-making process.
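The accuracy measures combined in the multi-criteria ranking above can be computed directly. A sketch with a toy series follows; Theil's U1 is written in its common form, and the study's exact conventions may differ:

```python
import math

def accuracy_measures(actual, predicted):
    """Mean error, MSE, RMSE, and Theil's U1 for one forecast series."""
    errors = [p - a for a, p in zip(actual, predicted)]
    n = len(errors)
    me = sum(errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    # Theil's U1: RMSE scaled by the quadratic means of both series,
    # so 0 means a perfect forecast and values near 1 are very poor.
    u1 = rmse / (math.sqrt(sum(a * a for a in actual) / n)
                 + math.sqrt(sum(p * p for p in predicted) / n))
    return {"ME": me, "MSE": mse, "RMSE": rmse, "U1": u1}

# Toy unemployment-rate series (percent); not the study's data
actual = [9.3, 9.6, 8.9, 8.1]
forecast = [9.0, 9.5, 9.2, 8.4]
m = accuracy_measures(actual, forecast)
print({k: round(v, 4) for k, v in m.items()})
```

Ranking several institutions then reduces to computing these measures for each forecast series and aggregating the per-measure ranks, which is the essence of the multi-criteria approach the abstract describes.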

  1. Accuracy Improvement of Real-Time Location Tracking for Construction Workers

    Directory of Open Access Journals (Sweden)

    Hyunsoo Kim

    2018-05-01

Full Text Available
Extensive research has been conducted on real-time locating systems (RTLS) for tracking construction components, including workers, equipment, and materials, in order to improve construction performance (e.g., productivity improvement or accident prevention). In order to prevent safety accidents and make construction job sites more sustainable, higher RTLS accuracy is required. To improve the accuracy of RTLS in construction projects, this paper presents an RTLS using radio frequency identification (RFID). To this end, this paper develops a location tracking error mitigation algorithm and presents the concept of using assistant tags. The applicability and effectiveness of the developed RTLS are tested under eight different construction environments, and the test results confirm the system's strong potential for improving the accuracy of real-time location tracking in construction projects, thus enhancing construction performance.

  2. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    Science.gov (United States)

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  3. Statewide Quality Improvement Initiative to Reduce Early Elective Deliveries and Improve Birth Registry Accuracy.

    Science.gov (United States)

    Kaplan, Heather C; King, Eileen; White, Beth E; Ford, Susan E; Fuller, Sandra; Krew, Michael A; Marcotte, Michael P; Iams, Jay D; Bailit, Jennifer L; Bouchard, Jo M; Friar, Kelly; Lannon, Carole M

    2018-04-01

    To evaluate the success of a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data accuracy rapidly and at scale in Ohio. Between February 2013 and March 2014, participating hospitals were involved in a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data. This initiative was designed as a learning collaborative model (group webinars and a single face-to-face meeting) and included individual quality improvement coaching. It was implemented using a stepped wedge design, with hospitals divided into three balanced groups (waves) participating in the initiative sequentially. Birth registry data were used to assess hospital rates of nonmedically indicated inductions at less than 39 weeks of gestation. Comparisons were made between groups participating and those not participating in the initiative at two time points. To measure birth registry accuracy, hospitals conducted monthly audits comparing birth registry data with the medical record. Associations were assessed using generalized linear repeated measures models accounting for time effects. Seventy of 72 (97%) eligible hospitals participated. Based on birth registry data, nonmedically indicated inductions at less than 39 weeks of gestation declined in all groups with implementation (wave 1: 6.2-3.2%, P<…). When waves 1 and 2 were participating in the initiative, they saw significant decreases in rates of early elective deliveries as compared with wave 3 (control; P=.018). All waves had significant improvement in birth registry accuracy (wave 1: 80-90%, P=.017; wave 2: 80-100%, P=.002; wave 3: 75-100%, P<…). The initiative enabled statewide spread of change strategies to decrease early elective deliveries and improve birth registry accuracy over 14 months and could be used for rapid dissemination of other evidence-based obstetric care practices across states or hospital systems.

  4. Systematic review of discharge coding accuracy

    Science.gov (United States)

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P=0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  5. Does a Structured Data Collection Form Improve The Accuracy of ...

    African Journals Online (AJOL)

    and multiple etiologies for similar presentation. Standardized forms may harmonize the initial assessment, improve accuracy of diagnosis and enhance outcomes. Objectives: To determine the extent to which use of a structured data collection form (SDCF) affected the diagnostic accuracy of AAP. Methodology: A before and ...

  6. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    Science.gov (United States)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
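    The compensation pipeline the abstract describes (measure errors at calibration points, interpolate an error model, subtract the predicted error from the command) can be sketched as follows. Piecewise-linear interpolation stands in for the paper's B-spline method, and the calibration grid and error function are invented for illustration:

```python
def make_compensator(cal_positions, cal_errors):
    """Return f(target) -> corrected command: subtract the error predicted
    by piecewise-linear interpolation of the calibration measurements.
    (The paper favours B-spline interpolation; linear is a stand-in.)"""
    def predict_error(x):
        if x <= cal_positions[0]:
            return cal_errors[0]
        for i in range(1, len(cal_positions)):
            if x <= cal_positions[i]:
                x0, x1 = cal_positions[i - 1], cal_positions[i]
                e0, e1 = cal_errors[i - 1], cal_errors[i]
                return e0 + (x - x0) / (x1 - x0) * (e1 - e0)
        return cal_errors[-1]
    return lambda target: target - predict_error(target)

def true_error(x):
    """Assumed smooth geometric error of the stage (mm), for simulation."""
    return 1e-4 * x + 2e-3

positions = [0.0, 50.0, 100.0, 150.0, 200.0]   # hypothetical calibration grid
compensate = make_compensator(positions, [true_error(x) for x in positions])

cmd = compensate(120.0)            # command sent to the stage
actual = cmd + true_error(cmd)     # position the stage really reaches
residual = actual - 120.0          # error left after compensation
```

    The residual after compensation is orders of magnitude below the uncompensated error at the same position.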

  7. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  8. Can Translation Improve EFL Students' Grammatical Accuracy?

    Directory of Open Access Journals (Sweden)

    Carol Ebbert-Hübner

    2018-01-01

    Full Text Available This report focuses on research results from a project completed at Trier University in December 2015 that provides insight into whether a monolingual group of learners can improve their grammatical accuracy and reduce interference mistakes in their English via contrastive analysis and translation instruction and activities. Contrastive analysis and translation (CAT) instruction in this setting focuses on comparing grammatical differences between the students’ dominant language (German) and English, and on practice activities where sentences or short texts are translated from German into English. The results of a pre- and post-test administered in the first and final weeks of a translation class were compared to those of two other class types: a grammar class which consisted of form-focused instruction but not translation, and a process-approach essay writing class where students received feedback on their written work throughout the semester. The results of our study indicate that with C1-level EAP students, more improvement in grammatical accuracy is seen through teaching with CAT than through explicit grammar instruction or language feedback on written work alone. These results indicate that CAT does indeed have a place in modern language classes.

  9. Improving the accuracy of effect-directed analysis: the role of bioavailability.

    Science.gov (United States)

    You, Jing; Li, Huizhen

    2017-12-13

    Aquatic ecosystems have been suffering from contamination by multiple stressors. Traditional chemical-based risk assessment usually fails to explain the toxicity contributions from contaminants that are not regularly monitored or that have an unknown identity. Diagnosing the causes of noted adverse outcomes in the environment is of great importance in ecological risk assessment and in this regard effect-directed analysis (EDA) has been designed to fulfill this purpose. The EDA approach is now increasingly used in aquatic risk assessment owing to its specialty in achieving effect-directed nontarget analysis; however, a lack of environmental relevance makes conventional EDA less favorable. In particular, ignoring the bioavailability in EDA may cause a biased and even erroneous identification of causative toxicants in a mixture. Taking bioavailability into consideration is therefore of great importance to improve the accuracy of EDA diagnosis. The present article reviews the current status and applications of EDA practices that incorporate bioavailability. The use of biological samples is the most obvious way to include bioavailability into EDA applications, but its development is limited due to the small sample size and lack of evidence for metabolizable compounds. Bioavailability/bioaccessibility-based extraction (bioaccessibility-directed and partitioning-based extraction) and passive-dosing techniques are recommended to be used to integrate bioavailability into EDA diagnosis in abiotic samples. Lastly, the future perspectives of expanding and standardizing the use of biological samples and bioavailability-based techniques in EDA are discussed.

  10. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Full Text Available Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and the ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.
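    The abstract does not name its accuracy score. A common one in the number-line estimation literature is percent absolute error (PAE), so the metric below is an assumption, sketched for a hypothetical set of placements:

```python
def percent_absolute_error(estimates, targets, scale=100.0):
    """Mean |estimate - target| / scale of the number line.
    Lower PAE = more accurate (more linear) estimation."""
    errs = [abs(e - t) for e, t in zip(estimates, targets)]
    return sum(errs) / len(errs) / scale

# A hypothetical child's placements of 0, 50 and 100 on a 0-100 line:
pae = percent_absolute_error([10, 45, 80], [0, 50, 100])  # (10+5+20)/3/100
```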

  11. Great Ellipse Route Planning Based on Space Vector

    Directory of Open Access Journals (Sweden)

    LIU Wenchao

    2015-07-01

    Full Text Available Aiming at the navigation error caused by using different earth models, a sphere model in great circle route planning versus the ellipsoid model used by modern navigation equipment, a method of great ellipse route planning based on space vectors is studied. Using space vector algebra, the vertex of the great ellipse is solved directly, and a description of the great ellipse based on its major-axis and minor-axis vectors is presented. Calculation formulas for great ellipse azimuth and distance are then deduced from these two basic vectors. Finally, algorithms for great ellipse route planning are studied, especially an equal-distance route planning algorithm based on the Newton-Raphson (N-R) method. Comparative examples show that the difference between great circle and great ellipse route planning is significant; using great ellipse route planning can eliminate the navigation error caused by great circle route planning and effectively improve the accuracy of navigation calculation.
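    A numerical sketch of the space-vector construction: the great ellipse is the ellipsoid's intersection with the plane through the two points and the centre, built here from an orthonormal basis of that plane. This is not the paper's closed-form azimuth/distance derivation, and the chord-summation arc length is a simplification:

```python
import math

A = 6378137.0            # WGS-84 semi-major axis (m)
B = 6356752.314245       # WGS-84 semi-minor axis (m)

def to_surface(u):
    """Scale a direction vector from the ellipsoid centre onto the surface."""
    ux, uy, uz = u
    s = 1.0 / math.sqrt((ux / A) ** 2 + (uy / A) ** 2 + (uz / B) ** 2)
    return (s * ux, s * uy, s * uz)

def great_ellipse_length(p, q, n=20000):
    """Arc length along the great ellipse (ellipsoid cut by the plane
    through p, q and the centre), by summing chords of sampled points."""
    # Orthonormal basis of the cutting plane (the space-vector step).
    norm_p = math.sqrt(sum(c * c for c in p))
    e1 = tuple(c / norm_p for c in p)
    d = sum(a * b for a, b in zip(q, e1))
    w = tuple(a - d * b for a, b in zip(q, e1))
    norm_w = math.sqrt(sum(c * c for c in w))
    e2 = tuple(c / norm_w for c in w)
    theta = math.atan2(norm_w, d)        # in-plane angle from p to q
    length, prev = 0.0, to_surface(p)
    for i in range(1, n + 1):
        t = theta * i / n
        u = tuple(math.cos(t) * a + math.sin(t) * b for a, b in zip(e1, e2))
        pt = to_surface(u)
        length += math.dist(prev, pt)
        prev = pt
    return length

# Two equatorial points 90 degrees apart: the great ellipse is the equator,
# so the length must be a quarter of the equatorial circumference.
d = great_ellipse_length((A, 0.0, 0.0), (0.0, A, 0.0))
```

    On the equator the result reduces to A·π/2, which gives a built-in check of the sampling scheme.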

  12. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing has limited measurement performance and a large timing error, and can no longer meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal multiple times. Then, the timing information is sampled and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and fitting of the parameters, achieving a timing accuracy of 4.63 ps.
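    A sketch of the underlying walk-correction idea under an assumed Gaussian pulse model: leading-edge crossing times shift with amplitude ("time walk"), but fitting them against a known function of amplitude recovers the walk-free arrival time. The pulse parameters are invented, and the paper's actual MATLAB fitting procedure is not reproduced in the abstract:

```python
import math

def fit_t0(amps, threshold):
    """For a Gaussian pulse A*exp(-(t-t0)^2 / (2*sigma^2)), the leading-edge
    crossing of a fixed threshold T occurs at
        t_c = t0 - sigma * sqrt(2 ln(A/T)),
    so a least-squares line of t_c against sqrt(2 ln(A/T)) has intercept t0."""
    t0_true, sigma = 10.0, 1.5                 # assumed pulse parameters (ns)
    xs = [math.sqrt(2.0 * math.log(a / threshold)) for a in amps]
    ts = [t0_true - sigma * x for x in xs]     # simulated crossing times
    n = len(xs)
    sx, st = sum(xs), sum(ts)
    sxx = sum(x * x for x in xs)
    sxt = sum(x * t for x, t in zip(xs, ts))
    slope = (n * sxt - sx * st) / (n * sxx - sx * sx)
    intercept = (st - slope * sx) / n          # walk-free arrival estimate
    return intercept, ts

t0_est, raw_times = fit_t0([2.0, 4.0, 8.0, 16.0], threshold=1.0)
walk = max(raw_times) - min(raw_times)         # raw spread before correction
```

    The raw crossing times spread by well over a nanosecond across amplitudes, while the fitted intercept recovers the true arrival time.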

  13. Improving the accuracy of dynamic mass calculation

    Directory of Open Access Journals (Sweden)

    Oleksandr F. Dashchenko

    2015-06-01

    Full Text Available With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible with appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular of a wagon in motion. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
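    A toy version of the signal-processing step, assuming the dynamic component is a periodic oscillation that a moving-average window of exactly one period cancels. The paper's actual time-series methods and filter design are not given in the abstract; all numbers below are synthetic:

```python
import math

def moving_average(signal, window):
    """Simple FIR low-pass: mean over a sliding window of fixed length."""
    out = []
    for i in range(window - 1, len(signal)):
        out.append(sum(signal[i - window + 1:i + 1]) / window)
    return out

# Simulated strain-gauge samples while a wagon rolls over the scale:
# static mass 20000 kg plus a bogie-induced 25 Hz oscillation.
fs, f_osc = 1000.0, 25.0                  # sample rate and oscillation (Hz)
raw = [20000.0 + 800.0 * math.sin(2 * math.pi * f_osc * i / fs)
       for i in range(2000)]

# A window of exactly one oscillation period cancels the dynamic component.
window = int(fs / f_osc)                  # 40 samples
filtered = moving_average(raw, window)
mass_raw = max(raw)                       # naive peak reading: badly biased
mass_est = sum(filtered) / len(filtered)  # filtered estimate
```

    The naive peak reading overestimates the mass by the full oscillation amplitude, while the filtered estimate recovers the static load.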

  14. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows quantifying the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  15. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    OpenAIRE

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles and is generally associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course corr...

  16. IMPROVED MOTOR-TIMING: EFFECTS OF SYNCHRONIZED METRONOME TRAINING ON GOLF SHOT ACCURACY

    Directory of Open Access Journals (Sweden)

    Louise Rönnqvist

    2009-12-01

    Full Text Available This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers (mean age 27 years; mean golf handicap 12.6) participated in this study. Pre- and post-test investigations of golf shots made with three different clubs were conducted using a golf simulator. The golfers were randomized into two groups: an SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing; the golfers in the Control group merely trained their golf swings during the same period. No differences between the two groups were found in the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4 weeks of SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used an SMT program, this study's results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf shot accuracy.

  17. Accuracy improvements of gyro-based measurement-while-drilling surveying instruments by a laser testing method

    Science.gov (United States)

    Li, Rong; Zhao, Jianhui; Li, Fan

    2009-07-01

    A gyroscope used as a surveying sensor in the oil industry has been proposed as a good technique for measurement-while-drilling (MWD) to provide real-time monitoring of the position and orientation of the bottom hole assembly (BHA). However, drifts in the measurements provided by the gyroscope can be prohibitive for long-term utilization of the sensor. The usual methods introduced to limit these drifts, such as the zero velocity update procedure (ZUPT), tend to be time-consuming and of limited effect. This study explored an in-drilling dynamic alignment (IDA) method for gyroscope-based MWD. During a directional drilling process, there are some minutes in the rotary drilling mode when the drill bit and drill pipe rotate about the spin axis at a certain speed. This speed can be measured and used to determine and limit some of the gyroscope drifts that contribute greatly to the deterioration of long-term performance. A novel laser assembly on the wellhead is designed to count the rotating cycles of the drill pipe. With this provided angular velocity of the drill pipe, drifts of the gyroscope measurements are translated into another form that can be easily tested and compensated. This allows better and faster alignment and limits drifts during the navigation process, both of which reduce long-term navigation errors, thus improving the overall accuracy of the INS-based MWD system. This article concretely explores the novel wellhead device designed to test the rotation of the drill pipe. It is based on laser testing, which is simple and inexpensive, adding only a laser emitter to the existing drilling equipment. Theoretical simulations and analytical approximations exploring the IDA idea have shown improvement in the accuracy of overall navigation and a reduction in the time required to achieve convergence. Gyroscope accuracy along the axis is mainly improved. It is suggested to use the IDA idea in the rotary mode for alignment. Several other

  18. Improved Accuracy of Density Functional Theory Calculations for CO2 Reduction and Metal-Air Batteries

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Density functional theory (DFT) calculations have greatly contributed to the atomic-level understanding of electrochemical reactions. However, in some cases the accuracy can be prohibitively low for a detailed understanding of, e.g., reaction mechanisms. Two cases are examined here, i.e. the elec… …47 eV and 0.17 eV using metals as reference. The presented approach for error identification is expected to be applicable to a very broad range of systems. References: [1] A. A. Peterson, F. Abild-Pedersen, F. Studt, J. Rossmeisl, and J. K. Nørskov, Energy Environ. Sci., 3, 1311 (2010) [2] F. Studt, F...

  19. A model to improve the accuracy of US Poison Center data collection.

    Science.gov (United States)

    Krenzelok, E P; Reynolds, K M; Dart, R C; Green, J L

    2014-01-01

    Over 2 million human exposure calls are reported annually to United States regional poison information centers. All exposures are documented electronically and submitted to the American Association of Poison Control Centers' National Poison Data System. This database represents the largest data source available on the epidemiology of pharmaceutical and non-pharmaceutical poisoning exposures. The accuracy of these data is critical; however, research has demonstrated that inconsistencies and inaccuracies exist. This study outlines the methods and results of a training program that was developed and implemented to enhance the quality of data collection, using acetaminophen exposures as a model. Eleven poison centers were randomly assigned to receive either passive or interactive education to improve medical record documentation. A task force provided recommendations on educational and training strategies and developed a quality-measurement scorecard to serve as a data collection tool for assessing poison center data quality. Poison centers were recruited to participate in the study, and clinical researchers scored the documentation of each exposure record for accuracy. Results: Two thousand two hundred cases were reviewed and assessed for accuracy of data collection. After training, the overall mean quality scores were higher for both the passive (95.3%; +1.6% change) and interactive intervention groups (95.3%; +0.9% change). Data collection accuracy improved modestly for the overall accuracy score and significantly for the substance identification component. There was little difference in accuracy measures between the different training methods. Despite the diversity of poison centers, data accuracy, specifically in substance identification data fields, can be improved by developing a standardized, systematic, targeted, and mandatory training process. This process should be considered for training on other important topics, thus enhancing the value of these data in

  20. Three-dimensional display improves observer speed and accuracy

    International Nuclear Information System (INIS)

    Nelson, J.A.; Rowberg, A.H.; Kuyper, S.; Choi, H.S.

    1989-01-01

    In an effort to evaluate the potential cost-effectiveness of three-dimensional (3D) display equipment, we compared the speed and accuracy of experienced radiologists identifying sliced uppercase letters from CT scans with 2D and pseudo-3D display. CT scans of six capital letters were obtained and printed as a 2D display or as a synthesized pseudo-3D display (Pixar). Six observers performed a timed identification task. Radiologists read the 3D display an average of 16 times faster than the 2D, and the average error rate of 2/6 (± 0.6/6) for 2D interpretations was totally eliminated. This degree of improvement in speed and accuracy suggests that the expense of 3D display may be cost-effective in a defined clinical setting.

  1. EpCAM-based flow cytometry in cerebrospinal fluid greatly improves diagnostic accuracy of leptomeningeal metastases from epithelial tumors

    NARCIS (Netherlands)

    Milojkovic Kerklaan, B.; Pluim, Dick; Bol, Mijke; Hofland, Ingrid; Westerga, Johan; van Tinteren, Harm; Beijnen, Jos H; Boogerd, Willem; Schellens, Jan H M; Brandsma, Dieta

    BACKGROUND: Moderate diagnostic accuracy of MRI and initial cerebrospinal fluid (CSF) cytology analysis results in at least 10%-15% false negative diagnoses of leptomeningeal metastases (LM) of solid tumors, thus postponing start of therapy. The aim of this prospective clinical study was to

  2. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    Full Text Available An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future, with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
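    The 51-state filter itself is far beyond a sketch, but the principle can be shown with a scalar Kalman filter estimating a single constant calibration parameter, here a hypothetical gyro bias observed through turntable residuals. The deterministic sawtooth stands in for random measurement noise:

```python
def kalman_constant(measurements, r):
    """Scalar Kalman filter for a constant state (e.g. a gyro bias):
    state model x_k = x_{k-1}, measurement z_k = x_k + noise of variance r."""
    x, p = 0.0, 1e6          # diffuse prior on the unknown bias
    for z in measurements:
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # measurement update
        p = (1.0 - k) * p    # posterior variance shrinks each step
    return x, p

# Turntable residuals: true bias 0.02 deg/h plus a zero-mean sawtooth
# (a deterministic stand-in for measurement noise).
true_bias = 0.02
zs = [true_bias + 0.005 * ((i % 11) - 5) / 5.0 for i in range(1100)]
bias_est, var = kalman_constant(zs, r=1e-5)
```

    As measurements accumulate, the posterior variance collapses and the estimate converges on the true bias; the full calibration problem stacks many such states plus cross-couplings into one filter.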

  3. Accuracy improvement of irradiation data by combining ground and satellite measurements

    Energy Technology Data Exchange (ETDEWEB)

    Betcke, J. [Energy and Semiconductor Research Laboratory, Carl von Ossietzky University, Oldenburg (Germany); Beyer, H.G. [Department of Electrical Engineering, University of Applied Science (F.H.) Magdeburg-Stendal, Magdeburg (Germany)

    2004-07-01

    Accurate and site-specific irradiation data are essential input for optimal planning, monitoring and operation of solar energy technologies. A concrete example is the performance check of grid-connected PV systems with the PVSAT-2 procedure. This procedure detects system faults at an early stage by a daily comparison of an individual reference yield with the actual yield. Calculation of the reference yield requires hourly irradiation data with a known accuracy. A field test of the predecessor PVSAT-1 procedure showed that the accuracy of the irradiation input is the determining factor for the overall accuracy of the yield calculation. In this paper we investigate whether it is possible to improve the accuracy of site-specific irradiation data by combining accurate localized pyranometer data with semi-continuous satellite data. We therefore introduce the ''Kriging of Differences'' data fusion method, which also offers the possibility of estimating its own accuracy. The obtainable accuracy gain and the effectiveness of the accuracy prediction are investigated by validation on monthly and daily irradiation datasets. Results are compared with the Heliosat method and interpolation of ground data. (orig.)
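    A simplified sketch of the fusion idea: interpolate the ground-minus-satellite differences and add them back to the satellite field. Inverse-distance weighting replaces the kriging weights (which require a fitted variogram), and the irradiation field, retrieval bias, and station layout are all synthetic:

```python
def idw(stations, values, target, power=2.0):
    """Inverse-distance-weighted interpolation (a simple stand-in for
    kriging weights, which would come from a fitted variogram)."""
    num = den = 0.0
    for (sx, sy), v in zip(stations, values):
        d2 = (sx - target[0]) ** 2 + (sy - target[1]) ** 2
        if d2 == 0.0:
            return v
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def fuse(sat_at, stations, ground_obs, target):
    """Kriging-of-Differences style fusion: interpolate the
    ground-minus-satellite residuals, then add them to the satellite value."""
    diffs = [g - sat_at(s) for g, s in zip(ground_obs, stations)]
    return sat_at(target) + idw(stations, diffs, target)

# Synthetic hourly irradiation field (W/m^2) and a satellite estimate
# with an assumed constant retrieval bias of -25 W/m^2.
def truth(x, y):
    return 450.0 + 2.0 * x - 1.5 * y

def sat(p):
    return truth(*p) - 25.0

stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0)]
ground = [truth(x, y) for x, y in stations]

target = (20.0, 30.0)
fused = fuse(sat, stations, ground, target)
err_sat = abs(sat(target) - truth(*target))     # raw satellite error: 25
err_fused = abs(fused - truth(*target))         # fused error: ~0
```

    Because the bias here is spatially constant, the interpolated differences cancel it exactly; with a spatially varying bias the correction is approximate and degrades with distance from the stations.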

  4. How patients can improve the accuracy of their medical records.

    Science.gov (United States)

    Dullabh, Prashila M; Sondheimer, Norman K; Katsh, Ethan; Evans, Michael A

    2014-01-01

    Assess (1) if patients can improve their medical records' accuracy if effectively engaged using a networked Personal Health Record; (2) workflow efficiency and reliability for receiving and processing patient feedback; and (3) patient feedback's impact on medical record accuracy. Improving medical records' accuracy and the associated challenges have been documented extensively. Providing patients with useful access to their records through information technology gives them new opportunities to improve their records' accuracy and completeness. A new approach at Geisinger Health Systems, an online patient-engagement advocate, supporting online contributions by patients to their medication lists, revealed this can be done successfully. In late 2011, Geisinger launched an online process for patients to provide electronic feedback on their medication lists' accuracy before a doctor visit. Patient feedback was routed to a Geisinger pharmacist, who reviewed it and followed up with the patient before changing the medication list shared by the patient and the clinicians. The evaluation employed mixed methods and consisted of patient focus groups (users, nonusers, and partial users of the feedback form), semi-structured interviews with providers and pharmacists, user observations with patients, and quantitative analysis of patient feedback data and pharmacists' medication reconciliation logs. (1) Patients were eager to provide feedback on their medications and saw numerous advantages. Thirty percent of patient feedback forms (457 of 1,500) were completed and submitted to Geisinger. Patients requested changes to the shared medication lists in 89 percent of cases (369 of 414 forms). These included frequency or dosage changes to existing prescriptions and requests for new medications (prescriptions and over-the-counter). (2) Patients provided useful and accurate online feedback. In a subsample of 107 forms, pharmacists responded positively to 68 percent of patient requests for

  5. Improving orbit prediction accuracy through supervised machine learning

    Science.gov (United States)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
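    A minimal stand-in for the learning-based correction the abstract describes: fit a model to past physics-prediction residuals and apply it to a future epoch. Ordinary least squares replaces whatever learner the paper uses, and the trajectory and the unmodelled effect are invented:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (the 'learning' step)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

# Hypothetical along-track position (km): the physics-based propagator
# slightly overestimates the velocity, so its error grows with time.
def truth(t):
    return 7000.0 + 7.48 * t     # the "real" trajectory

def physics(t):
    return 7000.0 + 7.5 * t      # physics-only prediction

# Learn the residual model on an observed window (t = 0..100 s) ...
train_t = list(range(0, 101, 5))
residuals = [truth(t) - physics(t) for t in train_t]
a, b = fit_line(train_t, residuals)

# ... then correct a future prediction at t = 200 s.
t_future = 200.0
raw = physics(t_future)
corrected = raw + (a + b * t_future)
err_raw = abs(raw - truth(t_future))
err_corr = abs(corrected - truth(t_future))
```

    Because the simulated residual happens to be linear in time, the learned correction removes it almost entirely; this mirrors generalization type (2) in the abstract, predicting the same object at future epochs.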

  6. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn.
As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction of convergence time.

  7. Iterative metal artifact reduction improves dose calculation accuracy. Phantom study with dental implants

    Energy Technology Data Exchange (ETDEWEB)

    Maerz, Manuel; Mittermair, Pia; Koelbl, Oliver; Dobler, Barbara [Regensburg University Medical Center, Department of Radiotherapy, Regensburg (Germany); Krauss, Andreas [Siemens Healthcare GmbH, Forchheim (Germany)

    2016-06-15

    Metallic dental implants cause severe streaking artifacts in computed tomography (CT) data, which affect the accuracy of dose calculations in radiation therapy. The aim of this study was to investigate the benefit of the metal artifact reduction algorithm iterative metal artifact reduction (iMAR) in terms of correct representation of Hounsfield units (HU) and dose calculation accuracy. Heterogeneous phantoms consisting of different types of tissue equivalent material surrounding metallic dental implants were designed. Artifact-containing CT data of the phantoms were corrected using iMAR. Corrected and uncorrected CT data were compared to synthetic CT data to evaluate accuracy of HU reproduction. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated in Oncentra v4.3 on corrected and uncorrected CT data and compared to Gafchromic EBT3 films to assess accuracy of dose calculation. The use of iMAR increased the accuracy of HU reproduction. The average deviation of HU decreased from 1006 HU to 408 HU in areas including metal and from 283 HU to 33 HU in tissue areas excluding metal. Dose calculation accuracy could be significantly improved for all phantoms and plans: The mean passing rate for gamma evaluation with 3 % dose tolerance and 3 mm distance to agreement increased from 90.6 % to 96.2 % if artifacts were corrected by iMAR. The application of iMAR allows metal artifacts to be removed to a great extent, which leads to a significant increase in dose calculation accuracy. (orig.) [German abstract, translated] Metallic implants cause streak artifacts in CT images, which affect dose calculation. This study investigates the benefit of the iterative metal artifact reduction algorithm iMAR with regard to the fidelity of Hounsfield units (HU) and the accuracy of dose calculations. Heterogeneous phantoms made of different types of tissue-equivalent material with
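The gamma evaluation quoted above (3 % dose tolerance, 3 mm distance to agreement) can be sketched in one dimension. This is an illustrative, globally normalized toy version, not a clinical implementation:

```python
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm,
                       dose_tol=0.03, dta_mm=3.0):
    """1D gamma index with global normalization; illustrative only."""
    x = np.arange(dose_ref.size) * spacing_mm
    norm = dose_ref.max()
    gammas = np.empty(dose_ref.size)
    for i, (xi, d_ref) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - d_ref) / (dose_tol * norm)  # dose-difference term
        dx = (x - xi) / dta_mm                        # distance-to-agreement term
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()  # best over all eval points
    return np.mean(gammas <= 1.0)                     # fraction passing (gamma <= 1)

# Toy profiles: evaluated dose shifted by 1 mm and scaled by 1 %
grid = np.arange(100) * 1.0                           # 1 mm grid
ref = np.exp(-((grid - 50.0) / 15.0) ** 2)
ev = 1.01 * np.exp(-((grid - 51.0) / 15.0) ** 2)
rate = gamma_passing_rate(ref, ev, spacing_mm=1.0)
print(rate)
```

A 1 mm shift and 1 % scaling both sit comfortably inside the 3 %/3 mm criterion, so the passing rate is high; larger artifacts (as in the uncorrected CT data) would drive it down.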

  8. The Improvement of Behavior Recognition Accuracy of Micro Inertial Accelerometer by Secondary Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2014-05-01

    Full Text Available Behaviors of “still”, “walking”, “running”, “jumping”, “upstairs” and “downstairs” can be recognized by a low-cost micro inertial accelerometer. By using the features as inputs to a well-trained BP artificial neural network, which is selected as the classifier, those behaviors can be recognized. But the experimental results show that the recognition accuracy is not satisfactory. This paper presents a secondary recognition algorithm and combines it with the BP artificial neural network to improve the recognition accuracy. The algorithm is verified on the Android mobile platform, and the recognition accuracy can be improved by more than 8 %. Extensive testing and statistical analysis show that the recognition accuracy can reach 95 % through the BP artificial neural network and the secondary recognition, which is a reasonably good result from a practical point of view.
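The abstract does not specify the secondary recognition algorithm; a plausible second stage of this kind is a sliding-window majority vote over the per-frame outputs of the first classifier, exploiting the fact that activities like walking persist over many frames. A minimal sketch under that assumption:

```python
from collections import Counter

def secondary_recognition(frame_labels, window=5):
    """Smooth per-frame classifier outputs by a sliding-window majority vote.

    Hypothetical second stage: since activities persist over many frames,
    isolated first-stage misclassifications can be voted away.
    """
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        lo, hi = max(0, i - half), min(len(frame_labels), i + half + 1)
        smoothed.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# Per-frame outputs of a first-stage classifier with a sporadic error
raw = ["walk"] * 10 + ["run"] + ["walk"] * 10 + ["still"] * 8
smoothed = secondary_recognition(raw)
print(smoothed)
```

The isolated "run" frame is outvoted by its "walk" neighbors, while the genuine transition to "still" survives.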

  9. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    International Nuclear Information System (INIS)

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQ^T. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and improve or compute the eigenvector x in O(n^2) work, where n is the order of the matrix A
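The flavor of such iterative eigenpair refinement can be shown with Rayleigh quotient iteration, a closely related improvement scheme (this is not SICEDR's exact algorithm, and it does not exploit the precomputed Schur factors):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = (B + B.T) / 2                       # symmetric, so eigenpairs are real
w, V = np.linalg.eigh(A)
true_lam, true_vec = w[-1], V[:, -1]

def refine_eigenpair(A, lam, x, iters=5):
    """Refine an approximate real eigenpair (lam, x) of A by Rayleigh
    quotient iteration: solve a shifted system, renormalize, update lam."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - lam * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                        # shift hit the eigenvalue exactly
        x = y / np.linalg.norm(y)
        lam = x @ A @ x                  # Rayleigh quotient of the new vector
    return lam, x

# Deliberately crude starting guess: eigenvalue off by 0.1, noisy eigenvector
lam, x = refine_eigenpair(A, true_lam + 0.1,
                          true_vec + 0.3 * rng.normal(size=6))
print(abs(lam - true_lam))
```

Convergence is cubic for symmetric matrices, so a handful of iterations drives the error to machine precision.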

  10. Improvement on the accuracy of beam bugs in linear induction accelerator

    International Nuclear Information System (INIS)

    Xie Yutong; Dai Zhiyong; Han Qing

    2002-01-01

    In linear induction accelerators the resistive wall monitors known as 'beam bugs' have been used as essential diagnostics of beam current and location. The author presents a new method that can improve the accuracy of these beam bugs when used for beam position measurements. With a fine beam simulation set, this method locates the beam position with an accuracy of 0.02 mm and can thus calibrate the beam bugs very well. Experimental results prove that the precision of beam position measurements can reach the submillimeter level
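Resistive-wall monitors of this kind commonly infer transverse position from the imbalance of wall-current signals on opposing electrodes. A minimal difference-over-sum sketch, with a sensitivity factor that in practice would come from exactly the kind of beam-simulation calibration the abstract describes (the value here is illustrative):

```python
def beam_position(i_right, i_left, i_top, i_bottom, radius_mm, k=0.5):
    """Estimate transverse beam position from four resistive-wall signals.

    Standard difference-over-sum formula; k is a geometry-dependent
    sensitivity factor, assumed here, found by calibration in practice.
    """
    x = k * radius_mm * (i_right - i_left) / (i_right + i_left)
    y = k * radius_mm * (i_top - i_bottom) / (i_top + i_bottom)
    return x, y

# A centered beam gives equal signals on all electrodes
x0, y0 = beam_position(1.0, 1.0, 1.0, 1.0, radius_mm=50.0)
print(x0, y0)            # 0.0 0.0

# A beam displaced toward the right electrode raises its signal
x1, y1 = beam_position(1.1, 0.9, 1.0, 1.0, radius_mm=50.0)
print(round(x1, 2), y1)  # 2.5 0.0
```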

  11. Training readers to improve their accuracy in grading Crohn's disease activity on MRI

    International Nuclear Information System (INIS)

    Tielbeek, Jeroen A.W.; Bipat, Shandra; Boellaard, Thierry N.; Nio, C.Y.; Stoker, Jaap

    2014-01-01

    To prospectively evaluate if training with direct feedback improves grading accuracy of inexperienced readers for Crohn's disease activity on magnetic resonance imaging (MRI). Thirty-one inexperienced readers assessed 25 cases as a baseline set. Subsequently, all readers received training and assessed 100 cases with direct feedback per case, randomly assigned to four sets of 25 cases. The cases in set 4 were identical to the baseline set. Grading accuracy, understaging, overstaging, mean reading times and confidence scores (scale 0-10) were compared between baseline and set 4, and between the four consecutive sets with feedback. Proportions of grading accuracy, understaging and overstaging per set were compared using logistic regression analyses. Mean reading times and confidence scores were compared by t-tests. Grading accuracy increased from 66 % (95 % CI, 56-74 %) at baseline to 75 % (95 % CI, 66-81 %) in set 4 (P = 0.003). Understaging decreased from 15 % (95 % CI, 9-23 %) to 7 % (95 % CI, 3-14 %) (P < 0.001). Overstaging did not change significantly (20 % vs 19 %). Mean reading time decreased from 6 min 37 s to 4 min 35 s (P < 0.001). Mean confidence increased from 6.90 to 7.65 (P < 0.001). During training, overall grading accuracy, understaging, mean reading times and confidence scores improved gradually. Inexperienced readers need training with at least 100 cases to achieve the literature reported grading accuracy of 75 %. (orig.)

  12. Improving Accuracy of Intrusion Detection Model Using PCA and optimized SVM

    Directory of Open Access Journals (Sweden)

    Sumaiya Thaseen Ikram

    2016-06-01

    Full Text Available Intrusion detection is very essential for providing security to different network domains and is mostly used for locating and tracing intruders. There are many problems with traditional intrusion detection models (IDS), such as low detection capability against unknown network attacks, a high false alarm rate and insufficient analysis capability. Hence the major scope of the research in this domain is to develop an intrusion detection model with improved accuracy and reduced training time. This paper proposes a hybrid intrusion detection model by integrating principal component analysis (PCA) and support vector machine (SVM). The novelty of the paper is the optimization of the kernel parameters of the SVM classifier using an automatic parameter selection technique. This technique optimizes the punishment factor (C) and kernel parameter gamma (γ), thereby improving the accuracy of the classifier and reducing the training and testing time. The experimental results obtained on the NSL-KDD and gurekddcup datasets show that the proposed technique performs better with higher accuracy, faster convergence speed and better generalization. Minimum resources are consumed as the classifier input requires a reduced feature set for optimum classification. A comparative analysis of hybrid models with the proposed model is also performed.
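The PCA-plus-SVM pipeline with automatic selection of C and gamma can be sketched with scikit-learn on synthetic data standing in for the NSL-KDD feature vectors (the component count and parameter grid below are illustrative, not the paper's):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for intrusion-detection feature vectors
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA reduces the feature set before the RBF-kernel SVM classifies
pipe = Pipeline([("pca", PCA(n_components=10)),
                 ("svm", SVC(kernel="rbf"))])

# Automatic selection of the punishment factor C and kernel parameter gamma
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100],
                           "svm__gamma": [0.01, 0.1, 1.0]}, cv=3)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)
print(grid.best_params_, acc)
```

Cross-validated grid search is one simple instance of "automatic parameter selection"; the paper's technique may differ.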

  13. Improving accuracy of protein-protein interaction prediction by considering the converse problem for sequence representation

    Directory of Open Access Journals (Sweden)

    Wang Yong

    2011-10-01

    Full Text Available Abstract Background With the development of genome-sequencing technologies, protein sequences are readily obtained by translating the measured mRNAs. Therefore predicting protein-protein interactions from the sequences is in great demand. The reason lies in the fact that identifying protein-protein interactions is becoming a bottleneck for eventually understanding the functions of proteins, especially for those organisms barely characterized. Although a few methods have been proposed, the converse problem, namely whether the features used extract sufficient and unbiased information from protein sequences, is almost untouched. Results In this study, we interrogate this problem theoretically by an optimization scheme. Motivated by the theoretical investigation, we find novel encoding methods for both protein sequences and protein pairs. Our new methods exploit the information of protein sequences sufficiently and reduce artificial bias and computational cost. Thus, the approach significantly outperforms the available methods regarding sensitivity, specificity, precision, and recall with cross-validation evaluation, and reaches ~80% and ~90% accuracy in Escherichia coli and Saccharomyces cerevisiae, respectively. Our findings here hold important implications for other sequence-based prediction tasks because representation of biological sequences is always the first step in computational biology. Conclusions By considering the converse problem, we propose new representation methods for both protein sequences and protein pairs. The results show that our method significantly improves the accuracy of protein-protein interaction predictions.
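To make the representation step concrete: the simplest classical sequence encoding is the 20-dimensional amino acid composition vector, with a symmetric combination for protein pairs. The paper's encodings are more elaborate, but this sketch shows the kind of object a downstream interaction classifier consumes:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_vector(seq):
    """Encode a protein sequence as its 20-dim amino acid composition,
    one of the simplest sequence representations."""
    seq = seq.upper()
    n = max(len(seq), 1)
    return [seq.count(a) / n for a in AMINO_ACIDS]

def pair_features(seq_a, seq_b):
    """Element-wise sum gives a symmetric pair representation, so that
    f(A, B) == f(B, A), as an interaction predictor requires."""
    va, vb = composition_vector(seq_a), composition_vector(seq_b)
    return [x + y for x, y in zip(va, vb)]

v = composition_vector("MKTAYIAKQR")
print(len(v), round(sum(v), 6))                                   # 20 1.0
print(pair_features("MKT", "AYI") == pair_features("AYI", "MKT")) # True
```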

  14. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, A. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Ranallo, F. N. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Judy, P. F. [Brigham and Women’s Hospital, Boston, Massachusetts 02115 (United States); Gierada, D. S. [Department of Radiology, Washington University, St. Louis, Missouri 63110 (United States); Fain, S. B., E-mail: sfain@wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering,University of Wisconsin School of Engineering, Madison, Wisconsin 53706 (United States)

    2014-11-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel.

  15. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    International Nuclear Information System (INIS)

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-01-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel

  16. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    Science.gov (United States)

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 με within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 με.
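The zero-padding trick for refining a cross-correlation peak can be demonstrated in isolation: padding the cross-spectrum before the inverse FFT interpolates the correlation, so a fractional-sample shift becomes resolvable. The signals and factors below are illustrative, not the paper's OFDR data:

```python
import numpy as np

N, U = 256, 16                      # samples, upsampling factor
true_shift = 3.25                   # fractional-sample shift to recover

n = np.arange(N)
x = np.exp(-((n - 100) / 10.0) ** 2)            # reference trace
k = np.fft.fftfreq(N)
X = np.fft.fft(x)
Y = X * np.exp(-2j * np.pi * k * true_shift)    # shifted trace (spectrum)

# Cross-spectrum, zero-padded in the middle (between the positive- and
# negative-frequency halves) so the inverse FFT yields an interpolated,
# U-times-upsampled cross-correlation.
S = np.conj(X) * Y
S_pad = np.concatenate([S[:N // 2],
                        np.zeros((U - 1) * N, complex),
                        S[N // 2:]])
c = np.fft.ifft(S_pad)

m = int(np.argmax(np.abs(c)))
est = m / U if m < U * N / 2 else (m - U * N) / U
print(est)   # 3.25
```

Without padding, the peak would land on the nearest integer lag (3 or 4 samples); with it, the 1/U-sample grid resolves the 0.25-sample fraction, which is exactly how a finer cross-correlation peak translates into finer strain readings.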

  17. Contrast-enhanced spectral mammography improves diagnostic accuracy in the symptomatic setting.

    Science.gov (United States)

    Tennant, S L; James, J J; Cornford, E J; Chen, Y; Burrell, H C; Hamilton, L J; Girio-Fragkoulakis, C

    2016-11-01

    To assess the diagnostic accuracy of contrast-enhanced spectral mammography (CESM), and gauge its "added value" in the symptomatic setting. A retrospective multi-reader review of 100 consecutive CESM examinations was performed. Anonymised low-energy (LE) images were reviewed and given a score for malignancy. At least 3 weeks later, the entire examination (LE and recombined images) was reviewed. Histopathology data were obtained for all cases. Differences in performance were assessed using receiver operating characteristic (ROC) analysis. Sensitivity, specificity, and lesion size (versus MRI or histopathology) differences were calculated. Seventy-three percent of cases were malignant at final histology; 27% were benign following standard triple assessment. ROC analysis showed improved overall performance of CESM over LE alone, with area under the curve of 0.93 versus 0.83 (p<0.025). CESM showed increased sensitivity (95% versus 84%, p<0.025) and specificity (81% versus 63%, p<0.025) compared to LE alone, with all five readers showing improved accuracy. Tumour size estimation at CESM was significantly more accurate than LE alone, the latter tending to undersize lesions. In 75% of cases, CESM was deemed a useful or significant aid to diagnosis. CESM provides immediately available, clinically useful information in the symptomatic clinic in patients with suspicious palpable abnormalities. Radiologist sensitivity, specificity, and size accuracy for breast cancer detection and staging are all improved using CESM as the primary mammographic investigation. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  18. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC) based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. This method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The results from experiments indicate that this method represents a significant improvement in the accuracy of the MLS in environments that are hostile to GNSS and is also effective regarding the calibration of misalignment.
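Least squares collocation predicts a spatially correlated error signal at new points from sparse observations: s = C_ns (C_ss + D)⁻¹ l. A minimal one-dimensional sketch, with a Gaussian covariance function and parameter values that are illustrative rather than taken from the paper:

```python
import numpy as np

def lsc_correct(s_obs, err_obs, s_new, corr_len=10.0, sigma2=0.04, noise2=1e-4):
    """Least squares collocation: predict the POS error at new positions.

    Errors at control points (s_obs, err_obs) are treated as a correlated
    signal with Gaussian covariance; the prediction at s_new is
    C_ns (C_ss + D)^-1 err_obs. Covariance parameters are assumptions.
    """
    s_obs = np.asarray(s_obs, float)
    s_new = np.asarray(s_new, float)
    cov = lambda a, b: sigma2 * np.exp(-((a[:, None] - b[None, :]) / corr_len) ** 2)
    C_ss = cov(s_obs, s_obs) + noise2 * np.eye(len(s_obs))  # D = noise2 * I
    C_ns = cov(s_new, s_obs)
    return C_ns @ np.linalg.solve(C_ss, np.asarray(err_obs, float))

# POS drift observed at sparse LiDAR control points along the trajectory (m)
s_ctrl = [0.0, 20.0, 40.0, 60.0]
e_ctrl = [0.00, 0.08, 0.15, 0.10]
pred = lsc_correct(s_ctrl, e_ctrl, [30.0])
print(round(float(pred[0]), 3))
```

The predicted correction at 30 m interpolates smoothly between the observed errors at 20 m and 40 m; subtracting it from the trajectory is the correction step.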

  19. Improvement of Diagnostic Accuracy by Standardization in Diuretic Renal Scan

    International Nuclear Information System (INIS)

    Hyun, In Young; Lee, Dong Soo; Lee, Kyung Han; Chung, June Key; Lee, Myung Chul; Koh, Chang Soon; Kim, Kwang Myung; Choi, Hwang; Choi, Yong

    1995-01-01

    We evaluated the diagnostic accuracy of the diuretic renal scan with standardization in 45 children (107 hydronephrotic kidneys) with 91 diuretic assessments. With standardization, sensitivity was 100%, specificity was 78%, and accuracy was 84% in 49 hydronephrotic kidneys. Without standardization, sensitivity was 100%, specificity was 38%, and accuracy was 57% in 58 hydronephrotic kidneys. False-positive results were observed in 25 cases without standardization and in 8 cases with standardization. In diuretic renal scans without standardization, the causes of false-positive results were early injection of Lasix before mixing of radioactivity (10 cases), extrarenal pelvis (6), and immature kidneys (3); with standardization, the causes of false-positive results were markedly dilated systems post-pyeloplasty (2), extrarenal pelvis (2), immature kidney of a neonate (1), severe renal dysfunction (2), and vesicoureteral reflux (1). In diuretic renal scans without standardization, false-positive results from inadequate studies were common, but such results were not found after standardization. False-positive results from dilated pelvo-calyceal systems post-pyeloplasty, extrarenal pelvis, and immature kidneys of neonates were not resolved by standardization. In conclusion, standardization improved the diagnostic accuracy of the diuretic renal scan in children with renal outflow tract obstruction by significantly improving specificity.

  20. Improving substructure identification accuracy of shear structures using virtual control system

    Science.gov (United States)

    Zhang, Dongyu; Yang, Yang; Wang, Tingqiang; Li, Hui

    2018-02-01

    Substructure identification is a powerful tool to identify the parameters of a complex structure. Previously, the authors developed an inductive substructure identification method for shear structures. The identification error analysis showed that the identification accuracy of this method is significantly influenced by the magnitudes of two key structural responses near a certain frequency; if these responses are unfavorable, the method cannot provide accurate estimation results. In this paper, a novel method is proposed to improve the substructure identification accuracy by introducing a virtual control system (VCS) into the structure. A virtual control system is a self-balanced system, which consists of some control devices and a set of self-balanced forces. The self-balanced forces counterbalance the forces that the control devices apply on the structure. The control devices are combined with the structure to form a controlled structure used to replace the original structure in the substructure identification; and the self-balanced forces are treated as known external excitations to the controlled structure. By optimally tuning the VCS's parameters, the dynamic characteristics of the controlled structure can be changed such that the original structural responses become more favorable for the substructure identification and, thus, the identification accuracy is improved. A numerical example of a 6-story shear structure is utilized to verify the effectiveness of the VCS-based controlled substructure identification method. Finally, shake table tests are conducted on a 3-story structural model to verify the efficacy of the VCS to enhance the identification accuracy of the structural parameters.

  1. How social information can improve estimation accuracy in human groups.

    Science.gov (United States)

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
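The Cauchy finding has a practical consequence worth making concrete: because a Cauchy distribution has no mean, averaging raw estimates is unstable, while the median of the log-estimates is a robust collective answer. A sketch with an arbitrary width parameter (the true quantity and scale are illustrative, not the experiments' values):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 5000.0

# Per the paper, log-estimates of a poorly known quantity are roughly
# Cauchy-distributed around the true value (width 0.3 is assumed here).
log_estimates = np.log10(true_value) + 0.3 * rng.standard_cauchy(10001)

# The Cauchy distribution has undefined mean and infinite variance, so the
# sample mean wanders; the median of the log-estimates converges reliably.
collective = 10 ** np.median(log_estimates)
print(round(float(collective)))
```

This is one way a "properly defined collective accuracy" can be read: aggregate in log space with a robust statistic, not with the arithmetic mean of raw guesses.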

  2. Improvement of vision measurement accuracy using Zernike moment based edge location error compensation model

    International Nuclear Information System (INIS)

    Cui, J W; Tan, J B; Zhou, Y; Zhang, H

    2007-01-01

    This paper presents a Zernike moment based model developed to compensate edge location errors and further improve vision measurement accuracy. The model compensates the slight changes resulting from sampling and establishes mathematical expressions for the subpixel location of theoretical and actual edges that are either vertical to or at an angle with the X-axis. Experimental results show that the proposed model can be used to achieve a vision measurement accuracy of up to 0.08 pixel, while the measurement uncertainty is less than 0.36 μm. It is therefore concluded that, as a model which achieves a significant improvement of vision measurement accuracy, the proposed model is especially suitable for edge location in images with low contrast

  3. Evaluation of scanning 2D barcoded vaccines to improve data accuracy of vaccines administered.

    Science.gov (United States)

    Daily, Ashley; Kennedy, Erin D; Fierro, Leslie A; Reed, Jenica Huddleston; Greene, Michael; Williams, Warren W; Evanson, Heather V; Cox, Regina; Koeppl, Patrick; Gerlach, Ken

    2016-11-11

    Accurately recording vaccine lot number, expiration date, and product identifiers in patient records is an important step in improving supply chain management and patient safety in the event of a recall. These data are being encoded in two-dimensional (2D) barcodes on most vaccine vials and syringes. Using electronic vaccine administration records, we evaluated the accuracy of lot number and expiration date entered using 2D barcode scanning compared to traditional manual or drop-down list entry methods. We analyzed 128,573 electronic records of vaccines administered at 32 facilities. We compared the accuracy of records entered using 2D barcode scanning with those entered using traditional methods using chi-square tests and multilevel logistic regression. When 2D barcodes were scanned, lot number data accuracy was 1.8 percentage points higher (94.3% vs 96.1%). Manufacturer, the month the vaccine was administered, and vaccine type were associated with variation in accuracy for both lot number and expiration date. Two-dimensional barcode scanning shows promise for improving the data accuracy of vaccine lot number and expiration date records. Adapting systems to further integrate with 2D barcoding could help increase adoption of 2D barcode scanning technology. Published by Elsevier Ltd.

  4. Algorithms and parameters for improved accuracy in physics data libraries

    International Nuclear Information System (INIS)

    Batič, M; Hoff, G; Pia, M G; Saracco, P; Han, M; Kim, C H; Hauf, S; Kuster, M; Seo, H

    2012-01-01

    Recent efforts for the improvement of the accuracy of physics data libraries used in particle transport are summarized. Results are reported about a large scale validation analysis of atomic parameters used by major Monte Carlo systems (Geant4, EGS, MCNP, Penelope etc.); their contribution to the accuracy of simulation observables is documented. The results of this study motivated the development of a new atomic data management software package, which optimizes the provision of state-of-the-art atomic parameters to physics models. The effect of atomic parameters on the simulation of radioactive decay is illustrated. Ideas and methods to deal with physics models applicable to different energy ranges in the production of data libraries, rather than at runtime, are discussed.

  5. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

    Full Text Available Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.

  6. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Science.gov (United States)

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.
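The drag ring brake's key property, that earlier deployment produces a larger range correction, can be reproduced with a point-mass trajectory under quadratic drag. All values below (muzzle velocity, drag constants, elevation) are illustrative, not the paper's aerodynamic data:

```python
import math

def impact_range(deploy_time, v0=300.0, angle_deg=45.0,
                 k_base=1e-4, k_brake=4e-4, dt=0.01, g=9.81):
    """Point-mass trajectory with quadratic drag; deploying the drag ring
    at deploy_time raises the drag constant from k_base to k_brake."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    while y >= 0.0:                      # integrate until impact
        k = k_brake if t >= deploy_time else k_base
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt            # drag opposes velocity
        vy -= (g + k * v * vy) * dt      # gravity plus drag
        x += vx * dt
        y += vy * dt
        t += dt
    return x

no_brake = impact_range(deploy_time=1e9)    # never deployed
late = impact_range(deploy_time=20.0)       # deployed during descent
early = impact_range(deploy_time=5.0)       # deployed shortly after launch
print(round(no_brake), round(late), round(early))
```

The ordering early < late < no_brake matches the abstract's observation: predefining the deployment time sets the size of the range correction.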

  7. Improving the Accuracy of Cloud Detection Using Machine Learning

    Science.gov (United States)

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results is testament to improved computer technology and the use of machine learning as a more efficient and accurate methodology of cloud detection.

  8. Improving treatment planning accuracy through multimodality imaging

    International Nuclear Information System (INIS)

    Sailer, Scott L.; Rosenman, Julian G.; Soltys, Mitchel; Cullip, Tim J.; Chen, Jun

    1996-01-01

    the patient's initial fields and boost, respectively. Case illustrations are shown. Conclusions: We have successfully integrated multimodality imaging into our treatment-planning system, and its routine use is increasing. Multimodality imaging holds out the promise of improving treatment planning accuracy and, thus, takes maximum advantage of three dimensional treatment planning systems.

  9. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Seoung-Hyeon Lee

    2016-01-01

    Full Text Available Beacons using bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, beacon performance is poor in terms of indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a Bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technology, which uses Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, it was assumed that the beacon signal is nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone’s indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m from the average x- and y-coordinates, respectively, based solely on the beacon signal.
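The filtering idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the beacon coordinates, the random-walk motion model, and all noise levels below are invented for the demo, and the nonlinear measurement is taken to be the range to each beacon.

```python
import numpy as np

# Hypothetical beacon positions in metres (illustrative only).
beacons = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])

def h(state):
    """Nonlinear measurement model: distance from the state to each beacon."""
    return np.linalg.norm(beacons - state, axis=1)

def H_jac(state):
    """Jacobian of h: each row is the unit vector from a beacon to the state."""
    diff = state - beacons
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

def ekf_step(x, P, z, Q, R):
    P = P + Q                              # predict (random-walk motion model)
    H = H_jac(x)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - h(x))                 # correct with the range residual
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_pos = np.array([2.0, 3.0])
x, P = np.array([1.0, 1.0]), np.eye(2)
Q, R = 0.01 * np.eye(2), 0.05 * np.eye(3)
for _ in range(50):
    z = h(true_pos) + rng.normal(0.0, 0.2, size=3)   # noisy BLE-style ranges
    x, P = ekf_step(x, P, z, Q, R)
# x should now be close to true_pos
```

Because the range measurement is nonlinear in the position, the Jacobian is re-linearized at every step, which is exactly the role the extended Kalman filter plays here.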

  10. Improving the accuracy of protein secondary structure prediction using structural alignment

    Directory of Open Access Journals (Sweden)

    Gallin Warren J

    2006-06-01

    Full Text Available Abstract Background The accuracy of protein secondary structure prediction has steadily improved over the past 30 years. Now many secondary structure prediction methods routinely achieve an accuracy (Q3) of about 75%. We believe this accuracy could be further improved by including structure (as opposed to sequence) database comparisons as part of the prediction process. Indeed, given the large size of the Protein Data Bank (>35,000 sequences), the probability of a newly identified sequence having a structural homologue is actually quite high. Results We have developed a method that performs structure-based sequence alignments as part of the secondary structure prediction process. By mapping the structure of a known homologue (sequence ID >25%) onto the query protein's sequence, it is possible to predict at least a portion of that query protein's secondary structure. By integrating this structural alignment approach with conventional (sequence-based) secondary structure methods and then combining it with a "jury-of-experts" system to generate a consensus result, it is possible to attain very high prediction accuracy. Using a sequence-unique test set of 1644 proteins from EVA, this new method achieves an average Q3 score of 81.3%. Extensive testing indicates this is approximately 4–5% better than any other method currently available. Assessments using non-sequence-unique test sets (typical of those used in proteome annotation or structural genomics) indicate that this new method can achieve a Q3 score approaching 88%. Conclusion By using both sequence and structure databases and by exploiting the latest techniques in machine learning it is possible to routinely predict protein secondary structure with an accuracy well above 80%. A program and web server, called PROTEUS, that performs these secondary structure predictions is accessible at http://wishart.biology.ualberta.ca/proteus. For high throughput or batch sequence analyses, the PROTEUS programs

  11. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared with one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level). Using spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This is especially true for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation on district level
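The interpolation step described above can be illustrated with a tiny inverse-distance-weighting sketch. The abstract does not specify which interpolator was used, and the parish coordinates and cattle counts below are invented for the demo.

```python
import numpy as np

# Invented sample: four surveyed parishes with coordinates and cattle counts.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cattle = np.array([100.0, 120.0, 80.0, 110.0])

def idw(query, xy, values, power=2.0):
    """Inverse-distance-weighted estimate at a non-sampled location."""
    d = np.linalg.norm(xy - query, axis=1)
    if np.any(d == 0.0):                # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

est = idw(np.array([0.5, 0.5]), xy, cattle)   # centre of the four samples
```

At the centre point all four samples are equidistant, so the estimate reduces to their plain mean, which is a useful sanity check for any IDW implementation.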

  12. Improving the accuracy of admitted subacute clinical costing: an action research approach.

    Science.gov (United States)

    Hakkennes, Sharon; Arblaster, Ross; Lim, Kim

    2017-08-01

    Objective The aim of the present study was to determine whether action research could be used to improve the breadth and accuracy of clinical costing data in an admitted subacute setting. Methods The setting was a 100-bed in-patient rehabilitation centre. Using a pre-post study design, all admitted subacute separations during the 2011-12 financial year were eligible for inclusion. An action research framework aimed at improving clinical costing methodology was developed and implemented. Results In all, 1499 separations were included in the study. A medical record audit of a random selection of 80 separations demonstrated that the use of an action research framework was effective in improving the breadth and accuracy of the costing data. This was evidenced by a significant increase in the average number of activities costed, a reduction in the average number of activities incorrectly costed and a reduction in the average number of activities missing from the costing, per episode of care. Conclusions Engaging clinicians and cost centre managers was effective in facilitating the development of robust clinical costing data in an admitted subacute setting. Further investigation into the value of this approach across other care types and healthcare services is warranted. What is known about this topic? Accurate clinical costing data is essential for informing price models used in activity-based funding. In Australia, there is currently a lack of robust admitted subacute cost data to inform the price model for this care type. What does this paper add? The action research framework presented in this study was effective in improving the breadth and accuracy of clinical costing data in an admitted subacute setting. What are the implications for practitioners? To improve clinical costing practices, health services should consider engaging key stakeholders, including clinicians and cost centre managers, in reviewing clinical costing methodology. 
Robust clinical costing data has

  13. Hybrid Indoor-Based WLAN-WSN Localization Scheme for Improving Accuracy Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Zahid Farid

    2016-01-01

    Full Text Available In indoor environments, WiFi received signal strength (RSS)-based localization is sensitive to various indoor fading effects and noise during transmission, which are the main causes of localization errors that affect its accuracy. Given those fading effects, positioning systems based on a single technology are ineffective in performing accurate localization. For this reason, the trend is toward the use of hybrid positioning systems (combinations of two or more wireless technologies) in indoor/outdoor localization scenarios to obtain better position accuracy. This paper presents a hybrid technique to implement indoor localization that adopts fingerprinting approaches in both WiFi and Wireless Sensor Networks (WSNs). This model exploits machine learning, in particular Artificial Neural Network (ANN) techniques, for position calculation. The experimental results show that the proposed hybrid system improved the accuracy, reducing the average distance error to 1.05 m by using ANN. Applying a Genetic Algorithm (GA)-based optimization technique did not yield any further improvement in accuracy. Compared with the GA-optimized model, the non-optimized ANN performed better in terms of accuracy, precision, stability, and computational time. These results show that the proposed hybrid technique is promising for achieving better accuracy in real-world positioning applications.
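The fingerprinting idea itself is easy to demonstrate. The paper trains an ANN on the fingerprints; the sketch below deliberately substitutes a much simpler weighted nearest-fingerprint lookup so it stays self-contained, and the RSS vectors and positions are invented for the demo.

```python
import numpy as np

# Toy fingerprint database: RSS vectors (dBm) recorded at known positions.
# Values and geometry are illustrative only.
fp_rss = np.array([[-40.0, -70.0], [-55.0, -55.0], [-70.0, -40.0]])
fp_pos = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])

def locate(rss, k=2):
    """Weighted k-nearest-fingerprint position estimate."""
    d = np.linalg.norm(fp_rss - rss, axis=1)     # distance in signal space
    idx = np.argsort(d)[:k]                      # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-9)                    # closer fingerprints weigh more
    return (w[:, None] * fp_pos[idx]).sum(axis=0) / w.sum()

pos = locate(np.array([-45.0, -65.0]))   # lies between the first two points
```

An ANN-based system replaces the lookup with a learned regression from RSS space to coordinates, which generalizes better between fingerprint points; this sketch only shows the data flow.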

  14. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Science.gov (United States)

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza–associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and use increased from 65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children 65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
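The core adjustment is simple arithmetic: if a test detects only a fraction of true cases, the observed rate is scaled by the reciprocal of the sensitivity. The numbers below are hypothetical, not FluSurv-NET values.

```python
def adjust_rate(observed_rate, sensitivity):
    """Scale an observed rate by 1/sensitivity: if the test detects only a
    fraction `sensitivity` of true cases, true cases ~= detected / sensitivity."""
    if not 0.0 < sensitivity <= 1.0:
        raise ValueError("sensitivity must be in (0, 1]")
    return observed_rate / sensitivity

observed = 20.0                          # per 100,000; hypothetical value
adjusted = adjust_rate(observed, 0.80)   # ~25 per 100,000
```

In practice the sensitivity itself varies by test type, age group, and season, which is why the study stratifies the adjustment along those dimensions.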

  15. Improvement of the accuracy of noise measurements by the two-amplifier correlation method.

    Science.gov (United States)

    Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P

    2013-10-01

    We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.
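The benefit of the two-channel cross-correlation can be shown with a synthetic time-domain stand-in for the spectral technique. All numbers are illustrative: the DUT noise (what we want to measure) is common to both channels, while each amplifier adds its own independent noise that averages out of the cross term.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
dut = rng.normal(0.0, 1.0, N)           # DUT noise, true power 1.0
ch1 = dut + rng.normal(0.0, 2.0, N)     # channel 1: DUT + amplifier 1 noise
ch2 = dut + rng.normal(0.0, 2.0, N)     # channel 2: DUT + amplifier 2 noise

single_channel_power = np.mean(ch1 * ch1)   # biased: ~1 + 4 from the amplifier
cross_power = np.mean(ch1 * ch2)            # amplifier noise averages out: ~1
```

The cross term converges to the common (DUT) power because the two amplifier noises are uncorrelated; the residual error shrinks with the averaging length, which is the accuracy mechanism the paper quantifies analytically.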

  16. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    Science.gov (United States)

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p ultrasound, respectively (p power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation of ACD prediction algorithms.

  17. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes

    Science.gov (United States)

    Ding, Quan; Besio, Walter G.

    2015-01-01

    Conventional electroencephalography with disc electrodes has major drawbacks, including poor spatial resolution, poor selectivity and a low signal-to-noise ratio, that critically limit its use. Concentric ring electrodes, consisting of several elements including a central disc and a number of concentric rings, are a promising alternative with the potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications, demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain for practical applications, finite element method model analysis was performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results suggest a significant increase in Laplacian accuracy with increasing n. PMID:26693200
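The Vandermonde step can be sketched with a simplified scalar analogue. This is not the authors' (4n + 1)-point derivation: it only shows how solving a Vandermonde-type system in r² yields ring weights whose combination estimates the Laplacian while cancelling higher-order truncation terms. The ring radii and the quadratic test field are assumptions for the demo.

```python
import numpy as np

def laplacian_weights(radii):
    """Solve the Vandermonde-type system in r^2 for ring weights c_k with
    sum_k c_k r_k^(2j) = 4 for j = 1 and 0 for j = 2..n, so that the weighted
    ring averages cancel truncation terms of higher order (the mean of v over
    a ring of radius r expands as v0 + (r^2/4) Lap(v) + O(r^4))."""
    r2 = np.asarray(radii, dtype=float) ** 2
    n = len(r2)
    A = np.array([r2 ** j for j in range(1, n + 1)])
    b = np.zeros(n)
    b[0] = 4.0
    return np.linalg.solve(A, b)

def ring_mean(v, centre, r, m=720):
    """Average of v over a circle of radius r around centre (numerical)."""
    th = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return np.mean(v(centre[0] + r * np.cos(th), centre[1] + r * np.sin(th)))

radii = [0.1, 0.2, 0.3]                      # three concentric rings, assumed
c = laplacian_weights(radii)
v = lambda x, y: x**2 + 3.0 * y**2           # Laplacian is 8 everywhere
centre = (0.4, -0.2)
est = sum(ck * (ring_mean(v, centre, rk) - v(*centre))
          for ck, rk in zip(c, radii))
# est recovers the Laplacian (8), exactly for a quadratic field
```

Adding rings (larger n) adds rows to the system, cancelling higher even-order terms, which is the mechanism behind the accuracy gain reported in the paper.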

  18. The GREAT3 challenge

    International Nuclear Information System (INIS)

    Miyatake, H; Mandelbaum, R; Rowe, B

    2014-01-01

    The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is an image analysis competition that aims to test algorithms that measure weak gravitational lensing from astronomical images. The challenge started in October 2013 and ends 30 April 2014. It focuses on testing the impact on weak lensing measurements of realistically complex galaxy morphologies, realistic point spread functions, and the combination of multiple different exposures, and it includes simulated ground- and space-based data. The details of the challenge are described in [1], and the challenge website and its leaderboard can be found at http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/, respectively.

  19. Improving Intensity-Based Lung CT Registration Accuracy Utilizing Vascular Information

    Directory of Open Access Journals (Sweden)

    Kunlin Cao

    2012-01-01

    Full Text Available Accurate pulmonary image registration is a challenging problem when the lungs undergo large deformations. In this work, we present a nonrigid volumetric registration algorithm to track lung motion between a pair of intrasubject CT images acquired at different inflation levels, and introduce a new vesselness similarity cost that improves intensity-only registration. Volumetric CT datasets from six human subjects were used in this study. The performance of four intensity-only registration algorithms was compared with and without the vesselness similarity cost function. Matching accuracy was evaluated using landmarks, the vessel tree, and fissure planes. The Jacobian determinant of the transformation was used to reveal the deformation pattern of local parenchymal tissue. The average matching error for intensity-only registration methods was on the order of 1 mm at landmarks and 1.5 mm on fissure planes. After adding the vesselness-preserving cost function, the landmark and fissure positioning errors decreased by approximately 25% and 30%, respectively. The vesselness cost function effectively helped improve registration accuracy in regions near the thoracic cage and near the diaphragm for all the intensity-only registration algorithms tested, and also helped produce more consistent and more reliable patterns of regional tissue deformation.

  20. 100 ways to make good photos great tips & techniques for improving your digital photography

    CERN Document Server

    Cope, Peter

    2013-01-01

    A practical, accessible guide to turning your good photographs into great ones whether you are shooting on the latest digital SLR or a camera phone! Discover 100 simple and fun ways to improve your photographs both in-camera and through post-processing image manipulation. Every key photographic genre is covered, from perfect portraits and the great outdoors, to travel photos and shooting at night. Filled with inspirational examples of great photographs compared against the more average images, with easy to follow techniques for how you can achieve the same results.

  1. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    Directory of Open Access Journals (Sweden)

    HosseiniAliabadi S. J.

    2015-06-01

    Full Text Available Background: The angular dependence of the response of TLD cards may cause deviations from the true value in the results of environmental dosimetry, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed rectangularly in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result: The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with higher accuracy.

  2. Achieving Climate Change Absolute Accuracy in Orbit

    Science.gov (United States)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J.; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  3. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    International Nuclear Information System (INIS)

    Saghafi, Mahdi; Ghofrani, Mohammad Bagher; D’Auria, Francesco

    2016-01-01

    Highlights: • FFTBM-SM is an improved Fast Fourier Transform Base Method by signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was Station Black-Out accident in PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Base Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insights into how the accuracy of MELCOR code predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of the Station Black-Out (SBO) accident in the PSB-VVER integral test facility with the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time intervals, based on phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time intervals, and the results are compared to those with the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated, and correction factors are calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time intervals by a series of increasing time intervals provides deeper insights into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or
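The FFTBM accuracy index and the mirroring trick can be sketched compactly. The signals below are invented; the sketch only shows the average-amplitude ratio and how mirroring removes the artificial jump between the last and first samples that distorts the spectrum.

```python
import numpy as np

def fftbm_aa(exp, calc, mirror=True):
    """Average-amplitude (AA) index of FFTBM: spectrum of the error divided
    by the spectrum of the experimental signal. With mirror=True (FFTBM-SM)
    each signal is extended by its mirror image, removing the artificial
    discontinuity seen by the FFT between the last and first samples."""
    err = np.asarray(calc, dtype=float) - np.asarray(exp, dtype=float)
    ref = np.asarray(exp, dtype=float)
    if mirror:
        err = np.concatenate([err, err[::-1]])
        ref = np.concatenate([ref, ref[::-1]])
    return np.sum(np.abs(np.fft.rfft(err))) / np.sum(np.abs(np.fft.rfft(ref)))

t = np.linspace(0.0, 10.0, 512)
exp = np.exp(-0.3 * t)               # "experimental" trend (illustrative)
calc = np.exp(-0.33 * t) + 0.01      # code prediction with a small bias
aa = fftbm_aa(exp, calc)             # lower AA means better agreement
```

A perfect prediction gives AA = 0, and larger discrepancies raise the index; the paper's contribution is showing that the mirrored variant keeps the index stable when signals change sharply.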

  4. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Saghafi, Mahdi [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); Ghofrani, Mohammad Bagher, E-mail: ghofrani@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); D’Auria, Francesco [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, San Piero a Grado, Pisa (Italy)

    2016-11-15

    Highlights: • FFTBM-SM is an improved Fast Fourier Transform Base Method by signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was Station Black-Out accident in PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Base Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insights into how the accuracy of MELCOR code predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of the Station Black-Out (SBO) accident in the PSB-VVER integral test facility with the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time intervals, based on phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time intervals, and the results are compared to those with the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated, and correction factors are calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time intervals by a series of increasing time intervals provides deeper insights into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or

  5. a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error; high-precision time interval data cannot be obtained with them. In order to obtain higher-quality remote sensing cloud images based on time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor and sampling the change of the capacitor voltage at the same time. First, an approximate model of the capacitor voltage curve during the time of flight of the pulse is fitted from the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
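The fit-then-read-off idea can be sketched numerically. All numbers are hypothetical, and for simplicity a linear (constant-current) ramp stands in for the exponential RC charging curve the paper fits: the principle of fitting the sampled voltage and recovering the interval from the fitted curve is the same.

```python
import numpy as np

rng = np.random.default_rng(2)
slope_true = 0.8                       # V/ns charging rate, illustrative
t_true = 37.5                          # true pulse time of flight, ns
v_stop = slope_true * t_true           # voltage held when the echo arrives

# High-speed ADC samples taken during the first part of the charge.
ts = np.linspace(0.0, 30.0, 60)
vs = slope_true * ts + rng.normal(0.0, 0.01, ts.size)

# Fit the charging model to the samples, then read the whole charging
# time off the fitted curve, as the abstract describes.
slope, offset = np.polyfit(ts, vs, 1)
t_est = (v_stop - offset) / slope      # estimated time of flight, ns
```

Because the interval comes from a least-squares fit over many samples rather than a single threshold crossing, the sampling noise averages down, which is the source of the accuracy gain claimed for the method.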

  6. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is parcel-based information specifically designed to define the limits of parcel boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatially based technology, especially Geographical Information System (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization means the new cadastral database is no longer based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected issues that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  7. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    Science.gov (United States)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually considered a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies showed that prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  8. New polymorphic tetranucleotide microsatellites improve scoring accuracy in the bottlenose dolphin Tursiops aduncus

    NARCIS (Netherlands)

    Nater, Alexander; Kopps, Anna M.; Kruetzen, Michael

    We isolated and characterized 19 novel tetranucleotide microsatellite markers in the Indo-Pacific bottlenose dolphin (Tursiops aduncus) in order to improve genotyping accuracy in applications like large-scale population-wide paternity and relatedness assessments. One hundred T. aduncus from Shark

  9. Solving the stability-accuracy-diversity dilemma of recommender systems

    Science.gov (United States)

    Hou, Lei; Liu, Kecheng; Liu, Jianguo; Zhang, Runtong

    2017-02-01

    Recommender systems are of great significance in predicting potentially interesting items based on a target user's historical selections. However, the recommendation list for a specific user has been found to change vastly when the system changes, due to the unstable quantification of item similarities, which is defined as the recommendation stability problem. Improving similarity stability and recommendation stability is crucial for enhancing the user experience and for better understanding user interests. While both the stability and the accuracy of recommendation could be guaranteed by recommending only popular items, studies have addressed the necessity of diversity, which requires the system to recommend unpopular items. By ranking the similarities in terms of stability and considering only the most stable ones, we present a top-n-stability method based on the Heat Conduction algorithm (denoted as TNS-HC henceforth) for solving the stability-accuracy-diversity dilemma. Experiments on four benchmark data sets indicate that the TNS-HC algorithm could significantly improve recommendation stability and accuracy simultaneously and still retain the high-diversity nature of the Heat Conduction algorithm. Furthermore, we compare the performance of the TNS-HC algorithm with a number of benchmark recommendation algorithms. The results suggest that the TNS-HC algorithm is more efficient in solving the stability-accuracy-diversity triple dilemma of recommender systems.
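For context, the baseline Heat Conduction (HeatS) scoring that TNS-HC builds on can be sketched in a few lines. The stability-ranked top-n truncation of similarities described in the abstract is not reproduced here, and the function name is illustrative; the sketch assumes every user and item has at least one link.

```python
import numpy as np

def heat_conduction_scores(A, user):
    """HeatS item scores for one target user.
    A: binary user-item matrix (n_users x n_items).
    W[a, b] = (1 / k_a) * sum_u A[u, a] * A[u, b] / k_u,
    where k_a is the degree of item a and k_u the degree of user u."""
    A = np.asarray(A, dtype=float)
    ku = A.sum(axis=1)  # user degrees
    ki = A.sum(axis=0)  # item degrees
    W = (A / ku[:, None]).T @ A / ki[:, None]
    f0 = A[user]                 # initial "heat": the user's collected items
    scores = W @ f0
    scores[f0 > 0] = -np.inf     # do not re-recommend collected items
    return scores
```

Items are then ranked by score; TNS-HC would additionally keep only the most stable similarities when building W.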

  10. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Directory of Open Access Journals (Sweden)

    Nina Merkle

    2017-06-01

    Full Text Available Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks like monitoring by image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source for improving the geo-location accuracy of optical images, which is in the order of tens of meters. In this paper, a deep learning-based approach for improving the geo-localization accuracy of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. The results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision than state-of-the-art approaches.
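As a classical point of comparison for the learned matching, the two-dimensional shift between two patches can be estimated by phase correlation. This simple spectral method is not the paper's Siamese network; it illustrates the shift-estimation task that the network learns to perform robustly across the SAR/optical modality gap.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer 2-D translation of `mov` relative to `ref`
    via phase correlation (normalized cross-power spectrum)."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak position to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Phase correlation works well for same-modality patches; the cross-modal SAR-to-optical case is exactly where such intensity-based methods break down and a learned similarity becomes necessary.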

  11. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracies of the various categories on a classified remotely sensed image are usually evaluated by two different measures, namely, producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent the other categories are not misclassified into the category in question. The UA of the various categories therefore determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed to determine whether the UA of the various categories improves on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using IRS LISS III images from two different years, 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years, pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results demonstrated a significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of spectral information from twice the number of bands across the two years. The classified image of the principal components, on the other hand, showed no improvement in UA compared with the original images.
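Both measures come directly from the error (confusion) matrix of the classification: PA divides each diagonal entry by its row (reference) total, UA by its column (classified) total. A minimal sketch, with a hypothetical three-class matrix:

```python
import numpy as np

def producers_users_accuracy(cm):
    """Per-class producer's accuracy (PA) and user's accuracy (UA) from a
    confusion matrix whose rows are reference (true) classes and whose
    columns are classified (predicted) classes."""
    cm = np.asarray(cm, dtype=float)
    pa = np.diag(cm) / cm.sum(axis=1)  # correct / all reference pixels of class
    ua = np.diag(cm) / cm.sum(axis=0)  # correct / all pixels assigned to class
    return pa, ua

# Hypothetical 3-class confusion matrix (rows: reference, columns: classified)
cm = [[50, 5, 5],
      [10, 80, 10],
      [5, 15, 80]]
pa, ua = producers_users_accuracy(cm)
```

A category can have high PA but low UA (or vice versa), which is why the study evaluates them separately.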

  12. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Science.gov (United States)

    Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R

    2010-09-29

    Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off classically observed during emigrations. These findings should be taken into account

  13. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  14. Hybrid Brain-Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review.

    Science.gov (United States)

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  15. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  16. CADASTRAL POSITIONING ACCURACY IMPROVEMENT: A CASE STUDY IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    N. M. Hashim

    2016-09-01

    Full Text Available A cadastral map is a parcel-based information product specifically designed to define the limits of property boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatially based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. As a result of this modernization, the new cadastral database will no longer be based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, the reform has raised unexpected questions that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database must be further treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The result of this review will serve as a foundation for investigating a systematic and effective method for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  17. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    Science.gov (United States)

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  18. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    Science.gov (United States)

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
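The sensitivity and specificity figures quoted above come from comparing collected data fields against a gold standard. For a binary field the computation is straightforward (the function name below is illustrative, not from the paper):

```python
def sensitivity_specificity(recorded, gold):
    """Compare recorded boolean values against a gold-standard list and
    return (sensitivity, specificity).
    sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for r, g in zip(recorded, gold) if r and g)
    fn = sum(1 for r, g in zip(recorded, gold) if not r and g)
    tn = sum(1 for r, g in zip(recorded, gold) if not r and not g)
    fp = sum(1 for r, g in zip(recorded, gold) if r and not g)
    return tp / (tp + fn), tn / (tn + fp)
```

Tracking these two rates per data element, as the authors did for pre-admission medications and admission/discharge logs, identifies which fields need a redesigned collection process.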

  19. Does PACS improve diagnostic accuracy in chest radiograph interpretations in clinical practice?

    International Nuclear Information System (INIS)

    Hurlen, Petter; Borthne, Arne; Dahl, Fredrik A.; Østbye, Truls; Gulbrandsen, Pål

    2012-01-01

    Objectives: To assess the impact of a Picture Archiving and Communication System (PACS) on the diagnostic accuracy of the interpretation of chest radiology examinations in a “real life” radiology setting. Materials and methods: During a period before PACS was introduced to radiologists, when images were still interpreted on film and reported on paper, images and reports were also digitally stored in an image database. The same database was used after the PACS introduction. This provided a unique opportunity to conduct a blinded retrospective study, comparing sensitivity (the main outcome parameter) in the pre- and post-PACS periods. We selected 56 digitally stored chest radiograph examinations that were originally read and reported on film, and 66 examinations that were read and reported on screen 2 years after the PACS introduction. Each examination was assigned a random number, and both reports and images were scored independently for pathological findings. The blinded retrospective scores for the original reports were then compared with the scores for the images (the gold standard). Results: Sensitivity was improved after the PACS introduction. When both certain and uncertain findings were included, this improvement was statistically significant. There were no other statistically significant changes. Conclusion: The result is consistent with prospective studies concluding that diagnostic accuracy is at least not reduced after PACS introduction. The sensitivity may even be improved.

  20. Improving the accuracy of acetabular cup implantation using a bulls-eye spirit level.

    Science.gov (United States)

    Macdonald, Duncan; Gupta, Sanjay; Ohly, Nicholas E; Patil, Sanjeev; Meek, R; Mohammed, Aslam

    2011-01-01

    Acetabular introducers have a built-in inclination of 45 degrees to the handle shaft. With patients in the lateral position, surgeons aim to align the introducer shaft vertical to the floor to implant the acetabulum at 45 degrees. We aimed to determine if a bulls-eye spirit level attached to an introducer improved the accuracy of implantation. A small circular bulls-eye spirit level was attached to the handle of an acetabular introducer. A saw bone hemipelvis was fixed to a horizontal, flat surface. A cement substitute was placed in the acetabulum and subjects were asked to implant a polyethylene cup, aiming to obtain an angle of inclination of 45 degrees. Two attempts were made with the spirit level masked and two with it unmasked. The distance of the air bubble from the spirit level's center was recorded by a single assessor. The angle of inclination of the acetabular component was then calculated. Subjects included both orthopedic consultants and trainees. Twenty-five subjects completed the study. Accuracy of acetabular implantation when using the unmasked spirit level improved significantly in all grades of surgeon. With the spirit level masked, 12 out of 50 attempts were accurate at 45 degrees inclination; 11 out of 50 attempts were "open," with greater than 45 degrees of inclination, and 27 were "closed," with less than 45 degrees. With the spirit level visible, all subjects achieved an inclination angle of exactly 45 degrees. A simple device attached to the handle of an acetabular introducer can significantly improve the accuracy of implantation of a cemented cup into a saw bone pelvis in the lateral position.

  1. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    International Nuclear Information System (INIS)

    Wang, Yongbo; Wu, Huapeng; Handroos, Heikki

    2013-01-01

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as the given external measurement device.
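The identification strategy, minimizing the discrepancy between measured and modeled end-effector poses with Differential Evolution, can be sketched on a toy model. The 2-link planar arm below is an illustrative stand-in for the paper's POE-based model of the 10-DOF hybrid robot, and SciPy's `differential_evolution` stands in for whatever DE implementation the authors used.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fk(lengths, q):
    """Forward kinematics of a 2-link planar arm (toy stand-in for the
    robot's POE-based kinematic model)."""
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

rng = np.random.default_rng(0)
true_lengths = np.array([1.02, 0.58])            # "real" (perturbed) parameters
poses = [rng.uniform(-np.pi, np.pi, 2) for _ in range(20)]
measured = [fk(true_lengths, q) for q in poses]  # external measurement device

def cost(params):
    # total position residual over all measured configurations
    return sum(np.linalg.norm(fk(params, q) - m) for q, m in zip(poses, measured))

# search within the allowed tolerance limits around the nominal parameters
res = differential_evolution(cost, bounds=[(0.9, 1.1), (0.5, 0.7)], seed=1)
```

With noiseless measurements the global minimum of the cost is at the true parameters, so `res.x` recovers them; in the paper, the identified errors feed back into the POE model before the forward kinematics are re-solved.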

  2. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as the given external measurement device.

  3. Accuracy of the improved quasistatic space-time method checked with experiment

    International Nuclear Information System (INIS)

    Kugler, G.; Dastur, A.R.

    1976-10-01

    Recent experiments performed at the Savannah River Laboratory have made it possible to check the accuracy of numerical methods developed to simulate space-dependent neutron transients. The experiments were specifically designed to emphasize delayed neutron holdback. The CERBERUS code using the IQS (Improved Quasistatic) method has been developed to provide a practical yet accurate tool for spatial kinetics calculations of CANDU reactors. The code was tested on the Savannah River experiments and excellent agreement was obtained. (author)

  4. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections.

    Science.gov (United States)

    Linsenmeyer, Katherine; Strymish, Judith; Gupta, Kalpana

    2015-12-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
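The reported effect size follows from the two accuracy rates: an odds ratio compares the odds of accurate therapy with and without concordance. A quick check against the abstract's numbers (the exact published value of 6.9 comes from the underlying counts, so the rate-based estimate is only approximate):

```python
def odds_ratio(p_exposed, p_unexposed):
    """Odds ratio for two event probabilities."""
    odds1 = p_exposed / (1 - p_exposed)
    odds0 = p_unexposed / (1 - p_unexposed)
    return odds1 / odds0

# Accuracy rates from the abstract: 76% when empiric therapy was
# concordant with prior culture data vs. 32% when it was not.
approx_or = odds_ratio(0.76, 0.32)  # close to the reported 6.9
```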

  5. Accuracy of genomic selection in European maize elite breeding populations.

    Science.gov (United States)

    Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C

    2012-03-01

    Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data from six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping in unreplicated field trials at 3-4 locations. As up to three generations per year are feasible in maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
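The evaluation scheme, fitting a whole-genome regression on training folds and scoring the correlation between predicted and observed phenotypes on the held-out fold, can be sketched as follows. Ridge regression with a fixed shrinkage parameter stands in for the paper's random regression BLUP, which estimates the shrinkage from variance components; the function name and simulated data are illustrative.

```python
import numpy as np

def cv_prediction_accuracy(X, y, k=5, lam=1.0, seed=0):
    """k-fold cross-validated prediction accuracy: Pearson correlation
    between ridge-regression predictions and observed phenotypes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty(len(y))
    for test in folds:
        train = np.setdiff1d(idx, test)
        Xt, yt = X[train], y[train]
        # closed-form ridge solution on the training fold
        beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(X.shape[1]), Xt.T @ yt)
        preds[test] = X[test] @ beta
    return np.corrcoef(preds, y)[0, 1]

# Simulated marker data: 200 testcross progenies, 50 biallelic markers
rng = np.random.default_rng(42)
X = rng.integers(0, 3, (200, 50)).astype(float)   # genotype codes 0/1/2
b = rng.standard_normal(50)                       # marker effects
y = X @ b + 0.5 * rng.standard_normal(200)        # phenotype = signal + noise
accuracy = cv_prediction_accuracy(X, y)
```

With this noiseless-dominated simulation the accuracy is close to 1; in real data it is bounded by trait heritability, which is why grain yield (0.58) scores below grain moisture (0.90).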

  6. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Directory of Open Access Journals (Sweden)

    Nathalie Stroeymeyt

    Full Text Available BACKGROUND: Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. METHODOLOGY/PRINCIPAL FINDINGS: Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. CONCLUSIONS/SIGNIFICANCE: These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off classically observed during emigrations.

  7. Improving the Accuracy of NMR Structures of Large Proteins Using Pseudocontact Shifts as Long-Range Restraints

    Energy Technology Data Exchange (ETDEWEB)

    Gaponenko, Vadim [National Cancer Institute, Structural Biophysics Laboratory (United States); Sarma, Siddhartha P. [Indian Institute of Science, Molecular Biophysics Unit (India); Altieri, Amanda S. [National Cancer Institute, Structural Biophysics Laboratory (United States); Horita, David A. [Wake Forest University School of Medicine, Department of Biochemistry (United States); Li, Jess; Byrd, R. Andrew [National Cancer Institute, Structural Biophysics Laboratory (United States)], E-mail: rabyrd@ncifcrf.gov

    2004-03-15

    We demonstrate improved accuracy in protein structure determination for large (≥30 kDa), deuterated proteins (e.g. STAT4-NT) via the combination of pseudocontact shifts for amide and methyl protons with the available NOEs in methyl-protonated proteins. The improved accuracy is cross-validated by Q-factors determined from residual dipolar couplings measured as a result of magnetic susceptibility alignment. The paramagnet is introduced via binding to thiol-reactive EDTA, and multiple sites can be serially engineered to obtain data from alternative orientations of the paramagnetic anisotropic susceptibility tensor. The technique is advantageous for systems where the target protein has strong interactions with known alignment media.
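
The pseudocontact shift itself follows a standard geometric equation. The sketch below evaluates the common axial/rhombic form for a nucleus at spherical coordinates (r, θ, φ) in the susceptibility tensor frame; it is a generic illustration of that textbook equation, not the authors' restraint software, and the prefactor convention and example values are assumptions.

```python
import math

def pseudocontact_shift(r, theta, phi, dchi_ax, dchi_rh):
    """Axial/rhombic form of the pseudocontact shift (PCS) equation.

    r (m), theta/phi (rad): nuclear position relative to the paramagnetic
    center in the susceptibility tensor frame; dchi_ax/dchi_rh (m^3):
    axial and rhombic anisotropies of the magnetic susceptibility.
    Returns a dimensionless fractional shift (multiply by 1e6 for ppm).
    """
    geom_ax = 3.0 * math.cos(theta) ** 2 - 1.0
    geom_rh = 1.5 * math.sin(theta) ** 2 * math.cos(2.0 * phi)
    return (dchi_ax * geom_ax + dchi_rh * geom_rh) / (12.0 * math.pi * r ** 3)
```

Note the distinctive geometry: the axial term vanishes at the magic angle (cos²θ = 1/3), which is why PCSs act as long-range orientational restraints rather than pure distance restraints.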

  8. Selecting fillers on emotional appearance improves lineup identification accuracy.

    Science.gov (United States)

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  9. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Full Text Available Abstract Background: Microarrays enable high-throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results: We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion: Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improve the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.

  10. Improving Odometric Accuracy for an Autonomous Electric Cart.

    Science.gov (United States)

    Toledo, Jonay; Piñeiro, Jose D; Arnay, Rafael; Acosta, Daniel; Acosta, Leopoldo

    2018-01-12

    In this paper, a study of the odometric system for the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart movement equations and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameters and the distance between wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated with a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.
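
The pose update underlying such an odometric model can be sketched as a standard differential-drive dead-reckoning step, with the wheel diameters and track width as the parameters a least-squares fit would calibrate. This is a textbook formulation under assumed parameter names, not the Verdino code:

```python
import math

def update_pose(x, y, heading, ticks_l, ticks_r,
                ticks_per_rev, diam_l, diam_r, track):
    """One dead-reckoning step for a differential-drive cart.

    diam_l, diam_r (wheel diameters) and track (distance between wheels)
    are the calibration parameters; values here are illustrative.
    """
    dist_l = math.pi * diam_l * ticks_l / ticks_per_rev
    dist_r = math.pi * diam_r * ticks_r / ticks_per_rev
    d = 0.5 * (dist_l + dist_r)            # forward travel
    dtheta = (dist_r - dist_l) / track     # heading change
    heading_mid = heading + 0.5 * dtheta   # midpoint integration
    return (x + d * math.cos(heading_mid),
            y + d * math.sin(heading_mid),
            heading + dtheta)
```

A least-squares calibration would adjust diam_l, diam_r and track so that poses integrated by repeated calls to this update best match externally measured ground-truth poses.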

  11. Accuracy improvement in the TDR-based localization of water leaks

    Directory of Open Access Journals (Sweden)

    Andrea Cataldo

    Full Text Available A time domain reflectometry (TDR)-based system for the localization of water leaks has recently been developed by the authors. This system, which employs wire-like sensing elements installed along the underground pipes, has proven immune to the limitations that affect traditional, acoustic leak-detection systems. Starting from the positive results obtained thus far, in this work an improvement of this TDR-based system is proposed. More specifically, the possibility of employing a low-cost, water-absorbing sponge placed around the sensing element to enhance the accuracy of leak localization is addressed. To this end, laboratory experiments were carried out mimicking a water leakage condition, and two sensing elements (one embedded in a sponge and one without) were used comparatively to identify the position of the leak through TDR measurements. Results showed that, thanks to the water retention capability of the sponge (which keeps the leaked water more localized), the sensing element embedded in the sponge leads to a higher accuracy in the evaluation of the position of the leak.
    Keywords: Leak localization, TDR, Time domain reflectometry, Water leaks, Underground water pipes
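
Independently of the sponge refinement, TDR leak localization reduces to a time-of-flight computation: the leak changes the local impedance of the sensing element, and the resulting reflection arrives after a round-trip delay proportional to the distance. A minimal sketch, with the propagation velocity factor as an assumed illustrative value:

```python
def leak_distance(delay_s, velocity_factor=0.7):
    """Distance along the sensing element to an impedance discontinuity.

    TDR locates a fault from the round-trip delay of the reflected pulse:
    d = v * t / 2, where v is the propagation velocity along the line
    (a fraction of c). velocity_factor = 0.7 is an assumed typical value;
    in practice it must be measured for the actual cable and soil.
    """
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return 0.5 * velocity_factor * c * delay_s
```

For example, a reflection delayed by 100 ns on a line with velocity factor 0.7 places the discontinuity roughly 10.5 m down the sensing element.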

  12. Improving the accuracy of energy baseline models for commercial buildings with occupancy data

    International Nuclear Information System (INIS)

    Liang, Xin; Hong, Tianzhen; Shen, Geoffrey Qiping

    2016-01-01

    Highlights: • We evaluated several baseline models for predicting energy use in buildings. • Including occupancy data improved the accuracy of baseline model predictions. • Occupancy is highly correlated with energy use in buildings. • This simple approach can be used in decision-making for energy retrofit projects. - Abstract: More than 80% of energy is consumed during the operation phase of a building’s life cycle, so energy efficiency retrofit for existing buildings is considered a promising way to reduce energy use in buildings. The investment strategies of retrofit depend on the ability to quantify energy savings by “measurement and verification” (M&V), which compares actual energy consumption to how much energy would have been used without the retrofit (called the “baseline” of energy use). Although numerous models exist for predicting the baseline of energy use, a critical limitation is that occupancy has not been included as a variable. However, occupancy rate strongly affects energy consumption, as previous studies have emphasized. This study develops a new baseline model which is built upon the Lawrence Berkeley National Laboratory (LBNL) model but includes the use of building occupancy data. The study also proposes metrics to quantify the accuracy of prediction and the impacts of variables. However, the results show that including occupancy data does not significantly improve the accuracy of the baseline model, especially for HVAC load. The reasons are discussed further. In addition, a sensitivity analysis is conducted to show the influence of parameters in baseline models. The results from this study can help us understand the influence of occupancy on energy use, improve energy baseline prediction by including the occupancy factor, reduce the risks of M&V and facilitate investment strategies of energy efficiency retrofit.
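
The general idea of adding occupancy to a baseline model can be illustrated with an ordinary least-squares regression in which occupancy enters as an explanatory variable alongside weather. The linear form and variable choice below are illustrative assumptions, not the LBNL model the paper builds on:

```python
def fit_baseline(rows):
    """Fit energy ~ b0 + b1*outdoor_temp + b2*occupancy by ordinary
    least squares (normal equations, naive Gaussian elimination).

    rows: list of (outdoor_temp, occupancy_rate, energy_use) tuples.
    Illustrative only: a real M&V baseline would use the project's
    chosen model form and more predictors.
    """
    X = [[1.0, t, o] for t, o, _ in rows]
    y = [e for _, _, e in rows]
    n = 3
    # Normal equations A b = c, with A = X^T X and c = X^T y.
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    c = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):                      # forward elimination
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            c[j] -= f * c[i]
    b = [0.0] * n                           # back substitution
    for i in range(n - 1, -1, -1):
        b[i] = (c[i] - sum(A[i][k] * b[k] for k in range(i + 1, n))) / A[i][i]
    return b
```

Comparing the fit quality of this model against the same model without the occupancy column is exactly the kind of test the abstract describes.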

  13. Preoperative Measurement of Tibial Resection in Total Knee Arthroplasty Improves Accuracy of Postoperative Limb Alignment Restoration

    Directory of Open Access Journals (Sweden)

    Pei-Hui Wu

    2016-01-01

    Conclusions: Using conventional surgical instruments, preoperative measurement of resection thickness of the tibial plateau on radiographs could improve the accuracy of conventional surgical techniques.

  14. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  15. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    Science.gov (United States)

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolating method with an iterative calculation, that enhances the resolution of spectral reflectance by removing from it the spectral broadening effect due to the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources, and it is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
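
Iteratively removing a known broadening from a measured spectrum can be sketched with a Van Cittert-style correction, in which each pass adds back the difference between the measurement and the re-broadened current estimate. The kernel, grid, and edge handling below are illustrative assumptions, not the paper's algorithm (which additionally interpolates between LED channels):

```python
def deblur(measured, kernel, iterations=50):
    """Van Cittert-style iterative correction.

    measured: spectrum sampled on a uniform grid.
    kernel:   normalized source lineshape on the same grid (odd length).
    Each iteration applies estimate += (measured - broaden(estimate)),
    which converges when the broadening operator is well-behaved.
    """
    n, kn = len(measured), len(kernel)
    half = kn // 2

    def broaden(r):
        # Convolve with the source lineshape, clamping at the edges.
        out = []
        for i in range(n):
            s = 0.0
            for j, w in enumerate(kernel):
                k = min(max(i + j - half, 0), n - 1)
                s += w * r[k]
            out.append(s)
        return out

    estimate = list(measured)
    for _ in range(iterations):
        blurred = broaden(estimate)
        estimate = [e + (m - b) for e, m, b in zip(estimate, measured, blurred)]
    return estimate
```

With a smoothing kernel whose frequency response stays in (0, 1], each pass shrinks the remaining broadening error, so sharp spectral features are progressively restored.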

  16. Significant improvement of accuracy and precision in the determination of trace rare earths by fluorescence analysis

    International Nuclear Information System (INIS)

    Ozawa, L.; Hersh, H.N.

    1976-01-01

    Most of the rare earths in yttrium, gadolinium and lanthanum oxides emit characteristic fluorescent line spectra under irradiation with photons, electrons and x rays. The sensitivity and selectivity of the rare earth fluorescences are high enough to determine trace amounts (0.01 to 100 ppm) of rare earths. The absolute fluorescent intensities of solids, however, are markedly affected by the synthesis procedure, level of contamination and crystal perfection, resulting in poor accuracy and low precision for the method (larger than 50 percent error). Special care in preparation of the samples is required to obtain good accuracy and precision. It is found that the accuracy and precision for the determination of trace (less than 10 ppm) rare earths by fluorescence analysis improve significantly, while still maintaining the sensitivity, when the determination is made by comparing the ratio of the fluorescent intensities of the trace rare earths to that of a deliberately added rare earth as reference. The variation in the absolute fluorescent intensity remains, but is compensated for by measuring the fluorescent line intensity ratio. Consequently, the determination of trace rare earths (with less than 3 percent error) is easily made by a photoluminescence technique in which the rare earths are excited directly by photons. Accuracy is still maintained when the absolute fluorescent intensity is reduced by 50 percent through contamination by Ni, Fe, Mn or Pb (about 100 ppm). Determination accuracy is also improved for fluorescence analysis by electron excitation and x-ray excitation. For some rare earths, however, accuracy by these techniques is reduced because indirect excitation mechanisms are involved. The excitation mechanisms and the interferences between rare earths are also reported

  17. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.

  19. Improving Odometric Accuracy for an Autonomous Electric Cart

    Directory of Open Access Journals (Sweden)

    Jonay Toledo

    2018-01-01

    Full Text Available In this paper, a study of the odometric system for the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart movement equations and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameters and the distance between wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated with a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is also used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.

  20. Improvement of Measurement Accuracy of Coolant Flow in a Test Loop

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jintae; Kim, Jong-Bum; Joung, Chang-Young; Ahn, Sung-Ho; Heo, Sung-Ho; Jang, Seoyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this study, to improve the measurement accuracy of coolant flow in a coolant flow simulator, the suppression of external noise is enhanced by adding a ground pattern in the control panel and earthing the signal cables. In addition, a heating unit is added to strengthen the fluctuation signal by heating the coolant, because the source of the signal is heat energy. Experimental results using the improved system show good agreement with the reference flow rate. The measurement error is reduced dramatically compared with the previous measurement accuracy, which will help to analyze the performance of nuclear fuels. For further work, an out-of-pile test will be carried out by fabricating a test rig mockup to inspect the feasibility of the developed system. To verify the performance of a newly developed nuclear fuel, an irradiation test needs to be carried out in the research reactor to measure irradiation behavior such as fuel temperature, fission gas release, neutron dose, coolant temperature, and coolant flow rate. In particular, the heat generation rate of nuclear fuels can be measured indirectly by measuring the temperature variation of the coolant which passes by the fuel rod and its flow rate. However, it is very difficult to measure the flow rate of coolant at the fuel rod owing to the narrow gap between components of the test rig. In nuclear fields, noise analysis using thermocouples in the test rig has been applied to measure the flow velocity of coolant which circulates through the test loop.
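
Thermocouple noise analysis of the kind mentioned at the end of the abstract is commonly implemented as a transit-time cross-correlation: temperature fluctuations seen by an upstream sensor reappear downstream after a delay set by the flow velocity and the sensor spacing. A generic sketch under assumed spacing and sampling-rate values, not the instrumentation code of the test loop:

```python
def flow_velocity(upstream, downstream, sensor_spacing_m, sample_rate_hz):
    """Estimate flow velocity by transit-time noise analysis.

    The lag that maximizes the cross-correlation between the two
    thermocouple signals gives the fluctuation travel time over a
    known sensor spacing, so v = spacing / transit_time.
    """
    n = len(upstream)
    best_lag, best_score = 1, float("-inf")
    for lag in range(1, n // 2):
        score = sum(upstream[i] * downstream[i + lag] for i in range(n - lag))
        score /= (n - lag)  # normalize by overlap length
        if score > best_score:
            best_lag, best_score = lag, score
    transit_time = best_lag / sample_rate_hz
    return sensor_spacing_m / transit_time
```

The integer-lag resolution limits velocity accuracy at high flow rates; interpolating around the correlation peak is a common refinement.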

  1. Improving calibration accuracy in gel dosimetry

    International Nuclear Information System (INIS)

    Oldham, M.; McJury, M.; Webb, S.; Baustert, I.B.; Leach, M.O.

    1998-01-01

    A new method of calibrating gel dosimeters (applicable to both Fricke and polyacrylamide gels) is presented which has intrinsically higher accuracy than current methods and requires less gel. Two test-tubes of gel (inner diameter 2.5 cm, length 20 cm) are irradiated separately with a 10 × 10 cm² field end-on in a water bath, such that the characteristic depth-dose curve is recorded in the gel. The calibration is then determined by fitting the depth-dose measured in water against the measured change in relaxivity with depth in the gel. Increased accuracy is achieved in this simple depth-dose geometry by averaging the relaxivity at each depth. A large number of calibration data points, each with relatively high accuracy, are obtained. Calibration data over the full range of dose (1.6-10 Gy) are obtained by irradiating one test-tube to 10 Gy at dose maximum (Dmax) and the other to 4.5 Gy at Dmax. The new calibration method is compared with a 'standard method' in which five identical test-tubes of gel were irradiated to different known doses between 2 and 10 Gy. The percentage uncertainties in the slope and intercept of the calibration fit are found to be lower with the new method by factors of about 4 and 10, respectively, when compared with the standard method and with published values. The gel was found to respond linearly within the error bars up to doses of 7 Gy, with a slope of 0.233±0.001 s⁻¹ Gy⁻¹ and an intercept of 1.106±0.005 s⁻¹. For higher doses, nonlinear behaviour was observed. (author)
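
The calibration step described above amounts to averaging the relaxivity over the tube cross-section at each depth and then fitting relaxivity against the known water depth-dose by linear least squares. A minimal sketch with illustrative data structures (the depth-dose function and the sample layout per depth are assumptions):

```python
def calibrate(depths, dose_at_depth, relaxivity_maps):
    """Depth-dose gel calibration sketch.

    depths:           sampled depths along the test-tube axis.
    dose_at_depth:    callable giving the water-measured dose at a depth.
    relaxivity_maps:  per depth, a list of relaxivity samples across the
                      tube cross-section (averaged to reduce noise).
    Returns (slope, intercept) of relaxivity = slope * dose + intercept.
    """
    r1 = [sum(samples) / len(samples) for samples in relaxivity_maps]
    dose = [dose_at_depth(z) for z in depths]
    n = len(dose)
    mx = sum(dose) / n
    my = sum(r1) / n
    sxx = sum((x - mx) ** 2 for x in dose)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dose, r1))
    slope = sxy / sxx
    return slope, my - slope * mx
```

Because every depth contributes one (dose, mean relaxivity) pair, the depth-dose geometry yields many calibration points from just two test-tubes, which is the source of the reduced fit uncertainty.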

  2. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a method of background noise estimation based on the tensor product expansion with a median and a Monte Carlo simulation. We have shown previously that a tensor product expansion with an absolute error method is effective for estimating background noise; however, the conventional method may not estimate the background noise properly. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.

  3. Improving Accuracy of Dempster-Shafer Theory Based Anomaly Detection Systems

    Directory of Open Access Journals (Sweden)

    Ling Zou

    2014-07-01

    Full Text Available While the Dempster-Shafer theory of evidence has been widely used in anomaly detection, there are some issues with it. The Dempster-Shafer theory of evidence trusts all evidence equally, which does not hold in a distributed-sensor ADS. Moreover, pieces of evidence are sometimes dependent on each other, which can lead to false alerts. We propose an improvement incorporating two algorithms. A feature-selection algorithm employs Gaussian graphical models to discover correlations between candidate features; a group of suitable ADSs is selected to perform detection, and the detection results are sent to the fusion engine. A weight-estimation algorithm applies information gain to set a weight for every feature. A weighted Dempster-Shafer theory of evidence then combines the detection results to achieve better accuracy. We evaluate our detection prototype through a set of experiments conducted with the standard benchmark Wisconsin Breast Cancer Dataset and real Internet traffic. Evaluations on the Wisconsin Breast Cancer Dataset show that our prototype can find the correlation in nine features and improve the detection rate without affecting the false positive rate. Evaluations on Internet traffic show that the weight-estimation algorithm can improve the detection performance significantly.
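
Weighting unequal evidence sources before combination is often done with Shafer discounting followed by Dempster's rule. The sketch below does this for a two-hypothesis frame (normal 'N' vs. anomaly 'A', with 'NA' standing for the full frame, i.e. ignorance); the frame encoding and weights are illustrative assumptions, not the paper's exact fusion engine:

```python
def discount(mass, weight):
    """Shafer discounting: scale the masses by a reliability weight and
    assign the remainder to the full frame 'NA' (ignorance)."""
    out = {k: v * weight for k, v in mass.items()}
    out['NA'] = out.get('NA', 0.0) + (1.0 - weight)
    return out

def combine(m1, m2):
    """Dempster's rule of combination for the frame {'N', 'A'}."""
    # Conflicting mass: one source says normal while the other says anomaly.
    conflict = (m1.get('N', 0.0) * m2.get('A', 0.0)
                + m1.get('A', 0.0) * m2.get('N', 0.0))
    k = 1.0 - conflict  # normalization constant
    combined = {}
    for h, other in (('N', 'A'), ('A', 'N')):
        combined[h] = (m1.get(h, 0.0) * m2.get(h, 0.0)
                       + m1.get(h, 0.0) * m2.get('NA', 0.0)
                       + m1.get('NA', 0.0) * m2.get(h, 0.0)) / k
    combined['NA'] = m1.get('NA', 0.0) * m2.get('NA', 0.0) / k
    return combined
```

Two detectors that both lean toward 'A' reinforce each other after combination, while a low reliability weight pushes a detector's mass toward ignorance so it cannot dominate the fused verdict.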

  4. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

    Full Text Available The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; and (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
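
The simple linear regression calibration can be pictured as fitting, per land-use class, a linear model of the GDEM error against elevation using reference points, then subtracting the predicted error from each GDEM height. The class labels and the error form below are illustrative assumptions, not the paper's fitted coefficients:

```python
def fit_corrections(samples):
    """Per-land-use linear regression calibration sketch.

    samples: list of (land_use, gdem_height, reference_height) tuples.
    For each class, fit error = a + b * elevation by least squares,
    where error = gdem_height - reference_height.
    """
    by_class = {}
    for land_use, gdem_h, ref_h in samples:
        by_class.setdefault(land_use, []).append((gdem_h, gdem_h - ref_h))
    models = {}
    for land_use, pts in by_class.items():
        n = len(pts)
        mx = sum(x for x, _ in pts) / n
        me = sum(e for _, e in pts) / n
        sxx = sum((x - mx) ** 2 for x, _ in pts) or 1.0  # guard: 1 point
        b = sum((x - mx) * (e - me) for x, e in pts) / sxx
        models[land_use] = (me - b * mx, b)
    return models

def correct(models, land_use, gdem_h):
    """Apply the fitted correction: subtract the predicted error."""
    a, b = models[land_use]
    return gdem_h - (a + b * gdem_h)
```

Grouping by land use captures the class-specific biases the abstract reports (e.g. larger errors over wood and bare lands) without requiring the extra topographic predictors of the multiple regression variant.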

  5. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
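
At its core, Kalman-filter fusion of the visual and angle sensors reduces, per pose component, to an inverse-variance-weighted blend of a prediction and a measurement. A minimal scalar sketch (the variances are illustrative; the paper's full KF/MOIFA formulation is not reproduced here):

```python
def fuse(est, var_est, meas, var_meas):
    """One scalar Kalman update step.

    est/var_est:   current estimate of a pose component and its variance.
    meas/var_meas: new sensor reading of that component and its variance.
    Returns the fused value and its (always reduced) variance.
    """
    gain = var_est / (var_est + var_meas)      # Kalman gain
    fused = est + gain * (meas - est)          # blend toward the reading
    fused_var = (1.0 - gain) * var_est         # uncertainty shrinks
    return fused, fused_var
```

The fused variance is smaller than either input variance, which is why combining the visual position sensor with the attached angle sensor can improve pose accuracy beyond what either sensor achieves alone.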

  6. Improving the Stability and Accuracy of Power Hardware-in-the-Loop Simulation Using Virtual Impedance Method

    Directory of Open Access Journals (Sweden)

    Xiaoming Zha

    2016-11-01

    Full Text Available Power hardware-in-the-loop (PHIL) systems are advanced, real-time platforms for combined software and hardware testing. Two paramount issues in PHIL simulations are closed-loop stability and simulation accuracy. This paper presents a virtual impedance (VI) method for PHIL simulations that improves the simulation’s stability and accuracy. Through the establishment of an impedance model for a PHIL simulation circuit, which is composed of a voltage-source converter and a simple network, the stability and accuracy of the PHIL system are analyzed. Then, the proposed VI method is implemented in a digital real-time simulator and used to correct the combined impedance in the impedance model, achieving higher stability and accuracy of the results. The validity of the VI method is verified through the PHIL simulation of two typical PHIL examples.

  7. Overlay accuracy fundamentals

    Science.gov (United States)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  8. Improving contour accuracy and strength of reactive air brazed (RAB) ceramic/metal joints by controlling interface microstructure

    Energy Technology Data Exchange (ETDEWEB)

    Li, Chichi; Kuhn, Bernd; Brandenberg, Joerg; Beck, Tilmann; Singheiser, Lorenz [Forschungszentrum Juelich GmbH, Institute for Energy and Climate Research (IEK), Microstructure and Properties of Materials (IEK-2), 52425 Juelich (Germany); Bobzin, Kirsten; Bagcivan, Nazlim; Kopp, Nils [Surface Engineering Institute (IOT), RWTH Aachen University, Kackertstr. 15, 52072 Aachen (Germany)

    2012-06-15

    The development of high-temperature electrochemical devices such as solid oxide fuel cells, oxygen and hydrogen separators, and gas reformers poses a great challenge for the brazing technology of metal/ceramic joints. To maintain the integrity of such equipment, the resulting seals have to be stable and hermetic during continuous and cyclic high-temperature operation. As a solution for joining metal and ceramic materials, reactive air brazing has gained increasing interest in recent years. This paper compares joints brazed with different filler alloys (pure Ag, AgCu, and AgAl) in three aspects: contour accuracy, room-temperature delamination resistance, and the corresponding microstructures of the as-brazed and fractured joints. Discussion focuses on the fracture mechanism and the associated delamination resistance. AgAl-brazed joints exhibit the most promising mechanical properties and contour accuracy. (Copyright © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  9. Investigation into the accuracy of a proposed laser diode based multilateration machine tool calibration system

    International Nuclear Information System (INIS)

    Fletcher, S; Longstaff, A P; Myers, A

    2005-01-01

    Geometric and thermal calibration of CNC machine tools is required in modern machine shops, with volumetric accuracy assessment becoming the standard machine tool qualification in many industries. Laser interferometry is a popular method of measuring the errors, but this and other alternatives tend to be expensive, time consuming or both. This paper investigates the feasibility of using a laser diode based system that capitalises on the low cost of the diode to provide multiple laser sources for fast error measurement using multilateration. Laser diode module technology enables improved wavelength stability and spectral linewidth, which are important factors for laser interferometry. With more than three laser sources, the set-up process can be greatly simplified while providing flexibility in the location of the laser sources, improving the accuracy of the system.
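
Multilateration recovers a target position from measured ranges to several sources at known positions; subtracting one range equation from the others turns the problem into a linear system. The planar three-source sketch below shows the algebra (the machine-tool case is 3-D with four or more sources, and the coordinates here are illustrative):

```python
def trilaterate_2d(beacons, dists):
    """Planar multilateration from three known source positions.

    Subtracting the first range equation |p - p1|^2 = r1^2 from the
    other two cancels the quadratic terms in p = (x, y), leaving a
    2x2 linear system solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = dists
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero if sources are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With more than the minimum number of sources the same linearized system becomes overdetermined and is solved by least squares, which is what makes the extra laser diodes improve accuracy rather than merely add redundancy.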

  10. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Science.gov (United States)

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or none) and peer performance anchor (95%, 55%, or none). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. The accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  11. Transthoracic CT-guided biopsy with multiplanar reconstruction image improves diagnostic accuracy of solitary pulmonary nodules

    International Nuclear Information System (INIS)

    Ohno, Yoshiharu; Hatabu, Hiroto; Takenaka, Daisuke; Imai, Masatake; Ohbayashi, Chiho; Sugimura, Kazuro

    2004-01-01

    Objective: To evaluate the utility of multiplanar reconstruction (MPR) images for CT-guided biopsy and to determine the factors influencing diagnostic accuracy and the pneumothorax rate. Materials and methods: 390 patients with 396 pulmonary nodules underwent transthoracic CT-guided aspiration biopsy (TNAB) or transthoracic CT-guided cutting needle core biopsy (TCNB) as follows: 250 solitary pulmonary nodules (SPNs) underwent conventional CT-guided biopsy (conventional method), 81 underwent CT-fluoroscopic biopsy (CT-fluoroscopic method) and 65 underwent conventional CT-guided biopsy in combination with MPR images (MPR method). Success rate, overall diagnostic accuracy, pneumothorax rate and total procedure time were compared across the methods. Factors affecting the diagnostic accuracy and pneumothorax rate of CT-guided biopsy were statistically evaluated. Results: Success rates (TNAB: 100.0%, TCNB: 100.0%) and overall diagnostic accuracies (TNAB: 96.9%, TCNB: 97.0%) of the MPR method were significantly higher than those of the conventional method (TNAB: 87.6 and 82.4%, TCNB: 86.3 and 81.3%) (P<0.05). Diagnostic accuracy was influenced by biopsy method, lesion size, and needle path length (P<0.05). Pneumothorax rate was influenced by pathological diagnostic method, lesion size, number of punctures and FEV1.0% (P<0.05). Conclusion: The use of MPR images for CT-guided lung biopsy improves diagnostic accuracy with no significant increase in pneumothorax rate or total procedure time.

  12. Analysis of prostate cancer localization toward improved diagnostic accuracy of transperineal prostate biopsy

    Directory of Open Access Journals (Sweden)

    Yoshiro Sakamoto

    2014-09-01

    Conclusions: The concordance of prostate cancer between prostatectomy specimens and biopsies is comparatively favorable. According to our study, the diagnostic accuracy of transperineal prostate biopsy can be improved in our institute by including the anterior portion of the Apex-Mid and Mid regions in the 12-core biopsy or 16-core biopsy, such that a 4-core biopsy of the anterior portion is included.

  13. Analysis of the Accuracy of Beidou Combined Orbit Determination Enhanced by LEO and ISL

    Directory of Open Access Journals (Sweden)

    FENG Laiping

    2017-05-01

    Full Text Available In order to improve the precision of BeiDou orbit determination under the conditions of a regional ground monitoring network, and to make good use of increasingly rich on-board data and the upcoming inter-satellite link technology, a method of BeiDou precision orbit determination is proposed that combines ground monitoring station data, low Earth orbit satellite (LEO) data and Inter-Satellite Link (ISL) data. The effects of the auxiliary LEO and ISL data on the precision orbit determination of navigation satellites are discussed. Simulation analysis is carried out mainly with respect to the number of LEOs, the orbit slot configuration and the ISL. The results show that the orbit precision of BeiDou improves greatly, by about 73%, with even a small number of LEOs, while the improvement in clock bias is not remarkable; for the same number of LEOs, a uniform orbit slot configuration has a modest effect on the accuracy of combined orbit determination; and, compared with LEOs, adding ISLs significantly improves the accuracy of orbit determination with higher efficiency.
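The finding that extra observation types (LEO tracking, inter-satellite links) tighten the orbit solution can be illustrated with a toy weighted least-squares estimator: stacking additional observation rows onto the design matrix shrinks the formal covariance of the estimate. This is only a schematic analogy, not the simulation used in the paper; the matrices below are invented and NumPy is assumed.

```python
import numpy as np

def wls(H, y, w):
    """Weighted least squares: estimate x from y = H x + noise,
    returning the solution and its formal covariance."""
    W = np.diag(w)
    N = H.T @ W @ H
    x = np.linalg.solve(N, H.T @ W @ y)
    return x, np.linalg.inv(N)

# Ground-station-only observations of a 2-parameter "orbit" state.
x_true = np.array([1.0, 2.0])
H_ground = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_ground = H_ground @ x_true

# Augmenting the normal equations with ISL-like observation rows.
H_isl = np.array([[1.0, -1.0], [2.0, 1.0]])
H_all = np.vstack([H_ground, H_isl])
y_all = H_all @ x_true

x1, P1 = wls(H_ground, y_ground, np.ones(3))
x2, P2 = wls(H_all, y_all, np.ones(5))
# More observations: same (noise-free) estimate, smaller covariance.
```

The trace of `P2` is smaller than that of `P1`, mirroring how additional LEO/ISL data tightens the combined orbit determination.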

  14. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

    Full Text Available Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a resolution of a few meters. However, training the system with already located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to be memory- and CPU-intensive in such cases, as does obtaining the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations of the training fingerprints that reduces the number of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with the best results. We compare our proposal with other systems in the literature and draw conclusions about the improvements obtained. Moreover, some techniques, such as filtering non-stable access points to improve accuracy, are introduced, studied, and validated.
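A minimal sketch of two of the ideas in this abstract: filtering non-stable access points, then room-level k-nearest-neighbour classification of RSSI fingerprints. The function names, the -100 dBm "not seen" floor, and the thresholds are assumptions for illustration, not the paper's algorithm; NumPy is assumed.

```python
import numpy as np
from collections import Counter

def stable_aps(fingerprints, min_presence=0.8):
    """Keep access points seen (RSSI above a -100 dBm floor) in at
    least min_presence of the training fingerprints."""
    X = np.asarray(fingerprints, dtype=float)
    present = (X > -100).mean(axis=0)
    return np.where(present >= min_presence)[0]

def knn_room(train_X, train_rooms, query, keep, k=3):
    """Predict the room label of a query fingerprint by majority
    vote among its k nearest training fingerprints."""
    X = np.asarray(train_X, dtype=float)[:, keep]
    q = np.asarray(query, dtype=float)[keep]
    d = np.linalg.norm(X - q, axis=1)
    nearest = np.argsort(d)[:k]
    votes = Counter(train_rooms[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Dropping access points that are absent from most training samples both shrinks the distance computation and removes noisy dimensions, in the spirit of the non-stable-AP filtering the authors validate.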

  15. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO 2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
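Method (2) above, clustering the spectra and then fitting a separate regression within each cluster, can be sketched as follows. For brevity this uses a naive k-means and ordinary least squares as a stand-in for the PLS2 sub-models; all names and the toy data are illustrative, and NumPy is assumed.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Naive k-means; initialises from the first k points for brevity."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def fit_cluster_models(X, y, labels, k):
    """Per-cluster ordinary least squares (a stand-in for PLS2)."""
    models = {}
    for j in range(k):
        m = labels == j
        A = np.c_[X[m], np.ones(m.sum())]   # features plus intercept
        models[j] = np.linalg.lstsq(A, y[m], rcond=None)[0]
    return models

def predict(x, centers, models):
    """Route a new spectrum to its nearest cluster's model."""
    j = int(np.argmin(((centers - x) ** 2).sum(-1)))
    return float(np.r_[x, 1.0] @ models[j])
```

Because prediction requires no knowledge of the test sample's composition, only its spectrum, this mirrors the constraint the authors impose on their clustering and training set selection methods.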

  16. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature spaces generally degrade classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied to publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
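The binary-encoded genetic algorithm in this abstract can be sketched as follows: each chromosome is a 0/1 mask over the features, and its fitness is the accuracy a classifier achieves using only the unmasked features. The nearest-centroid classifier, the resubstitution fitness, and all parameter values here are simplifying assumptions, not the authors' setup; NumPy is assumed.

```python
import numpy as np

def centroid_accuracy(X, y, mask):
    """Fitness: accuracy of a nearest-centroid classifier using only
    the unmasked features (resubstitution, for brevity)."""
    if not mask.any():
        return 0.0
    Xm = X[:, mask.astype(bool)]
    classes = np.unique(y)
    cents = np.array([Xm[y == c].mean(axis=0) for c in classes])
    d = ((Xm[:, None, :] - cents[None]) ** 2).sum(-1)
    pred = classes[np.argmin(d, axis=1)]
    return (pred == y).mean()

def gene_mask_ga(X, y, pop=20, gens=30, seed=0):
    """Tiny binary GA: elitist selection, one-point crossover,
    bit-flip mutation over feature masks."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        fit = np.array([centroid_accuracy(X, y, m) for m in P])
        P = P[np.argsort(fit)[::-1]]
        # elitism: keep the best half, refill by crossover + mutation
        for i in range(pop // 2, pop):
            a, b = P[rng.integers(0, pop // 2, 2)]
            cut = rng.integers(1, n)
            child = np.r_[a[:cut], b[cut:]]
            flip = rng.random(n) < 0.05
            child[flip] ^= 1
            P[i] = child
    fit = np.array([centroid_accuracy(X, y, m) for m in P])
    return P[np.argmax(fit)]
```

The surviving mask both selects features and, as the abstract notes, flags which features carried the discrimination.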

  17. Improving diagnostic accuracy using agent-based distributed data mining system.

    Science.gov (United States)

    Sridhar, S

    2013-09-01

    The use of data mining techniques to improve the accuracy of a diagnostic system is investigated in this paper. Data mining algorithms aim to discover patterns and extract useful knowledge from facts recorded in databases. Expert systems are generally constructed to automate diagnostic procedures; a learning component can use data mining algorithms to extract the expert system rules from the database automatically, assisting clinicians in acquiring knowledge. As the number and variety of data sources increase dramatically, and as data sets are inherently distributed, the proposed distributed system uses agents to transport the trained classifiers and meta-learning to combine their knowledge. Commonsense reasoning is also used in association with distributed data mining to obtain better results. Combining human expert knowledge with the knowledge obtained by data mining improves the performance of the diagnostic system. This work suggests a framework for combining human knowledge and the knowledge gained by data mining algorithms on a renal and gallstone data set.
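The step where agents ship locally trained classifiers to a central site and meta-learning combines them can, in its simplest form, be reduced to voting over the individual predictions. The toy threshold classifiers and labels below are purely illustrative and do not reproduce the paper's agents or data sets.

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine the label predictions of classifiers trained at
    different sites: the most common label wins."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three hypothetical site-local models for a one-feature record.
site_models = [
    lambda x: "gallstone" if x > 0.5 else "healthy",
    lambda x: "gallstone" if x > 0.7 else "healthy",
    lambda x: "gallstone" if x > 0.9 else "healthy",
]
```

For x = 0.8, two of the three site models vote "gallstone", so the ensemble returns "gallstone"; richer meta-learning schemes replace the vote with a model trained on the base classifiers' outputs.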

  18. Improvement of Ultrasonic Distance Measuring System

    Directory of Open Access Journals (Sweden)

    Jiang Yu

    2018-01-01

    Full Text Available This paper introduces an ultrasonic distance measuring system built around an AT89C51 single-chip microcontroller. It expounds the principles of the ultrasonic sensor and of ultrasonic ranging, the hardware circuit, the software program, and the results of experiment and analysis. The hardware circuit is based on the single-chip microcontroller, and the software is written in a high-level microcontroller programming language. The amplitude of the received signal and the time of ultrasonic propagation are regulated by closed-loop control [1,2]; this double closed-loop control of amplitude and time improves the measuring accuracy of the instrument. The experimental results show that it greatly improves the measurement accuracy of the system.
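The ranging principle the paper relies on is simple: measure the round-trip time of the pulse and halve it, scaling by the speed of sound, which varies with air temperature (one reason closed-loop correction helps). A minimal sketch using the standard linear approximation for the speed of sound in air; the function names are illustrative.

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s) as a linear function
    of temperature in degrees Celsius."""
    return 331.4 + 0.6 * temp_c

def echo_distance(round_trip_s, temp_c=20.0):
    """Distance to the target: the pulse travels out and back, so
    halve the round-trip time before scaling by the sound speed."""
    return speed_of_sound(round_trip_s and temp_c) * round_trip_s / 2.0 if False else \
           speed_of_sound(temp_c) * round_trip_s / 2.0
```

A 10 ms round trip at 20 degrees C corresponds to roughly 1.72 m; without the temperature term, a 10 degree error in assumed temperature shifts the result by nearly 2%.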

  19. The contribution of educational class in improving accuracy of cardiovascular risk prediction across European regions

    DEFF Research Database (Denmark)

    Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E

    2014-01-01

    OBJECTIVE: To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. METHODS: In a pooled analysis of 68 455 40-64-year-old men and women, free from coronary heart disease at baseline, from 47...

  20. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    Science.gov (United States)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accuracies accomplished in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with Fi

  1. Effectiveness of blood pressure educational and evaluation program for the improvement of measurement accuracy among nurses.

    Science.gov (United States)

    Rabbia, Franco; Testa, Elisa; Rabbia, Silvia; Praticò, Santina; Colasanto, Claudia; Montersino, Federica; Berra, Elena; Covella, Michele; Fulcheri, Chiara; Di Monaco, Silvia; Buffolo, Fabrizio; Totaro, Silvia; Veglio, Franco

    2013-06-01

    To assess the procedure for measuring blood pressure (BP) among hospital nurses and to assess whether a training program would improve technique and accuracy, 160 nurses from Molinette Hospital were included in the study. The program was based upon theoretical and practical lessons; it was one day long and was held by trained nurses and physicians practising in the Hypertension Unit. Nurses' measuring technique and accuracy were evaluated before and after the program using a 9-item checklist. Moreover, we calculated the differences between measured and effective BP values before and after the training program. At baseline evaluation, we observed inadequate performance on some points of clinical BP measurement technique: only 10% of nurses inspected the arm diameter before placing the cuff, 4% measured BP in both arms, 80% placed the head of the stethoscope under the cuff, and 43% did not remove all clothing covering the location of cuff placement or did not have the patient seated comfortably with legs uncrossed and with back and arms supported. After the training we found a significant improvement in technique for all items. We did not observe any significant difference in measurement knowledge between nurses working in different settings, such as medical or surgical departments. Periodical education in BP measurement may be required, and it may significantly improve the technique and consequently the accuracy.

  2. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks.

    Science.gov (United States)

    Hawkins, S C; Osborne, A; Schofield, S J; Pournaras, D J; Chester, J F

    2012-01-01

    Isolated video recording has not been demonstrated to improve self-assessment accuracy. This study examines whether the inclusion of a defined standard benchmark performance, in association with video feedback of a student's own performance, improves the accuracy of student self-assessment of clinical skills. Final year medical students were video recorded performing a standardised suturing task in a simulated environment. After the exercise, the students self-assessed their performance using global rating scales (GRSs). An identical self-assessment process was repeated following video review of their performance. Students were then shown a video-recorded 'benchmark performance', which was specifically developed for the study and demonstrated the competency levels required to score full marks (30 points). A further self-assessment task was then completed. Students' scores were correlated against expert assessor scores. A total of 31 final year medical students participated. Student self-assessment scores before video feedback demonstrated a moderate positive correlation with expert assessor scores (r = 0.48). After the benchmark performance demonstration, self-assessment scores demonstrated a very strong positive correlation with expert scores (r = 0.83). The inclusion of a benchmark performance in combination with video feedback may therefore significantly improve the accuracy of students' self-assessments.

  3. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    Science.gov (United States)

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured, as well as a loss of information intrinsic to band selection and the use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application to the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples.
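The "integral image representation" used above for selective texture-feature extraction admits a compact sketch: after one cumulative-sum pass, the sum of any rectangular window costs four lookups instead of a full loop. The code below is a generic illustration (NumPy assumed), not the authors' pipeline.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, zero-padded on the top
    and left so that box sums need no edge cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via inclusion-exclusion:
    ii[r1,c1] - ii[r0,c1] - ii[r1,c0] + ii[r0,c0]."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Because every box sum is constant-time regardless of window size, texture parameters can be extracted selectively from only the few LDA-derived bands, which is where the reported speed and accuracy gains come from.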

  4. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ~3 wt.%. The statistical significance of these improvements was ~85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and

  5. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  6. Summary of Great East Japan Earthquake response at Onagawa Nuclear Power Station and further safety improvement measures

    International Nuclear Information System (INIS)

    Sato, Toru

    2013-01-01

    A large earthquake occurred on March 11, 2011, followed by a tsunami. East Japan suffered serious damage from the earthquake and tsunami, which together are called the Great East Japan Earthquake. Onagawa Nuclear Power Station (NPS) is the nuclear power station located closest to the epicenter of the Great East Japan Earthquake. We experienced intense shaking from the earthquake and some flooding from the tsunami; however, we succeeded in safely bringing the reactors to cold shutdown. In this paper, we summarize the Great East Japan Earthquake response at Onagawa NPS and the safety improvement measures that are based on both the experience of Onagawa NPS and the lessons from the Fukushima Daiichi NPS accident. (author)

  7. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran A Monaghan

    Full Text Available Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. Differences in body size between taxa from the respective tolerance classes are a common feature of indicator systems; in some, this represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism: at any particular site, the net bias is a probabilistic function of the sample data.
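For readers unfamiliar with weighted-average biotic indices such as the BMWP discussed above: each indicator family carries a tolerance score, the site score is the sum over families present, and the average score per taxon (ASPT) divides by the number of scoring families. This is why skewed class richness and misclassified taxa propagate directly into the index. The sketch below uses a tiny, illustrative score table rather than the full published BMWP list.

```python
def bmwp_score(taxa, tolerance):
    """BMWP-style site score: sum the tolerance scores of the
    indicator families present (presence/absence only), plus the
    average score per taxon (ASPT)."""
    scoring = {t for t in taxa if t in tolerance}
    total = sum(tolerance[t] for t in scoring)
    aspt = total / len(scoring) if scoring else 0.0
    return total, aspt

# Illustrative scores: pollution-intolerant families score high.
TOLERANCE = {"Heptageniidae": 10, "Baetidae": 4, "Chironomidae": 2}
```

A sample containing Heptageniidae and Chironomidae, however abundant, scores 12 with an ASPT of 6.0; collecting or misclassifying a single extra indicator family shifts both values, which is the bias mechanism the paper quantifies.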

  8. An index with improved diagnostic accuracy for the diagnosis of Crohn's disease derived from the Lennard-Jones criteria.

    Science.gov (United States)

    Reinisch, S; Schweiger, K; Pablik, E; Collet-Fenetrier, B; Peyrin-Biroulet, L; Alfaro, I; Panés, J; Moayyedi, P; Reinisch, W

    2016-09-01

    The Lennard-Jones criteria are considered the gold standard for diagnosing Crohn's disease (CD) and include the items granuloma, macroscopic discontinuity, transmural inflammation, fibrosis, lymphoid aggregates and discontinuous inflammation on histology. The criteria have never been subjected to a formal validation process. To develop a validated and improved diagnostic index based on the items of Lennard-Jones criteria. Included were 328 adult patients with long-standing CD (median disease duration 10 years) from three centres and classified as 'established', 'probable' or 'non-CD' by Lennard-Jones criteria at time of diagnosis. Controls were patients with ulcerative colitis (n = 170). The performance of each of the six diagnostic items of Lennard-Jones criteria was modelled by logistic regression and a new index based on stepwise backward selection and cut-offs was developed. The diagnostic value of the new index was analysed by comparing sensitivity, specificity and accuracy vs. Lennard-Jones criteria. By Lennard-Jones criteria 49% (n = 162) of CD patients would have been diagnosed as 'non-CD' at time of diagnosis (sensitivity/specificity/accuracy, 'established' CD: 0.34/0.99/0.67; 'probable' CD: 0.51/0.95/0.73). A new index was derived from granuloma, fibrosis, transmural inflammation and macroscopic discontinuity, but excluded lymphoid aggregates and discontinuous inflammation on histology. Our index provided improved diagnostic accuracy for 'established' and 'probable' CD (sensitivity/specificity/accuracy, 'established' CD: 0.45/1/0.72; 'probable' CD: 0.8/0.85/0.82), including the subgroup isolated colonic CD ('probable' CD, new index: 0.73/0.85/0.79; Lennard-Jones criteria: 0.43/0.95/0.69). We developed an index based on items of Lennard-Jones criteria providing improved diagnostic accuracy for the differential diagnosis between CD and UC. © 2016 John Wiley & Sons Ltd.

  9. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study.

    Science.gov (United States)

    Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R

    2017-10-01

    To assess the accuracy of patient-specific instruments (PSIs) versus the standard manual technique, and the precision of computer-assisted planning and PSI-guided osteotomies, in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t-test and Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean of 9.6 mm. In the manual cases, a substantial proportion of deviations were > 5 mm and 27% (n = 8) were > 10 mm; in the PSI cases, the corresponding proportions were 10% (n = 3) and 0% (n = 0). For angular deviation from the pre-operative plans, we observed a mean improvement of 7.06°. Cite this article: A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583. DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.

  10. Modeling of Geometric Error in Linear Guide Way to Improve the vertical three-axis CNC Milling machine's accuracy

    Science.gov (United States)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of a three-axis vertical CNC milling machine through mathematical modeling of the machine tool's geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and during the assembly phase, and which must be controlled in order to build machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters in the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical model covers the alignment and angular errors in the components supporting the machine motion, the linear guide way and the linear motion. The purpose of this mathematical modeling approach is the identification of geometric errors, which serves as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of a CNC machine tool can illustrate the relationship between alignment error, position and angle on the linear guide way of a three-axis vertical milling machine.
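A standard way to arrange such error parameters, widely used in the machine-tool literature and consistent with the modeling described above (though the matrices here are a generic sketch, not the authors' exact model), is the small-angle homogeneous transformation matrix: three positional errors and three small angular errors per axis perturb the nominal tool position, and composing one such matrix per axis along the kinematic chain yields the volumetric error. NumPy is assumed.

```python
import numpy as np

def error_htm(dx, dy, dz, ea, eb, ec):
    """First-order homogeneous transform for one axis: positional
    errors (dx, dy, dz) and small angular errors (rad) ea, eb, ec
    about the X, Y, Z axes respectively."""
    return np.array([
        [1.0, -ec,  eb,  dx],
        [ ec, 1.0, -ea,  dy],
        [-eb,  ea, 1.0,  dz],
        [0.0, 0.0, 0.0, 1.0]])

def actual_position(nominal_xyz, E):
    """Apply the error transform to a nominal tool position."""
    p = np.r_[nominal_xyz, 1.0]
    return (E @ p)[:3]
```

For a nominal point at x = 100 mm, a 0.01 mm positional error plus a 1 mrad yaw error displaces the tool to (100.01, 0.1, 0) mm, showing how angular errors are amplified by the travel distance (the Abbe effect).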

  11. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods in modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on the component quality, the authors have developed a range of practical recommendations which have made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1, and improve machining accuracy.

  12. PROPOSAL FOR IMPROVEMENT OF BUSINESS CONTINUITY PLAN (BCP) BASED ON THE LESSONS OF THE GREAT EAST JAPAN EARTHQUAKE

    Science.gov (United States)

    Maruya, Hiroaki

    For most Japanese companies and organizations, the damage from the Great East Japan Earthquake was far greater than expected. In addition to the great tsunami and earthquake motion, shortages of electricity and fuel seriously disturbed business activities, and these should be considered important constraint factors in future earthquakes. Furthermore, disruption of supply chains led to a considerable decline in production in many industries across Japan and foreign countries. It has therefore become urgent for the Japanese government and industry to apply the lessons of the Great Earthquake and execute effective countermeasures in preparation for great earthquakes such as the Tonankai and Nankai earthquakes and a Tokyo inland earthquake. The most basic step is obviously to improve the earthquake resistance of buildings and facilities; in addition, the spread of BCP and business continuity management (BCM) to enterprises and organizations is indispensable. Based on the lessons, BCM should include the point of view of supply chain management more clearly and emphasize a "substitute strategy" more explicitly, because a company should survive even if it completely loses its present production base. The central and local governments are requested, in addition to developing their own BCPs, to improve the related systemic conditions for BCM in the private sector.

  13. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy

    Science.gov (United States)

    Peña, M. A.; Liao, R.; Brenning, A.

    2017-06-01

    This study assesses the potential of spectrotemporal indices derived from satellite image time series (SITS) to improve the classification accuracy of fruit-tree crops. Six major fruit-tree crop types in the Aconcagua Valley, Chile, were classified by applying various linear discriminant analysis (LDA) techniques on a Landsat-8 time series of nine images corresponding to the 2014-15 growing season. As features we not only used the complete spectral resolution of the SITS, but also all possible normalized difference indices (NDIs) that can be constructed from any two bands of the time series, a novel approach to derive features from SITS. Due to the high dimensionality of this "enhanced" feature set we used the lasso and ridge penalized variants of LDA (PLDA). Although classification accuracies yielded by the standard LDA applied on the full-band SITS were good (misclassification error rate, MER = 0.13), they were further improved by 23% (MER = 0.10) with ridge PLDA using the enhanced feature set. The most important bands to discriminate the crops of interest were mainly concentrated on the first two image dates of the time series, corresponding to the crops' greenup stage. Despite the high predictor weights provided by the red and near infrared bands, typically used to construct greenness spectral indices, other spectral regions were also found important for the discrimination, such as the shortwave infrared band at 2.11-2.19 μm, sensitive to foliar water changes. These findings support the usefulness of spectrotemporal indices in the context of SITS-based crop type classifications, which until now have been mainly constructed by the arithmetic combination of two bands of the same image date in order to derive greenness temporal profiles like those from the normalized difference vegetation index.
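    The feature-construction step described above can be illustrated with a small sketch (band labels and the helper name are hypothetical; the study's actual processing chain is not reproduced in this record): every pair of distinct bands in the time series yields one normalized difference index, of which the classical NDVI, (NIR - red)/(NIR + red), is a special case.

```python
from itertools import combinations

def all_ndis(bands):
    # bands: mapping from a band label (e.g. "t1_nir" for the NIR band of the
    # first image date) to its reflectance value for one pixel. Returns every
    # normalized difference index (b1 - b2) / (b1 + b2) that can be formed
    # from two distinct bands of the time series, keyed by the label pair.
    ndis = {}
    for (k1, v1), (k2, v2) in combinations(sorted(bands.items()), 2):
        if v1 + v2 != 0:
            ndis[(k1, k2)] = (v1 - v2) / (v1 + v2)
    return ndis
```

    With b bands per date and d dates, this produces (b*d choose 2) features per pixel, which is why the study needed penalized (lasso/ridge) variants of LDA to handle the enlarged feature set.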

  14. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
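    A minimal sketch of the sub-model idea, with illustrative stand-ins for the regression models (the actual ChemCam calibration uses PLS sub-models and its own blending ranges): a full-range model gives a first-pass estimate, which selects among sub-models trained on restricted composition ranges, with linear blending where ranges overlap.

```python
def blend_submodels(x, full_model, submodels):
    # submodels: list of (low, high, model), sorted by range, where each model
    # was trained only on targets whose composition lies in [low, high].
    first_pass = full_model(x)
    hits = [(lo, hi, m) for lo, hi, m in submodels if lo <= first_pass <= hi]
    if not hits:
        return first_pass          # outside every sub-model range
    if len(hits) == 1:
        return hits[0][2](x)
    # first-pass estimate falls in the overlap of two ranges: blend linearly,
    # shifting weight toward the higher-range model as the estimate rises
    (lo1, hi1, m1), (lo2, hi2, m2) = hits[:2]
    start, end = max(lo1, lo2), min(hi1, hi2)
    w = (first_pass - start) / (end - start) if end > start else 0.5
    return (1 - w) * m1(x) + w * m2(x)
```

    Blending in the overlap region avoids a discontinuity in predicted composition at the boundary between two sub-models.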

  15. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Science.gov (United States)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process has no significant effect on the establishment of the decision tree in terms of removing irrelevant features. This is a major problem in the decision tree classification process, called over-fitting, which results from noisy data and irrelevant features; in turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification models; it is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework simplifies high-dimensional data into low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets: principal component analysis (PCA) for feature reduction, to perform non-correlated feature selection, and the Decision Tree C4.5 algorithm for classification. In experiments conducted on the UCI cervical cancer data set, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that the proposed framework enhances classification accuracy, reaching a 90.70% accuracy rate.
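    As a toy illustration of the reduction step (two features only, and purely illustrative; the study applies PCA across all 36 attributes before C4.5 classification), PCA for a feature pair has a closed form: the first principal axis is the eigenvector of the 2x2 covariance matrix.

```python
import math

def first_principal_component(xs, ys):
    # Closed-form PCA for two features: eigen-decomposition of the 2x2
    # covariance matrix gives the first principal axis directly.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # angle of the principal axis
    ux, uy = math.cos(theta), math.sin(theta)
    scores = [(x - mx) * ux + (y - my) * uy for x, y in zip(xs, ys)]
    return (ux, uy), scores
```

    Two perfectly correlated attributes collapse onto a single component, which is the sense in which PCA removes correlated (redundant) features before the classifier sees them.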

  16. Improvement in the Accuracy of Flux Measurement of Radio Sources by Exploiting an Arithmetic Pattern in Photon Bunching Noise

    Energy Technology Data Exchange (ETDEWEB)

    Lieu, Richard [Department of Physics, University of Alabama, Huntsville, AL 35899 (United States)

    2017-07-20

    A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.

  17. Improvement in the accuracy of flux measurement of radio sources by exploiting an arithmetic pattern in photon bunching noise

    Science.gov (United States)

    Lieu, Richard

    2018-01-01

    A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level, with the help of high precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.

  18. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered using 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  19. Analysis of Correlation in MEMS Gyroscope Array and its Influence on Accuracy Improvement for the Combined Angular Rate Signal

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-01-01

    Obtaining a correlation factor is a prerequisite for fusing the multiple outputs of a microelectromechanical system (MEMS) gyroscope array and evaluating the accuracy improvement. In this paper, a mathematical statistics method is established to analyze and obtain the practical correlation factor of a MEMS gyroscope array, which solves the problem of determining the Kalman filter (KF) covariance matrix Q and fusing the multiple gyroscope signals. The working principle and mathematical model of the sensor array fusion are briefly described, and an optimal estimate of the input rate signal is then achieved by using a steady-state KF gain in an off-line estimation approach. Both theoretical analysis and simulation show that a negative correlation factor has a favorable influence on accuracy improvement. Additionally, a four-gyro array system composed of four discrete individual gyroscopes was developed to test the correlation factor and its influence on KF accuracy improvement. The results showed that correlation factors have both positive and negative values; in particular, the correlation factor differs between different unit pairs in the array. The test results also indicated that an Angular Random Walk (ARW) of 1.57°/√h and a bias drift of 224.2°/h for a single gyroscope were reduced to 0.33°/√h and 47.8°/h, with some negative correlation factors existing in the gyroscope array, giving a noise reduction factor of about 4.7, which is higher than that of an uncorrelated four-gyro array. The overall accuracy of the combined angular rate signal can be further improved if the negative correlation factors in the gyroscope array become larger (more negative).
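    Why negative correlation helps can be seen from the variance of the averaged signal. Under the simplifying assumption of equal variances and a single common pairwise correlation factor (the paper estimates pairwise factors individually), the noise reduction factor of an n-gyro average is:

```python
import math

def noise_reduction_factor(n, rho):
    # Ratio of a single gyroscope's noise std to the std of the n-gyro
    # average, assuming equal variances and a common pairwise correlation
    # factor rho: Var(mean) = sigma^2 * (1 + (n - 1) * rho) / n.
    scaled = 1 + (n - 1) * rho
    if scaled <= 0:
        raise ValueError("correlation factor too negative for a valid covariance")
    return math.sqrt(n / scaled)
```

    For four uncorrelated gyros the factor is 2, while a common correlation of -0.28 (an illustrative value, not one from the paper) raises it to 5, consistent with the abstract's observation that negative correlation beats the uncorrelated array.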

  20. New technology in dietary assessment: a review of digital methods in improving food record accuracy.

    Science.gov (United States)

    Stumbo, Phyllis J

    2013-02-01

    Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment encompassed dietary records, written and spoken dietary recalls, FFQ using pencil and paper and more recently computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving accuracy of food records.

  1. Mapping invasive Phragmites australis in the coastal Great Lakes with ALOS PALSAR satellite imagery for decision support

    Science.gov (United States)

    Bourgeau-Chavez, Laura L.; Kowalski, Kurt P.; Carlson Mazur, Martha L.; Scarbrough, Kirk A.; Powell, Richard B.; Brooks, Colin N.; Huberty, Brian; Jenkins, Liza K.; Banda, Elizabeth C.; Galbraith, David M.; Laubach, Zachary M.; Riordan, Kevin

    2013-01-01

    The invasive variety of Phragmites australis (common reed) forms dense stands that can cause negative impacts on coastal Great Lakes wetlands, including habitat degradation and reduced biological diversity. Early treatment is key to controlling Phragmites; therefore, a map of the current distribution is needed. ALOS PALSAR imagery was used to produce the first basin-wide distribution map showing the extent of large, dense invasive Phragmites-dominated habitats in wetlands and other coastal ecosystems along the U.S. shore of the Great Lakes. PALSAR is a satellite imaging radar sensor that is sensitive to differences in plant biomass and inundation patterns, allowing for the detection and delineation of these tall (up to 5 m), high-density, high-biomass invasive Phragmites stands. Classification was based on multi-season ALOS PALSAR L-band (23 cm wavelength) HH and HV polarization data. Seasonal (spring, summer, and fall) datasets were used to improve discrimination of Phragmites by taking advantage of phenological changes in vegetation and inundation patterns over the seasons. Extensive field collections of training and randomly selected validation data were conducted in 2010–2011 to aid in mapping and for accuracy assessments. Overall basin-wide map accuracy was 87%, with 86% producer's accuracy and 43% user's accuracy for invasive Phragmites. The invasive Phragmites maps are being used to identify major environmental drivers of this invader's distribution, to assess areas vulnerable to new invasion, and to provide information to regional stakeholders through a decision support tool.
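    The producer's and user's accuracies reported above come from a standard confusion-matrix calculation, sketched below with illustrative counts (the paper's validation matrix is not reproduced in this record):

```python
def map_accuracy(confusion, cls):
    # confusion[i][j]: number of validation samples whose reference class is i
    # and whose mapped class is j. Returns overall, producer's and user's
    # accuracy for class index cls.
    total = sum(sum(row) for row in confusion)
    overall = sum(confusion[i][i] for i in range(len(confusion))) / total
    producers = confusion[cls][cls] / sum(confusion[cls])             # 1 - omission error
    users = confusion[cls][cls] / sum(row[cls] for row in confusion)  # 1 - commission error
    return overall, producers, users
```

    A high producer's accuracy paired with a much lower user's accuracy, as reported for invasive Phragmites, means the class is rarely omitted from the map but other cover types are often committed to it.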

  2. Accuracy Improvement of Discharge Measurement with Modification of Distance Made Good Heading

    Directory of Open Access Journals (Sweden)

    Jongkook Lee

    2016-01-01

    Remote control boats equipped with an Acoustic Doppler Current Profiler (ADCP) are widely accepted and have been welcomed by many hydrologists for water discharge, velocity profile, and bathymetry measurements. The advantages of this technique include high productivity, fast measurements, operator safety, and high accuracy. However, there are concerns about controlling and operating a remote boat to achieve measurement goals, especially during extreme events such as floods. When performing river discharge measurements, the main error source stems from the boat path: due to the rapid flow in flood conditions, the boat path is irregular, which can cause errors in the discharge measurements. Improving discharge measurements therefore requires modification of the boat path. In this study, measurement errors in flood flow conditions were 12.3–21.8% before modification of the boat path but 1.2–3.7% after distance made good (DMG) modification, and the modified discharges are very close to the observed discharge in flood flow conditions. Through DMG modification of the boat path, comprehensive discharge measurements with high accuracy can be achieved.

  3. Content in Context Improves Deception Detection Accuracy

    Science.gov (United States)

    Blair, J. Pete; Levine, Timothy R.; Shaw, Allison S.

    2010-01-01

    Past research has shown that people are only slightly better than chance at distinguishing truths from lies. Higher accuracy rates, however, are possible when contextual knowledge is used to judge the veracity of situated message content. The utility of content in context was shown in a series of experiments with students (N = 26, 45, 51, 25, 127)…

  4. Can Automatic Classification Help to Increase Accuracy in Data Collection?

    Directory of Open Access Journals (Sweden)

    Frederique Lang

    2016-09-01

    Although the classification achieved by this means is not completely accurate, the amount of manual coding needed can be greatly reduced by using classification algorithms. This can be of great help when the dataset is big. With the help of accuracy, recall, and coverage measures, it is possible to have an estimation of the error involved in this classification, which could open the possibility of incorporating the use of these algorithms in software specifically designed for data cleaning and classification. Originality/value: We analyzed the performance of seven algorithms and whether combinations of these algorithms improve accuracy in data collection. Use of these algorithms could reduce the time needed for manual data cleaning.

  5. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees.

    Science.gov (United States)

    Keller, Alexander; Förster, Frank; Müller, Tobias; Dandekar, Thomas; Schultz, Jörg; Wolf, Matthias

    2010-01-15

    In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, is lacking. This is the first study to counter this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergence levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. Simulation integrated the coevolution process of sequences with secondary structures. Additionally, the phylogenetic power of marker size duplication was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence-only data, whereas a doubled marker size only accounts for robustness. Individual secondary structures of ribosomal RNA sequences provide a valuable gain of information content that is useful for phylogenetics. Thus, the usage of ITS2 sequence together with secondary structure for taxonomic inferences is recommended. Other reconstruction methods such as maximum likelihood, Bayesian inference or maximum parsimony may equally profit from secondary structure inclusion. This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  6. Interactive dedicated training curriculum improves accuracy in the interpretation of MR imaging of prostate cancer

    International Nuclear Information System (INIS)

    Akin, Oguz; Zhang, Jingbo; Hricak, Hedvig; Riedl, Christopher C.; Ishill, Nicole M.; Moskowitz, Chaya S.

    2010-01-01

    To assess the effect of interactive dedicated training on radiology fellows' accuracy in assessing prostate cancer on MRI. Eleven radiology fellows, blinded to clinical and pathological data, independently interpreted preoperative prostate MRI studies, scoring the likelihood of tumour in the peripheral and transition zones and extracapsular extension. Each fellow interpreted 15 studies before dedicated training (to supply baseline interpretation accuracy) and 200 studies (10/week) after attending didactic lectures. Expert radiologists led weekly interactive tutorials comparing fellows' interpretations to pathological tumour maps. To assess interpretation accuracy, receiver operating characteristic (ROC) analysis was conducted, using pathological findings as the reference standard. In identifying peripheral zone tumour, fellows' average area under the ROC curve (AUC) increased from 0.52 to 0.66 (after didactic lectures; p < 0.0001) and remained at 0.66 (end of training; p < 0.0001); in the transition zone, their average AUC increased from 0.49 to 0.64 (after didactic lectures; p = 0.01) and to 0.68 (end of training; p = 0.001). In detecting extracapsular extension, their average AUC increased from 0.50 to 0.67 (after didactic lectures; p = 0.003) and to 0.81 (end of training; p < 0.0001). Interactive dedicated training significantly improved accuracy in tumour localization and especially in detecting extracapsular extension on prostate MRI. (orig.)

  7. Interactive dedicated training curriculum improves accuracy in the interpretation of MR imaging of prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Akin, Oguz; Zhang, Jingbo; Hricak, Hedvig [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Riedl, Christopher C. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Medical University of Vienna, Department of Radiology, Vienna (Austria); Ishill, Nicole M.; Moskowitz, Chaya S. [Memorial Sloan-Kettering Cancer Center, Epidemiology and Biostatistics, New York, NY (United States)

    2010-04-15

    To assess the effect of interactive dedicated training on radiology fellows' accuracy in assessing prostate cancer on MRI. Eleven radiology fellows, blinded to clinical and pathological data, independently interpreted preoperative prostate MRI studies, scoring the likelihood of tumour in the peripheral and transition zones and extracapsular extension. Each fellow interpreted 15 studies before dedicated training (to supply baseline interpretation accuracy) and 200 studies (10/week) after attending didactic lectures. Expert radiologists led weekly interactive tutorials comparing fellows' interpretations to pathological tumour maps. To assess interpretation accuracy, receiver operating characteristic (ROC) analysis was conducted, using pathological findings as the reference standard. In identifying peripheral zone tumour, fellows' average area under the ROC curve (AUC) increased from 0.52 to 0.66 (after didactic lectures; p < 0.0001) and remained at 0.66 (end of training; p < 0.0001); in the transition zone, their average AUC increased from 0.49 to 0.64 (after didactic lectures; p = 0.01) and to 0.68 (end of training; p = 0.001). In detecting extracapsular extension, their average AUC increased from 0.50 to 0.67 (after didactic lectures; p = 0.003) and to 0.81 (end of training; p < 0.0001). Interactive dedicated training significantly improved accuracy in tumour localization and especially in detecting extracapsular extension on prostate MRI. (orig.)
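    The AUC values reported in this record have a simple probabilistic reading: the chance that a randomly chosen tumour-positive case is scored higher than a randomly chosen negative case. A minimal sketch follows (a brute-force O(n*m) version for clarity; ROC software normally uses a rank-based formula):

```python
def auc(scores_pos, scores_neg):
    # Area under the ROC curve via its probabilistic interpretation: the
    # chance that a randomly chosen positive case receives a higher score
    # than a randomly chosen negative case (ties count one half).
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    Under this reading, the fellows' baseline AUC of about 0.5 corresponds to chance-level discrimination, and the post-training 0.81 for extracapsular extension to a roughly four-in-five chance of ranking a true-positive case above a negative one.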

  8. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    Science.gov (United States)

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  9. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    Science.gov (United States)

    Wick, Gary A.; Emery, William J.; Castro, Sandra L.; Lindstrom, Eric (Technical Monitor)

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work was performed in two different major areas. The first centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. The second involved a modeling and data analysis effort whereby modeled near-surface temperature profiles were integrated into the retrieval of bulk SST estimates from existing satellite data. Under the first work area, two different seagoing infrared radiometers were designed and fabricated and the first of these was deployed on research ships during two major experiments. Analyses of these data contributed significantly to the Ph.D. thesis of one graduate student and these results are currently being converted into a journal publication. The results of the second portion of work demonstrated that, with presently available models and heat flux estimates, accuracy improvements in SST retrievals associated with better physical treatment of the near-surface layer were partially balanced by uncertainties in the models and extra required input data. While no significant accuracy improvement was observed in this experiment, the results are very encouraging for future applications where improved models and coincident environmental data will be available. These results are included in a manuscript undergoing final review with the Journal of Atmospheric and Oceanic Technology.

  10. Improvement of Dimensional Accuracy of 3-D Printed Parts using an Additive/Subtractive Based Hybrid Prototyping Approach

    Science.gov (United States)

    Amanullah Tomal, A. N. M.; Saleh, Tanveer; Raisuddin Khan, Md.

    2017-11-01

    At present, two important processes, namely CNC machining and rapid prototyping (RP), are used to create prototypes and functional products. Combining additive and subtractive processes in a single platform would be advantageous; however, two important aspects need to be taken into consideration for this process hybridization. The first is the integration of two different control systems for the two processes, and the second is maximizing workpiece alignment accuracy during the changeover step. Recently we have developed a new hybrid system which incorporates Fused Deposition Modelling (FDM) as the RP process and a CNC grinding operation as the subtractive manufacturing process in a single setup. Several objects were produced with different layer thicknesses, for example 0.1 mm, 0.15 mm and 0.2 mm. It was observed that the pure FDM method is unable to attain the desired dimensional accuracy, which can be improved by a considerable margin, about 66% to 80%, if a finishing operation by grinding is carried out. It was also observed that layer thickness plays a role in the dimensional accuracy, and the best accuracy is achieved with the minimum layer thickness (0.1 mm).

  11. Improving accuracy and capabilities of X-ray fluorescence method using intensity ratios

    Energy Technology Data Exchange (ETDEWEB)

    Garmay, Andrey V., E-mail: andrew-garmay@yandex.ru; Oskolok, Kirill V.

    2017-04-15

    An X-ray fluorescence analysis algorithm is proposed which is based on the use of ratios of X-ray fluorescence line intensities. Such an analytical signal is more stable and leads to improved accuracy. Novel calibration equations are proposed which are suitable for analysis over a broad range of matrix compositions. To apply the algorithm to samples containing a significant amount of undetectable elements, the use of a dependence of the Rayleigh-to-Compton intensity ratio on the total content of these elements is suggested. The technique's validity is shown by analysis of standard steel samples, a model metal oxide mixture, and iron ore samples.

  12. An Investigation to Improve Classifier Accuracy for Myo Collected Data

    Science.gov (United States)

    2017-02-01

    Only report front-matter fragments are available in this record: a table of contents (Bad Samples Effect on Classification Accuracy; Naïve Bayes (NB) Classifier Accuracy; Logistic Model Tree (LMT); K-Nearest Neighbor) and figure captions noting that for the 'come' gesture, pitch feature, users 06 and 14, all samples exhibit reversed movement.

  13. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    Science.gov (United States)

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed the evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values, and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. The accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are relevant to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
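
    The accuracy measures used in the study can be reproduced from the 2x2 agreement counts between administrative coding and chart review. A minimal sketch with hypothetical counts for one co-morbidity (chosen so the sensitivity lands near the reported medians):

```python
def sensitivity(tp, fn):
    """Fraction of chart-confirmed cases that were actually coded."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Fraction of coded cases confirmed in the chart."""
    return tp / (tp + fp)

def cohen_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between coding and chart review."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical co-morbidity: 40 coded and present (TP), 10 coded but
# absent (FP), 55 present but missed (FN), 895 correctly absent (TN).
sens = sensitivity(40, 55)           # about 0.421
kappa = cohen_kappa(40, 10, 55, 895)
```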

  14. Accuracy of an improved device for remote measuring of tree-trunk diameters

    International Nuclear Information System (INIS)

    Matsushita, T.; Kato, S.; Komiyama, A.

    2000-01-01

    For measuring the diameters of tree trunks from a distant position, a device using a laser beam was recently developed by Kantou. We improved this device to serve our own practical purposes. The improved device consists of a 1-m-long metal caliper and a small telescope that slides smoothly along it. Using the cross hairs in the scope, one can sight both edges of an object against the caliper and calculate its length. The laser beam is used only to guide the telescopic sights to the correct positions on the object. In this study, the accuracy of the new device was examined by measuring objects while varying the object length, the distance to the object, and the angle of elevation. Since every measurement showed an absolute error of less than 3 mm, this new device should be suitable for measuring trunk diameters in the field

  15. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort.

    Science.gov (United States)

    Mah, K; Danjoux, C E; Manship, S; Makhani, N; Cardoso, M; Sixel, K E

    1998-07-15

    To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac is enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients, and to conventional simulation films as well in the first four patients. Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was less than 10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring, which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.

  16. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort

    International Nuclear Information System (INIS)

    Mah, Katherine; Danjoux, Cyril E.; Manship, Sharan; Makhani, Nadiya; Cardoso, Marlene; Sixel, Katharina E.

    1998-01-01

    Purpose: To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. Methods and Materials: A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac are enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Results: Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Conclusions: Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.

  17. Improved accuracy in estimation of left ventricular function parameters from QGS software with Tc-99m tetrofosmin gated-SPECT. A multivariate analysis

    International Nuclear Information System (INIS)

    Okizaki, Atsutaka; Shuke, Noriyuki; Sato, Junichi; Ishikawa, Yukio; Yamamoto, Wakako; Kikuchi, Kenjiro; Aburano, Tamio

    2003-01-01

    The purpose of this study was to verify whether the accuracy of left ventricular function parameters estimated from gated-SPECT with the quantitative gated SPECT (QGS) software could be improved using multivariate analysis. Ninety-six patients with cardiovascular diseases were studied. Gated-SPECT with the QGS software and left ventriculography (LVG) were performed to obtain left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV). Multivariate analyses were then performed to determine empirical formulas for predicting these parameters. The calculated values of the left ventricular parameters were compared with those obtained directly from the QGS software and LVG. Multivariate analysis improved the accuracy of the estimated LVEF, EDV and ESV. The improvement was statistically significant for LVEF (from r=0.6965 to r=0.8093, p<0.05). Although not statistically significant, improvements in correlation coefficients were also seen for EDV (from r=0.7199 to r=0.7595, p=0.2750) and ESV (from r=0.5694 to r=0.5871, p=0.4281). The empirical equations obtained by multivariate analysis improved the accuracy of LVEF estimation from gated-SPECT with the QGS software. (author)

  18. Improvement of diagnostic accuracy, and clinical evaluation of computed tomography and ultrasonography for deep seated cancers

    International Nuclear Information System (INIS)

    Arimizu, Noboru

    1980-01-01

    Cancers of the liver, gallbladder, and pancreas, which are difficult to detect at an early stage, were studied. The diagnostic accuracy of CT and ultrasonography for resectable small cancers was investigated by the project team and coworkers. Only a few cases of hepatocellular carcinoma, cancer of the common bile duct, and cancer of the pancreas head, with maximum diameters of 1 - 2 cm, could be diagnosed by CT. There seemed to be more false negative cases with small cancers of that size. The limit of the size detectable by CT was thought to be 2 - 3 cm. Similar results were obtained by ultrasonography. Cancer of the pancreas body with a maximum diameter of less than 3.5 cm could not be detected by either CT or ultrasonography. The diagnostic accuracy of CT for liver cancer was improved by selective intraarterial injection of contrast medium. Improvement of the quality of ultrasonograms was achieved through this study. The merits and demerits of CT and ultrasonography were also compared. (Tsunoda, M.)

  19. Improvement of diagnostic accuracy, and clinical evaluation of computed tomography and ultrasonography for deep seated cancers

    Energy Technology Data Exchange (ETDEWEB)

    Arimizu, N [Chiba Univ. (Japan). School of Medicine

    1980-06-01

    Cancers of the liver, gallbladder, and pancreas, which are difficult to detect at an early stage, were studied. The diagnostic accuracy of CT and ultrasonography for resectable small cancers was investigated by the project team and co-workers. Only a few cases of hepatocellular carcinoma, cancer of the common bile duct, and cancer of the pancreas head, with maximum diameters of 1 - 2 cm, could be diagnosed by CT. There seemed to be more false negative cases with small cancers of that size. The limit of the size detectable by CT was thought to be 2 - 3 cm. Similar results were obtained by ultrasonography. Cancer of the pancreas body with a maximum diameter of less than 3.5 cm could not be detected by either CT or ultrasonography. The diagnostic accuracy of CT for liver cancer was improved by selective intraarterial injection of contrast medium. Improvement of the quality of ultrasonograms was achieved through this study. The merits and demerits of CT and ultrasonography were also compared.

  20. Climate Change Accuracy: Requirements and Economic Value

    Science.gov (United States)

    Wielicki, B. A.; Cooke, R.; Mlynczak, M. G.; Lukashin, C.; Thome, K. J.; Baize, R. R.

    2014-12-01

    Higher than normal accuracy is required to rigorously observe decadal climate change. But what level is needed? How can this be quantified? This presentation will summarize a new, more rigorous and quantitative approach to determining the required accuracy for climate change observations (Wielicki et al., 2013, BAMS). Most current global satellite observations cannot meet this accuracy level. A proposed new satellite mission to resolve this challenge is CLARREO (Climate Absolute Radiance and Refractivity Observatory). CLARREO is designed to achieve advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra (Wielicki et al., Oct. 2013 BAMS). The CLARREO spectrometers are designed to serve as SI-traceable benchmarks for the Global Satellite Intercalibration System (GSICS) and to greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar passive satellite sensors for climate change observations (e.g. CERES, MODIS, VIIRS, CrIS, IASI, Landsat, SPOT, etc.). Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A study has been carried out to quantify the economic benefits of such an advance as part of a rigorous and complete climate observing system. The study concludes that the economic value is $12 trillion U.S. dollars in net present value for a nominal discount rate of 3% (Cooke et al. 2013, J. Env. Sys. Dec.). A brief summary of these two studies and their implications for the future of climate science will be presented.

  1. Modeling detection probability to improve marsh bird surveys in southern Canada and the Great Lakes states

    Directory of Open Access Journals (Sweden)

    Douglas C. Tozer

    2016-12-01

    Full Text Available Marsh birds are notoriously elusive, with variation in detection probability across species, regions, seasons, and different times of day and weather. Therefore, it is important to develop regional field survey protocols that maximize detections, but that also produce data for estimating and analytically adjusting for remaining differences in detections. We aimed to improve regional field survey protocols by estimating the detection probability of eight elusive marsh bird species throughout two regions that have ongoing marsh bird monitoring programs: the southern Canadian Prairies (Prairie region) and the southern portion of the Great Lakes basin and parts of southern Québec (Great Lakes-St. Lawrence region). We accomplished our goal using generalized binomial N-mixture models and data from ~22,300 marsh bird surveys conducted between 2008 and 2014 by Bird Studies Canada's Prairie, Great Lakes, and Québec Marsh Monitoring Programs. Across all species, on average, detection probability was highest in the Great Lakes-St. Lawrence region from the beginning of May until mid-June, and then fell throughout the remainder of the season until the end of June; was lowest in the Prairie region in mid-May and then increased throughout the remainder of the season until the end of June; was higher during darkness than during light; and did not vary significantly with temperature (range: 0-30°C), cloud cover (0%-100%), or wind (0-20 kph), or between morning and evening. We used our results to formulate improved marsh bird survey protocols for each region. Our analysis and recommendations are useful and contribute to the conservation of wetland birds at various scales, from local single-species studies to the continental North American Marsh Bird Monitoring Program.
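
    The binomial N-mixture model marginalizes the unknown local abundance N over a Poisson prior, with each repeat survey count treated as a binomial thinning of N by the detection probability p. A minimal single-site log-likelihood sketch (an assumed simplification; the study's generalized models also include covariates on detection):

```python
from math import comb, exp, factorial, log

def nmixture_loglik_site(counts, lam, p, n_max=100):
    """Log-likelihood of repeated counts at one site under a binomial
    N-mixture model: N ~ Poisson(lam), count_t | N ~ Binomial(N, p)."""
    total = 0.0
    for n in range(max(counts), n_max + 1):
        prior = exp(-lam) * lam ** n / factorial(n)  # Poisson prior on N
        lik = 1.0
        for y in counts:                             # binomial detection
            lik *= comb(n, y) * p ** y * (1 - p) ** (n - y)
        total += prior * lik
    return log(total)

# Three repeat surveys at one site; a higher detection probability makes
# these observed counts likelier under the same abundance prior.
ll_low = nmixture_loglik_site([2, 1, 2], lam=3.0, p=0.3)
ll_high = nmixture_loglik_site([2, 1, 2], lam=3.0, p=0.6)
```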

  2. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    Science.gov (United States)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence of round-grating dividing error and rolling-wheel eccentricity and surface shape errors, the paper provides an amendment method, based on the rolling wheel, to build a composite error model that includes all of the influence factors above and then to correct the non-circular angle measurement error of the rolling wheel. We performed software simulation for verification and carried out experiments; the results indicate that the composite error amendment method can improve the diameter measurement accuracy of the rolling-wheel measurement principle. It has wide application prospects for measurement accuracy requirements higher than 5 μm/m.

  3. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees

    Directory of Open Access Journals (Sweden)

    Dandekar Thomas

    2010-01-01

    Full Text Available Abstract Background In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, is lacking. Results This is the first study to counter this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergence levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. The simulation integrated the coevolution of sequences with their secondary structures. Additionally, the phylogenetic power of doubling the marker size was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that the accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence-only data, whereas a doubled marker size only accounts for robustness. Conclusions Individual secondary structures of ribosomal RNA sequences provide a valuable gain in information content that is useful for phylogenetics. Thus, the use of the ITS2 sequence together with its secondary structure for taxonomic inferences is recommended. Other reconstruction methods such as maximum likelihood, Bayesian inference or maximum parsimony may equally profit from secondary structure inclusion. Reviewers This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  4. Accounting for health and health care: approaches to measuring the sources and costs of their improvement

    National Research Council Canada - National Science Library

    Panel to Advance a Research Program on the Design of National Health Accounts

    .... Accounting for Health and Health Care addresses both these issues. The government agencies responsible for measuring unit prices for medical services have taken steps in recent years that have greatly improved the accuracy of those measures. Nonetheless, this book has several recommendations aimed at further improving the price indices.

  5. Stratified computed tomography findings improve diagnostic accuracy for appendicitis

    Science.gov (United States)

    Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June

    2014-01-01

    AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group than in the non-appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%). Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454). CONCLUSION: Stratified CT findings can improve the diagnostic accuracy for appendicitis in patients with equivocal CT findings. PMID:25320531
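
    The reported effect size can be reproduced with a standard odds ratio and Woolf confidence interval computed from a 2x2 table. A sketch with hypothetical cell counts (the abstract does not give the per-cell counts):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for the 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = exp(log(or_) - z * se_log_or)
    hi = exp(log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical: a CT factor present in 120/177 appendicitis patients
# versus 17/47 non-appendicitis patients.
or_, lo, hi = odds_ratio_ci(120, 57, 17, 30)
```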

  6. Cost-effective improvements of a rotating platform by integration of a high-accuracy inclinometer and encoders for attitude evaluation

    International Nuclear Information System (INIS)

    Wen, Chenyang; He, Shengyang; Hu, Peida; Bu, Changgen

    2017-01-01

    Attitude heading reference systems (AHRSs) based on micro-electromechanical system (MEMS) inertial sensors are widely used because of their low cost, light weight, and low power. However, low-cost AHRSs suffer from large inertial sensor errors. Therefore, experimental performance evaluation of MEMS-based AHRSs after system implementation is necessary. High-accuracy turntables can be used to verify the performance of MEMS-based AHRSs indoors, but they are expensive and unsuitable for outdoor tests. This study developed a low-cost two-axis rotating platform for indoor and outdoor attitude determination. A high-accuracy inclinometer and encoders were integrated into the platform to improve the achievable attitude test accuracy. An attitude error compensation method was proposed to calibrate the initial attitude errors caused by the movements and misalignment angles of the platform. The proposed attitude error determination method was examined through rotating experiments, which showed that the standard deviations of the pitch and roll errors were 0.050° and 0.090°, respectively. The pitch and roll errors both decreased to 0.024° when the proposed attitude error determination method was used. This decrease validates the effectiveness of the compensation method. Experimental results demonstrated that the integration of the inclinometer and encoders improved the performance of the low-cost, two-axis, rotating platform in terms of attitude accuracy. (paper)

  7. Quality systems for radiotherapy: Impact by a central authority for improved accuracy, safety and accident prevention

    International Nuclear Information System (INIS)

    Jaervinen, H.; Sipilae, P.; Parkkinen, R.; Kosunen, A.; Jokelainen, I.

    2001-01-01

    High accuracy in radiotherapy is required for good treatment outcomes, which in turn implies the need to develop comprehensive Quality Systems for the operation of the clinic. Legal requirements as well as recommendations by professional societies support this modern approach for improved accuracy, safety and accident prevention. The actions of a national radiation protection authority can play an important role in this development. In this paper, the actions of the authority in Finland (STUK) for the control of the implementation of the new requirements are reviewed. It is concluded that the role of the authorities should not be limited to simple control actions; comprehensive practical support for the development of the Quality Systems should also be provided. (author)

  8. A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Sompop Moonchai

    2015-01-01

    Full Text Available This paper presents a modified grey model GMC(1,n) for use in systems that involve one dependent system behavior and n-1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the system of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified by two cases: forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrated that the modified GMC(1,n) model achieved higher fitting and prediction accuracy than the conventional GMC(1,n) and D-GMC(1,n) models.
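
    The grey-model family builds a first-order model on the accumulated (AGO) series, with "background values" taken as midpoints of adjacent accumulated terms. A minimal univariate GM(1,1) sketch to illustrate the mechanics (the paper's GMC(1,n) additionally takes n-1 relative factor series and modifies the background-value formula):

```python
from math import exp

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) forecast of the next `steps` values of series x0."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z1 = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    # Least-squares estimate of a, b in the grey equation x0[k] = -a*z1[k] + b
    m = n - 1
    sz, szz = sum(z1), sum(z * z for z in z1)
    sy, szy = sum(x0[1:]), sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # time-response function on the accumulated series
        return (x0[0] - b / a) * exp(-a * k) + b / a

    # Forecasts of the original series are differences of the accumulated fit.
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Forecast of a roughly 10%-growth series; the next true value is 1.4641.
pred = gm11_forecast([1.0, 1.1, 1.21, 1.331])[0]
```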

  9. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Directory of Open Access Journals (Sweden)

    Oleksandr Makeyev

    2016-06-01

    Full Text Available Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.

  10. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Science.gov (United States)

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
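
    The truncation-error behaviour that motivates these electrode designs can be illustrated with the simplest Laplacian estimate: the classic 5-point finite-difference stencil, whose error shrinks as the square of the point spacing (loosely analogous to ring radius). This is an illustration of the principle only, not the electrodes' (4n + 1)-point formulas:

```python
def laplacian_5pt(f, x, y, h):
    """5-point finite-difference Laplacian with O(h^2) truncation error."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h ** 2

f = lambda x, y: x ** 4 + y ** 4       # exact Laplacian: 12x^2 + 12y^2
exact = 12 * 1.0 ** 2 + 12 * 2.0 ** 2  # = 60 at the point (1, 2)

err_h = abs(laplacian_5pt(f, 1.0, 2.0, 0.1) - exact)
err_h2 = abs(laplacian_5pt(f, 1.0, 2.0, 0.05) - exact)
# Halving the spacing cuts the truncation error by about a factor of 4.
```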

  11. Theoretical study on new bias factor methods to effectively use critical experiments for improvement of prediction accuracy of neutronic characteristics

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Mori, Takamasa; Takeda, Toshikazu

    2007-01-01

    Extended bias factor methods are proposed with two new concepts, the LC method and the PE method, in order to effectively use critical experiments and to enhance the applicability of the bias factor method for the improvement of the prediction accuracy of neutronic characteristics of a target core. Both methods utilize a number of critical experimental results and produce a semifictitious experimental value from them. The LC and PE methods define the semifictitious experimental value by a linear combination of experimental values and by the product of exponentiated experimental values, respectively, and the corresponding semifictitious calculation values by those of calculation values. A bias factor is defined by the ratio of the semifictitious experimental value to the semifictitious calculation value in both methods. We formulate how to determine the weights for the LC method and the exponents for the PE method in order to minimize the variance of the design prediction value obtained by multiplying the design calculation value by the bias factor. From a theoretical comparison of these new methods with the conventional method, which utilizes a single experimental result, and the generalized bias factor method, which was previously proposed to utilize a number of experimental results, it is concluded that the PE method is the most useful for improving the prediction accuracy. The main advantages of the PE method are as follows. The prediction accuracy is necessarily improved over the design calculation value even when the experimental results include large experimental errors; this is a special feature that the other methods do not have. The prediction accuracy is most effectively improved by utilizing all the experimental results. From these facts, it can be said that the PE method effectively utilizes all the experimental results and has a possibility to make a full-scale-mockup experiment unnecessary with the use of existing and future benchmark experiments.
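
    Since the PE method's semifictitious experimental and calculation values are products of exponentiated quantities, the bias factor reduces to a product of exponentiated E/C ratios. A minimal sketch with hypothetical benchmark values and fixed exponents (in the method itself the exponents are chosen to minimize the prediction variance):

```python
def pe_bias_factor(experiments, calculations, exponents):
    """Bias factor in the PE style: product of (E_i / C_i) ** w_i."""
    factor = 1.0
    for e, c, w in zip(experiments, calculations, exponents):
        factor *= (e / c) ** w
    return factor

# Hypothetical k-eff benchmarks (measured E, calculated C) and
# illustrative exponents that sum to one.
E = [1.0002, 0.9991, 1.0010]
C = [1.0050, 1.0032, 1.0061]
w = [0.5, 0.3, 0.2]

design_calculation = 1.0045
prediction = design_calculation * pe_bias_factor(E, C, w)
```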

  12. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    Science.gov (United States)

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-03-27

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with an Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
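
    The full IMM filter runs several interacting motion-model Kalman filters in parallel. As a much simplified stand-in for the fusion step, independent sensor position estimates can be combined by inverse-variance weighting; this is a sketch under an independence assumption, not the paper's IMM implementation:

```python
def fuse_positions(estimates):
    """Inverse-variance weighted fusion of independent (position, variance)
    estimates; the fused variance is below every input variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * pos for (pos, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total

# Hypothetical 1-D along-track positions (metres) from ADS-B, MLAT and WAM,
# with per-sensor variances (m^2).
sensors = [(1002.0, 25.0), (998.0, 100.0), (1001.0, 50.0)]
position, variance = fuse_positions(sensors)
```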

  13. Inclusion of Population-specific Reference Panel from India to the 1000 Genomes Phase 3 Panel Improves Imputation Accuracy.

    Science.gov (United States)

    Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R

    2017-07-27

    Imputation is a computational method based on the principle of haplotype sharing that allows enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using genome-wide data from 1880 individuals, we demonstrate that the WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive job. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.

  14. Inference of Altimeter Accuracy on Along-track Gravity Anomaly Recovery

    Directory of Open Access Journals (Sweden)

    LI Yang

    2015-04-01

    Full Text Available A correlation model between along-track gravity anomaly accuracy, spatial resolution, and altimeter accuracy is proposed. This new model is based on along-track gravity anomaly recovery and resolution estimation. First, an error propagation formula for the along-track gravity anomaly is derived from the principle of satellite altimetry. Then the relation between the SNR (signal-to-noise ratio) and cross-spectral coherence is deduced. The analytical correlation between altimeter accuracy and spatial resolution is finally obtained from the results above. Numerical simulation results show that along-track gravity anomaly accuracy is proportional to altimeter accuracy, while spatial resolution has a power-law relation with altimeter accuracy: with altimeter accuracy improving m times, gravity anomaly accuracy improves m times while spatial resolution improves m^0.4644 times. The model is verified with real-world data.
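    The stated scaling relations can be sketched numerically. The function below is only an illustration of the proportional and power-law relations quoted above; the baseline accuracy and resolution values are hypothetical, not taken from the study:

```python
def improved_metrics(gravity_acc, resolution, m):
    """Scale a baseline gravity-anomaly accuracy and spatial resolution
    for an m-fold improvement in altimeter accuracy, per the relations
    above: accuracy improves m-fold, resolution m**0.4644-fold."""
    return gravity_acc / m, resolution / m ** 0.4644

# Hypothetical baseline: 4 mGal accuracy, 20 km resolution,
# with the altimeter made twice as accurate (m = 2).
acc, res = improved_metrics(4.0, 20.0, m=2.0)
```

    For m = 2 the gravity anomaly accuracy halves (4 to 2 mGal), while resolution improves only by a factor of 2^0.4644, about 1.38.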

  15. Physician involvement enhances coding accuracy to ensure national standards: an initiative to improve awareness among new junior trainees.

    Science.gov (United States)

    Nallasivan, S; Gillott, T; Kamath, S; Blow, L; Goddard, V

    2011-06-01

    Record Keeping Standards is a development led by the Royal College of Physicians of London (RCP) Health Informatics Unit and funded by the National Health Service (NHS) Connecting for Health. A supplementary report produced by the RCP makes a number of recommendations based on a study held at an acute hospital trust. We audited medical notes and coding to assess accuracy and documentation by the junior doctors, and to correlate our findings with the RCP audit. Northern Lincolnshire & Goole Hospitals NHS Foundation Trust has 114,000 'finished consultant episodes' per year. A total of 100 consecutive medical (50) and rheumatology (50) discharges from Diana Princess of Wales Hospital from August to October 2009 were reviewed. The results showed an improvement in coding accuracy (10% errors), comparable to the RCP audit, but with 5% documentation errors. Physician involvement needs to be enhanced to improve effectiveness and ensure clinical safety.

  16. A nondispersive X-ray spectrometer with dead time correction of great accuracy

    International Nuclear Information System (INIS)

    Guillon, H.; Friant, A.

    1976-01-01

    Processing the analog signals from an energy-dispersive X-ray spectrometer requires a great number of functions to be assembled. Instead of using function modules, it was decided to build a single unit that works out digital input data for the mini-computer from the signals delivered by the Si(Li) detector. The unit contains six cards providing the following functions: main amplifier, threshold-level stabilizer and pile-up detector, amplitude encoder, pulse generator and fast amplifier, chronometer with dead-time correction, and high-voltage polarization [fr]

  17. Test Expectancy Affects Metacomprehension Accuracy

    Science.gov (United States)

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  18. Secondary Signs May Improve the Diagnostic Accuracy of Equivocal Ultrasounds for Suspected Appendicitis in Children

    Science.gov (United States)

    Partain, Kristin N.; Patel, Adarsh; Travers, Curtis; McCracken, Courtney; Loewen, Jonathan; Braithwaite, Kiery; Heiss, Kurt F.; Raval, Mehul V.

    2016-01-01

    Introduction: Ultrasound (US) is the preferred imaging modality for evaluating appendicitis. Our purpose was to determine if including secondary signs (SS) improves diagnostic accuracy in equivocal US studies. Methods: Retrospective review identified 825 children presenting with concern for appendicitis and with a right lower quadrant (RLQ) US. Regression models identified which SS were associated with appendicitis. Test characteristics were demonstrated. Results: 530 patients (64%) had equivocal US reports. Of the 114 (22%) patients with equivocal US undergoing CT, those with SS were more likely to have appendicitis (48.6% vs 14.6%), as were those with SS among the equivocal group overall (61.0% vs 33.6%). SS associated with appendicitis included fluid collection (adjusted odds ratio (OR) 13.3, 95% confidence interval (CI) 2.1–82.8), hyperemia (OR 2.0, 95% CI 1.5–95.5), free fluid (OR 9.8, 95% CI 3.8–25.4), and appendicolith (OR 7.9, 95% CI 1.7–37.2). Wall thickness, bowel peristalsis, and echogenic fat were not associated with appendicitis. Equivocal US that included hyperemia, a fluid collection, or an appendicolith had 96% specificity and 88% accuracy. Conclusion: Use of SS in RLQ US improves the diagnostic accuracy for appendicitis. SS may guide clinicians and reduce unnecessary CT and admissions. PMID:27039121

  19. Classification and Accuracy Assessment for Coarse Resolution Mapping within the Great Lakes Basin, USA

    Science.gov (United States)

    This study applied a phenology-based land-cover classification approach across the Laurentian Great Lakes Basin (GLB) using time-series data consisting of 23 Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) composite images (250 ...

  20. Geoid undulation accuracy

    Science.gov (United States)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface against which the actual ocean surface can be compared, with the differences carrying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  1. Accuracy improvement of CT reconstruction using tree-structured filter bank

    International Nuclear Information System (INIS)

    Ueda, Kazuhiro; Morimoto, Hiroaki; Morikawa, Yoshitaka; Murakami, Junichi

    2009-01-01

    An accuracy improvement to 'CT reconstruction using a tree-structured filter bank (TSFB)', a high-speed CT reconstruction algorithm, is proposed. The TSFB method greatly reduces the amount of computation compared with the convolution backprojection (CB) method, but artifacts appeared in the reconstructed image because stage processing disregarded signals outside the reconstruction domain. In addition, the all-band filter that forms part of the two-dimensional synthesis filter is an IIR filter, which produced artifacts at the edge of the reconstructed image. To suppress these artifacts, the proposed method enlarges the TSFB processing range into the region outside the reconstruction domain by controlling the width of the sample lines and adding lines outside the reconstruction domain. Furthermore, to avoid increasing the amount of computation, the algorithm determines the required processing range from the number of TSFB processing stages and the slope of the filter, and then updates the position and width of the sample lines to process only that range. Simulations show that the proposed method improves the quality of the reconstructed image compared with the TSFB method and achieves results equivalent to the CB method, thus realizing fast and highly accurate CT reconstruction. (T. Tanaka)

  2. The Single-Molecule Centroid Localization Algorithm Improves the Accuracy of Fluorescence Binding Assays.

    Science.gov (United States)

    Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip

    2018-03-13

    Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.
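    The core operation named in the abstract, centroid localization of a fluorescence spot, is just an intensity-weighted mean of pixel coordinates. A minimal sketch of that idea (a generic implementation, not the authors' analysis routine):

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (row, col) of a fluorescence spot image."""
    img = np.asarray(image, dtype=float)
    rows, cols = np.indices(img.shape)   # coordinate grids matching the image
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

# A symmetric 3x3 spot: the centroid falls on the central pixel (1, 1).
r, c = centroid([[0, 1, 0], [1, 4, 1], [0, 1, 0]])
```

    In a binding assay this sub-pixel position lets nearby spots be told apart, which is how overlapping receptors and nonspecific binding events can be flagged during analysis.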

  3. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    Science.gov (United States)

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  4. Improving accuracy of breast cancer biomarker testing in India

    Directory of Open Access Journals (Sweden)

    Tanuja Shet

    2017-01-01

    Full Text Available There is a global mandate, even in countries with low resources, to improve the accuracy of testing biomarkers in breast cancer, viz. oestrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2neu), given their critical impact on the management of patients. The steps taken include compulsory participation in an external quality assurance (EQA) programme, centralized testing, and regular performance audits for laboratories. This review addresses the status of ER/PR and HER2neu testing in India and possible reasons for the delay in development of guidelines and a mandate for testing in the country. The chief cause of erroneous ER and PR testing in India continues to be easily correctable issues such as fixation and antigen retrieval, while for HER2neu testing, it is the use of low-cost non-validated antibodies and interpretative errors. These deficiencies can, however, be rectified by (i) distributing accountability and responsibility to surgeons and oncologists, (ii) certifying centres for testing in oncology, and (iii) initiating a national EQA system (EQAS) programme that will help with economical solutions, identify centres of excellence, and instil a system for reprimanding poorly performing laboratories.

  5. Boosted classification trees result in minor to modest improvement in the accuracy in classifying cardiovascular outcomes compared to conventional classification trees

    Science.gov (United States)

    Austin, Peter C; Lee, Douglas S

    2011-01-01

    Purpose: Classification trees are increasingly being used to classifying patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their limited predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements to sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity, but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than gains in performance observed in the data mining literature. PMID:22254181
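    The boosting scheme the abstract describes, reweighting misclassified subjects each round and combining the trees by a weighted vote, can be sketched with one-dimensional decision stumps standing in for full classification trees. This is a generic AdaBoost illustration under that simplification, not the authors' implementation:

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost with 1-D threshold stumps; labels y are in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))        # uniform initial subject weights
    ensemble = []
    for _ in range(n_rounds):
        best = None                           # find the lowest weighted-error stump
        for thr in np.unique(X):
            for sign in (1, -1):
                pred = sign * np.where(X >= thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, sign, pred)
        err, thr, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # guard the log
        alpha = 0.5 * np.log((1 - err) / err)  # stump's vote weight
        w *= np.exp(-alpha * y * pred)         # upweight misclassified subjects
        w /= w.sum()
        ensemble.append((alpha, thr, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote over all stumps in the sequence."""
    score = sum(a * s * np.where(X >= t, 1, -1) for a, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

# Toy data: outcome flips at X = 3.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([-1, -1, -1, 1, 1, 1])
ensemble = adaboost_stumps(X, y, n_rounds=5)
```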

  6. Accuracy of subcutaneous continuous glucose monitoring in critically ill adults: improved sensor performance with enhanced calibrations.

    Science.gov (United States)

    Leelarathna, Lalantha; English, Shane W; Thabit, Hood; Caldwell, Karen; Allen, Janet M; Kumareswaran, Kavita; Wilinska, Malgorzata E; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L; Burnstein, Rowan; Hovorka, Roman

    2014-02-01

    Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) readings were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m²; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6-19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1-6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. In total, 1,060 CGM-ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration, performed at a median (interquartile range) interval of 169 (122-213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001) and the percentage of points in Clarke error grid zone A was higher (87.8% vs. 70.2%). Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non-critical care settings. Further significant improvements in accuracy may be obtained by frequent calibrations with ABG measurements.
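    The accuracy metric used above, absolute relative deviation between paired CGM and reference readings, is straightforward to compute. A minimal sketch with hypothetical paired values (not the trial's data):

```python
def abs_relative_deviation(cgm, ref):
    """Mean and median absolute relative deviation (%) between paired
    CGM readings and reference (e.g. arterial blood glucose) values."""
    devs = sorted(abs(c - r) / r * 100.0 for c, r in zip(cgm, ref))
    n = len(devs)
    mid = n // 2
    median = devs[mid] if n % 2 else 0.5 * (devs[mid - 1] + devs[mid])
    return sum(devs) / n, median

# Hypothetical paired readings in mmol/L:
mean_ard, median_ard = abs_relative_deviation([5.5, 9.0, 7.6], [5.0, 10.0, 8.0])
```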

  7. Improving the accuracy of ionization chamber dosimetry in small megavoltage x-ray fields

    Science.gov (United States)

    McNiven, Andrea L.

    The dosimetry of small x-ray fields is difficult, but important, in many radiation therapy delivery methods. The accuracy of ion chambers for small-field applications, however, is limited by the relatively large size of the chamber with respect to the field size, leading to partial volume effects, lateral electronic disequilibrium, and calibration difficulties. The goal of this dissertation was to investigate the use of ionization chambers for dosimetry in small megavoltage photon beams, with the aim of improving clinical dose measurements in stereotactic radiotherapy and helical tomotherapy. A new method for the direct determination of the sensitive volume of small-volume ion chambers using micro computed tomography (μCT) was investigated using four nominally identical small-volume (0.56 cm³) cylindrical ion chambers. Agreement between their measured relative volumes and ionization measurements (within 2%) demonstrated the feasibility of volume determination through μCT. Cavity-gas calibration coefficients were also determined, demonstrating the promise of accurate ion chamber calibration based partially on μCT. The accuracy of relative dose factor measurements in 6 MV stereotactic x-ray fields (5 to 40 mm diameter) was investigated using a set of prototype plane-parallel ionization chambers (diameters of 2, 4, 10 and 20 mm). Chamber- and field-size-specific correction factors (CSFQ) that account for perturbation of the secondary electron fluence were calculated using Monte Carlo simulation methods (BEAM/EGSnrc). These correction factors (e.g., CSFQ = 1.76 for the 2 mm chamber in a 5 mm field) allow accurate relative dose factor (RDF) measurement when applied to ionization readings under conditions of electronic disequilibrium. With respect to the dosimetry of helical tomotherapy, a novel application of the ion chambers was developed to characterize the fan beam size and effective dose rate. Characterization was based on an adaptation of the

  8. Great improvement in pseudocapacitor properties of nickel hydroxide via simple gold deposition

    Science.gov (United States)

    Kim, Sun-I.; Thiyagarajan, Pradheep; Jang, Ji-Hyun

    2014-09-01

    In this letter, we report a facile approach to improve the capacitor properties of nickel hydroxide (Ni(OH)2) by simply coating gold nanoparticles (Au NPs) on the surface of Ni(OH)2. Au NP-deposited Ni(OH)2 (Au/Ni(OH)2) has been prepared by application of a conventional colloidal coating of Au NPs on the surface of 3D-Ni(OH)2 synthesized via a hydrothermal method. Compared with pristine Ni(OH)2, Au/Ni(OH)2 shows a 41% enhanced capacitance value, excellent rate capacitance behavior at high current density conditions, and greatly improved cycling stability for supercapacitor applications. The specific capacitance of Au/Ni(OH)2 reached 1927 F g-1 at 1 A g-1, which is close to the theoretical capacitance, and retained 66% and 80% of the maximum value at a high current density of 20 A g-1 and after 5000 cycles, while that of pristine Ni(OH)2 was 1363 F g-1 and significantly decreased to 48% and 30%, respectively, under the same conditions. The outstanding performance of Au/Ni(OH)2 as a supercapacitor is attributed to the presence of metal Au NPs on the surface of semiconductor Ni(OH)2; this permits the creation of virtual 3D conducting networks via metal/semiconductor contact, which induces fast electron and ion transport by acting as a bridge between Ni(OH)2 nanostructures, thus eventually leading to significantly improved electrochemical capacitive behaviors, as confirmed by the EIS and I-V characteristic data.

  9. Improving the accuracy of Møller-Plesset perturbation theory with neural networks

    Science.gov (United States)

    McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.

    2017-10-01

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.

  10. Understanding the delayed-keyword effect on metacomprehension accuracy.

    Science.gov (United States)

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  11. METACOGNITIVE SCAFFOLDS IMPROVE SELF-JUDGMENTS OF ACCURACY IN A MEDICAL INTELLIGENT TUTORING SYSTEM.

    Science.gov (United States)

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S

    2014-03-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were selected from the domain of Nodular and Diffuse Dermatitides. Both groups worked with a version of SlideTutor that provided immediate feedback on their actions for two hours before proceeding to solve cases in either the Considering Alternatives or Playback condition. No immediate feedback was provided on actions performed by participants in the scaffolding mode. Measurements included learning gains (pre-test and post-test), as well as metacognitive performance, including Goodman-Kruskal Gamma correlation, bias, and discrimination. Results showed that participants in both conditions improved significantly in terms of their diagnostic scores from pre-test to post-test. More importantly, participants in the Considering Alternatives condition outperformed those in the Playback condition in the accuracy of their confidence judgments and the discrimination of the correctness of their assertions while solving cases. The results suggested that presenting participants with their diagnostic decision paths and highlighting correct and incorrect paths helps them to become more metacognitively accurate in their confidence judgments.
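    The Goodman-Kruskal gamma mentioned above measures how well confidence ratings track correctness: the excess of concordant over discordant pairs, ignoring ties. A minimal generic implementation (not the study's analysis code):

```python
def gk_gamma(confidence, correctness):
    """Goodman-Kruskal gamma between confidence ratings and correctness
    (e.g. 0/1): (concordant - discordant) / (concordant + discordant)."""
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            # Positive product: the pair is ordered the same way on both
            # variables (concordant); negative: opposite (discordant);
            # zero: tied on at least one variable, so it is skipped.
            d = (confidence[i] - confidence[j]) * (correctness[i] - correctness[j])
            if d > 0:
                concordant += 1
            elif d < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Confidence rises exactly with correctness: perfect monitoring, gamma = 1.
g = gk_gamma([1, 2, 3, 4], [0, 0, 1, 1])
```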

  12. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    OpenAIRE

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spec...

  13. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    … dictionary plays a key role in the speech recognition accuracy. … A sophisticated microphone is used for recording the speech corpus in a noise-free environment. … word error rate (WER) and error rate are calculated as follows: …

  14. INFLUENCE OF STRUCTURE COMPONENTS ON MACHINE TOOL ACCURACY

    Directory of Open Access Journals (Sweden)

    ConstantinSANDU

    2017-11-01

    Full Text Available For machine tools, the parts of the machine tool structure (after roughing) should be subjected to stress relief and natural or artificial ageing. The accuracy currently achieved in machine tools, in terms of linearity or flatness, has been above 5 μm/m; below this value great difficulties arise. When manufacturing the structural parts of machine tools to a flatness and linearity accuracy of about 2 μm/m, significant form deviations of the semi-finished parts are observed. This article deals with the influence of form errors of semi-finished and machined parts on their final shape, and especially with what happens to the machine tool structure when its components are assembled.

  15. A simple and efficient methodology to improve geometric accuracy in gamma knife radiation surgery: implementation in multiple brain metastases.

    Science.gov (United States)

    Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis

    2014-12-01

    To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, "average" image data are compounded from the data acquired with the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact on dose delivery accuracy was assessed in a series of 15 patients with a total of 96 relatively small targets. Due to distortion-related uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Assessing the accuracy of TDR-based water leak detection system

    Science.gov (United States)

    Fatemi Aghda, S. M.; GanjaliPour, K.; Nabiollahi, K.

    2018-03-01

    The use of TDR systems to detect leakage locations in underground pipes has developed in recent years. In such a system, a bi-wire is installed in parallel with the underground pipes and acts as a TDR sensor. This approach largely overcomes the limitations of the traditional acoustic leak-positioning method. TDR-based leak detection is relatively accurate when the TDR sensor is in contact with water at just one point, and researchers have been working to improve its accuracy in recent years. In this study, the ability of the TDR method was evaluated when multiple leakage points appear simultaneously. For this purpose, several laboratory tests were conducted. In these tests, to simulate leakage points, the TDR sensor was put in contact with water at several points, and the number and size of the simulated leakage points were then gradually increased. The results showed that as the number and size of the leakage points increase, the error rate of the TDR-based water leak detection system increases. Based on the laboratory results, the authors developed a method to improve the accuracy of TDR-based leak detection systems: they defined a few reference points on the TDR sensor. These points were created by increasing the distance between the two conductors of the TDR sensor and were easily identifiable in the TDR waveform. The tests were repeated using the TDR sensor with reference points, and an equation based on the reference points was developed to calculate the exact distance of the leakage point. A comparison between the results of both tests (with and without reference points) showed that the method and equation developed by the authors can significantly improve the accuracy of positioning the leakage points.
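    TDR positioning rests on a simple time-of-flight relation: the distance to an impedance change (the leak point) is half the round-trip time of the reflected pulse times the propagation speed in the sensor cable. A minimal sketch; the velocity factor is an assumed cable property, not a value from this study:

```python
def tdr_leak_distance(reflection_time_ns, velocity_factor=0.66):
    """Distance (m) from the TDR unit to an impedance change, given the
    round-trip reflection time in nanoseconds and the cable's velocity
    factor (fraction of the speed of light; 0.66 is an assumed value)."""
    c = 299_792_458.0                 # speed of light in vacuum, m/s
    v = velocity_factor * c           # propagation speed in the bi-wire sensor
    return v * reflection_time_ns * 1e-9 / 2.0  # halve: out-and-back travel

# A 100 ns echo on a 0.66-velocity-factor cable: roughly 9.9 m to the leak.
d = tdr_leak_distance(100.0)
```

    The reference points the authors add to the sensor serve as distance calibration marks on exactly this time axis, reducing the error that accumulates when the velocity factor varies along the cable.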

  17. Assessing the accuracy of TDR-based water leak detection system

    Directory of Open Access Journals (Sweden)

    S.M. Fatemi Aghda

    2018-03-01

    Full Text Available The use of TDR systems to detect leakage locations in underground pipes has developed in recent years. In such a system, a bi-wire is installed in parallel with the underground pipes and acts as a TDR sensor. This approach largely overcomes the limitations of the traditional acoustic leak-positioning method. TDR-based leak detection is relatively accurate when the TDR sensor is in contact with water at just one point, and researchers have been working to improve its accuracy in recent years. In this study, the ability of the TDR method was evaluated when multiple leakage points appear simultaneously. For this purpose, several laboratory tests were conducted. In these tests, to simulate leakage points, the TDR sensor was put in contact with water at several points, and the number and size of the simulated leakage points were then gradually increased. The results showed that as the number and size of the leakage points increase, the error rate of the TDR-based water leak detection system increases. Based on the laboratory results, the authors developed a method to improve the accuracy of TDR-based leak detection systems: they defined a few reference points on the TDR sensor. These points were created by increasing the distance between the two conductors of the TDR sensor and were easily identifiable in the TDR waveform. The tests were repeated using the TDR sensor with reference points, and an equation based on the reference points was developed to calculate the exact distance of the leakage point. A comparison between the results of both tests (with and without reference points) showed that the method and equation developed by the authors can significantly improve the accuracy of positioning the leakage points. Keywords: Multiple leakage points, TDR, Reference points

  18. Intellijoint HIP®: a 3D mini-optical navigation tool for improving intraoperative accuracy during total hip arthroplasty

    Directory of Open Access Journals (Sweden)

    Paprosky WG

    2016-11-01

    Full Text Available Wayne G Paprosky,1,2 Jeffrey M Muir3 1Department of Orthopedics, Section of Adult Joint Reconstruction, Department of Orthopedics, Rush University Medical Center, Rush–Presbyterian–St Luke’s Medical Center, Chicago, 2Central DuPage Hospital, Winfield, IL, USA; 3Intellijoint Surgical, Inc, Waterloo, ON, Canada Abstract: Total hip arthroplasty is an increasingly common procedure used to address degenerative changes in the hip joint due to osteoarthritis. Although generally associated with good results, among the challenges associated with hip arthroplasty are accurate measurement of biomechanical parameters such as leg length, offset, and cup position, discrepancies of which can lead to significant long-term consequences such as pain, instability, neurological deficits, dislocation, and revision surgery, as well as patient dissatisfaction and, increasingly, litigation. Current methods of managing these parameters are limited, with manual methods such as outriggers or calipers being used to monitor leg length; however, these are susceptible to small intraoperative changes in patient position and are therefore inaccurate. Computer-assisted navigation, while offering improved accuracy, is expensive and cumbersome, in addition to adding significantly to procedural time. To address the technological gap in hip arthroplasty, a new intraoperative navigation tool (Intellijoint HIP® has been developed. This innovative, 3D mini-optical navigation tool provides real-time, intraoperative data on leg length, offset, and cup position and allows for improved accuracy and precision in component selection and alignment. Benchtop and simulated clinical use testing have demonstrated excellent accuracy, with the navigation tool able to measure leg length and offset to within <1 mm and cup position to within <1° in both anteversion and inclination. This study describes the indications, procedural technique, and early accuracy results of the Intellijoint HIP

  19. Improved mass-measurement accuracy using a PNB Load Cell Scale

    International Nuclear Information System (INIS)

    Suda, S.; Pontius, P.; Schoonover, R.

    1981-08-01

    The PNB Load Cell Scale is a Preloaded, Narrow-Band calibration mass comparator. It consists of (1) a frame and servo-mechanism that maintains a preload tension on the load cell until the load, an unknown mass, is sensed, and (2) a null-balance digital instrument that suppresses the cell response associated with the preload, thereby improving the precision and accuracy of the measurements. Ideally, the objects used to set the preload should be replica mass standards that closely approximate the density and mass of the unknowns. The advantages of the PNB scale are an expanded output signal over the range of interest, which increases both sensitivity and resolution, and minimization of the transient effects associated with the loading of load cells. An area of immediate and practical application of this technique to nuclear material safeguards is the weighing of UF6 cylinders, where in-house mass standards are currently available and where the mass values are typically assigned on the basis of comparison weighings. Several prototype versions of the PNB scale have been assembled at the US National Bureau of Standards. A description of the instrumentation, principles of measurement, and applications is presented in this paper
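
The comparison-weighing idea can be sketched as follows: the unknown's mass is the replica standard's mass plus the scaled difference of the suppressed-range readings. The linear model, the names and the numbers here are illustrative assumptions, not the NBS instrument's actual calibration:

```python
def unknown_mass(m_standard, reading_standard, reading_unknown, sensitivity):
    """Comparison weighing on a suppressed (narrow-band) scale: mass of the
    unknown = mass of the replica standard + reading difference / sensitivity.
    All parameter names and the linear model are assumptions for illustration."""
    return m_standard + (reading_unknown - reading_standard) / sensitivity

# e.g. a 1000 g replica standard, readings in counts, 2 counts per gram
print(unknown_mass(1000.0, 0.120, 0.480, 2.0))  # -> 1000.18
```

Because only the small difference from the preload is digitized, the full resolution of the instrument is spent on the range of interest.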

  20. Machine learning improves the accuracy of myocardial perfusion scintigraphy results

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Objective: Machine learning (ML), an artificial intelligence method, has in the last decade proved to be a useful tool in many fields of decision making, including some fields of medicine. By published reports, its decision accuracy usually exceeds that of human experts. Aim: To assess the applicability of ML in interpreting stress myocardial perfusion scintigraphy results in the coronary artery disease diagnostic process. Patients and methods: Data from 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. By comparison with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were then digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. Since ML can handle any number of attributes simultaneously, all available disease-related data (history, habitus, risk factors, stress results) were added. The sensitivity, specificity and accuracy of scintigraphy were computed again in this way, and the results of the two decision procedures were compared. Conclusion: Using the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could thus be an important tool for myocardial perfusion scintigraphy decision making
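
The decision rule of a naive Bayesian classifier, as used in the study, can be illustrated with a minimal sketch on toy data (the feature names, labels and smoothing scheme below are illustrative assumptions, not the study's actual model or patient data):

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Fit a categorical naive Bayes model: class priors plus per-feature
    value counts, later smoothed with add-one (Laplace) smoothing."""
    prior = Counter(labels)
    cond = defaultdict(Counter)  # (feature_index, label) -> value counts
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
    return prior, cond, len(labels)

def predict_nb(prior, cond, n, row):
    """Return the label maximising log P(y) + sum_i log P(x_i | y)."""
    best, best_lp = None, -math.inf
    for y, cy in prior.items():
        lp = math.log(cy / n)
        for i, v in enumerate(row):
            counts = cond[(i, y)]
            lp += math.log((counts[v] + 1) / (cy + len(counts) + 1))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Toy example: (scan result, risk factor) -> diagnosis
X = [("abnormal", "high"), ("abnormal", "low"),
     ("normal", "low"), ("normal", "high")]
y = ["cad", "cad", "healthy", "healthy"]
prior, cond, n = train_nb(X, y)
print(predict_nb(prior, cond, n, ("abnormal", "high")))  # -> cad
```

The naive independence assumption is what lets the classifier absorb "any number" of clinical attributes at once, as the abstract notes.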

  1. Travel-time source-specific station correction improves location accuracy

    Science.gov (United States)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times can shift computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may determine whether a possible On Site Inspection (OSI) is triggered. In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that epicenter mislocations of the order of 10-20 km, as well as larger mislocations in hypocentral depth, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
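
In its simplest form, a source-specific station correction is the mean travel-time residual (observed minus model-predicted) over well-located calibration events, which is then subtracted from future observations at that station. This is a minimal sketch with hypothetical numbers, not the paper's full procedure:

```python
def station_correction(residuals):
    """Source-specific station correction: mean travel-time residual
    (observed minus model-predicted, in seconds) over calibration events."""
    return sum(residuals) / len(residuals)

def corrected_travel_time(observed, correction):
    """Apply the correction to a newly observed travel time."""
    return observed - correction

# hypothetical residuals at one station for one source region (seconds)
res = [0.8, 1.1, 0.9, 1.2]
c = station_correction(res)
print(c)  # -> 1.0
print(corrected_travel_time(65.0, c))  # -> 64.0
```

Relocating with corrected times removes the systematic bias that a 1-D global model such as IASPEI91 cannot capture.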

  2. Microbiological data, but not procalcitonin, improve the accuracy of the clinical pulmonary infection score.

    Science.gov (United States)

    Jung, Boris; Embriaco, Nathalie; Roux, François; Forel, Jean-Marie; Demory, Didier; Allardet-Servent, Jérôme; Jaber, Samir; La Scola, Bernard; Papazian, Laurent

    2010-05-01

    Early and adequate treatment of ventilator-associated pneumonia (VAP) is mandatory to improve the outcome. The aim of this study was to evaluate, in medical ICU patients, the respective and combined impact of the Clinical Pulmonary Infection Score (CPIS), broncho-alveolar lavage (BAL) Gram staining, endotracheal aspirate and a biomarker (procalcitonin) for the early diagnosis of VAP. This was a prospective, observational study in a medical intensive care unit of a teaching hospital. Over an 8-month period, we prospectively included 57 patients suspected of having 86 episodes of VAP. On the day of suspicion, a BAL, alveolar and serum procalcitonin determinations, and evaluation of the CPIS were performed. Of the 86 BAL performed, 48 were considered positive (cutoff of 10^4 cfu ml^-1). We found no differences in alveolar or serum procalcitonin between VAP and non-VAP patients. Including procalcitonin in the CPIS did not increase its accuracy (55%) for the diagnosis of VAP. The best tests to predict VAP were the modified CPIS (threshold at 6) combined with microbiological data. Indeed, both routinely performed twice-weekly endotracheal aspiration at a threshold of 10^5 cfu ml^-1 and BAL Gram staining improved the pre-test diagnostic accuracy of VAP (77 and 66%, respectively). This study showed that alveolar procalcitonin measured by BAL does not help the clinician to identify VAP. It confirmed that serum procalcitonin is not an accurate marker of VAP. In contrast, microbiological resources available at the time of VAP suspicion (BAL Gram staining, last available endotracheal aspirate), combined or not with the CPIS, are helpful in distinguishing patients with VAP diagnosed by BAL from patients with a negative BAL.
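
The accuracy figures quoted above come from a standard 2x2 comparison against the reference standard (here, quantitative BAL culture). A minimal sketch, with hypothetical counts rather than the study's actual data:

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity and overall accuracy from a 2x2 table of
    test results versus the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# hypothetical counts for 86 suspected episodes (illustrative only)
print(diagnostic_summary(tp=40, fp=12, fn=8, tn=26))
```

Pre-test measures such as the modified CPIS are evaluated by the same arithmetic against the BAL result.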

  3. Climate Change Observation Accuracy: Requirements and Economic Value

    Science.gov (United States)

    Wielicki, Bruce; Cooke, Roger; Golub, Alexander; Baize, Rosemary; Mlynczak, Martin; Lukashin, Constantin; Thome, Kurt; Shea, Yolanda; Kopp, Greg; Pilewskie, Peter; hide

    2016-01-01

    This presentation will summarize a new quantitative approach to determining the required accuracy for climate change observations. Using this metric, most current global satellite observations struggle to meet this accuracy level. CLARREO (Climate Absolute Radiance and Refractivity Observatory) is a new satellite mission designed to resolve this challenge by achieving accuracy advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra. The CLARREO spectrometers can serve as SI-traceable benchmarks for the Global Satellite Intercalibration System (GSICS) and greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar satellite sensors for climate change observations (e.g. CERES, MODIS, VIIRS, CrIS, IASI, Landsat, etc.). A CLARREO Pathfinder mission for flight on the International Space Station is included in the U.S. President's fiscal year 2016 budget, with launch in 2019 or 2020. Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A new study has been carried out to quantify the economic benefits of such an advance and concludes that the economic value is $9 trillion (US); this value includes the cost of carbon emissions reductions.
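
The link between instrument accuracy and trend detection can be illustrated with the textbook standard error of an ordinary-least-squares trend. This is a deliberate simplification (independent annual noise; the CLARREO studies use a fuller error model with autocorrelation and calibration terms):

```python
import math

def trend_stderr(sigma, n_years):
    """Standard error of an OLS linear trend estimated from n annual means
    with independent noise of standard deviation sigma. Simplified: real
    climate records have autocorrelated noise."""
    t_ss = n_years * (n_years ** 2 - 1) / 12.0  # sum of (t - t_mean)^2 for t = 1..n
    return sigma / math.sqrt(t_ss)

# halving the observation noise halves the trend uncertainty at fixed n,
# so a more accurate instrument shortens the time to detect a given trend
print(trend_stderr(0.5, 20) / trend_stderr(1.0, 20))  # -> 0.5
```

Because the trend uncertainty falls only slowly with record length, reducing the observation error is the faster route to detecting decadal change.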

  4. An improved recommendation algorithm via weakening indirect linkage effect

    International Nuclear Information System (INIS)

    Chen Guang; Qiu Tian; Shen Xiao-Quan

    2015-01-01

    We propose an indirect-link-weakened mass diffusion method (IMD), which considers both the indirect linkage and the source-object heterogeneity effect in the mass diffusion (MD) recommendation method. Experimental results on the MovieLens, Netflix, and RYM datasets show that the IMD method greatly improves both the recommendation accuracy and diversity compared with a heterogeneity-weakened MD method (HMD), which considers only the source-object heterogeneity. Moreover, recommendation accuracy for cold objects is also higher with the IMD method than with the HMD method. This suggests that eliminating the redundancy induced by indirect linkages can have a prominent effect on the recommendation efficiency of the MD method. (paper)
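
For context, plain mass diffusion spreads unit resources from the target user's objects to their collectors and back to objects. The sketch below implements only this baseline on a toy bipartite graph; the paper's indirect-link weakening and heterogeneity terms are omitted:

```python
def mass_diffusion_scores(adj, target_user):
    """Plain mass-diffusion (ProbS-style) scores on a user-object bipartite
    graph. adj[u] is the set of objects collected by user u."""
    # Step 1: each object collected by the target user holds one unit of
    # resource, split equally among all users who collected it.
    user_res = {u: 0.0 for u in adj}
    for obj in adj[target_user]:
        collectors = [u for u in adj if obj in adj[u]]
        for u in collectors:
            user_res[u] += 1.0 / len(collectors)
    # Step 2: each user redistributes its resource equally over its objects.
    scores = {}
    for u, r in user_res.items():
        if r == 0:
            continue
        for obj in adj[u]:
            scores[obj] = scores.get(obj, 0.0) + r / len(adj[u])
    return scores

adj = {"u1": {"a", "b"}, "u2": {"b", "c"}, "u3": {"c"}}
print(mass_diffusion_scores(adj, "u1"))
```

For user u1 the only uncollected object is "c" (score 0.25), so it would be the top recommendation; the IMD variant would additionally damp the score contributions arriving via indirect links.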

  5. Improvement in reliability and accuracy of heater tube eddy current testing by integration with an appropriate destructive test

    International Nuclear Information System (INIS)

    Giovanelli, F.; Gabiccini, S.; Tarli, R.; Motta, P.

    1988-01-01

    A specially developed destructive test is described, showing how the reliability and accuracy of a non-destructive technique can be improved when it is accompanied by an appropriate destructive test. The experiment was carried out on samples of AISI 304L tubes from the low-pressure (LP) preheaters of a 900 MW BWR nuclear plant. (author)

  6. SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ng, C; Chan, S; Lee, F; Ngan, R [Queen Elizabeth Hospital (Hong Kong); Lee, V [University of Hong Kong, Hong Kong, HK (Hong Kong)

    2016-06-15

    Purpose: The accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors: first, the accuracy of the CT numbers; second, the dose calculation accuracy. We compared measured dose with dose calculated on CT images reconstructed with FBP and with an artifact reduction algorithm (OMAR, Philips) for a phantom with high-density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed from solid water into which two titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the dose deposited. One plan had normal incidence on the titanium/steel rods; in the second plan, the beam was at almost glancing incidence on the metal rods. Measurements were then compared with dose calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AcurosXB and images reconstructed with OMAR. Doses calculated on the titanium phantom showed better agreement with measurement. Large discrepancies were seen at points directly above and below the high-density inserts. Both AAA and AcurosXB underestimated the dose directly above the metal surface, and overestimated the dose below the metal surface. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When planning treatment for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, large discrepancies between measured and calculated dose were observed at the metal/tissue interface. Care must be taken in estimating the dose in critical structures that come into contact with metals.

  7. A New Image Processing Procedure Integrating PCI-RPC and ArcGIS-Spline Tools to Improve the Orthorectification Accuracy of High-Resolution Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Hongying Zhang

    2016-10-01

    Full Text Available Given the low accuracy of traditional remote sensing image processing software when orthorectifying satellite images covering mountainous areas, and in order to make full use of the mutually compatible and complementary characteristics of the remote sensing image processing tools PCI-RPC (Rational Polynomial Coefficients) and ArcGIS-Spline, this study puts forward a new, operational and effective image processing procedure to improve the accuracy of image orthorectification. The new procedure first processes raw image data into an orthorectified image using PCI with the RPC model (PCI-RPC), and the orthorectified image is then further processed using ArcGIS with the Spline tool (ArcGIS-Spline). We used high-resolution CBERS-02C satellite images (HR1 and HR2 scenes with a pixel size of 2 m) acquired from Yangyuan County in Hebei Province of China to test the procedure. When PCI-RPC and ArcGIS-Spline were used separately to process the HR1/HR2 raw images directly, the orthorectification accuracies (root mean square errors, RMSEs) for the HR1/HR2 images were 2.94 m/2.81 m and 4.65 m/4.41 m, respectively. However, when using our newly proposed procedure, the corresponding RMSEs were reduced to 1.10 m/1.07 m. The experimental results demonstrate that the new image processing procedure, which integrates the PCI-RPC and ArcGIS-Spline tools, can significantly improve image orthorectification accuracy. In practical terms, the new procedure thus has the potential to use existing software products to easily improve image orthorectification accuracy.
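
The RMSE figures quoted above are the root mean square of the planimetric residuals at independent check points. A minimal sketch with hypothetical coordinates:

```python
import math

def rmse(predicted, observed):
    """Planimetric RMSE between orthorectified image coordinates and
    check-point coordinates, each given as a list of (x, y) tuples."""
    sq = [(px - ox) ** 2 + (py - oy) ** 2
          for (px, py), (ox, oy) in zip(predicted, observed)]
    return math.sqrt(sum(sq) / len(sq))

# hypothetical check points (metres)
pred = [(100.0, 200.0), (303.0, 404.0)]
chk = [(101.0, 200.0), (303.0, 400.0)]
print(rmse(pred, chk))
```

Comparing this statistic before and after the ArcGIS-Spline refinement quantifies the improvement the paper reports.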

  8. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    Science.gov (United States)

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.
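
The 300-point H score mentioned above is a standard weighted sum of staining-intensity percentages. A minimal sketch (the function name and example percentages are illustrative):

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """Estrogen-receptor H score: weighted sum of the percentages of cells
    staining weakly (1+), moderately (2+) and strongly (3+); range 0-300."""
    assert 0 <= pct_weak + pct_moderate + pct_strong <= 100
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

print(h_score(20, 30, 10))  # -> 110
```

Because the score mixes proportion and intensity judgements, and the abstract notes raters estimate proportions better than intensities, training tools like STC target exactly this weighting step.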

  9. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    Science.gov (United States)

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  10. Species identification by conservation practitioners using online images: accuracy and agreement between experts

    Directory of Open Access Journals (Sweden)

    Gail E. Austen

    2018-01-01

    Full Text Available Emerging technologies have led to an increase in species observations being recorded via digital images. Such visual records are easily shared and are often uploaded to online communities when help is required to identify or validate species. Although this is common practice, little is known about the accuracy of species identification from such images. Using online images of newts that are native and non-native to the UK, this study asked holders of great crested newt (Triturus cristatus) licences (issued by UK authorities to permit surveying for this species) to sort these images into groups, and to assign species names to those groups. All of these experts identified the native species, but agreement among participants was low, with some being cautious about committing to definitive identifications. Individuals' accuracy was also independent of both their experience and self-assessed ability. Furthermore, mean accuracy was not uniform across species (69-96%). These findings demonstrate the difficulty of accurately identifying newts from a single image, and show that expert judgements are variable, even within the same knowledgeable community. We suggest that identification decisions should be made on multiple images and verified by more than one expert, which could improve the reliability of species data.
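
Agreement among experts can be summarized with a simple pairwise statistic. This sketch uses made-up labels and plain percent agreement; the study's own statistics may differ (e.g. chance-corrected measures such as kappa):

```python
from itertools import combinations

def mean_pairwise_agreement(labels_by_rater):
    """Mean proportion of items on which each pair of raters assigned the
    same species label."""
    agree = []
    for a, b in combinations(labels_by_rater, 2):
        agree.append(sum(x == y for x, y in zip(a, b)) / len(a))
    return sum(agree) / len(agree)

# hypothetical labels from three raters for four images
r1 = ["smooth", "crested", "crested", "palmate"]
r2 = ["smooth", "crested", "smooth", "palmate"]
r3 = ["smooth", "palmate", "crested", "palmate"]
print(mean_pairwise_agreement([r1, r2, r3]))
```

Low values of such a statistic, even among licensed surveyors, are what motivate the paper's call for multiple images and multiple verifiers.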

  11. Improved precision and speed of thermal ionisation mass spectrometry with a multicollector

    International Nuclear Information System (INIS)

    Turner, P.J.; Cantle, J.E.; Haines, R.C.

    1984-01-01

    The analytical accuracy of uranium analysis is limited by fractionation occurring before and during the data acquisition period. With peak-jumping methods, increasing the data measuring period beyond 15 to 20 minutes can result in deteriorating internal precision, because the ratio being measured is changing significantly. In addition, the operator often has the choice of taking poor data when the beam is unstable or decaying rapidly, or waiting for beam stability and measuring a more fractionated ratio to higher precision. Either way, measurement accuracy is reduced. Simultaneous measurement of the ion beams using a multicollector greatly eases these difficulties. Multicollector analysis offers the following advantages for uranium analysis: (1) greatly reduced analysis time; (2) improved internal errors during data acquisition, allowing (a) smaller samples to be measured, and (b) greater dynamic ratios to be measured before internal errors become comparable to the external errors; (3) short data acquisition time, giving better results on rapidly fractionating samples; (4) greater tolerance of unstable and rapidly decaying beams, resulting in fewer ''unsatisfactory'' analyses. 4 figs

  12. Forecasting space weather: Can new econometric methods improve accuracy?

    Science.gov (United States)

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and the neural net yield similar results. The neural net does best when it includes measures of the long-term component in the data.
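
One simple flavour of frequency-domain forecasting keeps only the dominant Fourier harmonics of the series and extends that fit forward. This toy sketch is a stand-in for the paper's (unspecified) frequency-domain algorithm, demonstrated on a synthetic periodic series:

```python
import numpy as np

def harmonic_forecast(series, horizon, n_harmonics=3):
    """Toy frequency-domain forecast: keep the mean and the n largest
    Fourier harmonics of the series, then extend the fit periodically."""
    n = len(series)
    spec = np.fft.rfft(series)
    keep = np.argsort(np.abs(spec[1:]))[-n_harmonics:] + 1  # skip the DC bin
    mask = np.zeros_like(spec)
    mask[0] = spec[0]            # always keep the mean
    mask[keep] = spec[keep]
    fitted = np.fft.irfft(mask, n=n)
    future = np.arange(n, n + horizon)
    return fitted[future % n]

# synthetic daily "index" with a 16-day cycle
t = np.arange(64)
x = 10 + np.sin(2 * np.pi * t / 16)
fc = harmonic_forecast(x, 16)
```

For a genuinely periodic signal like this one the extrapolation is essentially exact; real geomagnetic indices mix periodic and stochastic components, which is why the paper combines frequency-domain fits with other models.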

  13. Multi-scale hippocampal parcellation improves atlas-based segmentation accuracy

    Science.gov (United States)

    Plassard, Andrew J.; McHugo, Maureen; Heckers, Stephan; Landman, Bennett A.

    2017-02-01

    Known for its distinct role in memory, the hippocampus is one of the most studied regions of the brain. Recent advances in magnetic resonance imaging have allowed for high-contrast, reproducible imaging of the hippocampus. Typically, a trained rater takes 45 minutes to manually trace the hippocampus and delineate the anterior from the posterior segment at millimeter resolution. As a result, there has been a significant desire for automated and robust segmentation of the hippocampus. In this work we use a population of 195 atlases based on T1-weighted MR images with the left and right hippocampus delineated into the head and body. We initialize the multi-atlas segmentation to a region directly around each lateralized hippocampus to both speed up and improve the accuracy of registration. This initialization allows for incorporation of nearly 200 atlases, an accomplishment which would typically involve hundreds of hours of computation per target image. The proposed segmentation results in a Dice similarity coefficient over 0.9 for the full hippocampus. This result outperforms a multi-atlas segmentation using the BrainCOLOR atlases (Dice 0.85) and FreeSurfer (Dice 0.75). Furthermore, the head and body delineation resulted in a Dice coefficient over 0.87 for both structures. The head and body volume measurements also show high reproducibility on the Kirby 21 reproducibility population (R² greater than 0.95). These results demonstrate the potential to develop a robust tool for measurement of the hippocampus and other temporal lobe structures.
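
The Dice similarity coefficient used above compares two binary segmentation masks as twice their intersection over the sum of their sizes. A minimal sketch with tiny made-up masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# toy 2x3 masks: automated vs manual segmentation
auto = np.array([[0, 1, 1], [0, 1, 0]])
manual = np.array([[0, 1, 0], [0, 1, 1]])
print(dice(auto, manual))
```

A Dice value of 1 means perfect overlap; the paper's values above 0.9 for the full hippocampus indicate near-manual quality.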

  14. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting growing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty, caused by many factors among which the beam hardening (BH) effect plays a vital role, severely limit the further utilization of CT for dimensional metrology. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility. (paper)
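
The cost function's core ingredient, the gray entropy of a reconstructed volume, is easy to sketch. The exponential correction below is a hypothetical form for illustration; the paper's exact model and penalty term are not reproduced here:

```python
import numpy as np

def gray_entropy(volume, bins=256):
    """Shannon entropy (bits) of the gray-level histogram; the correction
    parameters are chosen to minimise this quantity."""
    hist, _ = np.histogram(volume, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def exponential_correction(projections, alpha):
    """Hypothetical exponential beam-hardening correction applied to raw
    projection data; reduces to the identity as alpha -> 0."""
    return (np.exp(alpha * projections) - 1.0) / alpha

vol = np.array([0.0] * 8 + [1.0] * 8)
print(gray_entropy(vol))  # two equally likely gray levels -> 1.0 bit
```

In practice one would reconstruct the volume from corrected projections over a grid of alpha values and keep the entropy-minimising parameter: BH streaks and cupping smear the histogram, so a sharper (lower-entropy) histogram indicates a better-corrected volume.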

  15. The use of imprecise processing to improve accuracy in weather and climate prediction

    Energy Technology Data Exchange (ETDEWEB)

    Düben, Peter D., E-mail: dueben@atm.ox.ac.uk [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom); McNamara, Hugh [University of Oxford, Mathematical Institute (United Kingdom); Palmer, T.N. [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom)

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce
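
The fault emulation described above can be sketched as random bit flips in the floating-point significand of the computed tendencies, here attached to a minimal Euler step of the Lorenz '96 model. The step scheme, fault model details and parameter values are illustrative assumptions, not the paper's exact setup:

```python
import random
import struct

def bit_flip(x, rate, rng):
    """Emulate a stochastic processor: flip each significand bit of a
    float64 independently with the given probability."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    for i in range(52):  # mantissa bits only; exponent and sign left intact
        if rng.random() < rate:
            bits ^= 1 << i
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def lorenz96_step(x, F=8.0, dt=0.05, fault_rate=0.0, rng=None):
    """One Euler step of the Lorenz '96 model; each tendency is optionally
    passed through the bit-flip fault emulator."""
    n = len(x)
    new = []
    for i in range(n):
        d = (x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + F
        if fault_rate > 0:
            d = bit_flip(d, fault_rate, rng)
        new.append(x[i] + dt * d)
    return new

rng = random.Random(0)
x = [8.0] * 40
x[19] += 0.01  # small perturbation to start the dynamics
for _ in range(10):
    x = lorenz96_step(x, fault_rate=1e-4, rng=rng)
```

Restricting the faults to the small-scale (parametrisation-adjacent) part of a model, as the paper does, keeps the large-scale behaviour intact while saving power.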

  16. The use of imprecise processing to improve accuracy in weather and climate prediction

    International Nuclear Information System (INIS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-01-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and

  17. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimating a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we address is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach, based on the principles of information entropy, for determining the steps to select or compute the source databases from multiple summary databases. We show that the source databases with the largest number of cells in common provide the most accurate estimates, and we prove that this is consistent with maximizing the entropy. We provide experimental results on the accuracy of the target database estimation in order to verify our results.
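    The proxy-based estimation step underlying this approach can be illustrated with a toy sketch. The paper's full algorithm selects among many candidate summary databases using entropy; here only the basic estimation from one primary summary (the target's row totals) and one proxy table is shown, with invented data:

```python
import numpy as np

def estimate_target(primary, proxy):
    """Estimate a 2-D target table from a 1-D primary summary (the target's
    row totals) and a 2-D proxy table sharing the row dimension: each
    primary total is distributed across columns in the proxy's proportions."""
    shares = proxy / proxy.sum(axis=1, keepdims=True)
    return primary[:, None] * shares
```

    By construction the estimate's row sums reproduce the primary summary exactly; accuracy in the remaining cells depends on how well the proxy's proportions match the unobserved target.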

  18. Improvement in Interobserver Accuracy in Delineation of the Lumpectomy Cavity Using Fiducial Markers

    International Nuclear Information System (INIS)

    Shaikh, Talha; Chen Ting; Khan, Atif; Yue, Ning J.; Kearney, Thomas; Cohler, Alan; Haffty, Bruce G.; Goyal, Sharad

    2010-01-01

    Purpose: To determine whether the presence of gold fiducial markers would improve the inter- and intraphysician accuracy in the delineation of the surgical cavity, compared with a matched group of patients who did not receive gold fiducial markers, in the setting of accelerated partial-breast irradiation (APBI). Methods and Materials: Planning CT images of 22 lumpectomy cavities were reviewed in a cohort of 22 patients; 11 patients received four to six gold fiducial markers placed at the time of surgery. Three physicians categorized each seroma cavity according to cavity visualization score criteria and delineated each of the 22 seroma cavities and the clinical target volume. Distance between centers of mass, percentage overlap, and average surface distance for all patients were assessed. Results: The mean seroma volume was 36.9 cm³ for fiducial patients and 34.2 cm³ for non-fiducial patients (p = ns). Fiducial markers improved the mean cavity visualization score from 2.5 ± 1.3 to 3.6 ± 1.0 (p < 0.05). The mean distance between centers of mass, average surface distance, and percentage overlap for the seroma and clinical target volume were significantly improved in the fiducial marker patients as compared with the non-fiducial marker patients (p < 0.001). Conclusions: Gold fiducial markers placed at the time of lumpectomy improve interphysician identification and delineation of the seroma cavity and clinical target volume. This has implications in radiotherapy treatment planning for accelerated partial-breast irradiation and for boost after whole-breast irradiation.

  19. Improved Accuracy of Myocardial Perfusion SPECT for the Detection of Coronary Artery Disease by Utilizing a Support Vector Machines Algorithm

    Science.gov (United States)

    Arsanjani, Reza; Xu, Yuan; Dey, Damini; Fish, Matthews; Dorbala, Sharmila; Hayes, Sean; Berman, Daniel; Germano, Guido; Slomka, Piotr

    2012-01-01

    We aimed to improve the diagnostic accuracy of automatic myocardial perfusion SPECT (MPS) interpretation for prediction of coronary artery disease (CAD) by integrating several quantitative perfusion and functional variables for non-corrected (NC) data using support vector machines (SVM), a machine-learning method. Methods: 957 rest/stress 99mTc gated MPS NC studies from 623 consecutive patients with correlating invasive coronary angiography and 334 with low likelihood of CAD (LLK < 5%) were assessed. Patients with stenosis ≥ 50% in the left main or ≥ 70% in all other vessels were considered abnormal. Total perfusion deficit (TPD) was computed automatically. In addition, ischemic changes (ISCH) and ejection fraction changes (EFC) between stress and rest were derived by quantitative software. The SVM was trained on a group of 125 patients (25 LLK, 25 0-, 25 1-, 25 2-, and 25 3-vessel CAD) using the above quantitative variables and second-order polynomial fitting. The remaining patients (N = 832) were categorized based on probability estimates, with CAD defined as a probability estimate ≥ 0.50. The diagnostic accuracy of SVM was also compared to visual segmental scoring by two experienced readers. Results: Sensitivity of SVM (84%) was significantly better than ISCH (75%, p < 0.05) and EFC (31%, p < 0.05). Specificity of SVM (88%) was significantly better than that of TPD (78%, p < 0.05) and EFC (77%, p < 0.05). Diagnostic accuracy of SVM (86%) was significantly better than TPD (81%), ISCH (81%), or EFC (46%) (p < 0.05 for all). The receiver-operating-characteristic area under the curve (ROC-AUC) for SVM (0.92) was significantly better than TPD (0.90), ISCH (0.87), and EFC (0.60) (p < 0.001 for all). Diagnostic accuracy of SVM was comparable to the overall accuracy of both visual readers (85% vs. 84%, p < 0.05). ROC-AUC for SVM (0.92) was significantly better than that of both visual readers (0.87 and 0.88, p < 0.03). Conclusion: Computational

  20. Collaboration between radiological technologists (radiographers) and junior doctors during image interpretation improves the accuracy of diagnostic decisions

    International Nuclear Information System (INIS)

    Kelly, B.S.; Rainford, L.A.; Gray, J.; McEntee, M.F.

    2012-01-01

    Rationale and Objectives: In Emergency Departments (ED), junior doctors regularly make diagnostic decisions based on radiographic images. This study investigates whether collaboration between junior doctors and radiographers impacts diagnostic accuracy. Materials and Methods: Research was carried out in the ED of a university teaching hospital and included 10 pairs of participants. Radiographers and junior doctors were shown 42 wrist radiographs and 40 CT brains and were asked for their level of confidence in the presence or absence of distal radius fractures or fresh intracranial bleeds, respectively, using ViewDEX software, first working alone and then in pairs. Receiver operating characteristic (ROC) analysis was used to assess performance, and results were compared using one-way analysis of variance. Results: The results showed statistically significant improvements in the area under the curve (AUC) of the junior doctors when working with the radiographers for both sets of images (wrist and CT), treated as random readers and cases (p ≤ 0.008 and p ≤ 0.0026, respectively). While the radiographers' results showed no significant changes, their mean Az values did show an increasing trend when working in collaboration. Conclusion: The improvement in performance of junior doctors following collaboration strongly suggests the potential to improve the accuracy of patient diagnosis and therefore patient care. Further training for junior doctors in the interpretation of diagnostic images should also be considered. The decision making of junior doctors was positively affected by introducing the opinion of a radiographer. Collaboration exceeds the sum of the parts; the two professions are better together.
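    The AUC figure central to this kind of reader study is simple to compute from confidence scores. A minimal sketch via the Mann-Whitney rank identity (assuming no tied scores; ROC software used in such studies handles ties and reader/case variance properly):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen positive case receives a higher
    confidence score than a randomly chosen negative case."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # ranks 1..n, low score = low rank
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```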

  1. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Science.gov (United States)

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
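    The GBLUP model used here can be sketched in a few lines of linear algebra: genetic values are predicted from a genomic relationship matrix built from marker genotypes. The sketch below uses a simplified relationship matrix (no allele-frequency centring) and an assumed heritability, and simulated rather than real rye data:

```python
import numpy as np

def gblup_predict(Z_train, y_train, Z_new, h2=0.5):
    """Minimal GBLUP sketch. Z is an (individuals x markers) genotype matrix
    coded -1/0/1; G = ZZ'/m is a simplified genomic relationship matrix.
    Genetic values of new lines are predicted through their genomic
    relationship with the phenotyped training lines. h2 is an assumed
    heritability setting the shrinkage lambda = (1 - h2) / h2."""
    m = Z_train.shape[1]
    lam = (1.0 - h2) / h2
    G = Z_train @ Z_train.T / m
    mu = y_train.mean()
    # Mixed-model solution: u_hat = G (G + lam I)^-1 (y - mu)
    alpha = np.linalg.solve(G + lam * np.eye(len(y_train)), y_train - mu)
    G_new = Z_new @ Z_train.T / m
    return mu + G_new @ alpha
```

    Aggregating training data across cycles, as the paper recommends, simply means stacking more rows into Z_train and y_train before solving.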

  2. Accuracy Assessment and Analysis for GPT2

    Directory of Open Access Journals (Sweden)

    YAO Yibin

    2015-07-01

    Full Text Available GPT (global pressure and temperature) is a global empirical model commonly used to provide temperature and pressure for the determination of tropospheric delay. GPT has some weaknesses, which have been addressed by a new empirical model named GPT2; it not only improves the accuracy of temperature and pressure but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters. However, no accuracy analysis of GPT2 had been made until now. In this paper, high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of temperature, pressure and water vapor pressure given by GPT2. Testing results show that the mean bias of temperature is -0.59℃ and the average RMS is 3.82℃; the absolute values of the average bias of pressure and water vapor pressure are less than 1 mb; GPT2 pressure has an average RMS of 7 mb, and water vapor pressure no more than 3 mb. Accuracy differs with latitude, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
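    The bias and RMS statistics used for this kind of validation are computed from the model-minus-reference differences. A minimal sketch:

```python
import numpy as np

def bias_and_rms(model, reference):
    """Bias is the mean model-minus-reference difference; RMS is the
    root-mean-square of those differences, as used when validating an
    empirical model such as GPT2 against reanalysis data."""
    d = np.asarray(model, dtype=float) - np.asarray(reference, dtype=float)
    return d.mean(), np.sqrt((d ** 2).mean())
```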

  3. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  4. Optimizing hyaluronidase dose and plasmid DNA delivery greatly improves gene electrotransfer efficiency in rat skeletal muscle

    DEFF Research Database (Denmark)

    Åkerström, Thorbjörn; Vedel, Kenneth; Needham Andersen, Josefine

    2015-01-01

    Transfection of rat skeletal muscle in vivo is a widely used research model. However, gene electrotransfer protocols have been developed for mice and yield variable results in rats. We investigated whether changes in hyaluronidase pre-treatment and plasmid DNA delivery can improve transfection efficiency in rat skeletal muscle. We found that pre-treating the muscle with a hyaluronidase dose suitable for rats (0.56 U/g b.w.) prior to plasmid DNA injection increased transfection efficiency by >200%, whereas timing of the pre-treatment did not affect efficiency. Uniformly distributing plasmid DNA ... with a homogenous distribution. We also show that transfection was stable over five weeks of regular exercise or inactivity. Our findings show that species-specific plasmid DNA delivery and hyaluronidase pre-treatment greatly improve transfection efficiency in rat skeletal muscle.

  5. Accuracy of Digital vs. Conventional Implant Impressions

    Science.gov (United States)

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  6. Open magnetic resonance imaging using titanium-zirconium needles: improved accuracy for interstitial brachytherapy implants?

    International Nuclear Information System (INIS)

    Popowski, Youri; Hiltbrand, Emile; Joliat, Dominique; Rouzaud, Michel

    2000-01-01

    Purpose: To evaluate the benefit of using an open magnetic resonance (MR) machine and new MR-compatible needles to improve the accuracy of brachytherapy implants in pelvic tumors. Methods and Materials: The open MR machine, foreseen for interventional procedures, allows direct visualization of the pelvic structures that are to be implanted. For that purpose, we have developed MR- and CT-compatible titanium-zirconium (Ti-Zr) brachytherapy needles that allow implantations to be carried out under the magnetic field. In order to test the technical feasibility of this new approach, stainless steel (SS) and Ti-Zr needles were first compared in a tissue-equivalent phantom. In a second step, two patients implanted with Ti-Zr needles in the brachytherapy operating room were scanned in the open MR machine. In a third phase, four patients were implanted directly under open MR control. Results: The artifacts induced by both materials were significantly different, strongly favoring the Ti-Zr needles. The implantation in both first patients confirmed the excellent quality of the pictures obtained with the needles in vivo and showed suboptimal implant geometry in both patients. In the next 4 patients, the tumor could be punctured with excellent accuracy, and the adjacent structures could be easily avoided. Conclusion: We conclude that open MR using MR-compatible needles is a very promising tool in brachytherapy, especially for pelvic tumors

  7. A review on the processing accuracy of two-photon polymerization

    Directory of Open Access Journals (Sweden)

    Xiaoqin Zhou

    2015-03-01

    Full Text Available Two-photon polymerization (TPP) is a powerful and promising technology for fabricating true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photonic crystals, place rigorous requirements on the processing accuracy of TPP, including dimensional accuracy, shape accuracy and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basics of TPP micro/nanofabrication, including the mechanism of TPP, experimental set-ups for TPP and scaling laws of the resolution of TPP. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods to improve the processing accuracy by improving the resolution and changing the spatial arrangement of voxels.

  8. A review on the processing accuracy of two-photon polymerization

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xiaoqin; Hou, Yihong [School of Mechanical Science and Engineering, Jilin University, Changchun, 130022 (China); Lin, Jieqiong, E-mail: linjieqiong@mail.ccut.edu.cn [School of Electromechanical Engineering, Changchun University of Technology, Changchun, 130012 (China)

    2015-03-15

    Two-photon polymerization (TPP) is a powerful and promising technology for fabricating true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photonic crystals, place rigorous requirements on the processing accuracy of TPP, including dimensional accuracy, shape accuracy and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basics of TPP micro/nanofabrication, including the mechanism of TPP, experimental set-ups for TPP and scaling laws of the resolution of TPP. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods to improve the processing accuracy by improving the resolution and changing the spatial arrangement of voxels.

  9. Improving accuracy of portion-size estimations through a stimulus equivalence paradigm.

    Science.gov (United States)

    Hausman, Nicole L; Borrero, John C; Fisher, Alyssa; Kahng, SungWoo

    2014-01-01

    The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are warranted. Specifically, interventions that teach individuals to estimate portion sizes correctly without the use of aids may be critical to the success of nutrition education programs. The current study evaluated the use of a stimulus equivalence paradigm to teach 9 undergraduate students to estimate portion size accurately. Results suggested that the stimulus equivalence paradigm was effective in teaching participants to make accurate portion size estimations without aids, and improved accuracy was observed in maintenance sessions that were conducted 1 week after training. Furthermore, 5 of 7 participants estimated the target portion size of novel foods during extension sessions. These data extend existing research on teaching accurate portion-size estimations and may be applicable to populations who seek treatment (e.g., overweight or obese children and adults) to teach healthier eating habits. © Society for the Experimental Analysis of Behavior.

  10. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid.

  11. New possibilities for improving the accuracy of parameter calculations for cascade gamma-ray decay of heavy nuclei

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.; Grigor'ev, E.P.

    2002-01-01

    The level density and radiative strength functions which accurately reproduce the experimental intensity of two- step cascades after thermal neutron capture and the total radiative widths of the compound states were applied to calculate the total γ-ray spectra from the (n,γ) reaction. In some cases, analysis showed far better agreement with experiment and gave insight into possible ways in which these parameters need to be corrected for further improvement of calculation accuracy for the cascade γ-decay of heavy nuclei. (author)

  12. Improved accuracy of component alignment with the implementation of image-free navigation in total knee arthroplasty.

    Science.gov (United States)

    Rosenberger, Ralf E; Hoser, Christian; Quirbach, Sebastian; Attal, Rene; Hennerbichler, Alfred; Fink, Christian

    2008-03-01

    Accuracy of implant positioning and reconstruction of the mechanical leg axis are major requirements for achieving good long-term results in total knee arthroplasty (TKA). The purpose of the present study was to determine whether image-free computer navigation technology has the potential to improve the accuracy of component alignment in TKA cohorts of experienced surgeons immediately and consistently. One hundred patients with primary arthritis of the knee underwent unilateral total knee arthroplasty. A cohort of 50 TKAs implanted with conventional instrumentation was directly followed by a cohort of the very first 50 computer-assisted TKAs. All surgeries were performed by two senior surgeons. All patients received the Zimmer NexGen total knee prosthesis (Zimmer Inc., Warsaw, IN, USA). There was no variability regarding surgeons or surgical technique, except for the use of the navigation system (StealthStation Treon plus; Medtronic Inc., Minneapolis, MN, USA). Accuracy of implant positioning was measured on postoperative long-leg standing radiographs and standard lateral X-rays with regard to the valgus angle and the coronal and sagittal component angles. In addition, preoperative deformities of the mechanical leg axis, tourniquet time, age, and gender were correlated. Statistical analyses were performed using the SPSS 15.0 (SPSS Inc., Chicago, IL, USA) software package. Independent t-tests were used, with significance set at P < 0.05, to compare alignment between the two cohorts. To compare the rate of optimally implanted prostheses between the two groups we used the chi-square test. The average postoperative radiological frontal mechanical alignment was 1.88 degrees of varus (range 6.1 degrees of valgus to 10.1 degrees of varus; SD 3.68 degrees) in the conventional cohort and 0.28 degrees of varus (range 3.7 degrees of valgus to 6.0 degrees of varus; SD 1.97 degrees) in the navigated cohort.
    Including all criteria for optimal implant alignment, 16 cases (32%) in the conventional cohort and 31

  13. Controlled Substance Reconciliation Accuracy Improvement Using Near Real-Time Drug Transaction Capture from Automated Dispensing Cabinets.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin; Gratch, David M; Perino, Michael; Magrann, Jerry

    2016-06-01

    Accurate accounting of controlled drug transactions by inpatient hospital pharmacies is a requirement in the United States under the Controlled Substances Act. At many hospitals, manual distribution of controlled substances from pharmacies is being replaced by automated dispensing cabinets (ADCs) at the point of care. Despite the promise of improved accountability, a high prevalence (15%) of controlled substance discrepancies between ADC records and anesthesia information management systems (AIMS) has been published, with a similar incidence (15.8%; 95% confidence interval [CI], 15.3% to 16.2%) noted at our institution. Most reconciliation errors are clerical. In this study, we describe a method to capture drug transactions in near real-time from our ADCs, compare them with documentation in our AIMS, and evaluate subsequent improvement in reconciliation accuracy. ADC-controlled substance transactions are transmitted to a hospital interface server, parsed, reformatted, and sent to a software script written in Perl. The script extracts the data and writes them to a SQL Server database. Concurrently, controlled drug totals for each patient having care are documented in the AIMS and compared with the balance of the ADC transactions (i.e., vending, transferring, wasting, and returning drug). Every minute, a reconciliation report is available to anesthesia providers over the hospital Intranet from AIMS workstations. The report lists all patients, the current provider, the balance of ADC transactions, the totals from the AIMS, the difference, and whether the case is still ongoing or had concluded. Accuracy and latency of the ADC transaction capture process were assessed via simulation and by comparison with pharmacy database records, maintained by the vendor on a central server located remotely from the hospital network. 
For assessment of reconciliation accuracy over time, data were collected from our AIMS from January 2012 to June 2013 (Baseline), July 2013 to April 2014
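    The per-patient comparison of ADC transaction balances against AIMS totals described above can be illustrated with a minimal Python sketch. The transaction field names and sign conventions here are hypothetical, not those of the authors' Perl and SQL Server implementation:

```python
from collections import defaultdict

def reconcile(adc_txns, aims_totals):
    """Compute, per (patient, drug), the balance of ADC transactions
    (vended minus returned, wasted, and transferred drug) and subtract
    the totals documented in the AIMS. A nonzero difference flags a
    reconciliation discrepancy for the provider to resolve."""
    sign = {'vend': 1, 'return': -1, 'waste': -1, 'transfer': -1}
    balance = defaultdict(float)
    for t in adc_txns:
        balance[(t['patient'], t['drug'])] += sign[t['type']] * t['amount']
    keys = set(balance) | set(aims_totals)
    return {k: balance.get(k, 0.0) - aims_totals.get(k, 0.0) for k in keys}
```

    In the system described, such a report is regenerated every minute and displayed at AIMS workstations for ongoing and completed cases.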

  14. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    Science.gov (United States)

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skilled development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  15. Test expectancy affects metacomprehension accuracy.

    Science.gov (United States)

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). Then, after reading texts and judging their learning, they completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.

  16. Improvement of Accuracy in Flow Immunosensor System by Introduction of Poly-2-[3-(methacryloylamino)propylammonio]ethyl 3-aminopropyl Phosphate

    Directory of Open Access Journals (Sweden)

    Yusuke Fuchiwaki

    2011-01-01

    Full Text Available In order to improve the accuracy of immunosensor systems, poly-2-[3-(methacryloylamino)propylammonio]ethyl 3-aminopropyl phosphate (poly-3MAm3AP), which includes both phosphorylcholine and amino groups, was synthesized and applied to the preparation of antibody-immobilized beads. Acting as an antibody-immobilizing material, poly-3MAm3AP is expected to significantly lower nonspecific adsorption due to the presence of the phosphorylcholine group and to recognize large numbers of analytes due to the increase in antibody-immobilizing sites. The elimination of nonspecific adsorption was compared between the formation of a blocking layer on antibody-immobilized beads and the introduction of a material to combine the antibody with the beads. Determination with specific and nonspecific antibodies was then investigated to estimate the signal-to-noise ratio. Signal intensities with superior signal-to-noise ratios were obtained when poly-3MAm3AP was introduced. This may be due to the increase in antibody-immobilizing sites and the extended space for antigen-antibody interaction resulting from the electrostatic repulsion of poly-3MAm3AP. Thus, the application of poly-3MAm3AP coatings to immunoassay beads was able to improve the accuracy of flow immunosensor systems.

  17. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    International Nuclear Information System (INIS)

    Deeley, M A; Chen, A; Cmelak, A; Malcolm, A; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Datteri, R D; Noble, J; Dawant, B M; Donnelly, E; Moretti, L

    2013-01-01

    Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. (paper)
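    The Dice similarity coefficient used in this evaluation is straightforward to compute; a minimal sketch over binary masks (the function name and the empty-mask convention are ours, not the paper's code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2*|A intersect B| / (|A| + |B|). Two empty masks count as perfect
    agreement by convention."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0
    return float(2.0 * np.logical_and(a, b).sum() / total)
```

    For volumetric contours the masks are simply voxel arrays; the same formula applies slice by slice or over the whole volume.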

  18. Coval: improving alignment quality and variant calling accuracy for next-generation sequencing data.

    Directory of Open Access Journals (Sweden)

    Shunichi Kosugi

    Full Text Available Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in 'targeted' alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/.
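    The error-correction criterion described, combining base quality with allele frequency at non-reference positions, can be illustrated with a toy decision rule (the function and thresholds are hypothetical, not Coval's actual criteria):

```python
def is_likely_error(base_quality, allele_frequency,
                    min_quality=20, min_frequency=0.2):
    """Flag a non-reference base as a probable sequencing error when both
    its Phred base quality and the allele frequency observed at that
    position are low; a true variant is expected to show the alternative
    allele in a consistent fraction of the overlapping reads."""
    return base_quality < min_quality and allele_frequency < min_frequency
```

    A read whose mismatches are all flagged this way would be corrected toward the reference, while a read carrying many high-confidence mismatches would instead be filtered from the alignment.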

  19. Improving the accuracy of brain tumor surgery via Raman-based technology.

    Science.gov (United States)

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W; Sunney Xie, X; Orringer, Daniel A

    2016-03-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors.

  20. Merits of using color and shape differentiation to improve the speed and accuracy of drug strength identification on over-the-counter medicines by laypeople.

    Science.gov (United States)

    Hellier, Elizabeth; Tucker, Mike; Kenny, Natalie; Rowntree, Anna; Edworthy, Judy

    2010-09-01

    This study aimed to examine the utility of using color and shape to differentiate drug strength information on over-the-counter medicine packages. Medication errors are an important threat to patient safety, and confusions between drug strengths are a significant source of medication error. A visual search paradigm required laypeople to search for medicine packages of a particular strength from among distracter packages of different strengths, and measures of reaction time and error were recorded. Using color to differentiate drug strength information conferred an advantage on search times and accuracy. Shape differentiation did not improve search times and had only a weak effect on search accuracy. Using color to differentiate drug strength information improves drug strength identification performance. Color differentiation of drug strength information may be a useful way of reducing medication errors and improving patient safety.

  1. Great Books. What Works Clearinghouse Intervention Report

    Science.gov (United States)

    What Works Clearinghouse, 2011

    2011-01-01

    "Great Books" is a program that aims to improve the reading, writing, and critical thinking skills of students in kindergarten through high school. The program is implemented as a core or complementary curriculum and is based on the Shared Inquiry[TM] method of learning. The purpose of "Great Books" is to engage students in…

  2. Establishment of laser-induced breakdown spectroscopy in a vacuum atmosphere for accuracy improvement

    International Nuclear Information System (INIS)

    Kim, Seung Hyun; Kim, H. D.; Shin, H. S.

    2009-06-01

    This report describes the fundamentals of laser-induced breakdown spectroscopy (LIBS) and a quantitative analysis method under vacuum conditions for obtaining high measurement accuracy. The LIBS system employs the following major components: a pulsed laser, a gas chamber, an emission spectrometer, a detector, and a computer. When the output from a pulsed laser is focused onto a small spot on a sample, an optically induced plasma, called a laser-induced plasma (LIP), is formed at the surface. LIBS is a sensitive laser-based optical technique used to detect certain atomic and molecular species by monitoring the emission signals from a LIP. This report covers the fundamentals of LIBS and the current state of research, the optimization of measurement conditions, and a characteristic analysis of the LIP from measurements of fundamental metals. The LIBS system shows measurement errors of about 0.63-5.82% and calibration curves for Cu, Cr and Ni, and measurement errors of less than about 5% with calibration curves for Nd and Sm. As a result, the LIBS accuracy was somewhat improved over previous results under the optimized conditions.

  3. Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    Directory of Open Access Journals (Sweden)

    Deborah A. Kerr

    2015-05-01

    Full Text Available The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years who lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05) and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05, quick list; p < 0.01, forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents’ accuracy in recalling food items declined with a second 24-h recall when repeated over two non-consecutive days.

  4. Introducing radiology report checklists among residents: adherence rates when suggesting versus requiring their use and early experience in improving accuracy.

    Science.gov (United States)

    Powell, Daniel K; Lin, Eaton; Silberzweig, James E; Kagetsu, Nolan J

    2014-03-01

    To retrospectively compare resident adherence to checklist-style structured reporting for maxillofacial computed tomography (CT) from the emergency department when its use was required versus suggested, between two programs. To compare radiology resident reporting accuracy before and after introduction of the structured report and assess its ability to decrease the rate of undetected pathology. We introduced a reporting checklist for maxillofacial CT into our dictation software without specific training, requiring it at one program and suggesting it at another. We quantified usage among residents and compared reporting accuracy before and after, counting and categorizing faculty addenda. There was no significant change in resident accuracy in the first few months, with residents acting as their own controls (directly comparing performance with and without the checklist). Adherence to the checklist at program A (where it originated and was required) was 85% of reports compared to 9% of reports at program B (where it was suggested). When using program B as a secondary control, there was no significant difference in resident accuracy with or without using the checklist (comparing different residents using the checklist to those not using the checklist). Our results suggest that there is no automatic value of checklists for improving radiology resident reporting accuracy. They also suggest the importance of focused training, checklist flexibility, and a period of adjustment to a new reporting style. Mandatory checklists were readily adopted by residents but not when simply suggested. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  5. Climatology analysis of cirrus cloud in ARM site: South Great Plain

    Science.gov (United States)

    Olayinka, K.

    2017-12-01

    Cirrus clouds play an important role in the atmospheric energy balance and hence in the earth's climate system. The properties of optically thin clouds can be determined from measurements of the transmission of the direct solar beam. The accuracy of cloud optical properties determined in this way is compromised by contamination of the direct transmission by light scattered into the sensor's field of view. With the forward scattering correction method developed by Min et al. (2004), the accuracy of thin cloud retrievals from the MFRSR has been improved. Our results show that over 30% of the cirrus clouds present in the atmosphere have optical depths between 1 and 2. In this study, we perform statistical studies of cirrus cloud properties based on multi-year cirrus cloud measurements from the MFRSR at the ARM Southern Great Plains (SGP) site, chosen for its relatively easy accessibility, wide variability of climate cloud types and surface flux properties, and large seasonal variation in temperature and specific humidity. Through these statistical studies, temporal and spatial variations of cirrus clouds are investigated. Since the presence of cirrus cloud increases the effect of greenhouse gases, we retrieve the aerosol optical depth in all cirrus cloud regions using a radiative transfer model for atmospheric correction, and calculate thin-cloud optical depth (COD) and aerosol optical depth (AOD) using a radiative transfer model algorithm, e.g. MODTRAN (MODerate resolution atmospheric TRANsmission).
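    The direct-beam retrieval principle referred to here is the Beer-Lambert law; a minimal sketch under a plane-parallel airmass assumption (function and variable names are ours, and no forward-scattering correction is applied):

```python
import math

def optical_depth(direct_irradiance, toa_irradiance, solar_zenith_deg):
    """Column optical depth from direct-beam transmission via the
    Beer-Lambert law, tau = -ln(I / I0) / m, using a simple
    plane-parallel airmass m = 1 / cos(solar zenith angle)."""
    airmass = 1.0 / math.cos(math.radians(solar_zenith_deg))
    return -math.log(direct_irradiance / toa_irradiance) / airmass
```

    Forward-scattered light entering the sensor's field of view inflates the measured direct irradiance, which is why an uncorrected retrieval underestimates the optical depth of thin clouds.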

  6. Sampling strategies for improving tree accuracy and phylogenetic analyses: a case study in ciliate protists, with notes on the genus Paramecium.

    Science.gov (United States)

    Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo

    2014-02-01

    In order to assess how dataset-selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) Using multiple genes improves phylogenetic accuracy, even when their one-gene topologies are in conflict with each other. (2) The impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not number of represented taxa, are the most important criteria of a dataset for inclusion in concatenated analyses. (3) As an example, we tested the three classification models of the genus Paramecium with a multi-gene based approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Going Vertical To Improve the Accuracy of Atomic Force Microscopy Based Single-Molecule Force Spectroscopy.

    Science.gov (United States)

    Walder, Robert; Van Patten, William J; Adhikari, Ayush; Perkins, Thomas T

    2018-01-23

    Single-molecule force spectroscopy (SMFS) is a powerful technique to characterize the energy landscape of individual proteins, the mechanical properties of nucleic acids, and the strength of receptor-ligand interactions. Atomic force microscopy (AFM)-based SMFS benefits from ongoing progress in improving the precision and stability of cantilevers and the AFM itself. Underappreciated is that the accuracy of such AFM studies remains hindered by inadvertently stretching molecules at an angle while measuring only the vertical component of the force and extension, degrading both measurements. This inaccuracy is particularly problematic in AFM studies using double-stranded DNA and RNA due to their large persistence length (p ≈ 50 nm), often limiting such studies to other SMFS platforms (e.g., custom-built optical and magnetic tweezers). Here, we developed an automated algorithm that aligns the AFM tip above the DNA's attachment point to a coverslip. Importantly, this algorithm was performed at low force (10-20 pN) and relatively fast (15-25 s), preserving the connection between the tip and the target molecule. Our data revealed large uncorrected lateral offsets for 100 and 650 nm DNA molecules [24 ± 18 nm (mean ± standard deviation) and 180 ± 110 nm, respectively]. Correcting this offset yielded a 3-fold improvement in accuracy and precision when characterizing DNA's overstretching transition. We also demonstrated high throughput by acquiring 88 geometrically corrected force-extension curves of a single individual 100 nm DNA molecule in ∼40 min and versatility by aligning polyprotein- and PEG-based protein-ligand assays. Importantly, our software-based algorithm was implemented on a commercial AFM, so it can be broadly adopted. More generally, this work illustrates how to enhance AFM-based SMFS by developing more sophisticated data-acquisition protocols.
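    The inaccuracy described, stretching at an angle while recording only the vertical components, can be sketched with simple trigonometry. This is an illustrative model assuming the molecule is pulled taut along the tip-to-anchor line; it is not the authors' published algorithm:

```python
import math

def correct_for_lateral_offset(z_ext, f_z, r_offset):
    """Geometric correction of one force-extension point when the
    molecule's surface anchor is laterally offset from the point directly
    under the tip. The true extension is the tip-to-anchor hypotenuse,
    and the measured vertical force is the vertical component of the
    axial tension (F_z = F * z / L)."""
    true_ext = math.hypot(z_ext, r_offset)     # full tip-to-anchor distance
    axial_force = f_z * true_ext / z_ext       # recover tension along the molecule
    return true_ext, axial_force
```

    With the 24 nm mean offsets reported for 100 nm DNA, both the extension and the force are underestimated by the vertical-only measurement, which is consistent with the 3-fold accuracy gain the authors see after aligning the tip over the attachment point.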

  8. Improvement of CD-SEM mark position measurement accuracy

    Science.gov (United States)

    Kasa, Kentaro; Fukuhara, Kazuya

    2014-04-01

    CD-SEM is now attracting attention as a tool that can accurately measure the positional error of device patterns. However, measurement accuracy can be degraded by pattern asymmetry, as in the case of image-based overlay (IBO) and diffraction-based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias was proposed, how to correct the inaccuracy arising from pattern asymmetry has not been addressed. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.

  9. Combining Ground-Truthing and Technology to Improve Accuracy in Establishing Children's Food Purchasing Behaviors.

    Science.gov (United States)

    Coakley, Hannah Lee; Steeves, Elizabeth Anderson; Jones-Smith, Jessica C; Hopkins, Laura; Braunstein, Nadine; Mui, Yeeli; Gittelsohn, Joel

    Developing nutrition-focused environmental interventions for youth requires accurate assessment of where they purchase food. We have developed an innovative, technology-based method to improve the accuracy of food source recall among children using a tablet PC and ground-truthing methodologies. As part of the B'more Healthy Communities for Kids study, we mapped and digitally photographed every food source within a half-mile radius of 14 Baltimore City recreation centers. This food source database was then used with children from the surrounding neighborhoods to search for and identify the food sources they frequent. This novel integration of traditional data collection and technology enables researchers to gather highly accurate information on food source usage among children in Baltimore City. Funding is provided by the NICHD U-54 Grant #1U54HD070725-02.

  10. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    Science.gov (United States)

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
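    The reward rate that such speed-accuracy trade-offs maximize is commonly defined as the probability of a correct response divided by the average trial duration; a minimal sketch (the default timing values are illustrative, not the study's parameters):

```python
def reward_rate(accuracy, mean_decision_time,
                nondecision_time=0.3, intertrial_interval=1.0):
    """Expected correct responses (rewards) per second in a
    two-alternative forced-choice task: probability correct divided by
    the mean duration of one trial. All times in seconds."""
    trial_duration = mean_decision_time + nondecision_time + intertrial_interval
    return accuracy / trial_duration
```

    Under this criterion a slightly less accurate but much faster decision threshold can out-earn a slower, more accurate one, which is why an early bias toward accuracy yields suboptimal reward rates.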

  11. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    Science.gov (United States)

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  12. Factors Associated With Worsened or Improved Mental Health in the Great East Japan Earthquake Survivors.

    Science.gov (United States)

    Yamanouchi, Tomoko; Hiroshima, Mayo; Takeuchi, Yumiko; Sawada, Yumiko; Takahashi, Makiko; Amagai, Manami

    2018-02-01

    The aim of this study was to identify factors contributing to worsened or improved mental health among long-term evacuees over the three years following the Great East Japan Earthquake. The Japanese version of the K6 questionnaire was used as a measure of mental health. The first- and third-year survey results were compared and differences in mental health status calculated. Respondents were then divided into two groups according to whether their mental health status had worsened or improved. Differences in stress factors, stress relief methods, and demographics were compared between the two groups. Factors associated with exacerbation of poor mental health were the stress factors "Uncertainty about future" (p = 0.048) and "Loss of purpose in life" (p = 0.023). Multivariable analysis identified two factors associated with improved mental health, the stress relief methods "Accepting myself" (odds ratio (OR): 2.15, 95% confidence interval (CI): 1.02-4.51) and "Interactions with others" (OR: 3.34, 95% CI: 1.43-7.79). While motivation and hope of livelihood reconstruction have gradually risen in the three years since the disaster, anxieties about an uncertain future, loss of purpose in life, and disruption of social networks continue to adversely affect the mental health of survivors. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    Science.gov (United States)

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

    Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of the existing methods, such as Kolmogorov-Smirnov statistic remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression weighted cosine method (EWCos) to minimize the influence of the uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed individual drug repositioning methods when applied to simulated and independent evaluation datasets. We predicted using EMUDRA and experimentally validated an antibiotic rifabutin as an inhibitor of cell growth in triple negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
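    The expression-weighted cosine idea can be illustrated as a cosine similarity in which each gene's contribution is scaled by a weight that down-weights small, uninformative expression changes. This is a sketch of the general notion, not the published EWCos definition:

```python
import numpy as np

def weighted_cosine(query_sig, drug_sig, weights):
    """Cosine similarity between a disease signature and a drug-induced
    expression signature, with per-gene weights emphasizing informative
    expression changes (e.g., large, reproducible fold changes)."""
    q = np.asarray(weights) * np.asarray(query_sig)
    d = np.asarray(weights) * np.asarray(drug_sig)
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(q @ d / denom) if denom else 0.0
```

    For drug repositioning, a strongly negative score flags a drug whose signature opposes the disease signature, i.e. a candidate that may reverse the disease state.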

  14. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    Science.gov (United States)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from contact measurements are treated as a reference for the corresponding point from non-contact measurement. A transformation enabling displacement of characteristic points from optical measurement to their match from contact measurements is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
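    The transformation step, finding a motion that carries the optical marker points onto their contact-measured counterparts and applying it to the whole cloud, can be sketched with the standard SVD-based (Kabsch) rigid alignment. The sketch assumes a purely rigid correction; the paper's virtual-marker construction is not reproduced here:

```python
import numpy as np

def rigid_align(src_markers, dst_markers, cloud):
    """Least-squares rigid alignment (Kabsch): find rotation R and
    translation t mapping the optical (HDLA) markers onto the accurate
    contact (LDHA) markers, then apply R, t to the full optical cloud.
    src_markers, dst_markers: (N, 3) corresponding points; cloud: (M, 3)."""
    src = np.asarray(src_markers, float)
    dst = np.asarray(dst_markers, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return np.asarray(cloud, float) @ R.T + t
```

    In practice each high-density point is carried by the same R, t estimated from the few accurate marker pairs, which is what lets the sparse contact data correct the dense optical data.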

  15. TCS: a new multiple sequence alignment reliability measure to estimate alignment accuracy and improve phylogenetic tree reconstruction.

    Science.gov (United States)

    Chang, Jia-Ming; Di Tommaso, Paolo; Notredame, Cedric

    2014-06-01

    Multiple sequence alignment (MSA) is a key modeling procedure when analyzing biological sequences. Homology and evolutionary modeling are the most common applications of MSAs. Both are known to be sensitive to the underlying MSA accuracy. In this work, we show how this problem can be partly overcome using the transitive consistency score (TCS), an extended version of the T-Coffee scoring scheme. Using this local evaluation function, we show that one can identify the most reliable portions of an MSA, as judged from BAliBASE and PREFAB structure-based reference alignments. We also show how this measure can be used to improve phylogenetic tree reconstruction using both an established simulated data set and a novel empirical yeast data set. For this purpose, we describe a novel lossless alternative to site filtering that involves overweighting the trustworthy columns. Our approach relies on the T-Coffee framework; it uses libraries of pairwise alignments to evaluate any third party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy. We compared TCS with Heads-or-Tails, GUIDANCE, Gblocks, and trimAl and found it to lead to significantly better estimates of structural accuracy and more accurate phylogenetic trees. The software is available from www.tcoffee.org/Projects/tcs. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
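    The lossless alternative to site filtering, overweighting trustworthy columns, can be illustrated by replicating each alignment column in proportion to its reliability score instead of discarding low-scoring columns. A toy sketch (not the TCS implementation; integer replication stands in for real-valued weights):

```python
def overweight_columns(alignment, col_scores, max_copies=3):
    """Return an alignment in which each column appears 1..max_copies
    times according to its reliability score in [0, 1]. No column is
    dropped, so unlike site filtering the transformation is lossless.
    `alignment` is a list of equal-length aligned sequence strings."""
    copies = [1 + round(s * (max_copies - 1)) for s in col_scores]
    return ["".join(ch * k for ch, k in zip(seq, copies)) for seq in alignment]
```

    A downstream tree-building program then effectively sees the reliable columns several times, shifting the likelihood toward the trustworthy signal without losing any sites.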

  16. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points – A Review

    Directory of Open Access Journals (Sweden)

    Xiaoli Ding

    2009-02-01

    Full Text Available Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology, making use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, an important factor in improving the accuracy of InSAR image co-registration, are reviewed in detail, covering the interval of tie points, the extraction of feature points, the window size for tie point matching, and the measurement of interferogram quality.

  17. How could the replica method improve accuracy of performance assessment of channel coding?

    Energy Technology Data Exchange (ETDEWEB)

    Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of technology, Yokohama 226-8502 (Japan)], E-mail: kaba@dis.titech.ac.jp

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.
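    For context, the Gallager framework referred to here bounds the average block error probability of a random code ensemble at rate R (nats per channel use) in the standard form below; the adjustable parameter mentioned in the abstract is ρ:

```latex
P_{\mathrm{e}} \;\le\; \exp\!\left\{ -N \left[ E_0(\rho) - \rho R \right] \right\},
\qquad 0 \le \rho \le 1,
\qquad
E_0(\rho) \;=\; -\ln \sum_{y} \Bigl[ \sum_{x} Q(x)\, P(y \mid x)^{1/(1+\rho)} \Bigr]^{1+\rho},
```

    where $N$ is the block length, $Q$ the channel input distribution and $P(y \mid x)$ the channel transition probability. Tightening the bound requires optimizing over $\rho$, the step that the replica analysis addresses directly.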

  18. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    International Nuclear Information System (INIS)

    D’Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-01-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  19. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Energy Technology Data Exchange (ETDEWEB)

    D’Emilia, Giulio, E-mail: giulio.demilia@univaq.it; Di Gasbarro, David, E-mail: david.digasbarro@graduate.univaq.it; Gaspari, Antonella, E-mail: antonella.gaspari@graduate.univaq.it; Natale, Emanuela, E-mail: emanuela.natale@univaq.it [University of L’Aquila, Department of Industrial and Information Engineering and Economics (DIIIE), via G. Gronchi, 18, 67100 L’Aquila (Italy)

    2016-06-28

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  20. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations against that of a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  1. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    Science.gov (United States)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models suffer from displacement errors in their rain fields. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for the real-time updating and accuracy improvement of flood forecasts, one that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained by transposing rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then passed through the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields, generating place-corrected ensemble information. This additional ensemble information is then fed into a hydrologic model for post-processed flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean of the ensemble flood forecast. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' of 2011, over two catchments, Futatsuno (356.1 km2) and Nanairo (182.1 km2), both dam catchments of the Shingu river basin (2360 km2) located in the Kii peninsula, Japan.

  2. Patient-specific guides do not improve accuracy in total knee arthroplasty: a prospective randomized controlled trial.

    Science.gov (United States)

    Victor, Jan; Dujardin, Jan; Vandenneucker, Hilde; Arnout, Nele; Bellemans, Johan

    2014-01-01

    Recently, patient-specific guides (PSGs) have been introduced, claiming a significant improvement in accuracy and reproducibility of component positioning in TKA. Despite intensive marketing by the manufacturers, this claim has not yet been confirmed in a controlled prospective trial. We (1) compared three-planar component alignment and overall coronal mechanical alignment between PSG and conventional instrumentation and (2) logged the need for applying changes in the suggested position of the PSG. In this randomized controlled trial, we enrolled 128 patients. In the PSG cohort, surgical navigation was used as an intraoperative control. When the suggested cut deviated more than 3° from target, the use of PSG was abandoned and marked as an outlier. When cranial-caudal position or size was adapted, the PSG was marked as modified. All patients underwent long-leg standing radiography and CT scan. Deviation of more than 3° from the target in any plane was defined as an outlier. The PSG and conventional cohorts showed similar numbers of outliers in overall coronal alignment (25% versus 28%; p = 0.69), femoral coronal alignment (7% versus 14%; p = 0.24), and femoral axial alignment (23% versus 17%; p = 0.50). There were more outliers in tibial coronal (15% versus 3%; p = 0.03) and sagittal (21% versus 3%; p = 0.002) alignment in the PSG group than in the conventional group. PSGs were abandoned in 14 patients (22%) and modified in 18 (28%). PSGs do not improve accuracy in TKA and, in our experience, were somewhat impractical in that the procedure needed to be either modified or abandoned with some frequency.

  3. Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF

    Science.gov (United States)

    Hutson, Malcolm; Reiners, Dirk

    2010-01-01

    Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.
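One simple way a drifting inertial rate can be fused with a drift-free but lower-rate optical measurement is a complementary filter. The sketch below is an assumed one-dimensional illustration, not the SCAAT-based fusion scheme actually used by JanusVF; the gain `alpha` and the pre-aligned timestamps are assumptions.

```python
def complementary_filter(camera_angles, gyro_rates, dt, alpha=0.98):
    """Fuse a drift-free camera-derived angle with a high-rate but
    biased gyroscope rate (both resampled to the same timestamps).
    Returns the fused angle estimate at each step."""
    est = camera_angles[0]
    out = []
    for cam, rate in zip(camera_angles, gyro_rates):
        # trust the integrated gyro at high frequency, the camera at low
        est = alpha * (est + rate * dt) + (1 - alpha) * cam
        out.append(est)
    return out
```

The gyro term keeps the estimate responsive between camera fixes, while the camera term continually bleeds off the accumulated gyro bias.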

  4. Why greatness cannot be planned the myth of the objective

    CERN Document Server

    Stanley, Kenneth O

    2015-01-01

    Why does modern life revolve around objectives? From how science is funded, to improving how children are educated -- and nearly everything in-between -- our society has become obsessed with a seductive illusion: that greatness results from doggedly measuring improvement in the relentless pursuit of an ambitious goal. In Why Greatness Cannot Be Planned, Stanley and Lehman begin with a surprising scientific discovery in artificial intelligence that leads ultimately to the conclusion that the objective obsession has gone too far. They make the case that great achievement can't be bottled up int

  5. Intensive precipitation observation greatly improves hydrological modelling of the poorly gauged high mountain Mabengnong catchment in the Tibetan Plateau

    Science.gov (United States)

    Wang, Li; Zhang, Fan; Zhang, Hongbo; Scott, Christopher A.; Zeng, Chen; Shi, Xiaonan

    2018-01-01

    Precipitation is one of the most critical inputs for models used to improve understanding of hydrological processes. In high mountain areas, it is challenging to generate a reliable precipitation data set capturing the spatial and temporal heterogeneity due to the harsh climate, extreme terrain and the lack of observations. This study conducted intensive observation of precipitation in the Mabengnong catchment in the southeast of the Tibetan Plateau during July to August 2013. Because precipitation is greatly influenced by altitude, the observed data are used to characterize the precipitation gradient (PG) and hourly distribution (HD), showing that the average PG is 0.10, 0.28 and 0.26 mm/d/100 m and the average duration is around 0.1, 0.8 and 5.2 h for trace, light and moderate rain, respectively. A distributed biosphere hydrological model based on water and energy budgets with improved physical processes for snow (WEB-DHM-S) is applied to simulate the hydrological processes, with gridded precipitation data derived from a lower altitude meteorological station and the PG and HD characterized for the study area. The observed runoff, MODIS/Terra snow cover area (SCA) data, and MODIS/Terra land surface temperature (LST) data are used for model calibration and validation. Runoff, SCA and LST simulations all show reasonable results. Sensitivity analyses illustrate that runoff is largely underestimated without considering the PG, indicating that short-term intensive precipitation observation has the potential to greatly improve hydrological modelling of poorly gauged high mountain catchments.
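The altitude correction described above amounts to a linear extrapolation of station precipitation along the gradient. The one-liner below is an assumed illustrative form, not the WEB-DHM-S implementation; only the gradient magnitudes (e.g. 0.28 mm/d per 100 m for light rain) come from the abstract.

```python
def adjust_precip(p_station, z_station, z_target, gradient_mm_per_100m):
    """Extrapolate daily precipitation (mm/d) from a station altitude to a
    target altitude with a linear precipitation gradient (mm/d per 100 m),
    clamped so the result cannot go negative."""
    return max(0.0, p_station + gradient_mm_per_100m * (z_target - z_station) / 100.0)
```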

  6. Accuracy Analysis of a Box-wing Theoretical SRP Model

    Science.gov (United States)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high accuracy SRP model is necessary for high precision applications, especially with the establishment of the global BDS in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. Therefore, a box-wing theoretical SRP model with a fine structure, including the conical shadow factors of the Earth and Moon, was established. We verified this SRP model using the GPS Block IIF satellites, with data from the PRN 1, 24, 25 and 27 satellites. The results show that the physical SRP model has higher accuracy for POD and orbit prediction of GPS IIF satellites than the Bern empirical model; the 3D RMS of the orbit is about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions: the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arcs are 0.4 m, 2.0 m and 10.0 m respectively, compared with 0.9 m, 5.5 m and 30 m for the Bern empirical model. We applied this approach to the BDS, derived an SRP model for the Beidou satellites, and then tested the model with one month of Beidou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which only estimates forces in the along-track and across-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present Beidou constellation.
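A box-wing model sums the SRP force over the bus faces and solar panels, each treated as a flat plate. The single-panel sketch below uses a generic textbook plate model with assumed optical coefficients; it is not the paper's fine-structure model and omits the Earth/Moon shadow factors.

```python
import numpy as np

SOLAR_PRESSURE = 4.56e-6  # N/m^2, approximate radiation pressure at 1 AU

def panel_srp_force(area, normal, sun_dir, rho_spec, rho_diff):
    """SRP force (N) on one flat panel of the given area (m^2).
    `sun_dir` points from the spacecraft towards the Sun; panels facing
    away from the Sun contribute nothing."""
    n = normal / np.linalg.norm(normal)
    s = sun_dir / np.linalg.norm(sun_dir)
    cos_t = float(np.dot(n, s))
    if cos_t <= 0.0:  # panel is not sunlit
        return np.zeros(3)
    absorbed = 1.0 - rho_spec - rho_diff
    # absorbed + diffuse photons push along -s; specular reflection and the
    # diffuse re-emission lobe push along -n
    return -SOLAR_PRESSURE * area * cos_t * (
        (absorbed + rho_diff) * s
        + (2.0 * rho_spec * cos_t + (2.0 / 3.0) * rho_diff) * n
    )
```

Summing this over all panels and dividing by the spacecraft mass gives the model acceleration that replaces the empirically estimated SRP parameters.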

  7. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction and reduce the time of fingerprint preprocessing, which is of great significance for the performance of the whole system. Based on an analysis of the commonly used methods of fingerprint segmentation, an existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image, and it is further improved by the global optimization of an improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.

  8. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework

    Directory of Open Access Journals (Sweden)

    Toh Hiroyuki

    2008-04-01

    Full Text Available Abstract Background Structural alignment of RNAs is becoming important, since the discovery of functional non-coding RNAs (ncRNAs). Recent studies, mainly based on various approximations of the Sankoff algorithm, have resulted in considerable improvement in the accuracy of pairwise structural alignment. In contrast, for the cases with more than two sequences, the practical merit of structural alignment remains unclear as compared to traditional sequence-based methods, although the importance of multiple structural alignment is widely recognized. Results We took a different approach from a straightforward extension of the Sankoff algorithm to the multiple alignments from the viewpoints of accuracy and time complexity. As a new option of the MAFFT alignment program, we developed a multiple RNA alignment framework, X-INS-i, which builds a multiple alignment with an iterative method incorporating structural information through two components: (1) pairwise structural alignments by an external pairwise alignment method such as SCARNA or LaRA and (2) a new objective function, Four-way Consistency, derived from the base-pairing probability of every sub-aligned group at every multiple alignment stage. Conclusion The BRAliBASE benchmark showed that X-INS-i outperforms other methods currently available in the sum-of-pairs score (SPS) criterion. As a basis for predicting common secondary structure, the accuracy of the present method is comparable to or rather higher than those of the current leading methods such as RNA Sampler. The X-INS-i framework can be used for building a multiple RNA alignment from any combination of algorithms for pairwise RNA alignment and base-pairing probability. The source code is available at the webpage found in the Availability and requirements section.

  9. MO-DE-210-05: Improved Accuracy of Liver Feature Motion Estimation in B-Mode Ultrasound for Image-Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, T; Bamber, J; Harris, E [The Institute of Cancer Research & Royal Marsden, Sutton and London (United Kingdom)

    2015-06-15

    Purpose: In similarity-measure based motion estimation, incremental tracking (or template update) is challenging due to quantization, bias and accumulation of tracking errors. A method is presented which aims to improve the accuracy of incrementally tracked liver feature motion in long ultrasound sequences. Methods: Liver ultrasound data from five healthy volunteers under free breathing were used (15 to 17 Hz imaging rate, 2.9 to 5.5 minutes in length). A normalised cross-correlation template matching algorithm was implemented to estimate tissue motion. Blood vessel motion was manually annotated for comparison with three tracking code implementations: (i) naive incremental tracking (IT), (ii) IT plus a similarity threshold (ST) template-update method and (iii) ST coupled with a prediction-based state observer, known as the alpha-beta filter (ABST). Results: The ABST method produced substantial improvements in vessel tracking accuracy for two-dimensional vessel motion ranging from 7.9 mm to 40.4 mm (with mean respiratory period: 4.0 ± 1.1 s). The mean and 95% tracking errors were 1.6 mm and 1.4 mm, respectively (compared to 6.2 mm and 9.1 mm, respectively, for naive incremental tracking). Conclusions: High confidence in the output motion estimation data is required for ultrasound-based motion estimation for radiation therapy beam tracking and gating. The method presented has potential for monitoring liver vessel translational motion in high frame rate B-mode data with the required accuracy. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
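The alpha-beta observer named above is a fixed-gain predictor-corrector. A minimal one-dimensional sketch is given below; the gains are assumed values, and the paper's ABST variant additionally gates template updates with a similarity threshold.

```python
def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
    """Alpha-beta state observer: predict position with a constant-velocity
    model, then blend each new template-matching measurement into the
    position (gain alpha) and velocity (gain beta) estimates."""
    x, v = measurements[0], 0.0
    track = []
    for z in measurements:
        x_pred = x + v * dt          # predict with constant-velocity model
        r = z - x_pred               # innovation (measurement residual)
        x = x_pred + alpha * r       # correct position
        v = v + beta * r / dt        # correct velocity
        track.append(x)
    return track
```

Because the filter carries a velocity state, it smooths quantized measurements without the lag a plain moving average would introduce, and it tracks constant-velocity motion with zero steady-state error.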

  10. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy.

    Science.gov (United States)

    Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A

    2013-02-01

    for the weighted S-TPS-RPM. The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  11. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    International Nuclear Information System (INIS)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-01-01

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9–14 mm after rigid bone match to 0.9–4.0 mm, compared to a range of 1.1–9.1 mm with S-TPS-RPM of bladder alone and 0.9–9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  12. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A. [Department of Radiotherapy, Academic Medical Center, Meiberdreef 9, 1105 AZ Amsterdam (Netherlands); Bondar, L.; Zolnay, A. G.; Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2013-02-15

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  13. A simple method to improve the quantification accuracy of energy-dispersive X-ray microanalysis

    International Nuclear Information System (INIS)

    Walther, T

    2008-01-01

    Energy-dispersive X-ray spectroscopy in a transmission electron microscope is a standard tool for chemical microanalysis and routinely provides qualitative information on the presence of all major elements above Z=5 (boron) in a sample. Spectrum quantification relies on suitable corrections for absorption and fluorescence, in particular for thick samples and soft X-rays. A brief presentation is given of an easy way to improve quantification accuracy by evaluating the intensity ratio of two measurements acquired at different detector take-off angles. As the take-off angle determines the effective sample thickness seen by the detector, this method corresponds to taking two measurements from the same position at two different thicknesses, which allows absorption and fluorescence to be corrected more reliably. An analytical solution for determining the depth of a feature embedded in the specimen foil is also provided.
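The two take-off angle idea can be made concrete with a simple exponential absorption model. The sketch below is an assumed single-line, single-absorber form, I(θ) ∝ exp(−χt/sin θ); real quantification also needs fluorescence terms, which this ignores.

```python
import math

def mass_absorption_thickness(i1, i2, takeoff1_deg, takeoff2_deg):
    """Recover chi*t (mass-absorption coefficient times path length) from
    the intensities of the same X-ray line measured at two detector
    take-off angles, assuming I(theta) = I0 * exp(-chi*t / sin(theta)).
    The unknown I0 cancels in the intensity ratio."""
    c1 = 1.0 / math.sin(math.radians(takeoff1_deg))
    c2 = 1.0 / math.sin(math.radians(takeoff2_deg))
    return math.log(i2 / i1) / (c1 - c2)
```

With χt known, the absorption correction for each line can be applied directly instead of being guessed from an assumed foil thickness.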

  14. Improved accuracy in the estimation of blood velocity vectors using matched filtering

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Gori, P.

    2000-01-01

    the flow and the ultrasound beam (30, 45, 60, and 90 degrees). The parabolic flow has a peak velocity of 0.5 m/s and the pulse repetition frequency is 3.5 kHz. Simulating twenty emissions and calculating the cross-correlation using four pulse-echo lines for each estimate, the parabolic flow profile...... is found with a standard deviation of 0.014 m/s at 45 degrees (corresponding to an accuracy of 2.8%) and 0.022 m/s (corresponding to an accuracy of 4.4%) at 90 degrees, which is transverse to the ultrasound beam....
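The cross-correlation velocity estimate used in this record can be illustrated as follows; the sampling rate and signal are assumptions for the demonstration, while the 3.5 kHz pulse repetition frequency comes from the abstract. The axial velocity follows from the inter-line time shift as v = c·t_shift·f_prf/2.

```python
import numpy as np

def axial_velocity(line_a, line_b, prf_hz, fs_hz, c=1540.0):
    """Axial velocity (m/s) from the time shift between two successive
    pulse-echo RF lines, located as the lag of the cross-correlation peak."""
    a = line_a - line_a.mean()
    b = line_b - line_b.mean()
    xc = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xc)) - (len(a) - 1)  # samples by which b lags a
    t_shift = lag / fs_hz                    # seconds between echoes
    return c * t_shift * prf_hz / 2.0        # factor 2: round-trip path
```

Averaging the correlation over several pulse-echo lines, as in the paper, reduces the variance of the lag estimate.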

  15. Image Positioning Accuracy Analysis for Super Low Altitude Remote Sensing Satellites

    Directory of Open Access Journals (Sweden)

    Ming Xu

    2012-10-01

    Full Text Available Super low altitude remote sensing satellites maintain lower flight altitudes by means of ion propulsion in order to improve image resolution and positioning accuracy. The use of engineering data in design for achieving image positioning accuracy is discussed in this paper based on the principles of the photogrammetry theory. The exact line-of-sight rebuilding of each detection element and this direction precisely intersecting with the Earth's elliptical when the camera on the satellite is imaging are both ensured by the combined design of key parameters. These parameters include: orbit determination accuracy, attitude determination accuracy, camera exposure time, accurately synchronizing the reception of ephemeris with attitude data, geometric calibration and precise orbit verification. Precise simulation calculations show that image positioning accuracy of super low altitude remote sensing satellites is not obviously improved. The attitude determination error of a satellite still restricts its positioning accuracy.

  16. Increasing of AC compensation method accuracy

    International Nuclear Information System (INIS)

    Havlicek, V.; Pokorny, M.

    2003-01-01

    The original MMF compensation method allows the magnetic properties of single sheets and strips to be measured in the same way as the closed specimen properties. The accuracy of the method is limited due to the finite gain of the feedback loop fulfilling the condition of its stability. Digitalisation of the compensation loop and appropriate processing of the error signal can rapidly improve the accuracy. The basic ideas of this new approach and the experimental results are described in this paper.

  17. Increasing of AC compensation method accuracy

    Science.gov (United States)

    Havlíček, V.; Pokorný, M.

    2003-01-01

    The original MMF compensation method allows the magnetic properties of single sheets and strips to be measured in the same way as the closed specimen properties. The accuracy of the method is limited due to the finite gain of the feedback loop fulfilling the condition of its stability. Digitalisation of the compensation loop and appropriate processing of the error signal can rapidly improve the accuracy. The basic ideas of this new approach and the experimental results are described in this paper.

  18. Electron ray tracing with high accuracy

    International Nuclear Information System (INIS)

    Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.

    1986-01-01

    An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens

  19. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    Science.gov (United States)

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In this method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  20. High accuracy prediction of beta-turns and their types using propensities and multiple alignments.

    Science.gov (United States)

    Fuchs, Patrick F J; Alix, Alain J P

    2005-06-01

    We have developed a method that predicts both the presence and the type of beta-turns, using a straightforward approach based on propensities and multiple alignments. The propensities were calculated classically, but the way to use them for prediction was completely new: starting from a tetrapeptide sequence on which one wants to evaluate the presence of a beta-turn, the propensity for a given residue is modified by taking into account all the residues present in the multiple alignment at this position. The evaluation of a score is then done by weighting these propensities by the use of Position-specific score matrices generated by PSI-BLAST. The introduction of secondary structure information predicted by PSIPRED or SSPRO2 as well as taking into account the flanking residues around the tetrapeptide improved the accuracy greatly. This latter evaluated on a database of 426 reference proteins (previously used on other studies) by a sevenfold crossvalidation gave very good results with a Matthews Correlation Coefficient (MCC) of 0.42 and an overall prediction accuracy of 74.8%; this places our method among the best ones. A jackknife test was also done, which gave results within the same range. This shows that it is possible to reach neural networks accuracy with considerably less computional cost and complexity. Furthermore, propensities remain excellent descriptors of amino acid tendencies to belong to beta-turns, which can be useful for peptide or protein engineering and design. For beta-turn type prediction, we reached the best accuracy ever published in terms of MCC (except for the irregular type IV) in the range of 0.25-0.30 for types I, II, and I' and 0.13-0.15 for types VIII, II', and IV. To our knowledge, our method is the only one available on the Web that predicts types I' and II'. The accuracy evaluated on two larger databases of 547 and 823 proteins was not improved significantly. 
All of this was implemented into a Web server called COUDES (French acronym
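    The propensity-weighting idea can be sketched as follows; the propensity table, the profile normalisation, and the averaging rule are illustrative assumptions, not the published COUDES parameters:

```python
# Illustrative sketch of propensity-based beta-turn scoring weighted by a
# PSSM window. All values are hypothetical, not the COUDES parameters.

# Hypothetical per-residue beta-turn propensities (a real table covers all 20).
PROPENSITY = {"G": 1.6, "P": 1.5, "N": 1.4, "D": 1.3, "A": 0.7, "V": 0.5}

def turn_score(pssm_window):
    """Score a tetrapeptide window.

    pssm_window is a list of 4 dicts mapping residue -> PSSM weight
    (normalised to sum to 1 per position, e.g. derived from PSI-BLAST).
    The score at each position is the propensity averaged over the
    profile column; the window score is the mean over the 4 positions.
    """
    per_position = []
    for column in pssm_window:
        s = sum(w * PROPENSITY.get(res, 1.0) for res, w in column.items())
        per_position.append(s)
    return sum(per_position) / len(per_position)

# A turn-favouring window: glycine/proline-rich profile columns.
window = [{"G": 0.8, "A": 0.2}, {"P": 1.0}, {"N": 0.5, "D": 0.5}, {"G": 1.0}]
print(turn_score(window))
```

    A window scoring above some threshold would be predicted as a turn; the published method additionally folds in predicted secondary structure and flanking residues.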

  1. Comparison between the accuracies of a new discretization method and an improved Fourier method to evaluate heat transfers between soil and atmosphere

    International Nuclear Information System (INIS)

    Hechinger, E.; Raffy, M.; Becker, F.

    1982-01-01

    To improve and evaluate the accuracy of Fourier methods for the analysis of the energy exchanges between soil and atmosphere, we have developed, first, a Fourier method that takes into account nonneutrality corrections and the time variation of the air temperature and improves the linearization procedures and, second, a new discretization method that does not involve any linearization. The Fourier method, which gives the exact solution of an approximated problem, turns out to have the same order of accuracy as the discretization method, which gives an approximate solution of the exact problem. These methods reproduce the temperatures and fluxes predicted by the Tergra model as well as another set of experimental surface temperatures. In its present form, the Fourier method leads to results that become less accurate (mainly for low wind speeds) under certain conditions, namely, as the amplitude of the daily variation of the air and surface temperatures and their differences increase and as the relative humidities of the air at about 2 m and at the soil surface differ. Nevertheless, the results may be considered generally satisfactory. Possible improvements of the Fourier model are discussed.
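    The flavour of a Fourier treatment of soil heat flow can be illustrated with the classic single-harmonic solution for a homogeneous soil under sinusoidal surface forcing. This is a textbook result under idealized assumptions, not the paper's multi-term model with nonneutrality corrections; the parameter values are illustrative:

```python
import math

def soil_temperature(z, t, t_mean=15.0, amp=10.0, kappa=5e-7, period=86400.0):
    """Damped, phase-lagged daily temperature wave at depth z (m), time t (s).

    Single-harmonic solution of the 1-D heat conduction equation for a
    sinusoidal surface forcing; t_mean/amp in deg C, kappa is the thermal
    diffusivity (m^2/s). All parameter values here are illustrative.
    """
    omega = 2.0 * math.pi / period
    d = math.sqrt(2.0 * kappa / omega)          # damping depth (m)
    return t_mean + amp * math.exp(-z / d) * math.sin(omega * t - z / d)

# Amplitude decays with depth: at one damping depth it is reduced by 1/e,
# and the wave lags the surface by one radian of phase.
print(soil_temperature(0.0, 21600.0), soil_temperature(0.1, 21600.0))
```

    A full Fourier method superposes several such harmonics and couples the surface boundary condition to the atmospheric fluxes.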

  2. Can use of an administrative database improve accuracy of hospital-reported readmission rates?

    Science.gov (United States)

    Edgerton, James R; Herbert, Morley A; Hamman, Baron L; Ring, W Steves

    2018-05-01

    Readmission rates after cardiac surgery are being used as a quality indicator; they are also being collected by Medicare and are tied to reimbursement. Accurate knowledge of readmission rates may be difficult to achieve because patients may be readmitted to different hospitals. In our area, 81 hospitals share administrative claims data; 28 of these hospitals (from 5 different hospital systems) perform cardiac surgery and share Society of Thoracic Surgeons (STS) clinical data. We used these 2 sources to compare the readmission data for accuracy. A total of 45,539 STS records from January 2008 to December 2016 were matched with the hospital billing data records. Using the index visit as the start date, the billing records were queried for any subsequent inpatient visits for that patient. The billing records included date and hospital of readmission and were compared with the data captured in the STS record. We found 1153 (2.5%) patients whose STS records were marked "No" or "missing" but whose billing records showed a readmission. The reported STS readmission rate of 4796 (10.5%) thus underreported the true rate by 2.5 percentage points; the true rate should have been 13.0%, meaning the actual readmission rate was 23.8% higher than the clinical database reported. Approximately 36% of readmissions were to a hospital that was part of a different hospital system. It is important to know accurate readmission rates for quality improvement processes and institutional financial planning. Matching patient records to an administrative database showed that the clinical database may fail to capture many readmissions. Combining data with an administrative database can enhance accuracy of reporting. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
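    A record-linkage check of this kind can be sketched as follows; the field names and records are hypothetical, not the STS or billing schemas:

```python
# Sketch: flag clinical-registry records whose readmission field disagrees
# with administrative billing data. Field names and records are hypothetical.

def reconcile(sts_records, billing_admissions):
    """Return registry patient ids that billing data shows were readmitted
    although the registry says 'No' or is missing the field."""
    readmitted_ids = {b["patient_id"] for b in billing_admissions
                      if b["type"] == "readmission"}
    missed = []
    for rec in sts_records:
        flagged = rec.get("readmission")            # may be absent entirely
        if rec["patient_id"] in readmitted_ids and flagged in (None, "No"):
            missed.append(rec["patient_id"])
    return missed

sts = [{"patient_id": 1, "readmission": "No"},
       {"patient_id": 2, "readmission": "Yes"},
       {"patient_id": 3}]
billing = [{"patient_id": 1, "type": "readmission"},
           {"patient_id": 3, "type": "readmission"}]
print(reconcile(sts, billing))
```

    Dividing the count of such missed records by the registry total gives the underreporting in percentage points, as in the abstract.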

  3. Quantum chemistry by random walk: Higher accuracy

    International Nuclear Information System (INIS)

    Anderson, J.B.

    1980-01-01

    The random walk method of solving the Schroedinger equation is extended to allow the calculation of eigenvalues of atomic and molecular systems with higher accuracy. The combination of direct calculation of the difference δ between a true wave function ψ and a trial wave function ψ₀ with importance sampling greatly reduces systematic and statistical error. The method is illustrated with calculations for ground-state hydrogen and helium atoms using trial wave functions from variational calculations. The energies obtained are 20 to 100 times more accurate than those of the corresponding variational calculations
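    The error-reduction principle — sample the small difference between the exact quantity and a good trial approximation rather than the quantity itself — can be illustrated with a toy Monte Carlo integral. This is a generic difference-sampling sketch, not Anderson's random-walk algorithm:

```python
import math
import random
import statistics

random.seed(0)

def f(x):
    """Exact integrand: exp(-x^2) on [0, 1]."""
    return math.exp(-x * x)

def g(x):
    """Trial approximation (Taylor polynomial) with known integral 23/30."""
    return 1.0 - x * x + x ** 4 / 2.0

def mc_estimate(fn, n=2000):
    """Plain Monte Carlo mean and its standard error on [0, 1]."""
    vals = [fn(random.random()) for _ in range(n)]
    return statistics.mean(vals), statistics.stdev(vals) / math.sqrt(n)

direct, direct_err = mc_estimate(f)                      # sample f itself
delta, delta_err = mc_estimate(lambda x: f(x) - g(x))    # sample the difference
corrected = 23.0 / 30.0 + delta   # exact trial integral + sampled difference
print(direct_err, delta_err, corrected)
```

    Because f - g is small everywhere the trial is good, its variance is far below that of f, so the same number of samples yields a much smaller statistical error — the same logic as sampling δ = ψ - ψ₀ instead of ψ.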

  4. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Karaiskos, Pantelis, E-mail: pkaraisk@med.uoa.gr [Medical Physics Laboratory, Medical School, University of Athens (Greece); Gamma Knife Department, Hygeia Hospital, Athens (Greece); Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos [Medical Physics Laboratory, Medical School, University of Athens (Greece); Roussakis, Arkadios [CT and MRI Department, Hygeia Hospital, Athens (Greece); Torrens, Michael [Gamma Knife Department, Hygeia Hospital, Athens (Greece); Seimenis, Ioannis [Medical Physics Laboratory, Medical School, Democritus University of Thrace, Alexandroupolis (Greece)

    2014-12-01

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
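    The geometric idea behind the method — sequence-dependent shifts reverse sign with the read-gradient polarity, so averaging the two acquisitions cancels them — can be illustrated with a toy calculation (synthetic values, not patient data):

```python
def measured(true_x, delta, polarity):
    """Read-axis position observed for read-gradient polarity +1 or -1:
    the sequence-dependent shift flips sign with the polarity."""
    return true_x + polarity * delta

def averaged(true_x, delta):
    """Coordinate in the 'average' image compounded from both acquisitions."""
    return 0.5 * (measured(true_x, delta, +1) + measured(true_x, delta, -1))

# Synthetic targets: (true position in mm, sequence-dependent shift in mm).
targets = [(12.0, 0.6), (40.5, -1.3), (73.2, 0.9)]
for x, d in targets:
    print(x, measured(x, d, +1), averaged(x, d))
```

    Gradient nonlinearities do not flip sign with the read polarity, which is why they remain as the residual ~0.3 mm uncertainty reported in the phantom study.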

  5. Improving the surface metrology accuracy of optical profilers by using multiple measurements

    Science.gov (United States)

    Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan

    2016-10-01

    The performance of high-resolution optical systems is affected by small angle scattering at the mid-spatial-frequency irregularities of the optical surface. Characterizing these irregularities is, therefore, important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable on their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise. The intensity of white noise decreases as the number of times of multiple measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, with the analysis of both the residual reference error after calibration, and the random errors appearing in the range of measured spatial frequencies. The resulting insights on the effects of white noise in optical profiler measurements and the methods to mitigate them may prove invaluable to improve the quality of surface metrology with optical profilers.
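    The 1/√N suppression of additive white noise by averaging repeated scans can be demonstrated with a toy simulation (a synthetic flat surface and Gaussian noise, not profiler data):

```python
import random
import statistics

random.seed(1)

def measure(surface, sigma=1.0):
    """One profiler scan: true surface heights plus additive white noise."""
    return [h + random.gauss(0.0, sigma) for h in surface]

def average_scans(surface, n, sigma=1.0):
    """Point-wise average of n independent scans of the same surface."""
    scans = [measure(surface, sigma) for _ in range(n)]
    return [sum(col) / n for col in zip(*scans)]

surface = [0.0] * 4000          # flat surface, so the residual is pure noise
res1 = statistics.stdev(average_scans(surface, 1))
res16 = statistics.stdev(average_scans(surface, 16))
print(res1, res16)              # residual noise drops roughly as 1/sqrt(n)
```

    In PSD terms, the white-noise floor responsible for the heavy tail drops by a factor of n, which is what makes the mid-spatial-frequency content recoverable.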

  6. Diagnostic Accuracy: The Wellspring of EBVM Success, and How We Can Improve It

    Directory of Open Access Journals (Sweden)

    David Mills

    2017-09-01

    Full Text Available Therapy and prognosis are entailed by the diagnosis: the holistic success of the EBVM approach therefore firmly and critically rests on diagnostic accuracy. Unfortunately, medical professionals do not appear to be very accurate with diagnoses. In human medicine, there is 30-50% discordance reported between doctors' ante-mortem (presumptive) and post-mortem (definitive) diagnoses, with no significant change in the last 100 years (Goldberg et al. 2002). This is attenuated by attaching a degree of certainty – 'very certain' shows 16% discordance, 'probable' 33% and 'uncertain' 50% – and some body systems are more difficult (e.g. respiratory) than others (Shojania et al. 2002; Sington and Cottrell 2002). Veterinary surgeons do not perform much better, although it is a chronically under-researched area. The single study that exists – from a respected referral institution – shows discordance between ante- and post-mortem diagnoses ranging from 15% (oncology) to 45% (ECC), with internal medicine (44%), neurology (35%), surgery (33%) and cardiology (21%) lying in between (Kent et al. 2004). Incorrect diagnoses are therefore common; the potential for subsequent incorrect or harmful therapy and/or prognosis is great; and the quality of interventional evidence is immaterial if the wrong disease is being treated. How can we do better? Human EBM shows that technology, big data and further evidence do not guarantee improvement; these are unlikely realisations for EBVM in the near future in any case. The answer may lie in the fields of psychology and social science. Studies indicate that diagnostic success may rest largely with the individual: expert clinicians consistently perform better. But how? Experts are marked out by the use of 'illness scripts', which are mental knowledge networks against which the presenting patient – history, signs, clinical data – is checked and hypotheses entertained or refuted until (with the addition of more

  7. State Government Revenue Recovery from the Great Recession

    OpenAIRE

    James Alm; David L. Sjoquist

    2014-01-01

    The "Great Recession" lasted from December 2007 to June 2009, and it wreaked havoc on the revenues of state (and local) governments. While the U.S. economy has improved since the end of the Great Recession, state government revenues have in most cases still not completely recovered. We use various indicators to measure how different states have -- or have not -- recovered in the aftermath of the Great Recession, and we also attempt to explain why these different patterns of recovery have emer...

  8. Improving the accuracy of myocardial perfusion scintigraphy results by machine learning method

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Full text: Machine learning (ML), a rapidly growing subfield of artificial intelligence, has already proven in the last decade to be a useful tool in many fields of decision making, including some fields of medicine. Its decision accuracy usually exceeds the human one. Our aim was to assess the applicability of ML in interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. Data from 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. As ML can handle any number of attributes simultaneously, all available disease-related data (regarding history, habitus, risk factors and stress results) were added. The sensitivity, specificity and accuracy of scintigraphy were computed in the same way, and the results of both decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy. (author)
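    A naive Bayesian classifier of the kind referred to here can be sketched as a categorical naive Bayes with add-one smoothing; the features, training counts, and the two-value smoothing denominator are illustrative assumptions, not the study's data or software:

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (feature_dict, label). Returns class counts and
    per-(label, feature) value counts for later smoothed likelihoods."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, label in samples:
        label_counts[label] += 1
        for k, v in feats.items():
            feat_counts[(label, k)][v] += 1
    return label_counts, feat_counts

def predict(model, feats):
    """Pick the label maximising log P(label) + sum log P(feature|label),
    with add-one smoothing (denominator assumes binary features)."""
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, c in label_counts.items():
        lp = math.log(c / total)
        for k, v in feats.items():
            counts = feat_counts[(label, k)]
            lp += math.log((counts[v] + 1) / (sum(counts.values()) + 2))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical training data: (scan result, risk factor) -> CAD yes/no.
data = [({"scan": "abnormal", "smoker": "yes"}, "cad"),
        ({"scan": "abnormal", "smoker": "no"}, "cad"),
        ({"scan": "normal", "smoker": "yes"}, "no_cad"),
        ({"scan": "normal", "smoker": "no"}, "no_cad"),
        ({"scan": "abnormal", "smoker": "yes"}, "cad")]
model = train(data)
print(predict(model, {"scan": "abnormal", "smoker": "yes"}))
```

    The appeal of the approach, as the abstract notes, is that extra attributes (history, habitus, risk factors) slot in as additional feature terms at no structural cost.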

  9. An evaluation of the effectiveness of PROMPT therapy in improving speech production accuracy in six children with cerebral palsy.

    Science.gov (United States)

    Ward, Roslyn; Leitão, Suze; Strauss, Geoff

    2014-08-01

    This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets, PROMPT). An A1BCA2 single-subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor-speech movement parameters and perceptual accuracy. Analysis of the speech probe data showed that all participants recorded a statistically significant change. Between phases A1-B and B-C, 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance level on the motor-speech movement patterns targeted during that intervention. The preliminary data presented in this study contribute evidence supporting the use of a treatment approach aligned with dynamic systems theory to improve the motor-speech movement patterns and speech production accuracy in children with cerebral palsy.

  10. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    Science.gov (United States)

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
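    Treating the quoted worst-case figures as independent first-order relative errors in the sedimentation coefficient, their combined effect can be estimated in quadrature. This is a generic error-propagation sketch, not the paper's correction procedure:

```python
import math

def combined_relative_error(*errors):
    """Quadrature sum of independent first-order relative errors."""
    return math.sqrt(sum(e * e for e in errors))

# Worst-case figures quoted in the abstract: up to 10% (temperature, via
# solvent viscosity), 8.6% span (radial calibration) and 2% (timing).
total = combined_relative_error(0.10, 0.086, 0.02)
print(f"combined relative error ~ {total:.1%}")
```

    Quadrature assumes the three sources are uncorrelated; the point of the paper's external calibrations is to measure and remove each one, shrinking the residual scatter to ~0.7%.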

  11. Apparent density measurement by mercury pycnometry. Improved accuracy. Simplification of handling for possible application to irradiated samples

    International Nuclear Information System (INIS)

    Marlet, Bernard

    1978-12-01

    The accuracy of the apparent density measurement on massive samples of any geometrical shape has been improved and the method simplified. A standard deviation of ±1 to 5×10⁻³ g·ml⁻¹, according to the size and surface state of the sample, was obtained by the use of a flat ground stopper on a mercury pycnometer which fills itself under vacuum. This method saves considerable time and has been adapted to work in shielded cells for the measurement of radioactive materials, especially sintered uranium dioxide discharged from the reactor. The different parameters are analysed and criticized [fr

  12. The Great Recession was not so Great

    NARCIS (Netherlands)

    van Ours, J.C.

    2015-01-01

    The Great Recession is characterized by a GDP-decline that was unprecedented in the past decades. This paper discusses the implications of the Great Recession analyzing labor market data from 20 OECD countries. Comparing the Great Recession with the 1980s recession it is concluded that there is a

  13. Information transmission via movement behaviour improves decision accuracy in human groups

    NARCIS (Netherlands)

    Clément, R.J.G.; Wolf, Max; Snijders, Lysanne; Krause, Jens; Kurvers, R.H.J.M.

    2015-01-01

    A major advantage of group living is increased decision accuracy. In animal groups information is often transmitted via movement. For example, an individual quickly moving away from its group may indicate approaching predators. However, individuals also make mistakes which can initiate

  15. Improved mass resolution and mass accuracy in TOF-SIMS spectra and images using argon gas cluster ion beams.

    Science.gov (United States)

    Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol

    2016-06-09

    The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic molecules and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior mass resolution of a TOF-SIMS spectrum obtained with Ar-GCIB, compared with one obtained with a bismuth liquid-metal cluster ion beam and others, makes it difficult to identify unknown peaks because of mass interference from neighboring peaks. In this study, however, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal low-mass peaks such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectra and images generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. Through spectral analyses of a peptide mixture and a tryptic digest of bovine serum albumin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration.

  16. The development of a sub-daily gridded rainfall product to improve hydrological predictions in Great Britain

    Science.gov (United States)

    Quinn, Niall; Freer, Jim; Coxon, Gemma; O'Loughlin, Fiachra; Woods, Ross; Liguori, Sara

    2015-04-01

    In Great Britain and many other regions of the world, flooding resulting from short-duration, high-intensity rainfall events can lead to significant economic losses and fatalities. At present, such extreme events are often poorly evaluated using hydrological models due, in part, to their rarity and relatively short duration and to a lack of appropriate data. Such storm characteristics are not well represented by the daily rainfall records currently available from volumetric gauges and/or derived gridded products. This research aims to address this important data gap by developing a sub-daily gridded precipitation product for Great Britain. Our focus is to better understand these storm events and some of the challenges and uncertainties in quantifying such data across catchment scales. Our goal is both to improve such rainfall characterisation and to derive an input to drive hydrological model simulations. Our methodology involves the collation, error checking, and spatial interpolation of data from approximately 2000 tipping-bucket rain (TBR) gauges located across Great Britain, provided by the Scottish Environment Protection Agency (SEPA) and the Environment Agency (EA). Error checking was conducted over the entirety of the TBR data available, utilising a two-stage approach. First, rain gauge data at each site were examined independently, with data exceeding reasonable thresholds marked as suspect. Second, potentially erroneous data were marked using a neighbourhood analysis approach whereby measurements at a given gauge were deemed suspect if they did not fall within defined bounds of measurements at neighbouring gauges. A total of eight error checks were conducted. To provide the user with the greatest flexibility possible, the error markers associated with each check have been recorded at every site. This approach aims to enable the user to choose which checks they deem most suitable for a particular application. The quality assured TBR dataset was then spatially interpolated to produce a national
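    The two-stage error checking can be sketched as a per-gauge range check followed by a neighbourhood comparison; the thresholds and readings below are illustrative, not the SEPA/EA criteria:

```python
# Sketch of two-stage gauge QC: (1) flag implausible values at a single
# gauge, (2) flag values far outside the spread of neighbouring gauges.
# Thresholds and data are illustrative, not the operational criteria.

def range_check(value, max_plausible=50.0):
    """Stage 1: flag physically implausible hourly totals (mm)."""
    return value < 0.0 or value > max_plausible

def neighbourhood_check(value, neighbours, factor=3.0, floor=2.0):
    """Stage 2: flag a reading far outside its neighbours' mean,
    with an absolute floor so light-rain hours are not over-flagged."""
    mean = sum(neighbours) / len(neighbours)
    return abs(value - mean) > max(factor * mean, floor)

gauge = 25.0                      # suspect hourly total (mm)
nearby = [3.1, 2.4, 4.0, 2.8]     # simultaneous totals at neighbouring gauges
flags = {"range": range_check(gauge),
         "neighbourhood": neighbourhood_check(gauge, nearby)}
print(flags)
```

    Recording the flags per check, rather than deleting data, matches the abstract's design of letting the user choose which checks to apply.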

  17. Climate variability and Great Plains agriculture

    International Nuclear Information System (INIS)

    Rosenberg, N.J.; Katz, L.A.

    1991-01-01

    The ways in which inhabitants of the Great Plains, including Indians, early settlers, and 20th century farmers, have adapted to climate changes on the Great Plains are explored. The climate of the Great Plains, because of its variability and extremes, can be very stressful to plants, animals and people. It is suggested that agriculture and society on the Great Plains have, during the last century, become less vulnerable to the stresses imposed by climate. Opinions as to the sustainability of agriculture on the Great Plains vary substantially. Lockeretz (1981) suggests that large scale, high cost technologies have stressed farmers by creating surpluses and by requiring large investments. Opie (1989) sees irrigation as a climate substitute, however he stresses that the Ogallala aquifer must inevitably become depleted. Deborah and Frank Popper (1987) believe that farming on the Plains is unsustainable, and destruction of shelterbelts, out-migration of the rural population and environmental problems will lead to total collapse. With global warming, water in the Great Plains is expected to become scarcer, and although improvements in irrigation efficiency may slow depletion of the Ogallala aquifer, ultimately the acreage under irrigation must decrease to levels that can be sustained by natural recharge and reliable surface flows. 23 refs., 2 figs

  18. The hidden KPI registration accuracy.

    Science.gov (United States)

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  19. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass data, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
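    The correction can be sketched as follows. To keep the sketch dependency-free it interpolates a parabola exactly through three synthetic calibrant points rather than least-squares fitting many calibrants, as the paper does:

```python
# Minimal sketch of the quadratic correction: put a parabola through the
# (mass, delta_m) points of three calibrants and subtract its prediction
# from linearly calibrated masses. Calibrant values are synthetic.

def parabola_through(p0, p1, p2):
    """Return f(x) for the parabola through three (x, y) points (Lagrange)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# Synthetic calibrants: delta_m = theoretical - linearly calibrated mass (Da).
calibrants = [(1000.0, 0.12), (2000.0, 0.35), (3000.0, 0.80)]
correction = parabola_through(*calibrants)

measured = 2500.0                 # linearly calibrated mass of an unknown
corrected = measured - correction(measured)
print(corrected)
```

    With a least-squares fit over many calibrants the same subtraction step applies; only the way the parabola's coefficients are obtained changes.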

  20. Optical Molecular Imaging Frontiers in Oncology: The Pursuit of Accuracy and Sensitivity

    Directory of Open Access Journals (Sweden)

    Kun Wang

    2015-09-01

    Full Text Available Cutting-edge technologies in optical molecular imaging have ushered in new frontiers in cancer research, clinical translation, and medical practice, as evidenced by recent advances in optical multimodality imaging, Cerenkov luminescence imaging (CLI, and optical image-guided surgeries. New abilities allow in vivo cancer imaging with sensitivity and accuracy that are unprecedented in conventional imaging approaches. The visualization of cellular and molecular behaviors and events within tumors in living subjects is improving our deeper understanding of tumors at a systems level. These advances are being rapidly used to acquire tumor-to-tumor molecular heterogeneity, both dynamically and quantitatively, as well as to achieve more effective therapeutic interventions with the assistance of real-time imaging. In the era of molecular imaging, optical technologies hold great promise to facilitate the development of highly sensitive cancer diagnoses as well as personalized patient treatment—one of the ultimate goals of precision medicine.

  1. A method of high accuracy clock synchronization by frequency following with VCXO

    International Nuclear Information System (INIS)

    Ma Yichao; Wu Jie; Zhang Jie; Song Hongzhi; Kong Yang

    2011-01-01

    In this paper, the principle of the IEEE 1588 synchronization protocol is analyzed, and the factors that affect the accuracy of synchronization are summarized. Using a hardware timer in a microcontroller, we capture the exact time at which a packet is sent or received; synchronization of the distributed clocks can reach 1 μs in this way. Another method to improve the precision of synchronization is to replace the traditional fixed-frequency crystal of the slave device, which must follow the master clock, with an adjustable VCXO. This makes it possible to fine-tune the frequency of the distributed clocks and reduce clock drift, which greatly benefits clock synchronization. A test measurement shows that synchronization of distributed clocks can be better than 10 ns using this method, which is more accurate than the software-only implementation. (authors)
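    The IEEE 1588 exchange underlying this work computes clock offset and path delay from four timestamps, assuming a symmetric network path; a minimal sketch with synthetic nanosecond values:

```python
# IEEE 1588 offset/delay computation from the four timestamps of a
# Sync/Delay_Req exchange, assuming a symmetric network path.
#   t1: master sends Sync        t2: slave receives Sync
#   t3: slave sends Delay_Req    t4: master receives Delay_Req

def ptp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Synthetic example: slave runs 500 ns ahead, true one-way delay 2000 ns.
t1 = 0
t2 = t1 + 2000 + 500        # master->slave travel time + slave offset
t3 = t2 + 10000             # slave-side turnaround
t4 = t3 - 500 + 2000        # remove slave offset, add slave->master travel
offset, delay = ptp_offset_delay(t1, t2, t3, t4)
print(offset, delay)  # → 500.0 2000.0
```

    The computed offset is what a servo would feed back to the VCXO control voltage, steering the slave frequency rather than stepping its time.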

  2. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    Directory of Open Access Journals (Sweden)

    Guillaume P. Ramstein

    2016-04-01

    Full Text Available Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, accounting for linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  3. Improving prediction accuracy of cooling load using EMD, PSR and RBFNN

    Science.gov (United States)

    Shen, Limin; Wen, Yuanmei; Li, Xiaohong

    2017-08-01

    To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition)-PSR (phase space reconstruction) based RBFNN (radial basis function neural network) method. First, we analyzed the chaotic nature of real cooling load demand and transformed the non-stationary historical cooling load data into several stationary intrinsic mode functions (IMFs) using EMD. Second, we compared the RBFNN prediction accuracies of the individual IMFs and proposed an IMF combining scheme: the lower-frequency components (IMF4-IMF6) are combined, while the higher-frequency components (IMF1, IMF2, IMF3) and the residual are kept unchanged. Third, we reconstructed the phase space for each combined component separately, processed the highest-frequency component (IMF1) by differencing, and predicted with RBFNN in the reconstructed phase spaces. Real cooling load data from a centralized ice-storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms the traditional methods.
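    The PSR step, phase space reconstruction by delay embedding, can be sketched as follows; the lag and embedding dimension here are illustrative, since the abstract does not state the values used:

```python
# Sketch of phase space reconstruction by delay embedding (Takens-style):
# each point pairs a sample with its time-lagged predecessors. The lag and
# dimension are illustrative; in practice they are typically chosen via
# mutual-information and false-nearest-neighbour analyses.

def delay_embed(series, dim, lag):
    """Return the list of dim-dimensional delay vectors of a scalar series."""
    n = len(series) - (dim - 1) * lag
    return [tuple(series[i + j * lag] for j in range(dim)) for i in range(n)]

load = [10, 12, 15, 14, 11, 9, 10, 13]      # toy cooling-load samples
vectors = delay_embed(load, dim=3, lag=2)
print(vectors)
```

    Each delay vector then becomes one input pattern to the RBFNN, whose target is the next value of the series.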

  4. Great Basin Factsheet Series 2016 - Information and tools to restore and conserve Great Basin ecosystems

    Science.gov (United States)

    Jeanne C. Chambers

    2016-01-01

    Land managers are responsible for developing effective strategies for conserving and restoring Great Basin ecosystems in the face of invasive species, conifer expansion, and altered fire regimes. A warming climate is magnifying the effects of these threats and adding urgency to implementation of management practices that will maintain or improve ecosystem...

  5. Diagnostic accuracy of fine needle aspiration cytology of thyroid and evaluation of discordant cases

    International Nuclear Information System (INIS)

    Sharma, Ch.

    2015-01-01

    The main role of fine needle aspiration cytology (FNAC) lies in differentiating between malignant and benign thyroid nodules, and it greatly influences the treatment decision. The current study was undertaken to evaluate the cytology–histopathology correlation and to analyze the causes of diagnostic errors, with the eventual aim of improving diagnostic accuracy. Materials and Methods: This is a retrospective study comparing cytology and corresponding histopathology reports in 724 thyroid cases. The statistical analysis included false positive rate, false negative rate, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. Results: On cytological examination, 635/724 cases were reported as benign, 68 as malignant, and 21 as suspicious. On histopathological examination, 626/635 cases were confirmed as benign, leaving 9 discordant cases. Among the other cases, the histopathological diagnosis of malignancy matched in 66/68 and 11/21 cases. Diagnoses correlated in 703/724 cases (97%) [p < 0.001]. The false positive and false negative rates were 1.9% and 10.5%, respectively. The sensitivity and specificity were 89.5% and 98%, respectively. The positive predictive value was 84.6% and the negative predictive value was 98.6%. The accuracy of FNA was 97%. Conclusion: In spite of the high accuracy of FNAC in differentiating between benign and malignant lesions, certain pitfalls should be kept in mind. The common false negative diagnoses were cases with a follicular pattern, which constitute a ‘gray zone’, cystic papillary thyroid carcinoma (PTC), and papillary microcarcinoma. The reason for false positive diagnoses was the occurrence of nuclear features characteristic of PTC in other thyroid lesions. Awareness among pathologists of these pitfalls can minimize false negative/positive diagnoses
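    Most of the reported figures can be reproduced from a 2x2 confusion table. The sketch below uses counts inferred from the abstract; grouping the 21 "suspicious" aspirates with the malignant calls is an assumption made here for illustration:

    ```python
    # Counts inferred from the abstract (assumption: the 21 "suspicious"
    # aspirates are grouped with the malignant calls; 11 proved malignant).
    TP = 66 + 11   # malignant on histology, flagged by cytology
    FN = 9         # malignant on histology, called benign on cytology
    TN = 626       # benign on both
    FP = 2 + 10    # benign on histology, flagged by cytology

    sensitivity = TP / (TP + FN)                    # ~0.895
    specificity = TN / (TN + FP)                    # ~0.981
    npv = TN / (TN + FN)                            # ~0.986
    accuracy = (TP + TN) / (TP + TN + FP + FN)      # 703/724 ~ 0.971

    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
          f"NPV {npv:.1%}, accuracy {accuracy:.1%}")
    ```

    These match the abstract's sensitivity, specificity, NPV, and overall accuracy; the reported PPV of 84.6% presumably reflects a different grouping of the suspicious category.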

  6. Does imprint cytology improve the accuracy of transrectal prostate needle biopsy?

    Science.gov (United States)

    Sayar, Hamide; Bulut, Burak Besir; Bahar, Abdulkadir Yasir; Bahar, Mustafa Remzi; Seringec, Nurten; Resim, Sefa; Çıralık, Harun

    2015-02-01

    To evaluate the accuracy of imprint cytology of core needle biopsy specimens in the diagnosis of prostate cancer. Between December 24, 2011 and May 9, 2013, patients with an abnormal DRE and/or a serum PSA level of >2.5 ng/mL underwent transrectal prostate needle biopsy. Samples with positive imprint cytology but a negative initial histologic exam underwent repeat sectioning and histological examination. 1,262 transrectal prostate needle biopsy specimens were evaluated from 100 patients. Malignant imprint cytology was found in 236 specimens (18.7%), 197 (15.6%) of which were confirmed by histologic examination, giving an initial 3.1% (n = 39) rate of discrepant results by imprint cytology. Upon repeat sectioning and histologic examination of these 39 biopsy samples, 14 (1.1% of the original specimens) were then diagnosed as malignant, 3 (0.2%) as atypical small acinar proliferation (ASAP), and 5 (0.4%) as high-grade prostatic intraepithelial neoplasia (HGPIN). Overall, 964 (76.4%) specimens were negative for malignancy by imprint cytology. Seven (0.6%) specimens were benign by cytology but malignant cells were found on histological evaluation. On imprint cytology examination, nonmalignant but abnormal findings were seen in 62 specimens (4.9%); these were all due to benign processes. After reexamination, the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, false-positive rate, and false-negative rate of imprint preparations were 98.1%, 96.9%, 98.4%, 92.8%, 99.3%, 1.6%, and 3.1%, respectively. Imprint cytology is a valuable tool for evaluating TRUS-guided core needle biopsy specimens from the prostate. Use of imprint cytology in combination with histopathology increases diagnostic accuracy when compared with histopathologic assessment alone. © 2014 Wiley Periodicals, Inc.

  7. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    Science.gov (United States)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine Systems, 139: 217-226. http://dx.doi.org/10.1016/j.jmarsys.2014.06007.

  8. Identifying the source of farmed escaped Atlantic salmon (Salmo salar): Bayesian clustering analysis increases accuracy of assignment

    DEFF Research Database (Denmark)

    Glover, Kevin A.; Hansen, Michael Møller; Skaala, Oystein

    2009-01-01

    44 cages located on 26 farms in the Hardangerfjord, western Norway. This fjord represents one of the major salmon farming areas in Norway, with a production of 57,000 t in 2007. Based upon genetic data from 17 microsatellite markers, significant but highly variable differentiation was observed among....... Accuracy of assignment varied greatly among the individual samples. For the Bayesian clustered data set consisting of five genetic groups, overall accuracy of self-assignment was 99%, demonstrating the effectiveness of this strategy to significantly increase accuracy of assignment, albeit at the expense...

  9. Attitude Modeling Using Kalman Filter Approach for Improving the Geometric Accuracy of Cartosat-1 Data Products

    Directory of Open Access Journals (Sweden)

    Nita H. SHAH

    2010-07-01

    This paper deals with a rigorous photogrammetric solution to model the uncertainty in the orientation parameters of the Indian Remote Sensing satellite IRS-P5 (Cartosat-1). Cartosat-1 is a three-axis stabilized spacecraft launched into a polar sun-synchronous circular orbit at an altitude of 618 km. The satellite has two panchromatic (PAN) cameras with a nominal resolution of ~2.5 m. The camera looking ahead, called FORE, is mounted at +26 deg, and the other, looking near nadir, called AFT, is mounted at -5 deg in the along-track direction. The Data Product Generation Software (DPGS) system uses the rigorous photogrammetric collinearity model in order to utilize the full system information, together with payload geometry and control points, for estimating the uncertainty in attitude parameters. The initial orbit and attitude knowledge is obtained from GPS-bound orbit measurement, star trackers, and gyros. The variations in satellite attitude with time are modelled using a simple linear polynomial model. Based on this model, a Kalman filter approach is studied and applied to improve the uncertainty in the orientation of the spacecraft with high-quality ground control points (GCPs). The sequential estimator (Kalman filter) is used in an iterative process which corrects the parameters at each time of observation rather than at the epoch time. Results are presented for three stereo data sets. The accuracy of the model depends on the accuracy of the control points.
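    The sequential-estimator idea, correcting the state at each observation time rather than only at epoch, can be sketched as a scalar Kalman filter tracking a slowly drifting attitude bias from noisy GCP-derived observations. All numbers below are illustrative, not Cartosat-1 values:

    ```python
    import numpy as np

    def kalman_track(obs, q, r):
        """Scalar Kalman filter: state = attitude bias (rad), random-walk model.

        q: process noise variance, r: observation noise variance (assumed).
        The estimate is corrected at every observation, not just at epoch.
        """
        x, p = 0.0, 1.0            # initial state estimate and its variance
        out = []
        for z in obs:
            p = p + q              # predict (random-walk process model)
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)    # update with the innovation
            p = (1 - k) * p
            out.append(x)
        return np.array(out)

    rng = np.random.default_rng(0)
    true_bias = 0.002              # rad, illustrative
    zs = true_bias + rng.normal(0, 0.03, 500)
    est = kalman_track(zs, q=1e-8, r=0.03**2)
    print(abs(est[-1] - true_bias))  # residual error shrinks as data accumulate
    ```

    With a small process noise `q`, the filter effectively averages the noisy observations, so the bias estimate tightens roughly as the square root of the number of GCP observations.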

  10. Big Ship Data: Using vessel measurements to improve estimates of temperature and wind speed on the Great Lakes

    Science.gov (United States)

    Fries, Kevin; Kerkez, Branko

    2017-05-01

    The sheer size of many water systems challenges the ability of in situ sensor networks to resolve spatiotemporal variability of hydrologic processes. New sources of vastly distributed and mobile measurements are, however, emerging to potentially fill these observational gaps. This paper poses the question: How can nontraditional measurements, such as those made by volunteer ship captains, be used to improve hydrometeorological estimates across large surface water systems? We answer this question through the analysis of one of the largest such data sets: an unprecedented collection of one million unique measurements made by ships on the North American Great Lakes from 2006 to 2014. We introduce a flexible probabilistic framework, which can be used to integrate ship measurements, or other sets of irregular point measurements, into contiguous data sets. The performance of this framework is validated through the development of a new ship-based spatial data product of water temperature, air temperature, and wind speed across the Great Lakes. An analysis of the final data product suggests that the availability of measurements across the Great Lakes will continue to play a large role in the confidence with which these large surface water systems can be studied and modeled. We discuss how this general and flexible approach can be applied to similar data sets, and how it will be of use to those seeking to merge large collections of measurements with other sources of data, such as physical models or remotely sensed products.
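    One simple way to turn irregular scattered point measurements into a gridded estimate, far simpler than the paper's probabilistic framework but illustrating the interpolation task it solves, is Gaussian-kernel weighting of nearby ship reports (positions, values, and length scale below are made up):

    ```python
    import numpy as np

    def kernel_estimate(xy_obs, values, xy_grid, length_scale=10.0):
        """Gaussian-kernel weighted average of scattered observations.

        xy_obs:  (N, 2) observation positions (e.g., km easting/northing)
        values:  (N,)   observed quantity (e.g., water temperature)
        xy_grid: (G, 2) target grid points
        """
        d2 = ((xy_grid[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / length_scale**2)   # closer ships weigh more
        return (w @ values) / w.sum(axis=1)

    obs = np.array([[0.0, 0.0], [20.0, 0.0]])     # two ship reports
    temp = np.array([10.0, 14.0])                 # degrees C, illustrative
    grid = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
    print(kernel_estimate(obs, temp, grid))       # midpoint lands at 12.0
    ```

    A probabilistic treatment would additionally carry an uncertainty with each grid value, which is what lets sparse ship coverage be flagged as low-confidence.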

  11. Improving the Forecasting Accuracy of Crude Oil Prices

    Directory of Open Access Journals (Sweden)

    Xuluo Yin

    2018-02-01

    Currently, oil is a key element of energy sustainability, and oil prices and the economy have a strong mutual influence. Developing a method that accurately predicts oil prices over long future horizons is challenging and of great interest to investors and policymakers. This paper forecasts oil prices using many predictor variables with a new time-varying weight combination approach. In doing so, we first use five single-variable time-varying parameter models to predict crude oil prices separately. Second, each individual model is assigned a time-varying weight by the new combination approach. Finally, the forecasting results for oil prices are calculated. The results show that the paper’s method is robust and performs well compared to a random walk.
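    A common way to realize time-varying combination weights, not necessarily the paper's exact scheme, is to weight each model by the inverse of its recent squared forecast error, so the weights shift toward whichever model has been accurate lately:

    ```python
    import numpy as np

    def combine_forecasts(preds, actual, window=4, eps=1e-9):
        """Combine model forecasts with time-varying inverse-MSE weights.

        preds:  (T, M) array, column m = model m's one-step-ahead forecasts
        actual: (T,)   realized values
        Returns the (T,) combined forecast; weights are equal until `window`
        past errors are available.
        """
        T, M = preds.shape
        combo = np.empty(T)
        for t in range(T):
            if t < window:
                w = np.full(M, 1.0 / M)
            else:
                err2 = (preds[t - window:t] - actual[t - window:t, None]) ** 2
                w = 1.0 / (err2.mean(axis=0) + eps)
                w /= w.sum()
            combo[t] = preds[t] @ w
        return combo

    # Toy example: model 0 is nearly unbiased, model 1 has a constant +5 bias
    actual = np.linspace(50.0, 60.0, 12)
    preds = np.column_stack([actual + 0.1, actual + 5.0])
    combo = combine_forecasts(preds, actual)
    print(combo[-1])  # pulled almost entirely onto the accurate model
    ```

    The window length and the inverse-MSE weighting rule are design choices; schemes in the literature also use exponential discounting of past errors.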

  12. High accuracy autonomous navigation using the global positioning system (GPS)

    Science.gov (United States)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to improving the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  13. An improved recommendation algorithm via weakening indirect linkage effect

    Science.gov (United States)

    Chen, Guang; Qiu, Tian; Shen, Xiao-Quan

    2015-07-01

    We propose an indirect-link-weakened mass diffusion method (IMD), which considers both the indirect linkage and the source-object heterogeneity effect in the mass diffusion (MD) recommendation method. Experimental results on the MovieLens, Netflix, and RYM datasets show that the IMD method greatly improves both the recommendation accuracy and diversity compared with a heterogeneity-weakened MD method (HMD), which considers only the source-object heterogeneity. Moreover, the recommendation accuracy for cold objects is also improved more in the IMD than in the HMD method. This suggests that eliminating the redundancy induced by indirect linkages can have a prominent effect on the recommendation efficiency of the MD method. Project supported by the National Natural Science Foundation of China (Grant No. 11175079) and the Young Scientist Training Project of Jiangxi Province, China (Grant No. 20133BCB23017).
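    For reference, the plain mass-diffusion step on a user-object bipartite network distributes each object's unit resource to its collecting users and back again. This is only the baseline MD sketch; the indirect-link weakening and heterogeneity corrections of IMD/HMD are omitted, and the toy network is invented:

    ```python
    import numpy as np

    def md_scores(A, u):
        """Mass diffusion recommendation scores for target user u.

        A: (users, objects) adjacency matrix, A[i, j] = 1 if user i
        collected object j. Resource flows objects -> users -> objects,
        split evenly by object degree and then by user degree.
        """
        ku = A.sum(axis=1)                      # user degrees
        ko = A.sum(axis=0)                      # object degrees
        f = A[u].copy()                         # unit resource on u's objects
        users = (A / ko) @ f                    # objects -> users
        objects = (A / ku[:, None]).T @ users   # users -> objects
        objects[A[u] > 0] = 0                   # skip already-collected items
        return objects

    # Toy bipartite network: 3 users, 4 objects
    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]], dtype=float)
    print(md_scores(A, 0))  # scores for the two objects user 0 lacks
    ```

    Object 2, reachable from user 0 through two neighbors, outscores object 3, reachable through only one; IMD's contribution is to down-weight the redundant part of such multi-path (indirect) flows.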

  14. Accuracy improvement of SPACE code using the optimization for CHF subroutine

    International Nuclear Information System (INIS)

    Yang, Chang Keun; Kim, Yo Han; Park, Jong Eun; Ha, Sang Jun

    2010-01-01

    Typically, a subroutine to calculate the CHF (Critical Heat Flux) is included in codes for nuclear power plant safety analysis. The CHF subroutine evaluates CHF under arbitrary conditions (temperature, pressure, flow rate, power, etc.). CHF is one of the most important parameters when safety analysis for a nuclear power plant is performed. However, the subroutines used in most codes, such as the Biasi method, estimate values that differ somewhat from experimental data, and most CHF subroutines can predict reliably only within their specified range of pressure, mass flow, void fraction, etc. Even when the most accurate CHF subroutine is used in a high-quality nuclear safety analysis code, it is not assured that the values predicted by the subroutine are acceptable outside its application range. To overcome this difficulty, various approaches to estimating the CHF were examined during the development of SPACE, and the Six Sigma technique was adopted for the examination, as described in this study. The objective of this study is to improve the CHF prediction accuracy of the nuclear power plant safety analysis code using a CHF database and the Six Sigma technique. Through the study, it was concluded that the Six Sigma technique was useful for quantifying the deviation of predicted values from experimental data, and that the CHF prediction method implemented in the SPACE code has good predictive capability compared with other methods

  15. Great software debates

    CERN Document Server

    Davis, A

    2004-01-01

    The industry’s most outspoken and insightful critic explains how the software industry REALLY works. In Great Software Debates, Al Davis shares what he has learned about the difference between the theory and the realities of business and encourages you to question and think about software engineering in ways that will help you succeed where others fail. In short, provocative essays, Davis fearlessly reveals the truth about process improvement, productivity, software quality, metrics, agile development, requirements documentation, modeling, software marketing and sales, empiricism, start-up financing, software research, requirements triage, software estimation, and entrepreneurship.

  16. COSMO-skymed, TerraSAR-X, and RADARSAT-2 geolocation accuracy after compensation for earth-system effects

    OpenAIRE

    Schubert, Adrian; Small, David; Jehle, Michael; Meier, Erich

    2012-01-01

    A Synthetic Aperture Radar (SAR) sensor with high geolocation accuracy greatly simplifies the task of combining multiple data takes within a common geodetic reference system or Geographic Information System (GIS), and is a critical enabler for many applications such as near-real-time disaster mapping. In this study, the geolocation accuracy was estimated using the same methodology for products from three SAR sensors: TerraSAR-X (two identical satellites), COSMO-SkyMed (four identical satellit...

  17. Predicting Great Lakes fish yields: tools and constraints

    Science.gov (United States)

    Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.

    1987-01-01

    Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.

  18. Analysis of spatial distribution of land cover maps accuracy

    Science.gov (United States)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain

  19. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    Purpose: A previously proposed method to reduce radiation dose to patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. The kinetic analyses using the modified Tofts’ model and singular value decomposition method, then, were carried out for each of the down-sampling schemes between the intervals from 2 to 15 s. The results were compared with analyses done with the measured data in high temporal resolution (i.e., original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the
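    The PCA filtering step can be sketched as truncating the singular value decomposition of the set of tissue time-concentration curves and keeping only the leading components (five, per the abstract). The curve data below are synthetic stand-ins for real DCE-CT curves:

    ```python
    import numpy as np

    def pca_filter(curves, n_components=5):
        """Denoise a (n_curves, n_timepoints) matrix by keeping only the top
        principal components of its mean-centered SVD."""
        mean = curves.mean(axis=0)
        u, s, vt = np.linalg.svd(curves - mean, full_matrices=False)
        s[n_components:] = 0.0                  # discard noise-dominated modes
        return (u * s) @ vt + mean

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 60)
    # Toy tissue curves: scaled copies of one gamma-like uptake shape
    clean = np.outer(np.linspace(0.5, 1.5, 40), t * np.exp(-3 * t))
    noisy = clean + rng.normal(0, 0.01, clean.shape)
    filtered = pca_filter(noisy, n_components=5)
    # Filtering moves the curves back toward the noise-free ones
    print(np.abs(filtered - clean).mean() < np.abs(noisy - clean).mean())
    ```

    Because the underlying curves here share one shape, a handful of components captures the signal while most of the independent noise falls in the discarded modes, which is the SNR gain the abstract describes.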

  20. The design of visible system for improving the measurement accuracy of imaging points

    Science.gov (United States)

    Shan, Qiu-sha; Li, Gang; Zeng, Luan; Liu, Kai; Yan, Pei-pei; Duan, Jing; Jiang, Kai

    2018-02-01

    Binocular stereoscopic measurement technology is widely applied in robot vision and 3D measurement. Measurement precision is a very important factor; in 3D coordinate measurement especially, high measurement accuracy places stringent requirements on the distortion of the optical system. To improve the measurement accuracy of imaging points and reduce their distortion, the optical system must satisfy an extra-low distortion requirement of less than 0.1%. A transmissive visible optical lens was therefore designed with a telecentric beam path in image space; adopting the imaging model of binocular stereo vision, it images a drone at finite distance. The optical system uses a complex double-Gauss structure with the pupil stop on the focal plane of the rear group, placing the exit pupil at infinity and realizing the telecentric beam path in image space. The main optical parameters are as follows: the spectral range is the visible band, the effective focal length is f' = 30 mm, the relative aperture is 1/3, and the field of view is 21°. The final design results show that the RMS spot size of the optical lens at the maximum field of view is 2.3 μm, less than one pixel (3.45 μm); the distortion is less than 0.1%, so the system has extra-low distortion and avoids subsequent image distortion correction; the modulation transfer function of the optical lens is 0.58 (@145 lp/mm), so the imaging quality is close to the diffraction limit; and the system has a simple structure and satisfies the requirements of the optical specifications. Ultimately, measurement of a drone at finite distance was achieved based on the imaging model of binocular stereo vision.

  1. Improvement of prediction accuracy of large eddy simulation on colocated grids; Colocation koshi wo mochiita LES no keisan seido kaizen ni kansuru ichikosatsu

    Energy Technology Data Exchange (ETDEWEB)

    Inagaki, M.; Abe, K. [Toyota Central Research and Development Labs., Inc., Aichi (Japan)

    1998-07-25

    With the recent advances in computers, large eddy simulation (LES) has become applicable to engineering prediction. However, most engineering applications need to use nonorthogonal curvilinear coordinate systems. The staggered grids usually used in LES in orthogonal coordinates do not retain their conservation properties in nonorthogonal curvilinear coordinates. On the other hand, colocated grids can be applied in nonorthogonal curvilinear coordinates without losing their conservation properties, although their prediction accuracy is not as high as that of staggered grids in orthogonal coordinates, especially on coarse grids. In this research, the discretization method for colocated grids is modified to improve its prediction accuracy. Plane channel flows are simulated on four grids of different resolution using both the modified and the original colocated grids. The results show that the modified colocated grids have higher accuracy than the original colocated grids. 17 refs., 13 figs., 1 tab.

  2. Post-operative 3D CT feedback improves accuracy and precision in the learning curve of anatomic ACL femoral tunnel placement.

    Science.gov (United States)

    Sirleo, Luigi; Innocenti, Massimo; Innocenti, Matteo; Civinini, Roberto; Carulli, Christian; Matassi, Fabrizio

    2018-02-01

    To evaluate the feedback from post-operative three-dimensional computed tomography (3D-CT) on femoral tunnel placement in the learning process, to obtain an anatomic anterior cruciate ligament (ACL) reconstruction. A series of 60 consecutive patients undergoing primary ACL reconstruction using autologous hamstrings single-bundle outside-in technique were prospectively included in the study. ACL reconstructions were performed by the same trainee-surgeon during his learning phase of anatomic ACL femoral tunnel placement. A CT scan with dedicated tunnel study was performed in all patients within 48 h after surgery. The data obtained from the CT scan were processed into a three-dimensional surface model, and a true medial view of the lateral femoral condyle was used for the femoral tunnel placement analysis. Two independent examiners analysed the tunnel placements. The centre of femoral tunnel was measured using a quadrant method as described by Bernard and Hertel. The coordinates measured were compared with anatomic coordinates values described in the literature [deep-to-shallow distance (X-axis) 28.5%; high-to-low distance (Y-axis) 35.2%]. Tunnel placement was evaluated in terms of accuracy and precision. After each ACL reconstruction, results were shown to the surgeon to receive an instant feedback in order to achieve accurate correction and improve tunnel placement for the next surgery. Complications and arthroscopic time were also recorded. Results were divided into three consecutive series (1, 2, 3) of 20 patients each. A trend to placing femoral tunnel slightly shallow in deep-to-shallow distance and slightly high in high-to-low distance was observed in the first and the second series. A progressive improvement in tunnel position was recorded from the first to second series and from the second to the third series. Both accuracy (+52.4%) and precision (+55.7%) increased from the first to the third series (p process to improve accuracy and precision of femoral

  3. Evaluation of different operational strategies for lithium ion battery systems connected to a wind turbine for primary frequency regulation and wind power forecast accuracy improvement

    Energy Technology Data Exchange (ETDEWEB)

    Swierczynski, Maciej; Stroe, Daniel Ioan; Stan, Ana Irina; Teodorescu, Remus; Andreasen, Soeren Juhl [Aalborg Univ. (Denmark). Dept. of Energy Technology

    2012-07-01

    High penetration levels of variable wind energy sources can cause problems with their grid integration. Energy storage systems connected to wind turbines/wind power plants can improve the predictability of wind power production and provide ancillary services to the grid. This paper investigates the economics of different operational strategies for Li-ion battery systems connected to wind turbines for wind power forecast accuracy improvement and primary frequency regulation. (orig.)

  4. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events...... in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method...... of estimating travel time using multiple detection events has been conducted. In this study different aspects of BT detection zone, including size and its impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied; namely, Enter-Enter, Leave-Leave, Peak...

  5. Improving mass measurement accuracy in mass spectrometry based proteomics by combining open source tools for chromatographic alignment and internal calibration.

    Science.gov (United States)

    Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M

    2009-05-02

    Accurate mass determination enhances peptide identification in mass spectrometry based proteomics. We here describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second software, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.

  6. Improving protein fold recognition and structural class prediction accuracies using physicochemical properties of amino acids.

    Science.gov (United States)

    Raicar, Gaurav; Saini, Harsh; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2016-08-07

    Predicting the three-dimensional (3-D) structure of a protein is an important task in the field of bioinformatics and the biological sciences. However, directly predicting the 3-D structure from the primary structure is hard to achieve. Therefore, predicting the fold or structural class of a protein sequence is generally used as an intermediate step in determining the protein's 3-D structure. For protein fold recognition (PFR) and structural class prediction (SCP), two steps are required: a feature extraction step and a classification step. Feature extraction techniques generally utilize syntactical-based, evolutionary-based, and physicochemical-based information to extract features. In this study, we explore the importance of utilizing the physicochemical properties of amino acids for improving PFR and SCP accuracies. For this, we propose a Forward Consecutive Search (FCS) scheme, which aims to strategically select physicochemical attributes that will supplement the existing feature extraction techniques for PFR and SCP. An exhaustive search is conducted on all 544 existing physicochemical attributes using the proposed FCS scheme, and a subset of physicochemical attributes is identified. Features extracted from these selected attributes are then combined with existing syntactical-based and evolutionary-based features to show an improvement in recognition and prediction performance on benchmark datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
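    The idea behind a forward attribute search, greedily growing a set of attributes while a validation score keeps improving, can be sketched generically. The scoring function, attribute names, and data below are placeholders, not the paper's classifier or its 544 physicochemical attributes:

    ```python
    def forward_select(attributes, score_fn):
        """Greedy forward selection: repeatedly add the attribute that most
        improves score_fn(selected); stop when no attribute helps."""
        selected, best = [], score_fn([])
        remaining = list(attributes)
        while remaining:
            gains = [(score_fn(selected + [a]), a) for a in remaining]
            top_score, top_attr = max(gains)
            if top_score <= best:
                break                       # no attribute improves the score
            selected.append(top_attr)
            remaining.remove(top_attr)
            best = top_score
        return selected, best

    # Toy score: two attributes are informative, the rest add nothing,
    # and each extra attribute carries a small complexity penalty.
    useful = {"hydrophobicity", "polarity"}
    def toy_score(subset):
        return len(useful & set(subset)) - 0.01 * len(subset)

    chosen, score = forward_select(
        ["hydrophobicity", "polarity", "volume", "charge"], toy_score)
    print(sorted(chosen))  # ['hydrophobicity', 'polarity']
    ```

    In practice `score_fn` would be cross-validated classifier accuracy on the candidate feature set; the greedy stop rule is what keeps the search tractable over hundreds of attributes.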

  7. Improved Optical Flow Algorithm for an Intelligent Traffic Tracking System

    Directory of Open Access Journals (Sweden)

    Xia Yupeng

    2013-05-01

    Full Text Available Extracting the contours and segmentations of moving cars is an essential image-processing step in intelligent traffic tracking systems, and the optical flow algorithm is widely used for this kind of application. In traditional gradient-based approaches, however, propagating the flow data that respond to the edges of moving objects into regions of flat gray level requires a large number of iterations. This costs a large amount of calculation time, and the accuracy of the result is not as good as expected. To improve the numerical reliability of the image gradient data, four algorithms were investigated by applying them to track moving objects: Hessian matrix discrimination, Gaussian filtering standard-deviation correction, mean model correction, and multi-image comparison. The experimental results show that both the convergence speed and the accuracy of these methods are greatly improved compared with traditional algorithms, confirming their feasibility and validity.
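
    The iterative propagation described above is characteristic of the classic Horn-Schunck scheme, in which each iteration replaces the flow field with a neighbour average corrected by the brightness-constancy constraint, so flow spreads from object edges into flat regions. A minimal NumPy sketch of that baseline (not the paper's improved variants) is:

```python
import numpy as np

def neighbor_avg(f):
    # 4-neighbour average (wrap-around edges, adequate for a demo)
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

def horn_schunck(I1, I2, alpha=0.5, n_iter=100):
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix = np.gradient(I1, axis=1)     # spatial gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                     # temporal gradient
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):          # flow propagates into flat regions
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v

# Gaussian blob shifted one pixel to the right between frames
y, x = np.mgrid[0:32, 0:32]
I1 = np.exp(-((x - 16.0)**2 + (y - 16.0)**2) / 20.0)
I2 = np.exp(-((x - 17.0)**2 + (y - 16.0)**2) / 20.0)
u, v = horn_schunck(I1, I2)
print(u.mean())   # positive: rightward motion recovered
```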

  8. Fission product model for BWR analysis with improved accuracy in high burnup

    International Nuclear Information System (INIS)

    Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira

    1998-01-01

    A new fission product (FP) chain model has been studied to be used in a BWR lattice calculation. In attempting to establish the model, two requirements, i.e. the accuracy in predicting burnup reactivity and the easiness in practical application, are simultaneously considered. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo nuclides having the absorption cross sections independent of burnup history and fuel composition. For the verification, extensive numerical tests covering over a wide range of operational conditions and fuel compositions have been carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100GWd/t. It is concluded that the present model can offer a high degree of accuracy for FP representation in BWR lattice calculation. (author)

  9. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    Science.gov (United States)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from spherical surfaces, are widely used in various optical systems. Compared with spherical surfaces, they offer many advantages: improved image quality, aberration correction, an expanded field of view, an increased effective distance, and a more compact, lightweight optical system. With the rapid development of space optics in particular, space sensors require higher resolution and larger viewing angles, so aspheric surfaces are becoming essential components of optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns[1]. To achieve the final surface-accuracy requirement, the surface must be modified quickly, and high-precision testing is the basis for rapid convergence of the surface error. Many methods exist for aspheric surface testing[2] - geometric ray detection, Hartmann testing, the Ronchi test, the knife-edge method, direct profile testing, and interferometry - but each has its disadvantages[6]. In recent years, measurement of aspheric surfaces has become one of the major factors restricting the development of aspheric surface processing. A two-metre-aperture industrial CMM (coordinate measuring machine) is available, but it suffers from large detection error and low repeatability in measuring coarsely ground aspheric surfaces, which seriously affects convergence efficiency during aspherical mirror processing. To solve these problems, this paper presents an effective method of error control, calibration and removal based on real-time monitoring of the calibration mirror position, probe correction, and a measurement-point distribution scheme developed through selection of the measurement mode. Verified on real engineering examples, this method increases the original industrial

  10. Improving Catastrophe Modeling for Business Interruption Insurance Needs.

    Science.gov (United States)

    Rose, Adam; Huyck, Charles K

    2016-10-01

    While catastrophe (CAT) modeling of property damage is well developed, modeling of business interruption (BI) lags far behind. One reason is the crude nature of functional relationships in CAT models that translate property damage into BI. Another is that estimating BI losses is more complicated because it depends greatly on public and private decisions during recovery with respect to resilience tactics that dampen losses by using remaining resources more efficiently to maintain business function and to recover more quickly. This article proposes a framework for improving hazard loss estimation for BI insurance needs. Improved data collection that allows for analysis at the level of individual facilities within a company can improve matching the facilities with the effectiveness of individual forms of resilience, such as accessing inventories, relocating operations, and accelerating repair, and can therefore improve estimation accuracy. We then illustrate the difference this can make in a case study example of losses from a hurricane. © 2016 Society for Risk Analysis.

  11. Investigation of CFRP in aerospace field and improvement of the molding accuracy by using autoclave

    Science.gov (United States)

    Minamisawa, Takunori

    2017-07-01

    In recent years, CFRP (Carbon Fiber Reinforced Plastic) has come to be used in a wide range of industries, such as sporting goods, fishing tackle and cars, because of its many advantages. The passenger aircraft industry has also turned its attention to the material. CFRP is ideal for airplanes because of advantages such as light weight, high strength, chemical resistance and corrosion resistance. In aerospace engineering, CFRP is generally molded in an autoclave, a machine that molds a product by heating and pressurizing the material in an evacuated bag. This paper examines handmade CFRP under a polarizing microscope, and its mechanical characteristics are investigated. Furthermore, an improvement in the accuracy of CFRP molding using an autoclave is suggested from the viewpoint of thermodynamics.

  12. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    Science.gov (United States)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    The LS (Least Squares)+AR (AutoRegressive) model has two problems in polar motion forecasting: the residuals of the LS fit within the fitting interval are reasonable, but the residuals of the LS extrapolation are poor; and the LS fitting residual sequence is non-linear, so it is unsuitable to build an AR model for the residual sequence to be forecast from the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, restrictions are added to the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Secondly, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the AR modeling object for the residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparison with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirms the feasibility and effectiveness of the solution. The results, especially for polar motion forecasts in the 1-10 day range, show that the forecast accuracy of the proposed model can reach the world level.
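
    The baseline LS+AR pipeline that the paper modifies can be sketched as a harmonic-plus-trend least-squares fit whose extrapolation is corrected by an AR forecast of the fitting residuals. The harmonic periods (annual and Chandler), the AR order and the synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ls_design(t):
    # constant + linear trend + annual and Chandler-wobble harmonics
    cols = [np.ones_like(t), t]
    for period in (365.24, 433.0):
        w = 2 * np.pi / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

def fit_ar(resid, p):
    # AR(p) coefficients by least squares on lagged residuals
    n = len(resid)
    X = np.column_stack([resid[p - 1 - k:n - 1 - k] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, resid[p:], rcond=None)
    return coef

def ar_forecast(resid, coef, steps):
    hist = list(resid)
    out = []
    for _ in range(steps):
        nxt = sum(c * hist[-1 - k] for k, c in enumerate(coef))
        hist.append(nxt)
        out.append(nxt)
    return np.array(out)

# Synthetic "polar motion" series: harmonics plus small AR(1) noise
rng = np.random.default_rng(0)
t = np.arange(1000.0)
truth = 0.1 * np.sin(2 * np.pi * t / 365.24) + 0.05 * np.cos(2 * np.pi * t / 433.0)
noise = np.zeros(t.size)
for i in range(1, t.size):
    noise[i] = 0.8 * noise[i - 1] + 0.001 * rng.standard_normal()
series = truth + noise

beta, *_ = np.linalg.lstsq(ls_design(t), series, rcond=None)
resid = series - ls_design(t) @ beta
coef = fit_ar(resid, p=3)

future_t = np.arange(1000.0, 1010.0)            # 10-day forecast horizon
forecast = ls_design(future_t) @ beta + ar_forecast(resid, coef, 10)
print(forecast)
```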

  13. A high accuracy land use/cover retrieval system

    Directory of Open Access Journals (Sweden)

    Alaa Hefnawy

    2012-03-01

    Full Text Available The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established over the past 40 years to improve the accuracy of land use/cover classification, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution, fast, adaptive content-based retrieval system for satellite images. In the proposed system, a super-resolution technique is applied to the Landsat-TM images to obtain a high-resolution dataset. The human-computer interactive system is based on a modified radial basis function for retrieval of satellite database images. We apply a supervised backpropagation artificial neural network classifier to both the multi- and single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.

  14. Assessing community values for reducing agricultural emissions to improve water quality and protect coral health in the Great Barrier Reef

    Science.gov (United States)

    Rolfe, John; Windle, Jill

    2011-12-01

    Policymakers wanting to increase protection of the Great Barrier Reef from pollutants generated by agriculture need to identify when measures to improve water quality generate benefits to society that outweigh the costs involved. The research reported in this paper makes a contribution in several ways. First, it uses the improved science understanding about the links between management changes and reef health to bring together the analysis of costs and benefits of marginal changes, helping to demonstrate the appropriate way of addressing policy questions relating to reef protection. Second, it uses the scientific relationships to frame a choice experiment to value the benefits of improved reef health, with the results of mixed logit (random parameter) models linking improvements explicitly to changes in "water quality units." Third, the research demonstrates how protection values are consistent across a broader population, with some limited evidence of distance effects. Fourth, the information on marginal costs and benefits that are reported provide policymakers with information to help improve management decisions. The results indicate that while there is potential for water quality improvements to generate net benefits, high cost water quality improvements are generally uneconomic. A major policy implication is that cost thresholds for key pollutants should be set to avoid more expensive water quality proposals being selected.

  15. Two high accuracy digital integrators for Rogowski current transducers

    Science.gov (United States)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output because the DC component in the original signal accumulates, leading to output DC drift; unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators to be used in Rogowski current transducers in place of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response to obtain an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
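
    The DC-drift problem, and the effect of an attenuation coefficient, can be illustrated with a toy discrete integrator. This is not the paper's PID-corrected Al-Alaoui design, just the core idea: a plain trapezoidal integrator accumulates any DC component of the input, while a leak factor r < 1 keeps the output bounded. Sampling period, frequency and offset below are made-up values.

```python
import math

# Trapezoidal integrator with optional attenuation coefficient r.
# r = 1.0 is the plain integrator; r < 1 suppresses DC accumulation.

def integrate(x, T, r=1.0):
    y = [0.0]
    for n in range(1, len(x)):
        y.append(r * y[-1] + (T / 2.0) * (x[n] + x[n - 1]))
    return y

T = 1e-4                         # sampling period (s)
N = 20000                        # 2 seconds of samples
# 50 Hz signal with a small DC offset, as a Rogowski coil output might have
sig = [math.sin(2 * math.pi * 50 * k * T) + 0.01 for k in range(N)]

plain = integrate(sig, T, r=1.0)       # DC offset accumulates into drift
leaky = integrate(sig, T, r=0.999)     # attenuation keeps the output bounded
print(abs(plain[-1]), abs(leaky[-1]))
```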

  16. General formula for on-axis sun-tracking system and its application in improving tracking accuracy of solar collector

    Energy Technology Data Exchange (ETDEWEB)

    Chong, K.K.; Wong, C.W. [Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Off Jalan Genting Kelang, Setapak, 53300 Kuala Lumpur (Malaysia)

    2009-03-15

    Azimuth-elevation and tilt-roll tracking mechanisms are among the most commonly used sun-tracking methods for keeping the solar collector aimed at the sun at all times. For many decades, each of these two sun-tracking methods has had its own specific sun-tracking formula, and the two have not been interrelated. In this paper, the most general form of sun-tracking formula, embracing all possible on-axis tracking methods, is presented. The general sun-tracking formula not only provides a general mathematical solution, but more significantly can improve the sun-tracking accuracy by accounting for the installation error of the solar collector. (author)
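
    As an illustration of the azimuth-elevation special case contained in such a general formula, a standard sun-position sketch is shown below (day-of-year declination approximation, solar time). This is not the paper's general formula, which additionally models installation error; latitude and date are illustrative.

```python
import math

# Azimuth-elevation sun position from latitude, day of year and solar time.
# Declination uses the common Cooper approximation; azimuth is measured
# from south, positive toward west.

def sun_angles(lat_deg, day_of_year, solar_hour):
    lat = math.radians(lat_deg)
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    az = math.atan2(math.sin(hour_angle),
                    math.cos(hour_angle) * math.sin(lat)
                    - math.tan(decl) * math.cos(lat))
    return math.degrees(el), math.degrees(az)

# Kuala Lumpur latitude (approx. 3.1 N), June solstice, solar noon:
# the sun stands high and north of the zenith.
el, az = sun_angles(lat_deg=3.1, day_of_year=172, solar_hour=12.0)
print(el, az)
```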

  17. Can verbal working memory training improve reading?

    Science.gov (United States)

    Banales, Erin; Kohnen, Saskia; McArthur, Genevieve

    2015-01-01

    The aim of the current study was to determine whether poor verbal working memory is associated with poor word reading accuracy because the former causes the latter, or the latter causes the former. To this end, we tested whether (a) verbal working memory training improves poor verbal working memory or poor word reading accuracy, and whether (b) reading training improves poor reading accuracy or verbal working memory in a case series of four children with poor word reading accuracy and verbal working memory. Each child completed 8 weeks of verbal working memory training and 8 weeks of reading training. Verbal working memory training improved verbal working memory in two of the four children, but did not improve their reading accuracy. Similarly, reading training improved word reading accuracy in all children, but did not improve their verbal working memory. These results suggest that the causal links between verbal working memory and reading accuracy may not be as direct as has been assumed.

  18. Physiologically-based, predictive analytics using the heart-rate-to-Systolic-Ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    Science.gov (United States)

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

    Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesize that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be the primary early predictors of sepsis, with 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Incorporation of unique molecular identifiers in TruSeq adapters improves the accuracy of quantitative sequencing.

    Science.gov (United States)

    Hong, Jungeui; Gresham, David

    2017-11-01

    Quantitative analysis of next-generation sequencing (NGS) data requires discriminating duplicate reads generated by PCR from identical molecules that are of unique origin. Typically, PCR duplicates are identified as sequence reads that align to the same genomic coordinates using reference-based alignment. However, identical molecules can be independently generated during library preparation. Misidentification of these molecules as PCR duplicates can introduce unforeseen biases during analyses. Here, we developed a cost-effective sequencing adapter design by modifying Illumina TruSeq adapters to incorporate a unique molecular identifier (UMI) while maintaining the capacity to undertake multiplexed, single-index sequencing. Incorporation of UMIs into TruSeq adapters (TrUMIseq adapters) enables identification of bona fide PCR duplicates as identically mapped reads with identical UMIs. Using TrUMIseq adapters, we show that accurate removal of PCR duplicates results in improved accuracy of both allele frequency (AF) estimation in heterogeneous populations using DNA sequencing and gene expression quantification using RNA-Seq.
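
    The deduplication logic described above can be sketched in a few lines: reads are marked as PCR duplicates only when both their mapping coordinates and their UMI match, so identically mapped reads with different UMIs are retained as independent molecules. The read representation below is a simplified stand-in for real aligned-read records.

```python
# UMI-aware duplicate removal sketch. A read is represented as
# (chromosome, position, strand, UMI); coordinate-only deduplication would
# wrongly collapse independent molecules that share a position.

def dedup(reads):
    seen = set()
    kept = []
    for chrom, pos, strand, umi in reads:
        key = (chrom, pos, strand, umi)
        if key not in seen:          # duplicate only if coordinates AND UMI match
            seen.add(key)
            kept.append((chrom, pos, strand, umi))
    return kept

reads = [
    ("chr1", 100, "+", "ACGT"),   # original molecule
    ("chr1", 100, "+", "ACGT"),   # PCR duplicate: same position, same UMI
    ("chr1", 100, "+", "TTAG"),   # independent molecule at the same position
    ("chr2", 500, "-", "ACGT"),
]
print(dedup(reads))   # 3 reads kept; only the true PCR duplicate is removed
```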

  20. Measurements of Neutron and Gamma Attenuation in Massive Laminated Shields of Concrete and a Study of the Accuracy of some Methods of Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Aalto, E; Nilsson, R

    1964-09-15

    Extensive neutron and gamma attenuation measurements have been performed in magnetite and ordinary concrete up to a depth of 2 metres in order to study the accuracy attainable by some shield calculation methods. The effect of thin, heavy layers (Pb) has also been studied. Experimental facilities and instrumentation, especially the foil detection methods used for thermal and epithermal neutrons, are described in some detail. Great weight is laid upon a thorough error analysis. The fluxes measured are compared to those calculated by an earlier version of the British 18-group removal method (RASH B3), by an improved removal method (NRN) developed at AB Atomenergi, and by numerical integration of the Boltzmann equation (NIOBE). The results show that shielding calculations with the newer methods give fluxes that are generally within a factor of 2-3 from the true values. A greater accuracy seems to be difficult to obtain in practice in spite of possible improvements in the mathematical solution of the transport problem. The greatest errors originate in the translation between the true and calculation geometries, in the uncertainty of material properties in the case of concrete, and in approximations and inaccuracies of radiation sources.

  1. Measurements of Neutron and Gamma Attenuation in Massive Laminated Shields of Concrete and a Study of the Accuracy of some Methods of Calculation

    International Nuclear Information System (INIS)

    Aalto, E.; Nilsson, R.

    1964-09-01

    Extensive neutron and gamma attenuation measurements have been performed in magnetite and ordinary concrete up to a depth of 2 metres in order to study the accuracy attainable by some shield calculation methods. The effect of thin, heavy layers (Pb) has also been studied. Experimental facilities and instrumentation, especially the foil detection methods used for thermal and epithermal neutrons, are described in some detail. Great weight is laid upon a thorough error analysis. The fluxes measured are compared to those calculated by an earlier version of the British 18-group removal method (RASH B3), by an improved removal method (NRN) developed at AB Atomenergi, and by numerical integration of the Boltzmann equation (NIOBE). The results show that shielding calculations with the newer methods give fluxes that are generally within a factor of 2-3 from the true values. A greater accuracy seems to be difficult to obtain in practice in spite of possible improvements in the mathematical solution of the transport problem. The greatest errors originate in the translation between the true and calculation geometries, in the uncertainty of material properties in the case of concrete, and in approximations and inaccuracies of radiation sources

  2. [Navigation in implantology: Accuracy assessment regarding the literature].

    Science.gov (United States)

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. An electronic literature search identified 661 articles; after reviewing 139 of them, the authors chose 52 articles for full-text evaluation, of which 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test) was applied; the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, at the apex, and in angle. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. To be able to draw dependable conclusions, and for the further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary.

  3. Financial Analysts' Forecast Accuracy : Before and After the Introduction of AIFRS

    Directory of Open Access Journals (Sweden)

    Chee Seng Cheong

    2010-09-01

    Full Text Available We examine whether financial analysts’ forecast accuracy differs between the pre- and post-adoption of Australian Equivalents to the International Financial Reporting Standards (AIFRS). We find that forecast accuracy has improved after Australia adopted AIFRS. As a secondary objective, this paper also investigates the role of financial analysts in reducing information asymmetry in today’s Australian capital market. We find weak evidence that more analysts following a stock do not help to improve forecast accuracy by bringing more firm-specific information to the market.

  4. Use of Flood Seasonality in Pooling-Group Formation and Quantile Estimation: An Application in Great Britain

    Science.gov (United States)

    Formetta, Giuseppe; Bell, Victoria; Stewart, Elizabeth

    2018-02-01

    Regional flood frequency analysis is one of the most commonly applied methods for estimating extreme flood events at ungauged sites or locations with short measurement records. It is based on: (i) the definition of a homogeneous group (pooling-group) of catchments, and on (ii) the use of the pooling-group data to estimate flood quantiles. Although many methods to define a pooling-group (pooling schemes, PS) are based on catchment physiographic similarity measures, in the last decade methods based on flood seasonality similarity have been contemplated. In this paper, two seasonality-based PS are proposed and tested both in terms of the homogeneity of the pooling-groups they generate and in terms of the accuracy in estimating extreme flood events. The method has been applied in 420 catchments in Great Britain (considered as both gauged and ungauged) and compared against the current Flood Estimation Handbook (FEH) PS. Results for gauged sites show that, compared to the current PS, the seasonality-based PS performs better both in terms of homogeneity of the pooling-group and in terms of the accuracy of flood quantile estimates. For ungauged locations, a national-scale hydrological model has been used for the first time to quantify flood seasonality. Results show that in 75% of the tested locations the seasonality-based PS provides an improvement in the accuracy of the flood quantile estimates. The remaining 25% were located in highly urbanized, groundwater-dependent catchments. The promising results support the aspiration that large-scale hydrological models complement traditional methods for estimating design floods.

  5. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models, each considering a different parameter combination, are developed by the authors. Results obtained are compared to the landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
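
    The rank-sum technique mentioned above assigns each parameter a weight proportional to (n - rank + 1), and a cell's hazard score is then the weighted sum of its parameter ratings. A minimal sketch follows; the ranks and cell ratings are made up for illustration and are not the paper's values.

```python
# Rank-sum weighting sketch for a weighted-overlay landslide hazard model.

def rank_sum_weights(ranks):
    """ranks: {parameter: rank}, rank 1 = most important."""
    n = len(ranks)
    raw = {p: n - r + 1 for p, r in ranks.items()}
    total = sum(raw.values())
    return {p: v / total for p, v in raw.items()}

# Hypothetical importance ranking of four of the nine parameters
ranks = {"slope": 1, "lithology": 2, "land_use": 3, "proximity_to_river": 4}
weights = rank_sum_weights(ranks)

# Hypothetical ratings of one map cell on each parameter (higher = more hazardous)
cell_ratings = {"slope": 9, "lithology": 5, "land_use": 7, "proximity_to_river": 2}
hazard = sum(weights[p] * cell_ratings[p] for p in ranks)
print(weights["slope"], hazard)   # slope gets the largest weight, 4/10
```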

  6. Precipitation Dynamical Downscaling Over the Great Plains

    Science.gov (United States)

    Hu, Xiao-Ming; Xue, Ming; McPherson, Renee A.; Martin, Elinor; Rosendahl, Derek H.; Qiao, Lei

    2018-02-01

    Detailed, regional climate projections, particularly for precipitation, are critical for many applications. Accurate precipitation downscaling in the United States Great Plains remains a great challenge for most Regional Climate Models, particularly for warm months. Most previous dynamic downscaling simulations significantly underestimate warm-season precipitation in the region. This study aims to achieve a better precipitation downscaling in the Great Plains with the Weather Research and Forecast (WRF) model. To this end, WRF simulations with different physics schemes and nudging strategies are first conducted for a representative warm season. Results show that different cumulus schemes lead to more pronounced difference in simulated precipitation than other tested physics schemes. Simply choosing different physics schemes is not enough to alleviate the dry bias over the southern Great Plains, which is related to an anticyclonic circulation anomaly over the central and western parts of continental U.S. in the simulations. Spectral nudging emerges as an effective solution for alleviating the precipitation bias. Spectral nudging ensures that large and synoptic-scale circulations are faithfully reproduced while still allowing WRF to develop small-scale dynamics, thus effectively suppressing the large-scale circulation anomaly in the downscaling. As a result, a better precipitation downscaling is achieved. With the carefully validated configurations, WRF downscaling is conducted for 1980-2015. The downscaling captures well the spatial distribution of monthly climatology precipitation and the monthly/yearly variability, showing improvement over at least two previously published precipitation downscaling studies. With the improved precipitation downscaling, a better hydrological simulation over the trans-state Oologah watershed is also achieved.

  7. Diagnostic Accuracy of Clinical Methods for Detection of Diabetic Sensory Neuropathy

    International Nuclear Information System (INIS)

    Arshad, A. R.; Alvi, K. Y.

    2016-01-01

    Objective: To determine the accuracy of clinical methods for detection of sensory neuropathy as compared to biothesiometry. Study Design: Cross-sectional analytical study. Place and Duration of Study: 1 Mountain Medical Battalion, Azad Kashmir, from October 2013 to September 2014. Methodology: Patients with type 2 diabetes were enrolled by convenience sampling. Exclusion criteria included other identifiable causes of neuropathy, extensive ulceration of the feet, amputated feet, patients on treatment for neuropathy, and unwilling patients. The average of 3 vibration perception threshold values measured with a biothesiometer on the distal hallux was calculated. A 10 g monofilament was used to examine touch sensation over the dorsal surfaces of the great toes. Vibration sensation was checked over the tips of the great toes using a 128 Hz tuning fork. Ankle jerks were checked bilaterally. Results: Neuropathy (vibration perception threshold > 25 volts) was present in 34 (21.12%) of 161 patients and 93 (57.76%) were symptomatic. Measures of diagnostic accuracy for the monofilament, tuning fork and ankle jerks were: sensitivity 41.18%, 55.88% and 64.71%; specificity 92.91%, 93.70% and 80.31%; positive predictive value (PPV) 60.87%, 70.37% and 46.81%; negative predictive value (NPV) 85.51%, 88.81% and 89.47%; and diagnostic accuracy 81.99%, 85.71% and 77.02%, respectively. Values for any 1 positive sign, any 2 positive signs or all 3 positive signs were: sensitivity 35.29%, 14.71% and 32.35%; specificity 81.89%, 93.70% and 99.21%; PPV 34.29%, 38.46% and 91.67%; NPV 82.54%, 80.41% and 84.56%; and diagnostic accuracy 72.05%, 77.02% and 85.09%, respectively.
Conclusion: Clinical methods are
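
    The reported measures all follow from a standard 2x2 confusion matrix. For the 10 g monofilament, 34 of 161 patients were neuropathic, and the quoted sensitivity of 41.18% implies 14 true positives; the counts below are back-calculated from the abstract's percentages and reproduce all five figures.

```python
# Diagnostic accuracy measures from a 2x2 table. Counts for the monofilament
# are back-calculated from the abstract: 34/161 neuropathic, 14 true positives.

def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),        # TP / diseased
        "specificity": tn / (tn + fp),        # TN / non-diseased
        "ppv": tp / (tp + fp),                # TP / test-positive
        "npv": tn / (tn + fn),                # TN / test-negative
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

m = diagnostics(tp=14, fp=9, fn=20, tn=118)   # monofilament, n = 161
print({k: round(100 * v, 2) for k, v in m.items()})
# sensitivity 41.18, specificity 92.91, PPV 60.87, NPV 85.51, accuracy 81.99
```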

  8. Measuring Personality in Context: Improving Predictive Accuracy in Selection Decision Making

    OpenAIRE

    Hoffner, Rebecca Ann

    2009-01-01

    This study examines the accuracy of a context-sensitive (i.e., goal dimensions) measure of personality compared to a traditional measure of personality (NEO-PI-R) and generalized self-efficacy (GSE) to predict variance in task performance. The goal dimensions measure takes a unique perspective in the conceptualization of personality. While traditional measures differentiate within person and collapse across context (e.g., Big Five), the goal dimensions measure employs a hierarchical structure...

  9. The accuracy of the National Equine Database in relation to vector-borne disease risk modelling of horses in Great Britain.

    Science.gov (United States)

    Robin, C A; Lo Iacono, G; Gubbins, S; Wood, J L N; Newton, J R

    2013-05-01

    The National Equine Database (NED) contains information on the size and distribution of the horse population, but the data quality remains unknown. These data could assist with surveillance, research and contingency planning for equine infectious disease outbreaks. 1) To assess the extent of obsolete and missing data from NED, 2) evaluate the extent of spatial separation between horse and owner location and 3) identify relationships between spatial separation and land use. Two questionnaires were used to assess data accuracy in NED utilising local authority passport inspections and distribution of questionnaires to 11,000 horse owners. A subset of 1010 questionnaires was used to assess horse-owner geographic separation. During 2005-2010, 17,048 passports were checked through local authority inspections. Of these, 1558 passports (9.1%; 95% confidence interval [CI] 8.7-9.5%) were noncompliant, with 963 (5.6%; 95% CI 5.3-6.0%) containing inaccurate information and 595 (3.5%; 95% CI 3.2-3.8%) classified as missing. Of 1382 questionnaires completed by horse owners, 380 passports were obsolete (27.5%; 95% CI 25.2-29.9%), with 162 (11.7%; 95% CI 10.0-13.4%) being retained for deceased horses and 218 (15.8%; 95% CI 13.9-17.7%) having incorrect ownership details. Fifty-three per cent (95% CI 49.9-56.1%) of owners kept their horse(s) at home and 92% (95% CI 90.3-93.7%) of horses resided within 10 km of their owners. Data from a small sample survey suggest the majority of data on NED are accurate but a proportion of inaccuracies exist that may cause delay in locating horses and contacting owners during a disease outbreak. The probability that horses are located in the same postcode sector as the owner's home address is larger in rural areas. Appropriate adjustment for population size, horse-owner spatial separation and land usage would facilitate meaningful use of the national horse population derived from NED for risk modelling of incursions of equine diseases into Great

  10. Technique for Increasing Accuracy of Positioning System of Machine Tools

    Directory of Open Access Journals (Sweden)

    Sh. Ji

    2014-01-01

    Full Text Available The aim of this research is to improve the accuracy of the positioning and processing system by optimizing the pressure diagrams of the guides in machine tools. Machining quality is directly related to machine accuracy, which characterizes the combined effect of the machine's various error sources. The accuracy of the positioning system is one of the most significant machining characteristics and allows the accuracy of processed parts to be evaluated. The literature describes the working area of the machine layout as rather informative for characterizing the effect of the positioning system on the macro-geometry of the part surfaces to be processed. To enhance the static accuracy of the studied machine, two groups of measures are possible in principle. One aims to decrease the cutting force component that produces the overturning moments on the slider. The other is related to changing the sizes of the guide facets, which may alter their pressure profile. The study was based on mathematical modeling and optimization of the cutting zone coordinates, and a formula was derived to determine the surface pressure of the guides. The selected optimization parameters are the cutting force vector and the dimensions of the slides and guides. The results show that optimizing the coordinates of the cutting zone is necessary to increase processing accuracy. The research established that, to define the optimal coordinates of the cutting zone, the sizes of the slides and the values and coordinates of the applied forces must be varied so as to equalize the pressure and improve the accuracy of the positioning system of machine tools. At different points of the workspace, a force vector is applied and pressure diagrams are computed that account for changes in the positioning system parameters; equalizing these pressure diagrams provides the best machine tool accuracy.

  11. Improve accuracy and sensibility in glycan structure prediction by matching glycan isotope abundance

    International Nuclear Information System (INIS)

    Xu Guang; Liu Xin; Liu Qingyan; Zhou Yanhong; Li Jianjun

    2012-01-01

    Highlights: ► A glycan isotope pattern recognition strategy for glycomics. ► A new data preprocessing procedure to detect ion peaks in a given MS spectrum. ► A linear soft-margin SVM classification for isotope pattern recognition. - Abstract: Mass spectrometry (MS) is a powerful technique for the determination of glycan structures and is capable of providing qualitative and quantitative information. Recent developments in computational methods offer an opportunity to use glycan structure databases and de novo algorithms for extracting valuable information from MS or MS/MS data. However, detecting low-intensity peaks that are buried in noisy data sets is still a challenge, and an algorithm for accurate prediction and annotation of glycan structures from MS data is highly desirable. The present study describes a novel algorithm for glycan structure prediction by matching glycan isotope abundance (mGIA), which takes isotope masses, abundances, and spacing into account. We constructed a comprehensive database containing 808 glycan compositions and their corresponding isotope abundances. Unlike most previously reported methods, we considered not only the m/z values of the peaks but also the logarithmic Euclidean distance between the calculated and detected isotope vectors. A linear classifier, obtained by training the mGIA algorithm on datasets of three different human tissue samples from the Consortium for Functional Glycomics (CFG) with a Support Vector Machine (SVM), was proposed to improve the accuracy of automatic glycan structure annotation. In addition, an effective data preprocessing procedure, including baseline subtraction, smoothing, peak centroiding and composition matching for extracting correct isotope profiles from MS data, was incorporated. The algorithm was validated by analyzing the mouse kidney MS data from CFG, resulting in the identification of 6 more glycan compositions than the previous annotation
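    The isotope-vector matching step can be illustrated with a toy scoring function; `log_euclidean_distance` and the abundance values below are hypothetical illustrations of the idea, not the mGIA implementation:

```python
import math

def log_euclidean_distance(calc, detect, eps=1e-12):
    """Logarithmic Euclidean distance between a calculated and a
    detected isotope-abundance vector, both normalised to sum to 1.
    A small eps guards against log(0) for absent peaks."""
    s1, s2 = sum(calc), sum(detect)
    calc = [x / s1 for x in calc]
    detect = [x / s2 for x in detect]
    return math.sqrt(sum(
        (math.log(a + eps) - math.log(b + eps)) ** 2
        for a, b in zip(calc, detect)))

# A detected pattern close to the calculated one scores a lower distance
calc = [0.55, 0.30, 0.11, 0.04]
near = [0.54, 0.31, 0.11, 0.04]
far  = [0.30, 0.30, 0.25, 0.15]
assert log_euclidean_distance(calc, near) < log_euclidean_distance(calc, far)
```

    In a matching algorithm of this kind, candidate compositions whose calculated isotope vector minimises such a distance to the detected peaks would be ranked highest.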

  12. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    International Nuclear Information System (INIS)

    Debono, Josephine C; Poulos, Ann E; Houssami, Nehmat; Turner, Robin M; Boyages, John

    2015-01-01

    This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes from pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2%, respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting achieved adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes

  13. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    Energy Technology Data Exchange (ETDEWEB)

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au [Westmead Breast Cancer Institute, Westmead, New South Wales (Australia); Poulos, Ann E [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, Lidcombe, New South Wales (Australia); Houssami, Nehmat [Screening and Test Evaluation Program, School of Public Health (A27), Sydney Medical School, University of Sydney, Sydney, New South Wales (Australia); Turner, Robin M [School of Public Health and Community Medicine, University of New South Wales, Sydney, New South Wales (Australia); Boyages, John [Macquarie University Cancer Institute, Macquarie University Hospital, Australian School of Advanced Medicine, Macquarie University, Sydney, New South Wales (Australia); Westmead Breast Cancer Institute, Westmead, New South Wales (Australia)

    2015-03-15

    This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes from pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2%, respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting achieved adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.
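    The pooled sensitivity and specificity reported in the two records above are obtained by summing reader-level counts and recomputing the rates; a minimal sketch with illustrative counts (not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from screen-reading outcomes scored
    against the gold standard (pathology, interval matching, follow-up)."""
    return tp / (tp + fn), tn / (tn + fp)

# Pooling across readers: sum the raw counts, then recompute the rates.
# The (tp, fn, tn, fp) tuples below are illustrative only.
readers = [
    (46, 4, 420, 30),
    (41, 9, 400, 50),
]
tp, fn, tn, fp = (sum(r[i] for r in readers) for i in range(4))
sens, spec = sensitivity_specificity(tp, fn, tn, fp)
print(f"pooled sensitivity {100 * sens:.1f}%, specificity {100 * spec:.1f}%")
```

    Pooling the counts before dividing weights each reader by the number of cases they read, rather than averaging the per-reader rates.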

  14. Accuracy optimization with wavelength tunability in overlay imaging technology

    Science.gov (United States)

    Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna

    2018-03-01

    As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, the overlay budget is accordingly being reduced. The overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging-based overlay (IBO) measurements that optimizes accuracy rather than contrast precision, including its effect on total target performance, using wavelength-tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and demonstrate their quality in identifying measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.

  15. Spatial distribution and trends of total mercury in waters of the Great Lakes and connecting channels using an improved sampling technique

    International Nuclear Information System (INIS)

    Dove, A.; Hill, B.; Klawunn, P.; Waltho, J.; Backus, S.; McCrea, R.C.

    2012-01-01

    Environment Canada recently developed a clean method suitable for sampling trace levels of metals in surface waters. The results of sampling for total mercury in the Laurentian Great Lakes between 2003 and 2009 give a unique basin-wide perspective of concentrations of this important contaminant and represent improved knowledge of mercury in the region. Results indicate that concentrations of total mercury in the offshore regions of the lakes were within a relatively narrow range from about 0.3 to 0.8 ng/L. The highest concentrations were observed in the western basin of Lake Erie and concentrations then declined towards the east. Compared to the offshore, higher levels were observed at some nearshore locations, particularly in lakes Erie and Ontario. The longer-term temporal record of mercury in Niagara River suspended sediments indicates an approximate 30% decrease in equivalent water concentrations since 1986. - Highlights: ► Basin-wide concentrations of total mercury in Great Lakes surface waters are provided for the first time. ► A clean sampling method is described, stressing isolation of the sample from extraneous sources of contamination. ► Sub-ng/L concentrations of total mercury are observed in most Great Lakes offshore areas. ► Concentrations in the western basin of Lake Erie are consistently the highest observed in the basin. ► The longer-term record of mercury in Niagara River suspended sediments indicates an approximate 30% decrease since 1986. - A new, clean sampling method for metals is described and basin-wide measurements of total mercury are provided for Great Lakes surface waters for the first time.

  16. Improving the Accuracy of Direct Geo-referencing of Smartphone-Based Mobile Mapping Systems Using Relative Orientation and Scene Geometric Constraints

    Directory of Open Access Journals (Sweden)

    Naif M. Alsubaie

    2017-09-01

    Full Text Available This paper introduces a new method which facilitates the use of smartphones as a handheld low-cost mobile mapping system (MMS). Smartphones are becoming more sophisticated and are quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro-mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of a MMS. However, smartphone navigation sensors suffer from the poor accuracy of Global Navigation Satellite Systems (GNSS), accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements, as well as integrating geometric scene constraints into free network bundle adjustment. The new methodologies fuse the relative orientations of the captured images and their corresponding motion sensor measurements to improve the initial EOPs. Then, geometric features (e.g., horizontal and vertical lines) visible in each image are extracted and used as constraints in the bundle adjustment procedure, which corrects the relative position and orientation of the 3D mapping solution.

  17. Composition-based statistics and translated nucleotide searches: Improving the TBLASTN module of BLAST

    Directory of Open Access Journals (Sweden)

    Schäffer Alejandro A

    2006-12-01

    Full Text Available Abstract Background TBLASTN is a mode of operation for BLAST that aligns protein sequences to a nucleotide database translated in all six frames. We present the first description of the modern implementation of TBLASTN, focusing on new techniques that were used to implement composition-based statistics for translated nucleotide searches. Composition-based statistics use the composition of the sequences being aligned to generate more accurate E-values, which allows for a more accurate distinction between true and false matches. Until recently, composition-based statistics were available only for protein-protein searches. They are now available as a command line option for recent versions of TBLASTN and as an option for TBLASTN on the NCBI BLAST web server. Results We evaluate the statistical and retrieval accuracy of the E-values reported by a baseline version of TBLASTN and by two variants that use different types of composition-based statistics. To test the statistical accuracy of TBLASTN, we ran 1000 searches using scrambled proteins from the mouse genome and a database of human chromosomes. To test retrieval accuracy, we modernize and adapt to translated searches a test set previously used to evaluate the retrieval accuracy of protein-protein searches. We show that composition-based statistics greatly improve the statistical accuracy of TBLASTN, at a small cost to the retrieval accuracy. Conclusion TBLASTN is widely used, as it is common to wish to compare proteins to chromosomes or to libraries of mRNAs. Composition-based statistics improve the statistical accuracy, and therefore the reliability, of TBLASTN results. The algorithms used by TBLASTN are not widely known, and some of the most important are reported here. The data used to test TBLASTN are available for download and may be useful in other studies of translated search algorithms.
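    The E-values discussed above follow the Karlin-Altschul form E = K * m * n * exp(-lambda * S); a sketch with illustrative parameter values (lambda and K below are typical gapped BLOSUM62 values, used here only for illustration; composition-based statistics in effect rescale these parameters per sequence pair):

```python
import math

def expect_value(score, m, n, lam=0.267, K=0.041):
    """Karlin-Altschul expected number of chance alignments with
    score >= S in a search space of size m * n. The defaults are
    illustrative gapped BLOSUM62 statistics, not TBLASTN's exact
    per-search values."""
    return K * m * n * math.exp(-lam * score)

# Higher alignment scores give exponentially smaller E-values
e60 = expect_value(60, 300, 1_000_000)
e80 = expect_value(80, 300, 1_000_000)
assert e80 < e60
```

    Because E depends exponentially on the score, even a modest per-pair correction of lambda (which is what composition-based statistics effectively provide) can change E-values by orders of magnitude, sharpening the separation of true and false matches.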

  18. Enhancing the accuracy of subcutaneous glucose sensors: a real-time deconvolution-based approach.

    Science.gov (United States)

    Guerra, Stefania; Facchinetti, Andrea; Sparacino, Giovanni; Nicolao, Giuseppe De; Cobelli, Claudio

    2012-06-01

    Minimally invasive continuous glucose monitoring (CGM) sensors can greatly help diabetes management. Most of these sensors consist of a needle electrode, placed in the subcutaneous tissue, which measures an electrical current exploiting the glucose-oxidase principle. This current is then transformed to glucose levels after calibrating the sensor on the basis of one, or more, self-monitoring blood glucose (SMBG) samples. In this study, we design and test a real-time signal-enhancement module that, cascaded to the CGM device, improves the quality of its output by a proper postprocessing of the CGM signal. In fact, CGM sensors measure glucose in the interstitium rather than in the blood compartment. We show that this distortion can be compensated by means of a regularized deconvolution procedure relying on a linear regression model that can be updated whenever a pair of suitably sampled SMBG references is collected. Tests performed both on simulated and real data demonstrate a significant accuracy improvement of the CGM signal. Simulation studies also demonstrate the robustness of the method against departures from nominal conditions, such as temporal misplacement of the SMBG samples and uncertainty in the blood-to-interstitium glucose kinetic model. Thanks to its online capabilities, the proposed signal-enhancement algorithm can be used to improve the performance of CGM-based real-time systems such as the hypo/hyper glycemic alert generators or the artificial pancreas.
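    The blood-to-interstitium distortion described above is commonly modelled as first-order kinetics; the sketch below shows that model and its naive inverse, with an illustrative time constant. The paper's contribution is a *regularized* deconvolution, which this sketch deliberately omits: on noisy sensor data the naive inverse amplifies noise, which is precisely why regularization is needed.

```python
import math

TAU = 10.0  # blood-to-interstitium time constant (min); illustrative value

def blood_to_interstitium(bg, dt=1.0, tau=TAU):
    """Forward model: first-order kinetics smearing blood glucose (BG)
    into interstitial glucose (IG), sampled every dt minutes."""
    ig = [bg[0]]
    for g in bg[1:]:
        ig.append(ig[-1] + dt / tau * (g - ig[-1]))
    return ig

def interstitium_to_blood(ig, dt=1.0, tau=TAU):
    """Naive inverse of the model above: BG[k] = IG[k-1] + (tau/dt) * dIG.
    On real sensor data this amplifies noise, hence the paper's use of a
    regularized deconvolution instead."""
    bg = [ig[0]]
    for k in range(1, len(ig)):
        bg.append(ig[k - 1] + (tau / dt) * (ig[k] - ig[k - 1]))
    return bg

# Exact round trip on noise-free data
bg_true = [100 + 40 * math.sin(k / 30) for k in range(120)]
ig = blood_to_interstitium(bg_true)
bg_rec = interstitium_to_blood(ig)
assert max(abs(a - b) for a, b in zip(bg_true, bg_rec)) < 1e-6
```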

  19. Enhanced systems for measuring and monitoring REDD+: Opportunities to improve the accuracy of emission factor and activity data in Indonesia

    Science.gov (United States)

    Solichin

    The importance of accurate measurement of forest biomass in Indonesia has been growing ever since climate change mitigation schemes, particularly the reduction of emissions from deforestation and forest degradation scheme (known as REDD+), were constitutionally accepted by the government of Indonesia. The need for an accurate system of historical and actual forest monitoring has also become more pronounced, as such a system would afford a better understanding of the role of forests in climate change and allow for the quantification of the impact of activities implemented to reduce greenhouse gas emissions. The aim of this study was to enhance the accuracy of estimations of carbon stocks and to monitor emissions in tropical forests. The research encompassed various scales (from trees and stands to landscape scales) and a wide range of aspects, from evaluation and development of allometric equations to exploration of the potential of existing forest inventory databases and evaluation of cutting-edge technology for non-destructive sampling and accurate forest biomass mapping over large areas. In this study, I explored whether the accuracy of forest aboveground biomass (AGB) estimates in Indonesia, especially the identification and reduction of bias, could be improved through (1) development and refinement of allometric equations for major forest types, (2) integration of existing large forest inventory datasets, (3) assessment of nondestructive sampling techniques for tree AGB measurement, and (4) landscape-scale mapping of AGB and forest cover using lidar. This thesis provides essential foundations for improving the estimation of forest AGB at tree scale through the development of new AGB equations for several major forest types in Indonesia. I developed new allometric equations using large datasets from various forest types, yielding both forest-type-specific and generic equations for estimating tree aboveground biomass. My models outperformed
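    Allometric AGB equations of the kind developed above are conventionally of the power-law form AGB = a * DBH^b, fitted by least squares on log-transformed data; a minimal sketch on synthetic data (the coefficients and stand below are invented for illustration, not from the thesis):

```python
import math
import random

def fit_allometric(dbh, agb):
    """Fit AGB = a * DBH^b by ordinary least squares on the
    log-log form ln(AGB) = ln(a) + b * ln(DBH)."""
    xs = [math.log(d) for d in dbh]
    ys = [math.log(w) for w in agb]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic stand generated from a known equation, AGB = 0.11 * DBH^2.6,
# with multiplicative lognormal noise
random.seed(1)
dbh = [random.uniform(5, 80) for _ in range(200)]
agb = [0.11 * d ** 2.6 * math.exp(random.gauss(0, 0.1)) for d in dbh]
a, b = fit_allometric(dbh, agb)
assert abs(b - 2.6) < 0.1
```

    Note that back-transforming a log-log fit introduces a systematic underestimate; published allometric work typically applies a correction factor based on the residual standard error, which is omitted here for brevity.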

  20. Target Price Accuracy

    Directory of Open Access Journals (Sweden)

    Alexander G. Kerl

    2011-04-01

    Full Text Available This study analyzes the accuracy of forecasted target prices within analysts’ reports. We compute a measure of target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. The potential conflicts of interest between an analyst and a covered company do not bias forecast accuracy.

  1. Guiding principles for the improved governance of port and shipping impacts in the Great Barrier Reef.

    Science.gov (United States)

    Grech, A; Bos, M; Brodie, J; Coles, R; Dale, A; Gilbert, R; Hamann, M; Marsh, H; Neil, K; Pressey, R L; Rasheed, M A; Sheaves, M; Smith, A

    2013-10-15

    The Great Barrier Reef (GBR) region of Queensland, Australia, encompasses a complex and diverse array of tropical marine ecosystems of global significance. The region is also a World Heritage Area and largely within one of the world's best managed marine protected areas. However, a recent World Heritage Committee report drew attention to serious governance problems associated with the management of ports and shipping. We review the impacts of ports and shipping on biodiversity in the GBR, and propose a series of guiding principles to improve the current governance arrangements. Implementing these principles will increase the capacity of decision makers to minimize the impacts of ports and shipping on biodiversity, and will provide certainty and clarity to port operators and developers. A 'business as usual' approach could lead to the GBR's inclusion on the List of World Heritage in Danger in 2014. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Influence of time of placement of investments for burnout and the type of rings being used on the casting accuracy.

    Science.gov (United States)

    Shah, Shabir A; Naqash, Talib Amin; Padmanabhan, T V; Subramanium; Lambodaran; Nazir, Shazana

    2014-03-01

    The sole objective of the casting procedure is to provide a metallic duplication of missing tooth structure with as great accuracy as possible. The ability to produce well-fitting castings requires strict adherence to certain fundamentals. A study was undertaken to comparatively evaluate the effect on casting accuracy of subjecting the invested wax patterns to burnout after different time intervals. The effect on casting accuracy of placing a metal ring into a preheated burnout furnace versus using a split ring was also evaluated. The readings obtained were tabulated and subjected to statistical analysis.

  3. Effects of cognitive training on change in accuracy in inductive reasoning ability.

    Science.gov (United States)

    Boron, Julie Blaskewicz; Turiano, Nicholas A; Willis, Sherry L; Schaie, K Warner

    2007-05-01

    We investigated cognitive training effects on accuracy and number of items attempted in inductive reasoning performance in a sample of 335 older participants (M = 72.78 years) from the Seattle Longitudinal Study. We assessed the impact of individual characteristics, including chronic disease. The reasoning training group showed significantly greater gain in accuracy and number of attempted items than did the comparison group; gain was primarily due to enhanced accuracy. Reasoning training effects involved a complex interaction of gender, prior cognitive status, and chronic disease. Women with prior decline on reasoning but no heart disease showed the greatest accuracy increase. In addition, stable reasoning-trained women with heart disease demonstrated significant accuracy gain. Comorbidity was associated with less change in accuracy. The results support the effectiveness of cognitive training on improving the accuracy of reasoning performance.

  4. AN ASSESSMENT OF CITIZEN CONTRIBUTED GROUND REFERENCE DATA FOR LAND COVER MAP ACCURACY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    G. M. Foody

    2015-08-01

    Full Text Available It is now widely accepted that an accuracy assessment should be part of a thematic mapping programme. Authoritative good or best practices for accuracy assessment have been defined but are often impractical to implement. Key reasons for this situation are linked to the ground reference data used in the accuracy assessment. Typically, it is a challenge to acquire a large sample of high-quality reference cases under sampling designs that conform to good practice, and the data collected are normally imperfect to some degree, limiting their value to an accuracy assessment that implicitly assumes a gold-standard reference. Citizen sensors have great potential to aid aspects of accuracy assessment. In particular, they may be able to act as a source of ground reference data that may, for example, reduce sample size problems, but concerns with data quality remain. The relative strengths and limitations of citizen-contributed data for accuracy assessment are reviewed in the context of the authoritative good practices defined for studies of land cover by remote sensing. The article highlights some of the ways that citizen-contributed data have been used in accuracy assessment, as well as some of the problems that require further attention, and indicates some of the potential ways forward in the future.
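    Land cover accuracy assessment of the kind discussed above is conventionally summarised from an error (confusion) matrix; a minimal sketch of overall accuracy and Cohen's kappa, with an invented 3-class matrix for illustration:

```python
def accuracy_and_kappa(matrix):
    """Overall accuracy and Cohen's kappa from an error (confusion)
    matrix: rows = map classes, columns = reference classes."""
    total = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    po = diag / total  # observed agreement
    # Expected chance agreement from row and column marginals
    pe = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
             for i in range(len(matrix))) / total ** 2
    return po, (po - pe) / (1 - pe)

# Illustrative 3-class matrix scored against citizen-contributed reference data
m = [[50, 3, 2],
     [4, 40, 6],
     [1, 5, 39]]
po, kappa = accuracy_and_kappa(m)
print(f"overall accuracy {po:.3f}, kappa {kappa:.3f}")
```

    Imperfect reference labels inflate the off-diagonal cells even when the map is correct, which is exactly the gold-standard assumption the abstract cautions about.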

  5. Diagnostic accuracy of routine blood examinations and CSF lactate level for post-neurosurgical bacterial meningitis.

    Science.gov (United States)

    Zhang, Yang; Xiao, Xiong; Zhang, Junting; Gao, Zhixian; Ji, Nan; Zhang, Liwei

    2017-06-01

    To evaluate the diagnostic accuracy of routine blood examinations and cerebrospinal fluid (CSF) lactate level for post-neurosurgical bacterial meningitis (PBM) in a large sample of post-neurosurgical patients. The diagnostic accuracies of routine blood examinations and CSF lactate level in distinguishing PAM from PBM were evaluated using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) by retrospectively analyzing datasets of post-neurosurgical patients in the clinical information databases. The diagnostic accuracy of routine blood examinations was relatively low, whereas CSF lactate level achieved rather high diagnostic accuracy (AUC-ROC = 0.891; 95% CI, 0.852-0.922). Patient age, operation duration, surgical diagnosis and postoperative days (the interval between neurosurgery and examination) were shown to affect the diagnostic accuracy of these examinations. These variables were integrated with routine blood examinations and CSF lactate level by Fisher discriminant analysis to improve diagnostic accuracy. As a result, the diagnostic accuracy of blood examinations and CSF lactate level was significantly improved, with AUC-ROC values of 0.760 (95% CI, 0.737-0.782) and 0.921 (95% CI, 0.887-0.948) respectively. The PBM diagnostic accuracy of routine blood examinations was relatively low, whereas the accuracy of CSF lactate level was high. Some variables involved in the incidence of PBM can also affect the diagnostic accuracy for PBM. Taking the effects of these variables into account significantly improves the diagnostic accuracies of routine blood examinations and CSF lactate level. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
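    The AUC-ROC used above has a simple rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch via the Mann-Whitney statistic, with invented CSF lactate values for illustration:

```python
def auc_roc(scores_pos, scores_neg):
    """AUC-ROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores above a randomly chosen
    negative case, counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative CSF lactate values (mmol/L): PBM vs non-bacterial cases
pbm   = [5.8, 4.9, 6.3, 4.1, 7.0]
other = [2.1, 3.0, 2.6, 4.1, 1.8]
print(f"AUC-ROC = {auc_roc(pbm, other):.3f}")
```

    An AUC-ROC near 0.9, as reported for CSF lactate, means the test ranks a bacterial case above a non-bacterial one about nine times in ten.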

  6. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    Science.gov (United States)

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, its limitations haven’t been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their

  7. Increased-accuracy numerical modeling of electron-optical systems with space-charge

    International Nuclear Information System (INIS)

    Sveshnikov, V.

    2011-01-01

    This paper presents a method for improving the accuracy of space-charge computation for electron-optical systems. The method proposes to divide the computational region into two parts: a near-cathode region in which analytical solutions are used and a basic one in which numerical methods compute the field distribution and trace electron ray paths. A numerical method is used for calculating the potential along the interface, which involves solving a non-linear equation. Preliminary results illustrating the improvement of accuracy and the convergence of the method for a simple test example are presented.

  8. Accuracy improvement of the laplace transformation method for determination of the bremsstrahlung spectra in clinical accelerators

    International Nuclear Information System (INIS)

    Scheithauer, M.; Schwedas, M.; Wiezorek, T.; Wendt, T.

    2003-01-01

    The present study focused on the reconstruction of the bremsstrahlung spectrum of a clinical linear accelerator from the measured transmission curve, with the aim of improving the accuracy of this method. The essence of the method is the analytic inverse Laplace transform of a parametric function fitted to the measured transmission curve. We tested known fitting functions; however, they resulted in considerable fitting inaccuracy, leading to inaccuracies in the bremsstrahlung spectrum. To minimise the fitting errors, we employed a linear combination of n equations with 2n-1 parameters. The fitting errors are now considerably smaller. The measurement of the transmission function requires that the energy-dependent detector response be taken into account. We analysed the underlying physical context and developed a function that corrects for the energy-dependent detector response. The factors of this function were experimentally determined or calculated from tabulated values. (orig.)
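    The linear-combination fit can be illustrated in a simplified form: with the attenuation coefficients held fixed, the weights of a two-term transmission model T(x) = w1*exp(-mu1*x) + w2*exp(-mu2*x) follow from 2x2 normal equations. The coefficients and data below are illustrative; the paper's method also fits the exponents (2n-1 free parameters in total), which makes the problem nonlinear:

```python
import math

def fit_two_term_transmission(x, t, mu1, mu2):
    """Least-squares weights (w1, w2) for
    T(x) = w1*exp(-mu1*x) + w2*exp(-mu2*x) with *fixed* attenuation
    coefficients, solved via the 2x2 normal equations."""
    b1 = [math.exp(-mu1 * xi) for xi in x]
    b2 = [math.exp(-mu2 * xi) for xi in x]
    a11 = sum(v * v for v in b1)
    a22 = sum(v * v for v in b2)
    a12 = sum(u * v for u, v in zip(b1, b2))
    r1 = sum(u * v for u, v in zip(b1, t))
    r2 = sum(u * v for u, v in zip(b2, t))
    det = a11 * a22 - a12 * a12
    return (a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det

# Recover known weights from a noise-free synthetic transmission curve
xs = [i * 0.5 for i in range(20)]
ts = [0.7 * math.exp(-0.5 * xi) + 0.3 * math.exp(-0.1 * xi) for xi in xs]
w1, w2 = fit_two_term_transmission(xs, ts, 0.5, 0.1)
assert abs(w1 - 0.7) < 1e-6 and abs(w2 - 0.3) < 1e-6
```

    Because the transmission curve is a Laplace transform of the spectrum, each recovered exponential term corresponds analytically to a component of the reconstructed spectrum, which is why fitting accuracy propagates directly into spectral accuracy.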

  9. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    Science.gov (United States)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented that provides initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
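The step from matched feature points to an initial deformation guess can be illustrated with a least-squares affine fit; this is a deliberate simplification of the paper's RKHS/FG-GMM machinery, and the point pairs below are synthetic and assumed already filtered of false matches:

```python
import numpy as np

def fit_affine(ref_pts, def_pts):
    """Least-squares fit of def = A @ ref + t from matched point pairs."""
    n = len(ref_pts)
    X = np.hstack([ref_pts, np.ones((n, 1))])      # rows: [x, y, 1]
    P, *_ = np.linalg.lstsq(X, def_pts, rcond=None)
    return P[:2].T, P[2]                           # A (2x2) and t (2,)

# Synthetic matched pairs under a small, known deformation.
rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 100.0, size=(30, 2))
A_true = np.array([[1.01, 0.002], [-0.003, 0.99]])
t_true = np.array([2.5, -1.2])
deformed = ref @ A_true.T + t_true

A_est, t_est = fit_affine(ref, deformed)
```

For real images the pairs would come from ORB keypoint matching (e.g. via OpenCV) rather than being generated synthetically.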

  10. Real-Time Tropospheric Product Establishment and Accuracy Assessment in China

    Science.gov (United States)

    Chen, M.; Guo, J.; Wu, J.; Song, W.; Zhang, D.

    2018-04-01

Tropospheric delay has always been an important issue in Global Navigation Satellite System (GNSS) processing. Empirical tropospheric delay models cannot capture complex and volatile atmospheric conditions, resulting in poor model accuracy and difficulty in meeting precise positioning demands. In recent years, some researchers have proposed establishing real-time tropospheric products from real-time or near-real-time GNSS observations in small regions, with good results. This paper uses real-time observations from 210 Chinese national GNSS reference stations to estimate the tropospheric delay, and establishes a nationwide zenith wet delay (ZWD) grid model. To analyze the influence of the tropospheric grid product on wide-area real-time precise point positioning (PPP), this paper compares using the ZWD grid product as a constraint against the model correction method. The results show that the ZWD grid product estimated from the national reference stations improves PPP accuracy and convergence speed. The accuracy in the north (N), east (E) and up (U) directions increases by 31.8%, 15.6% and 38.3%, respectively. For convergence speed as well as accuracy, the U direction shows the greatest improvement.
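Applying such a ZWD grid product at a user position typically means interpolating between the surrounding grid nodes; a bilinear sketch with invented values (the paper does not specify its interpolation scheme):

```python
def interp_zwd(lat, lon, lat0, lon0, step, grid):
    """Bilinear interpolation for interior points; grid[i][j] is the
    ZWD value (m) at (lat0 + i*step, lon0 + j*step)."""
    fi, fj = (lat - lat0) / step, (lon - lon0) / step
    i, j = int(fi), int(fj)
    di, dj = fi - i, fj - j
    return ((1 - di) * (1 - dj) * grid[i][j]
            + (1 - di) * dj * grid[i][j + 1]
            + di * (1 - dj) * grid[i + 1][j]
            + di * dj * grid[i + 1][j + 1])

grid = [[0.10, 0.12],      # invented 2x2 ZWD patch, metres
        [0.14, 0.16]]
zwd = interp_zwd(30.5, 100.5, 30.0, 100.0, 1.0, grid)
```

The interpolated value would then enter the PPP filter as a tightly weighted pseudo-observation (the "constraint" method compared above).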

  11. GREAT: a web portal for Genome Regulatory Architecture Tools.

    Science.gov (United States)

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-07-08

GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analysis of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system that runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation to chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Early clinical esophageal adenocarcinoma (cT1): Utility of CT in regional nodal metastasis detection and can the clinical accuracy be improved?

    Energy Technology Data Exchange (ETDEWEB)

    Betancourt Cuellar, Sonia L., E-mail: slbetancourt@mdanderson.org; Sabloff, Bradley, E-mail: bsabloff@mdanderson.org; Carter, Brett W., E-mail: bcarter2@mdanderson.org; Benveniste, Marcelo F., E-mail: mfbenveniste@mdanderson.org; Correa, Arlene M., E-mail: amcorrea@mdanderson.org; Maru, Dipen M., E-mail: dmaru@mdanderson.org; Ajani, Jaffer A., E-mail: jajani@mdanderson.org; Erasmus, Jeremy J., E-mail: jerasmus@mdanderson.org; Hofstetter, Wayne L., E-mail: whofstetter@mdanderson.org

    2017-03-15

Introduction: Treatment of early esophageal cancer depends on the extent of the primary tumor and the presence of regional lymph node metastasis (RNM). A short-axis diameter >10 mm is typically used to detect RNM. However, clinical determination of RNM is inaccurate and can result in inappropriate treatment. The purpose of this study was to evaluate the accuracy of a single linear measurement (short axis > 10 mm) of regional nodes on CT in predicting nodal metastasis in patients with early esophageal cancer, and whether using a mean diameter value ((short axis + long axis)/2) as well as nodal shape improves cN designation. Methods: CTs of 49 patients with cT1 adenocarcinoma treated with surgical resection alone were reviewed retrospectively. Regional nodes were considered positive for malignancy when round or ovoid with mean size >5 mm when adjacent to the primary tumor or >7 mm when not adjacent. Results were compared with pN status after esophagectomy. Results: 18/49 patients had pN+ at resection. Using a single short-axis diameter >10 mm on CT, nodal metastasis (cN) was positive in 7/49. Only 1 of these patients was pN+ at resection (sensitivity 5%, specificity 80%, accuracy 53%). Using mean size and morphologic criteria, cN was positive in 28/49; 11 of these patients were pN+ at resection (sensitivity 61%, specificity 45%, accuracy 51%). EUS with limited FNA of regional nodes resulted in 16/49 patients with pN+ being inappropriately designated as cN0. Conclusions: Evaluation of size, shape and location of regional lymph nodes on CT improves the sensitivity of cN determination compared with a short-axis measurement alone in patients with cT1 esophageal cancer, although clinical utility is limited.

  13. Self-heating, gamma heating and heat loss effects on resistance temperature detector (RTD) accuracy

    International Nuclear Information System (INIS)

    Qian, T.; Hinds, H.W.; Tonner, P.

    1997-01-01

Resistance temperature detectors (RTDs) are extensively used in CANDU nuclear power stations for measuring various process and equipment temperatures. Accuracy of measurement is an important performance parameter of RTDs and has a great impact on the thermal power efficiency and safety of the plant. A number of factors contribute to RTD measurement error; self-heating, gamma heating and heat loss through conduction of the thermowell are three of these factors. The degree to which these three affect the accuracy of RTDs used for the measurement of reactor inlet header temperature (RIHT) has been analyzed and is presented in this paper. (author)

  14. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Science.gov (United States)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
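The select-and-blend logic described above can be sketched as follows; the submodel ranges match the SiO2 example in the text, but the model predictions are illustrative placeholders:

```python
# Submodel ranges from the SiO2 example above (wt.%); predictions are
# placeholders standing in for the actual PLS submodel outputs.
RANGES = {"low": (0.0, 50.0), "mid": (30.0, 70.0), "high": (60.0, 100.0)}

def blended_prediction(full_pred, sub_preds):
    """Choose or blend submodel predictions based on the full-model estimate."""
    hits = [name for name, (lo, hi) in RANGES.items() if lo <= full_pred <= hi]
    if len(hits) == 1:
        return sub_preds[hits[0]]
    if len(hits) == 2:
        # Overlap region: linear weighted sum of the two submodel predictions.
        n1, n2 = sorted(hits, key=lambda n: RANGES[n][0])
        lo2, hi1 = RANGES[n2][0], RANGES[n1][1]   # overlap is [lo2, hi1]
        w = (full_pred - lo2) / (hi1 - lo2)       # 0 -> lower model, 1 -> upper
        return (1.0 - w) * sub_preds[n1] + w * sub_preds[n2]
    return full_pred  # outside every submodel range: fall back to full model

est = blended_prediction(40.0, {"low": 38.0, "mid": 42.0, "high": 90.0})
```

A full-model prediction of 40 wt.% falls in the low/mid overlap, so the result is a 50/50 blend of those two submodels.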

  15. Improving wellbore position accuracy of horizontal wells by using a continuous inclination measurement from a near bit inclination MWD sensor

    Energy Technology Data Exchange (ETDEWEB)

    Berger, P. E.; Sele, R. [Baker Hughes INTEQ (United States)

    1998-12-31

Wellbore position calculations are typically performed by measuring azimuth and inclination at 10 to 30 meter intervals and using interpolation techniques to determine the borehole position between survey stations. The input parameters are measured depth (MD), azimuth and inclination, where the latter two are measured with an MWD tool. Output parameters are the geometric coordinates: true vertical depth (TVD), north and east. Improving the accuracy of the inclination measurement reduces the uncertainty of the calculated TVD value, resulting in increased confidence in wellbore position. Significant improvements in quality control can be achieved by using multiple sensors. This paper describes a set of quality control parameters that can be used to verify individual sensor performance and a method for calculating TVD uncertainty in horizontal wells, using a single sensor or a combination of sensors. 6 refs., 5 figs.
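As a simplified illustration of why inclination accuracy drives TVD confidence, here is an average-angle TVD calculation over survey stations; the industry-standard minimum-curvature method and the paper's uncertainty model are more involved:

```python
import math

def tvd_from_surveys(stations):
    """Average-angle TVD from (measured depth m, inclination deg) stations."""
    tvd = 0.0
    for (md1, i1), (md2, i2) in zip(stations, stations[1:]):
        avg_inc = math.radians((i1 + i2) / 2.0)
        tvd += (md2 - md1) * math.cos(avg_inc)
    return tvd

# A 0.2 degree inclination bias over a long near-horizontal section
# produces roughly 4 m of TVD error.
true_tvd = tvd_from_surveys([(0, 0), (500, 90), (1500, 90)])
biased_tvd = tvd_from_surveys([(0, 0), (500, 89.8), (1500, 89.8)])
tvd_error = biased_tvd - true_tvd
```

Near 90 degrees inclination the cosine changes fastest, which is exactly why horizontal wells are the worst case for TVD uncertainty.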

  16. OXBench: A benchmark for evaluation of protein multiple sequence alignment accuracy

    Directory of Open Access Journals (Sweden)

    Searle Stephen MJ

    2003-10-01

Full Text Available Abstract Background The alignment of two or more protein sequences provides a powerful guide in the prediction of the protein structure and in identifying key functional residues; however, the utility of any prediction is completely dependent on the accuracy of the alignment. In this paper we describe a suite of reference alignments derived from the comparison of protein three-dimensional structures together with evaluation measures and software that allow automatically generated alignments to be benchmarked. We test the OXBench benchmark suite on alignments generated by the AMPS multiple alignment method, then apply the suite to compare eight different multiple alignment algorithms. The benchmark shows the current state of the art for alignment accuracy and provides a baseline against which new alignment algorithms may be judged. Results The simple hierarchical multiple alignment algorithm, AMPS, performed as well as or better than more modern methods such as CLUSTALW once the PAM250 pair-score matrix was replaced by a BLOSUM series matrix. AMPS gave an accuracy in Structurally Conserved Regions (SCRs) of 89.9% over a set of 672 alignments. The T-COFFEE method on a data set of families with http://www.compbio.dundee.ac.uk. Conclusions The OXBench suite of reference alignments, evaluation software and results database provide a convenient method to assess progress in sequence alignment techniques. Evaluation measures that were dependent on comparison to a reference alignment were found to give good discrimination between methods. The STAMP Sc score, which is independent of a reference alignment, also gave good discrimination. Application of OXBench in this paper shows that, with the exception of T-COFFEE, the majority of the improvement in alignment accuracy seen since 1985 stems from improved pair-score matrices rather than algorithmic refinements. The maximum theoretical alignment accuracy obtained by pooling results over all methods was 94

  17. Decision aids for improved accuracy and standardization of mammographic diagnosis

    International Nuclear Information System (INIS)

    D'Orsi, C.J.; Getty, D.J.; Swets, J.A.; Pickett, R.M.; Seltzer, S.E.; McNeil, B.J.

    1990-01-01

This paper examines the gains in the accuracy of mammographic diagnosis of breast cancer achievable from a pair of decision aids. Twenty-three potentially relevant perceptual features of mammograms were identified through interviews, psychometric tests, and consensus meetings with mammography specialists. Statistical analyses determined the 12 independent features that were most informative diagnostically and assigned a weight to each according to its importance. Two decision aids were developed: a checklist that solicits a scale value from the radiologist for each feature and a computer program that merges those values optimally into an advisory estimate of the probability of malignancy. Six radiologists read a set of 150 cases, first in their usual way and later with the aids
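A hedged sketch of such an advisory aid: per-feature scale values merged with importance weights into a probability of malignancy. A logistic combination is assumed here for illustration; the study's actual weights and merging rule are not given in the abstract.

```python
import math

def malignancy_probability(scores, weights, bias=0.0):
    """Logistic merge of per-feature scale values into a probability."""
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

# Twelve hypothetical feature scores (0 = absent ... 1 = pronounced)
# and placeholder importance weights.
weights = [0.5] * 12
p_neutral = malignancy_probability([0.0] * 12, weights)
p_high = malignancy_probability([1.0] * 12, weights)
```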

  18. A practical procedure to improve the accuracy of radiochromic film dosimetry. A integration with a correction method of uniformity correction and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and light scattering was also evaluated, as was the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000 G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2 but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method yielded pass ratios in the dose difference evaluation more than 10% better than when the correction method was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical intensity modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend applying it carefully and understanding the characteristics of EBT2 both for the red channel only and for the red/blue correction method. (author)

  19. [A practical procedure to improve the accuracy of radiochromic film dosimetry: a integration with a correction method of uniformity correction and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and light scattering was also evaluated, as was the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2 but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method yielded pass ratios in the dose difference evaluation more than 10% better than when the correction method was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend applying it carefully and understanding the characteristics of EBT2 both for the red channel only and for the red/blue correction method.

  20. The Accuracy of Plain Radiography in Detection of Traumatic Intrathoracic Injuries

    Directory of Open Access Journals (Sweden)

    Maryam Abedi Khorasgani

    2016-08-01

Full Text Available Introduction: Rapid diagnosis of traumatic intrathoracic injuries leads to improvement in patient management. This study was designed to evaluate the diagnostic value of chest radiography (CXR) in comparison to chest computed tomography (CT) scan in the diagnosis of traumatic intrathoracic injuries. Methods: Participants of this prospective diagnostic accuracy study included multiple trauma patients over 15 years old with stable vital signs admitted to the emergency department (ED) during one year. The correlation of CXR and CT scan findings in the diagnosis of traumatic intrathoracic injuries was evaluated using SPSS 20. Screening characteristics of CXR were calculated with 95% CI. Results: 353 patients with a mean age of 35.2 ± 15.8 were evaluated (78.8% male). Age 16-30 years with 121 (34.2%), motorcycle riders with 104 (29.5%) cases and ISS < 12 with 185 (52.4%) had the highest frequency among patients. Overall, the screening performance characteristics of chest radiography in the diagnosis of traumatic chest injuries were as follows: sensitivity 50.3 (95% CI: 44.8 – 55.5), specificity 98.9 (95% CI: 99.5 – 99.8), PPV 97.8 (95% CI: 91.5 – 99.6), NPV 66.4 (95% CI: 60.2 – 72.03), PLR 44.5 (95% CI: 11.3 – 175.3), and NLR 0.5 (95% CI: 0.4 – 0.6). Accuracy of CXR in the diagnosis of traumatic intrathoracic injuries was 74.5 (95% CI: 69.6 – 78.9) and its area under the ROC curve was 74.6 (95% CI: 69.3 – 79.8). Conclusion: The screening performance characteristics of CXR in the diagnosis of traumatic intrathoracic injuries were higher than 90% in all pathologies except pneumothorax (50.3%). It seems that this matter has a great impact on the general screening characteristics of the test (74.3% accuracy and 50.3% sensitivity). It seems that plain CXR should be used as an initial screening tool more carefully.
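All of the screening characteristics quoted above derive from a 2x2 confusion table; a minimal sketch with illustrative counts (not the study's data):

```python
def screening_stats(tp, fp, fn, tn):
    """Standard screening characteristics from a 2x2 confusion table."""
    sens = tp / (tp + fn)               # sensitivity
    spec = tn / (tn + fp)               # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "plr": sens / (1.0 - spec),     # positive likelihood ratio
        "nlr": (1.0 - sens) / spec,     # negative likelihood ratio
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts mimicking the study's pattern: high specificity,
# modest sensitivity.
stats = screening_stats(tp=80, fp=2, fn=80, tn=198)
```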

  1. MEASURING ACCURACY AND COMPLEXITY OF AN L2 LEARNER’S ORAL PRODUCTION

    Directory of Open Access Journals (Sweden)

    Teguh Khaerudin

    2015-03-01

Full Text Available This paper aims at examining the influence of different tasks on the degree of task performance in a second language learner’s oral production. The underlying assumption is that among the three aspects of language performance in L2, i.e. fluency, accuracy, and complexity, learners may prioritize only one of them (Ellis & Barkhuizen, 2005, p. 150) and that their decision to prioritize one particular area of language performance may be determined by the characteristics of the task given to the learners (Skehan & Foster, 1997). Having a written record of an oral production, the writer focuses this study on determining the degree of complexity and accuracy, and analyzing whether the different tasks change the level of the learner’s oral performance. The results show that the learner’s accuracy from both tasks remains at the same level. However, both task conditions, which do not allow speech planning, result in no improvement in accuracy level and a minor improvement in the complexity level.

  2. Improved quantification accuracy for duplex real-time PCR detection of genetically modified soybean and maize in heat processed foods

    Directory of Open Access Journals (Sweden)

    CHENG Fang

    2013-04-01

Full Text Available Real-time PCR techniques have been widely used in quantitative GMO detection in recent years. The accuracy of GMO quantification based on real-time PCR methods is still a difficult problem, especially for the quantification of highly processed samples. To develop a suitable and accurate real-time PCR system for highly processed GM samples, we refined several real-time PCR parameters, including a re-designed shorter target DNA fragment, similar lengths of the amplified endogenous and exogenous gene targets, and similar GC contents and melting temperatures of the PCR primers and TaqMan probes. In addition, a Heat-Treatment Processing Model (HTPM) was established using soybean flour samples containing GM soybean GTS 40-3-2 to validate the effectiveness of the improved real-time PCR system. Test results showed that the quantitative bias of GM content in heat-processed samples was lowered using the new PCR system. The improved duplex real-time PCR was further validated using processed foods derived from GM soybean, and more accurate GM content values in these foods were also achieved. These results demonstrate that the improved duplex real-time PCR is well suited to the quantitative detection of highly processed food products.
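For context, duplex real-time PCR quantification usually converts each Ct value to a copy number via a standard curve and reports the transgene/endogene ratio; a sketch with invented curve parameters (a slope of -3.32 corresponds to roughly 100% amplification efficiency):

```python
def copies_from_ct(ct, slope=-3.32, intercept=40.0):
    """Invert the standard curve Ct = slope*log10(copies) + intercept."""
    return 10.0 ** ((ct - intercept) / slope)

def gm_content_percent(ct_transgene, ct_endogene):
    """GM content as the transgene/endogene copy-number ratio, in percent."""
    return 100.0 * copies_from_ct(ct_transgene) / copies_from_ct(ct_endogene)

# A transgene Ct exactly one decade (3.32 cycles) later than the
# endogene corresponds to 10% GM content.
gm = gm_content_percent(33.32, 30.0)
```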

  3. Visual control improves the accuracy of hand positioning in Huntington’s disease

    Directory of Open Access Journals (Sweden)

    Emilia J. Sitek

    2017-08-01

Full Text Available Background: The study aimed at demonstrating the dependence on visual feedback during hand- and finger-positioning task performance among Huntington’s disease patients in comparison to patients with Parkinson’s disease and cervical dystonia. Material and methods: Eighty-nine patients participated in the study (23 with Huntington’s disease, 25 with Parkinson’s disease with dyskinesias, 21 with Parkinson’s disease without dyskinesias, and 20 with cervical dystonia), scoring ≥20 points on the Mini-Mental State Examination in order to assure comprehension of task instructions. Neurological examination comprised the motor section of the Unified Huntington’s Disease Rating Scale for Huntington’s disease, the Unified Parkinson’s Disease Rating Scale Part II–IV for Parkinson’s disease and the Toronto Western Spasmodic Torticollis Rating Scale for cervical dystonia. In order to compare hand position accuracy under visually controlled and blindfolded conditions, the patient imitated each of the 10 examiner’s hand postures twice, once under the visual control condition and once with no visual feedback provided. Results: Huntington’s disease patients imitated the examiner’s hand positions less accurately under the blindfolded condition in comparison to Parkinson’s disease without dyskinesias and cervical dystonia participants. Under the visually controlled condition there were no significant inter-group differences. Conclusions: Huntington’s disease patients exhibit higher dependence on visual feedback while performing motor tasks than Parkinson’s disease and cervical dystonia patients. Possible improvement of movement precision in Huntington’s disease with the use of visual cues could be potentially useful in the patients’ rehabilitation.

  4. Accuracy Improvement of the Method of Multiple Scales for Nonlinear Vibration Analyses of Continuous Systems with Quadratic and Cubic Nonlinearities

    Directory of Open Access Journals (Sweden)

    Akira Abe

    2010-01-01

where Ω and ω are the driving and natural frequencies, respectively. The application of Galerkin's procedure to the equation of motion yields nonlinear ordinary differential equations with quadratic and cubic nonlinear terms. The steady-state responses are obtained by using the discretization approach of the MMS, in which the definition of the detuning parameter, expressing the relationship between the natural frequency and the driving frequency, is changed in an attempt to improve the accuracy of the solutions. The validity of the solutions is discussed by comparing them with solutions of the direct approach of the MMS and the finite difference method.
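For reference, the detuning parameter conventionally enters the MMS by expanding the driving frequency about the natural frequency (symbols assumed here, since the abstract's mathematics did not survive extraction):

```latex
\Omega = \omega + \varepsilon^{2}\sigma ,
```

where ε is the small bookkeeping parameter and σ the detuning; the paper's contribution is a modified definition of this relationship.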

  5. Diagnostic accuracy of routine blood examinations and CSF lactate level for post-neurosurgical bacterial meningitis

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2017-06-01

    Conclusions: The PBM diagnostic accuracy of routine blood examinations was relatively low, whereas the accuracy of CSF lactate level was high. Some variables that are involved in the incidence of PBM can also affect the diagnostic accuracy for PBM. Taking into account the effects of these variables significantly improves the diagnostic accuracies of routine blood examinations and CSF lactate level.

  6. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
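The two headline metrics above (mean absolute error and the share of estimates within 5 minutes of the observed time) are straightforward to compute from paired observed/estimated times; the values below are illustrative:

```python
def error_summary(observed_min, estimated_min, threshold_min=5.0):
    """Mean absolute error and share of estimates within the threshold."""
    errors = [abs(o - e) for o, e in zip(observed_min, estimated_min)]
    mean_abs = sum(errors) / len(errors)
    within = sum(err <= threshold_min for err in errors) / len(errors)
    return mean_abs, within

# Four illustrative transports (minutes): observed vs. estimated.
mean_abs, within5 = error_summary([10.0, 20.0, 15.0, 8.0],
                                  [12.0, 26.0, 14.0, 9.0])
```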

  7. Evaluation of solitary pulmonary nodules by integrated PET/CT: improved accuracy by FDG uptake pattern and CT findings

    International Nuclear Information System (INIS)

    Joon Young Choi; Kyung Soo Lee; O Jung Kwon; Young Mog Shim; Kyung-Han Lee; Yong Choi; Yearn Seong Choe; Byung-Tae Kim

    2004-01-01

Objective: FDG PET is useful to differentiate malignancy from benign lesions in the evaluation of solitary pulmonary nodules (SPNs). However, FDG PET showed false positive results in benign inflammatory lesions such as tuberculosis and organizing pneumonia. Furthermore, malignant tumors such as adenocarcinoma (AC) with bronchioloalveolar carcinoma (BAC) type had lower FDG uptake than other cell types of non-small cell lung cancer. We investigated whether the FDG uptake pattern and the image findings of the CT used for attenuation correction could improve accuracy for evaluating SPNs over SUV in integrated PET/CT imaging using FDG. Methods: Forty patients (M:F = 23:17, mean age 58.2±9.4 yrs) with non-calcified SPNs (diameter on CT ≤30 mm, no significant mediastinal node enlargement, no atelectasis) were included. All subjects underwent integrated PET/CT imaging using FDG. One nuclear medicine physician and one chest radiologist interpreted the PET and non-contrast attenuation-correction CT images, respectively. On PET images, the maximum SUV of each SPN was acquired, and the FDG uptake pattern was categorized as diffusely increased or heterogeneously increased, with the upper threshold of the window setting adjusted to the maximum SUV of each nodule. A radiologist interpreted SPNs as benign or malignant based on CT images with lung and mediastinal window settings, blinded to PET findings. Results: On pathological exam, 30 SPNs were confirmed to be malignant (11 AC with non-BAC type, 8 AC with BAC type, 8 squamous cell carcinoma, 1 adenosquamous cell carcinoma, 1 neuroendocrine carcinoma, 1 large cell carcinoma), and 10 were benign (4 tuberculosis, 3 organizing pneumonia, 2 sclerosing pneumocytoma, 1 non-specific inflammation). All 5 nodules with max SUV >7.0 except one with tuberculoma had malignancy. When only nodules with diffusely increased uptake were considered malignant in the indeterminate group with max SUV of 4.0 to 7.0, PET could diagnose 5 of 9 malignant nodules with one false positive nodule. In 6 of

  8. Native plant development and restoration program for the Great Basin, USA

    Science.gov (United States)

    N. L. Shaw; M. Pellant; P. Olweli; S. L. Jensen; E. D. McArthur

    2008-01-01

    The Great Basin Native Plant Selection and Increase Project, organized by the USDA Bureau of Land Management, Great Basin Restoration Initiative and the USDA Forest Service, Rocky Mountain Research Station in 2000 as a multi-agency collaborative program (http://www.fs.fed.us/rm/boise/research/shrub/greatbasin.shtml), has the objective of improving the availability of...

  9. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD initiative

    NARCIS (Netherlands)

    Bossuyt, Patrick M.; Reitsma, Johannes B.; Bruns, David E.; Gatsonis, Constantine A.; Glasziou, Paul P.; Irwig, Les M.; Lijmer, Jeroen G.; Moher, David; Rennie, Drummond; de Vet, Henrica C. W.

    2003-01-01

    Background: To comprehend the results of diagnostic accuracy studies, readers must understand the design, conduct, analysis, and results of such studies. That goal can be achieved only through complete transparency from authors. Objective: To improve the accuracy and completeness of reporting of

  10. High-accuracy user identification using EEG biometrics.

    Science.gov (United States)

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
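
    The gain from joint classification of multiple epochs can be illustrated with a simple majority-vote model. This is an idealization, not the paper's classifier: it assumes independent epoch-level decisions at the reported 72% single-epoch accuracy.

    ```python
    from math import comb

    def majority_vote_accuracy(p_single: float, n_epochs: int) -> float:
        """Probability that a majority of n_epochs independent epoch-level
        decisions is correct, given per-epoch accuracy p_single."""
        k_min = n_epochs // 2 + 1  # votes needed for a strict majority
        return sum(
            comb(n_epochs, k) * p_single**k * (1 - p_single)**(n_epochs - k)
            for k in range(k_min, n_epochs + 1)
        )

    # Starting from the reported 72% single-epoch accuracy, accuracy
    # rises quickly as (odd numbers of) epochs are combined.
    for n in (1, 3, 5, 9):
        print(n, round(majority_vote_accuracy(0.72, n), 3))
    ```

    Real ERP epochs from one subject are correlated, so the actual gain is smaller than this independence bound suggests; still, the monotone improvement matches the trend the abstract reports.
    
    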

  11. Teaching accuracy and reliability for student projects

    Science.gov (United States)

    Fisher, Nick

    2002-09-01

    Physics students at Rugby School follow the Salters Horners A-level course, which involves working on a two-week practical project of their own choosing. Pupils often misunderstand the concepts of accuracy and reliability, believing, for example, that repeating readings makes them more accurate and more reliable, whereas all it does is help to check repeatability. The course emphasizes the ideas of checking anomalous points, improving accuracy and making readings more sensitive. This article describes how we teach pupils in preparation for their projects. Based on many years of running such projects, much of this material is from a short booklet that we give out to pupils, when we train them in practical project skills.
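
    The distinction the booklet stresses, that repeating readings improves repeatability but not accuracy, can be shown numerically. The instrument bias and noise values below are purely illustrative.

    ```python
    import random
    import statistics

    random.seed(1)

    TRUE_VALUE = 9.81         # quantity being measured (e.g. g in m/s^2)
    SYSTEMATIC_OFFSET = 0.30  # a miscalibrated instrument: constant bias
    NOISE_SD = 0.05           # random scatter between repeat readings

    readings = [TRUE_VALUE + SYSTEMATIC_OFFSET + random.gauss(0.0, NOISE_SD)
                for _ in range(100)]

    mean = statistics.fmean(readings)
    sem = statistics.stdev(readings) / len(readings) ** 0.5

    # Repetition shrinks the standard error (better repeatability)...
    print(f"standard error of the mean: {sem:.4f}")
    # ...but leaves the calibration bias untouched: the averaged result
    # still sits about 0.30 away from the true value.
    print(f"mean - true value: {mean - TRUE_VALUE:+.3f}")
    ```

    The standard error falls as 1/sqrt(N), yet no number of repeats removes the systematic offset, which is exactly the misconception described above.
    
    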

  12. An Analysis of the Influence of Fundamental Values' Estimation Accuracy on Financial Markets

    Directory of Open Access Journals (Sweden)

    Hiroshi Takahashi

    2010-01-01

    Full Text Available This research analyzed the influence of the differences in the forecast accuracy of fundamental values on the financial market. As a result of intensive experiments in the market, we made the following interesting findings: (1) improvements in forecast accuracy of fundamentalists can contribute to an increase in the number of fundamentalists; (2) certain situations might occur, according to the level of forecast accuracy of fundamentalists, in which fundamentalists and passive management coexist, or in which fundamentalists die out of the market, and furthermore; (3) where a variety of investors exist in the market, improvements in the forecast accuracy could increase the number of fundamentalists more than the number of investors that employ passive investment strategy. These results contribute to clarifying the mechanism of price fluctuations in financial markets and also indicate one of the factors for the low ratio of passive investors in asset management business.

  13. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy

    KAUST Repository

    Ogorzalek, Tadeusz L.

    2018-01-04

    Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As HT, solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. This article is protected by copyright. All rights reserved.

  14. Small angle X-ray scattering and cross-linking for data assisted protein structure prediction in CASP 12 with prospects for improved accuracy

    KAUST Repository

    Ogorzalek, Tadeusz L.; Hura, Greg L.; Belsom, Adam; Burnett, Kathryn H.; Kryshtafovych, Andriy; Tainer, John A.; Rappsilber, Juri; Tsutakawa, Susan E.; Fidelis, Krzysztof

    2018-01-01

    Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As HT, solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. This article is protected by copyright. All rights reserved.

  15. Great Apes

    Science.gov (United States)

    Sleeman, Jonathan M.; Cerveny, Shannon

    2014-01-01

    Anesthesia of great apes is often necessary to conduct diagnostic analysis, provide therapeutics, facilitate surgical procedures, and enable transport and translocation for conservation purposes. Due to the stress of remote delivery injection of anesthetic agents, recent studies have focused on oral delivery and/or transmucosal absorption of preanesthetic and anesthetic agents. Maintenance of the airway and provision of oxygen is an important aspect of anesthesia in great ape species. The provision of analgesia is an important aspect of the anesthesia protocol for any procedure involving painful stimuli. Opioids and nonsteroidal anti-inflammatory drugs (NSAIDs) are often administered alone, or in combination to provide multi-modal analgesia. There is increasing conservation management of in situ great ape populations, which has resulted in the development of field anesthesia techniques for free-living great apes for the purposes of translocation, reintroduction into the wild, and clinical interventions.

  16. Improve 3D laser scanner measurements accuracy using a FFBP neural network with Widrow-Hoff weight/bias learning function

    Science.gov (United States)

    Rodríguez-Quiñonez, J. C.; Sergiyenko, O.; Hernandez-Balbuena, D.; Rivas-Lopez, M.; Flores-Fuentes, W.; Basaca-Preciado, L. C.

    2014-12-01

    Many laser scanners depend on their mechanical construction to guarantee their measurement accuracy; however, current computational technologies allow us to improve these measurements by mathematical methods implemented in neural networks. In this article we introduce the current laser scanner technologies, give a description of our 3D laser scanner and adjust its measurement error by a previously trained feed forward back propagation (FFBP) neural network with a Widrow-Hoff weight/bias learning function. A comparative analysis with other learning functions, such as the Kohonen algorithm and the gradient descent with momentum algorithm, is presented. Finally, computational simulations are conducted to verify the performance and method uncertainty of the proposed system.
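
    A minimal sketch of the Widrow-Hoff (least-mean-squares) weight/bias update named above, applied to a toy linear error-correction task. The training pairs and learning rate are illustrative assumptions, not the scanner's actual calibration data.

    ```python
    # Widrow-Hoff (LMS, "delta") rule for a single linear unit:
    #   w <- w + eta * (target - output) * x
    # The (raw reading, true distance) pairs below are synthetic;
    # the linear error model is an illustrative assumption.
    def lms_train(samples, eta=0.01, epochs=2000):
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for x, target in samples:
                err = target - (w * x + b)  # prediction error (the "delta")
                w += eta * err * x          # weight update
                b += eta * err              # bias update
        return w, b

    data = [(1.0, 1.05), (2.0, 2.08), (3.0, 3.12), (4.0, 4.15)]
    w, b = lms_train(data)
    corrected = w * 2.5 + b  # corrected distance for a new raw reading
    ```

    The full paper trains a multi-layer FFBP network; the same delta rule drives its output-layer weight and bias updates.
    
    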

  17. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution in the whole workspace was discussed, and the maximum-error position of the system was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position and the accuracy synthesis was conducted using the Monte Carlo method. Taking the error sensitivity analysis into account, the accuracy of the main parts was distributed. Results show that the probability of the maximal volume error being less than 0.05 mm was improved from 0.6592 for the old scheme to 0.7021 for the new scheme, so the precision of the system was improved markedly. The model can be used for the error analysis and accuracy synthesis of complex multi-embranchment motion chain systems, and to improve the manufacturing precision of such systems.
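
    The Monte Carlo accuracy-synthesis step can be sketched as sampling the error parameters from their tolerances and estimating the probability that the combined error stays below the 0.05 mm target. The sensitivities and tolerances below are invented for illustration; the paper's own values are not given in the abstract.

    ```python
    import random

    random.seed(42)

    # Hypothetical error sources: (sensitivity at the maximum-error
    # position, 1-sigma parameter tolerance in mm). Invented values.
    ERROR_SOURCES = [(1.0, 0.015), (0.8, 0.020), (0.5, 0.025)]
    LIMIT_MM = 0.05
    N_TRIALS = 100_000

    within = 0
    for _ in range(N_TRIALS):
        # Linearized error: sensitivity-weighted sum of random parameter errors.
        total = sum(s * random.gauss(0.0, sigma) for s, sigma in ERROR_SOURCES)
        within += abs(total) < LIMIT_MM

    probability = within / N_TRIALS
    print(f"P(|volume error| < {LIMIT_MM} mm) = {probability:.4f}")
    ```

    Tightening the tolerance of the most sensitive source raises this probability fastest, which is the rationale for distributing accuracy by sensitivity.
    
    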

  18. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    Science.gov (United States)

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI) and the other a method that we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by succeeding the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations. 
For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that

  19. Toxic fables: the advertising and marketing of agricultural chemicals in the great plains, 1945-1985.

    Science.gov (United States)

    Vail, David D

    2012-12-01

    This paper examines how pesticides and their technologies were sold to farmers and pilots throughout the mid-twentieth century. It principally considers how marketing rhetoric and advertisement strategies used by chemical companies and aerial spraying firms influenced the practices and perspectives of farm producers in the Great Plains. In order to convince landowners and agricultural leaders to buy their pesticides, chemical companies generated advertisements that championed local crop health, mixture accuracy, livestock safety and a chemical-farming 'way of life' that kept fields healthy and productive. Combining notions of safety, accuracy and professionalism with pest eradication messages reinforced the standards that landowners, pilots and agriculturalists would hold regarding toxicity and risk when spraying their fields. As the politics of health changed in the aftermath of Rachel Carson's Silent Spring, these companies and aerial spraying outfits responded by keeping to a vision of agricultural health that required poisons for protection through technological accuracy. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Accuracy in Optical Information Processing

    Science.gov (United States)

    Timucin, Dogan Aslan

    Low computational accuracy is an important obstacle for optical processors which blocks their way to becoming a practical reality and a serious challenger for classical computing paradigms. This research presents a comprehensive solution approach to the problem of accuracy enhancement in discrete analog optical information processing systems. Statistical analysis of a generic three-plane optical processor is carried out first, taking into account the effects of diffraction, interchannel crosstalk, and background radiation. Noise sources included in the analysis are photon, excitation, and emission fluctuations in the source array, transmission and polarization fluctuations in the modulator, and photoelectron, gain, dark, shot, and thermal noise in the detector array. Means and mutual coherence and probability density functions are derived for both optical and electrical output signals. Next, statistical models for a number of popular optoelectronic devices are studied. Specific devices considered here are light-emitting and laser diode sources, an ideal noiseless modulator and a Gaussian random-amplitude-transmittance modulator, p-i-n and avalanche photodiode detectors followed by electronic postprocessing, and ideal free-space geometrical -optics propagation and single-lens imaging systems. Output signal statistics are determined for various interesting device combinations by inserting these models into the general formalism. Finally, based on these special-case output statistics, results on accuracy limitations and enhancement in optical processors are presented. Here, starting with the formulation of the accuracy enhancement problem as (1) an optimal detection problem and (2) as a parameter estimation problem, the potential accuracy improvements achievable via the classical multiple-hypothesis -testing and maximum likelihood and Bayesian parameter estimation methods are demonstrated. 
Merits of using proper normalizing transforms which can potentially stabilize

  1. Use of Low-Level Sensor Data to Improve the Accuracy of Bluetooth-Based Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Christensen, Lars Tørholm; Krishnan, Rajesh

    2013-01-01

    by a single device. The latter situation could lead to location ambiguity and could reduce the accuracy of travel time estimation. Therefore, the accuracy of travel time estimation by Bluetooth technology depends on how location ambiguity is handled by the estimation method. The issue of multiple detection...... events in the context of travel time estimation by Bluetooth technology has been considered by various researchers. However, treatment of this issue has been simplistic. Most previous studies have used the first detection event (enter-enter) as the best estimate. No systematic analysis has been conducted...... to explore the most accurate method of travel time estimation with multiple detection events. In this study, different aspects of the Bluetooth detection zone, including size and impact on the accuracy of travel time estimation, were discussed. Four methods were applied to estimate travel time: enter...

  2. Reassessment of CT images to improve diagnostic accuracy in patients with suspected acute appendicitis and an equivocal preoperative CT interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Cheol; Yang, Dal Mo; Kim, Sang Won [Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Department of Radiology, Seoul (Korea, Republic of); Park, Seong Jin [Kyung Hee University Hospital, College of Medicine, Kyung Hee University, Department of Radiology, Seoul (Korea, Republic of)

    2012-06-15

    To identify CT features that discriminate individuals with and without acute appendicitis in patients with equivocal CT findings, and to assess whether knowledge of these findings improves diagnostic accuracy. 53 patients who underwent appendectomy with an indeterminate preoperative CT interpretation were selected and allocated to an acute appendicitis group or a non-appendicitis group. The 53 CT examinations were reviewed by two radiologists in consensus to identify CT findings that could aid in the discrimination of those with and without appendicitis. In addition, two additional radiologists were then requested to evaluate independently the 53 CT examinations using a 4-point scale, both before and after being informed of the potentially discriminating criteria. CT findings found to be significantly different in the two groups were: the presence of appendiceal wall enhancement, intraluminal air in the appendix, a coexistent inflammatory lesion, and appendiceal wall thickening (P < 0.05). Areas under the curves of reviewers 1 and 2 significantly increased from 0.516 and 0.706 to 0.677 and 0.841, respectively, when reviewers were told which CT variables were significant (P = 0.0193 and P = 0.0397, respectively). Knowledge of the identified CT findings was found to improve diagnostic accuracy for acute appendicitis in patients with equivocal CT findings. Key points: • Numerous patients with clinically equivocal appendicitis do not have acute appendicitis. • Computed tomography (CT) helps to reduce the negative appendectomy rate. • CT is not always infallible and may also demonstrate indeterminate findings. • However, knowledge of significant CT variables can further reduce the negative appendectomy rate. • An equivocal CT interpretation of appendicitis should be reassessed with this knowledge. (orig.)
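
    Reader performance above is summarized as area under the ROC curve built from 4-point confidence ratings. A minimal sketch of computing AUC via the rank-sum identity; the ratings and truth labels below are illustrative, not the study's data.

    ```python
    def auc(scores, labels):
        """ROC area via the rank-sum identity: the probability that a
        randomly chosen positive case outscores a randomly chosen
        negative case, counting ties as one half."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # 4-point confidence ratings and ground truth (illustrative data):
    # 1 = acute appendicitis, 0 = non-appendicitis.
    ratings = [4, 3, 4, 2, 3, 1, 2, 1, 3, 2]
    truth   = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1]
    score = auc(ratings, truth)
    ```

    On these toy ratings the AUC is 20/24 ≈ 0.83; an uninformative reader would score 0.5, which is close to reviewer 1's initial 0.516.
    
    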

  3. Reassessment of CT images to improve diagnostic accuracy in patients with suspected acute appendicitis and an equivocal preoperative CT interpretation

    International Nuclear Information System (INIS)

    Kim, Hyun Cheol; Yang, Dal Mo; Kim, Sang Won; Park, Seong Jin

    2012-01-01

    To identify CT features that discriminate individuals with and without acute appendicitis in patients with equivocal CT findings, and to assess whether knowledge of these findings improves diagnostic accuracy. 53 patients who underwent appendectomy with an indeterminate preoperative CT interpretation were selected and allocated to an acute appendicitis group or a non-appendicitis group. The 53 CT examinations were reviewed by two radiologists in consensus to identify CT findings that could aid in the discrimination of those with and without appendicitis. In addition, two additional radiologists were then requested to evaluate independently the 53 CT examinations using a 4-point scale, both before and after being informed of the potentially discriminating criteria. CT findings found to be significantly different in the two groups were: the presence of appendiceal wall enhancement, intraluminal air in the appendix, a coexistent inflammatory lesion, and appendiceal wall thickening (P < 0.05). Areas under the curves of reviewers 1 and 2 significantly increased from 0.516 and 0.706 to 0.677 and 0.841, respectively, when reviewers were told which CT variables were significant (P = 0.0193 and P = 0.0397, respectively). Knowledge of the identified CT findings was found to improve diagnostic accuracy for acute appendicitis in patients with equivocal CT findings. Key points: • Numerous patients with clinically equivocal appendicitis do not have acute appendicitis. • Computed tomography (CT) helps to reduce the negative appendectomy rate. • CT is not always infallible and may also demonstrate indeterminate findings. • However, knowledge of significant CT variables can further reduce the negative appendectomy rate. • An equivocal CT interpretation of appendicitis should be reassessed with this knowledge. (orig.)

  4. EVALUATION OF RELATIVE GEOMETRIC ACCURACY OF TERRASAR-X BY PIXEL MATCHING METHODOLOGY

    Directory of Open Access Journals (Sweden)

    T. Nonaka

    2016-06-01

    Full Text Available Recently, high-resolution commercial SAR satellites with resolutions of several meters are widely utilized for various applications, and disaster monitoring is one of the commonly applied areas. Information about the flooding situation and ground displacement was rapidly announced to the public after the Great East Japan Earthquake 2011. One of the studies reported the displacement in the Tohoku region by the pixel matching methodology using both pre- and post-event TerraSAR-X data, and the validated accuracy was about 30 cm at the GEONET reference points. In order to discuss the spatial distribution of the displacement, we need to evaluate the relative accuracy of the displacement in addition to the absolute accuracy. In previous studies, our study team evaluated the absolute 2D geo-location accuracy of the TerraSAR-X ortho-rectified EEC product for both flat and mountain areas. Therefore, the purpose of the current study was to evaluate the spatial and temporal relative geo-location accuracies of the product by considering the displacement of a fixed point as the relative geo-location accuracy. Firstly, utilizing a TerraSAR-X StripMap dataset, a pixel matching method for estimating the displacement at the sub-pixel level was developed. Secondly, the validity of the method was confirmed by comparing with GEONET data. We confirmed that the accuracy of the displacement in the X and Y directions was in agreement with the previous studies. Subsequently, the methodology was applied to 20 pairs of data sets for the areas of Tokyo Ota-ku and Kawasaki-shi, and the displacement of each pair was evaluated. It was revealed that the time-series displacement rate had a seasonal trend and seemed to be related to atmospheric delay.

  5. Evaluation of Relative Geometric Accuracy of Terrasar-X by Pixel Matching Methodology

    Science.gov (United States)

    Nonaka, T.; Asaka, T.; Iwashita, K.

    2016-06-01

    Recently, high-resolution commercial SAR satellites with resolutions of several meters are widely utilized for various applications, and disaster monitoring is one of the commonly applied areas. Information about the flooding situation and ground displacement was rapidly announced to the public after the Great East Japan Earthquake 2011. One of the studies reported the displacement in the Tohoku region by the pixel matching methodology using both pre- and post-event TerraSAR-X data, and the validated accuracy was about 30 cm at the GEONET reference points. In order to discuss the spatial distribution of the displacement, we need to evaluate the relative accuracy of the displacement in addition to the absolute accuracy. In previous studies, our study team evaluated the absolute 2D geo-location accuracy of the TerraSAR-X ortho-rectified EEC product for both flat and mountain areas. Therefore, the purpose of the current study was to evaluate the spatial and temporal relative geo-location accuracies of the product by considering the displacement of a fixed point as the relative geo-location accuracy. Firstly, utilizing a TerraSAR-X StripMap dataset, a pixel matching method for estimating the displacement at the sub-pixel level was developed. Secondly, the validity of the method was confirmed by comparing with GEONET data. We confirmed that the accuracy of the displacement in the X and Y directions was in agreement with the previous studies. Subsequently, the methodology was applied to 20 pairs of data sets for the areas of Tokyo Ota-ku and Kawasaki-shi, and the displacement of each pair was evaluated. It was revealed that the time-series displacement rate had a seasonal trend and seemed to be related to atmospheric delay.
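
    Sub-pixel pixel matching of the kind described above can be sketched in one dimension as an integer-shift SSD search refined by a three-point parabola fit. The signals below are synthetic and this is a simplification of what the paper implements on 2D SAR imagery.

    ```python
    import math

    def best_integer_shift(ref, moved, max_shift):
        """Integer shift minimizing the sum of squared differences (SSD)."""
        def ssd(shift):
            return sum((ref[i] - moved[i + shift]) ** 2
                       for i in range(len(ref))
                       if 0 <= i + shift < len(moved))
        best = min(range(-max_shift, max_shift + 1), key=ssd)
        return best, ssd

    def subpixel_shift(ref, moved, max_shift=5):
        """Refine the integer SSD minimum with a three-point parabola fit."""
        k, ssd = best_integer_shift(ref, moved, max_shift)
        c_m, c_0, c_p = ssd(k - 1), ssd(k), ssd(k + 1)
        # Vertex of the parabola through the three SSD samples around k.
        return k + 0.5 * (c_m - c_p) / (c_m - 2 * c_0 + c_p)

    # Synthetic 1-D "image profiles": a Gaussian feature displaced by
    # exactly 2.3 samples between the two acquisitions.
    bump = lambda x: math.exp(-((x - 10.0) ** 2) / 8.0)
    ref = [bump(i) for i in range(20)]
    moved = [bump(i - 2.3) for i in range(20)]
    est = subpixel_shift(ref, moved)  # recovers a shift close to 2.3
    ```

    The parabola fit is what turns a pixel-quantized search into a sub-pixel estimate; with a 3 m pixel, a tenth-of-a-pixel refinement corresponds to the ~30 cm accuracy quoted above.
    
    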

  6. Method of control of machining accuracy of low-rigidity elastic-deformable shafts

    Directory of Open Access Journals (Sweden)

    Antoni Świć

    Full Text Available The paper presents an analysis of the possibility of increasing the accuracy and stability of machining of low-rigidity shafts while ensuring high efficiency and economy of their machining. An effective way of improving the accuracy of machining of shafts is increasing their rigidity through an oriented change of the elastic-deformable state, achieved by the application of a tensile force which, combined with the machining force, forms longitudinal-lateral strains. The paper also presents mathematical models describing the changes of the elastic-deformable state resulting from the application of the tensile force. It presents the results of experimental studies on the deformation of elastic low-rigidity shafts, performed on a special test stand developed on the basis of a lathe. An estimation was made of the effectiveness of the method of control of the elastic-deformable state, using the tensile force and eccentricity as the regulating effects. It was demonstrated that by controlling the two parameters, tensile force and eccentricity, one can improve the accuracy of machining and thus achieve a theoretically assumed level of accuracy.

  7. Phenomenological reports diagnose accuracy of eyewitness identification decisions.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; McKinnon, Anna C; Weber, Nathan

    2010-02-01

    This study investigated whether measuring the phenomenology of eyewitness identification decisions aids evaluation of their accuracy. Witnesses (N=502) viewed a simulated crime and attempted to identify two targets from lineups. A divided attention manipulation during encoding reduced the rate of remember (R) correct identifications, but not the rates of R foil identifications or know (K) judgments in the absence of recollection (i.e., K/[1-R]). Both RK judgments and recollection ratings (a novel measure of graded recollection) distinguished correct from incorrect positive identifications. However, only recollection ratings improved accuracy evaluation after identification confidence was taken into account. These results provide evidence that RK judgments for identification decisions function in a similar way as for recognition decisions; are consistent with the notion of graded recollection; and indicate that measures of phenomenology can enhance the evaluation of identification accuracy. Copyright 2009 Elsevier B.V. All rights reserved.

  8. Anesthesia Recordkeeping: Accuracy of Recall with Computerized and Manual Entry Recordkeeping

    Science.gov (United States)

    Davis, Thomas Corey

    2011-01-01

    Introduction: Anesthesia information management systems are rapidly gaining widespread acceptance. Aggressively promoted as an improvement to manual-entry recordkeeping systems in the areas of accuracy, quality improvement, billing and vigilance, these systems record all patient vital signs and parameters, providing a legible hard copy and…

  9. An Analysis of the Influence of Fundamental Values' Estimation Accuracy on Financial Markets

    OpenAIRE

    Takahashi, Hiroshi

    2010-01-01

    This research analyzed the influence of the differences in the forecast accuracy of fundamental values on the financial market. As a result of intensive experiments in the market, we made the following interesting findings: (1) improvements in forecast accuracy of fundamentalists can contribute to an increase in the number of fundamentalists; (2) certain situations might occur, according to the level of forecast accuracy of fundamentalists, in which fundamentalists and passive management coex...

  10. Measurement system with high accuracy for laser beam quality.

    Science.gov (United States)

    Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming

    2015-05-20

    Presently, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of the mirror to change the laser beam propagation direction, so that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measuring deviation of the M2 factor is less than 0.6%.

  11. IMPROVEMENT OF ACCURACY OF RADIATIVE HEAT TRANSFER DIFFERENTIAL APPROXIMATION METHOD FOR MULTI DIMENSIONAL SYSTEMS BY MEANS OF AUTO-ADAPTABLE BOUNDARY CONDITIONS

    Directory of Open Access Journals (Sweden)

    K. V. Dobrego

    2015-01-01

    Full Text Available Differential approximation is derived from the radiation transfer equation by averaging over the solid angle. It is one of the more effective methods for engineering calculations of radiative heat transfer in complex three-dimensional thermal power systems with selective and scattering media. A new method for improving the accuracy of the differential approximation, based on the use of auto-adaptable boundary conditions, is introduced in the paper. The efficiency of the method is proved for test 2D systems. Self-consistent auto-adaptable boundary conditions, taking into consideration the non-orthogonal component of the radiation flux incident on the boundary, are formulated. It is demonstrated that taking the non-orthogonal incident flux into consideration in multi-dimensional systems, such as furnaces, boilers and combustion chambers, improves the accuracy of the radiant flux simulations, most strongly in the zones adjacent to the edges of the chamber. Test simulations utilizing the differential approximation method with traditional boundary conditions, the new self-consistent boundary conditions and the "precise" discrete ordinates method were performed. The mean square errors of the resulting radiative fluxes calculated along the boundary of rectangular and triangular test areas were decreased 1.5-2 times by using auto-adaptable boundary conditions. Radiation flux gaps in the corner points of non-symmetric systems are revealed by using auto-adaptable boundary conditions, which cannot be obtained with the conventional boundary conditions.

  12. ACCURACY ASSESSMENT OF COASTAL TOPOGRAPHY DERIVED FROM UAV IMAGES

    Directory of Open Access Journals (Sweden)

    N. Long

    2016-06-01

    Full Text Available To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with a similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the spatial resolution of the images is decreased.
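    The vertical-accuracy figures quoted above reduce to simple statistics on DSM-minus-GNSS elevation differences at checkpoints. A minimal sketch, using hypothetical checkpoint elevations rather than the study's data:

```python
import numpy as np

def vertical_accuracy(dsm_z, gnss_z):
    """Vertical accuracy statistics of a DSM against GNSS checkpoints (metres)."""
    diff = np.asarray(dsm_z, float) - np.asarray(gnss_z, float)
    return {
        "mean_error": diff.mean(),             # systematic bias
        "rmse": np.sqrt((diff ** 2).mean()),   # overall vertical accuracy
        "std": diff.std(ddof=1),               # spread around the bias
    }

# Hypothetical checkpoint elevations (m), not the paper's data
stats = vertical_accuracy([12.04, 9.31, 7.58, 10.92], [12.00, 9.40, 7.50, 11.00])
```

    Reporting both the mean error and the RMSE separates systematic bias (e.g. a georeferencing offset) from random error, which is why DSM validation studies usually quote both.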

  13. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved the computational efficiency by about 4–10 times in comparison with the traditional DIC method. The system is aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated a great potential for tracking muscle and skin movements.
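    Recovering a rigid object's 6-degree-of-freedom pose from tracked 3D marker positions is classically done with a least-squares fit such as the Kabsch algorithm; the abstract does not publish its solver, so the following is a generic sketch with hypothetical marker coordinates:

```python
import numpy as np

def rigid_pose(P, Q):
    """Least-squares rigid transform (R, t) mapping marker set P onto Q,
    both (N, 3) arrays of 3D marker coordinates (Kabsch algorithm)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical markers: rotate 90 degrees about z, then translate
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([2.0, -1.0, 0.5])
Q = P @ Rz.T + t_true
R, t = rigid_pose(P, Q)
```

    With at least three non-collinear markers the rotation R and translation t (6 degrees of freedom in total) are recovered uniquely, which is the geometric core of marker-based surgical navigation.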

  14. Accuracy of stereolithographic models of human anatomy

    International Nuclear Information System (INIS)

    Barker, T.M.; Earwaker, W.J.S.; Lisle, D.A.

    1994-01-01

    A study was undertaken to determine the dimensional accuracy of anatomical replicas derived from X-ray 3D computed tomography (CT) images and produced using the rapid prototyping technique of stereolithography (SLA). A dry bone skull and a geometric phantom were scanned, and replicas were produced. Distance measurements were obtained to compare the original objects and the resulting replicas. Repeated measurements between anatomical landmarks were used for comparison of the original skull and its replica. Results for the geometric phantom demonstrate a mean difference of +0.47 mm, representing an accuracy of 97.7–99.12%. Measurements of the skull produced a range of absolute differences (maximum +4.62 mm, minimum +0.1 mm, mean +0.85 mm). These results support the use of SLA models of human anatomical structures in such areas as pre-operative planning of complex surgical procedures. For applications where higher accuracy is required, improvements can be expected by utilizing smaller pixel resolution in the CT images. Stereolithographic models can now be confidently employed as accurate, three-dimensional replicas of complex anatomical structures. 14 refs., 2 tabs., 8 figs

  15. Exploring the genetic architecture and improving genomic prediction accuracy for mastitis and milk production traits in dairy cattle by mapping variants to hepatic transcriptomic regions responsive to intra-mammary infection.

    Science.gov (United States)

    Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter

    2017-05-12

    A better understanding of the genetic architecture of complex traits can contribute to improving genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test was correlated with changes in prediction accuracy of GFBLUP (P layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set

  16. Great Lakes Restoration Initiative Great Lakes Mussel Watch(2009-2014)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Following the inception of the Great Lakes Restoration Initiative (GLRI) to address the significant environmental issues plaguing the Great Lakes region, the...

  17. Accuracy requirements in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Buzdar, S. A.; Afzal, M.; Nazir, A.; Gadhi, M. A.

    2013-01-01

    Radiation therapy attempts to deliver ionizing radiation to the tumour and can improve the survival chances and/or quality of life of patients. There are chances of errors and uncertainties in the entire process of radiotherapy that may affect the accuracy and precision of treatment management and decrease the degree of conformation. All expected inaccuracies, such as those in radiation dose determination, volume calculation, complete evaluation of the full extent of the tumour, the biological behaviour of specific tumour types, organ motion during radiotherapy, imaging, biological/molecular uncertainties, sub-clinical disease, microscopic spread of the disease, uncertainty in normal tissue responses and radiation morbidity, need sound appreciation. Conformity can be increased by reducing such inaccuracies. With the yearly increase in computing speed and advances in other technologies, the future will provide the opportunity to optimize a greater number of variables and reduce the errors in the treatment planning process. In the multi-disciplinary task of radiotherapy, efforts are needed to overcome errors and uncertainty not only by physicists but also by radiologists, pathologists and oncologists, to reduce molecular and biological uncertainties. Radiation therapy physics is advancing towards an optimal goal: to improve accuracy where necessary and to reduce uncertainty where possible. (author)

  18. Improved accuracy in the estimation of the tearing mode stability parameters (Δ′ and wc) using 2D ECEI data in KSTAR

    International Nuclear Information System (INIS)

    Choi, Minjun J; Yun, Gunsu S; Lee, Woochang; Park, Hyeon K; Park, Young-Seok; Sabbagh, Steve A; Gibson, Kieran J; Bowman, Christopher; Domier, Calvin W; Luhmann, Neville C Jr; Bak, Jun-Gyo; Lee, Sang G

    2014-01-01

    The accuracy in estimation of two important tearing mode stability parameters (Δ′ and wc) is improved by employing two-dimensional (2D) ECE imaging data which help one to overcome the resolution limit of conventional one-dimensional data. The experimentally measured 2D images are directly compared with synthetic ones from a tearing mode Te model to estimate the parameters and an excellent agreement is achieved. The results imply that the observed tearing mode is classically stable but has non-negligible bootstrap current drive. (paper)

  19. A study on temporal accuracy of OpenFOAM

    Directory of Open Access Journals (Sweden)

    Sang Bong Lee

    2017-07-01

    Full Text Available The Crank–Nicolson scheme in the native OpenFOAM source libraries was not able to provide 2nd-order temporal accuracy of velocity and pressure, since the volume flux of the convective nonlinear terms was only 1st-order accurate in time. In the present study, the simplest way of obtaining the volume flux with 2nd-order accuracy was proposed, by using old fluxes. A possible numerical instability originating from an explicit estimation of volume fluxes could be handled by introducing a weighting factor, which was determined by observing the ratio of the finally corrected volume flux to the intermediate volume flux at the previous step. The new calculation of volume fluxes was able to provide temporally 2nd-order-accurate velocity and pressure. The improvement of temporal accuracy was validated by performing numerical simulations of a 2D Taylor–Green vortex, for which an exact solution is known, and of 2D vortex shedding from a circular cylinder.
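    The core idea, building a 2nd-order volume flux from old fluxes with a weighting factor to guard against instability, can be sketched as a simple temporal extrapolation. This illustrates the principle only; it is not the authors' actual OpenFOAM modification:

```python
import numpy as np

def second_order_flux(phi_n, phi_nm1, w=1.0):
    """Extrapolate a face volume flux forward in time from the two previous
    levels: phi* = phi_n + w * (phi_n - phi_nm1).
    w = 1 gives full 2nd-order extrapolation; w < 1 blends back toward the
    1st-order (lagged) flux for stability, playing the role of the paper's
    weighting factor."""
    return phi_n + w * (phi_n - phi_nm1)

# A flux varying linearly in time is recovered exactly with w = 1:
t = np.array([0.0, 1.0, 2.0])
phi = 3.0 + 2.0 * t                         # phi(t) = 3 + 2t
pred = second_order_flux(phi[1], phi[0])    # extrapolate to t = 2
```

    A lagged flux (w = 0) is exact only for constant-in-time fluxes, which is exactly the 1st-order bottleneck the abstract identifies.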

  20. STACK NUMBER INFLUENCE ON THE ACCURACY OF ASTER GDEM (V2

    Directory of Open Access Journals (Sweden)

    S. M. J. Mirzadeh

    2017-09-01

    Full Text Available In this research, the influence of stack number (STKN) on the accuracy of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with varied topography (Bomehen and Tazehabad) were used. The results show that in both study areas a STKN of 19 results in the minimum error, and that this minimum differs only slightly from the errors at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the value of the mean absolute error increases with changing topography and with increasing slope values and cell heights, but changes in STKN have no important effect on the error values. Furthermore, at high values of STKN, the effect of slope on elevation accuracy practically decreases. Also, there is no strong correlation between the residual and STKN in the ASTER GDEM.

  1. Quantitative accuracy of serotonergic neurotransmission imaging with high-resolution 123I SPECT

    International Nuclear Information System (INIS)

    Kuikka, J.T.

    2004-01-01

    Aim: Serotonin transporter (SERT) imaging can be used to study the role of regional abnormalities of neurotransmitter release in various mental disorders and to study the mechanism of action of therapeutic drugs or drugs of abuse. We examined the quantitative accuracy and reproducibility that can be achieved with high-resolution SPECT of serotonergic neurotransmission. Method: The binding potential (BP) of a 123I-labeled tracer specific for midbrain SERT was assessed in 20 healthy persons. The effects of scatter, attenuation, partial volume, misregistration and statistical noise were estimated using phantom and human studies. Results: Without any correction, BP was underestimated by 73%. The partial volume error was the major component of this underestimation, whereas the most critical error for reproducibility was misplacement of the region of interest (ROI). Conclusion: Proper ROI registration and the use of a multiple-head gamma camera with transmission-based scatter correction yield more reliable results. However, due to the small dimensions of the midbrain SERT structures and the poor spatial resolution of SPECT, the improvement without partial volume correction is not great enough to restore the estimate of BP to the true one. (orig.)

  2. Temporal aggregation of migration counts can improve accuracy and precision of trends

    Directory of Open Access Journals (Sweden)

    Tara L. Crewe

    2016-12-01

    Full Text Available Temporal replicate counts are often aggregated to improve model fit by reducing zero-inflation and count variability; in the case of migration counts collected hourly throughout a migration, aggregation also allows one to ignore nonindependence. However, aggregation can represent a loss of potentially useful information on the hourly or seasonal distribution of counts, which might impact our ability to estimate reliable trends. We simulated 20-year hourly raptor migration count datasets with a known rate of change to test the effect of aggregating hourly counts to daily or annual totals on our ability to recover the known trend. We simulated data for three types of species, to test whether results varied with species abundance or migration strategy: a commonly detected species, e.g., Northern Harrier, Circus cyaneus; a rarely detected species, e.g., Peregrine Falcon, Falco peregrinus; and a species typically counted in large aggregations with overdispersed counts, e.g., Broad-winged Hawk, Buteo platypterus. We compared the accuracy and precision of estimated trends across species and count types (hourly/daily/annual) using hierarchical models that assumed a Poisson, negative binomial (NB) or zero-inflated negative binomial (ZINB) count distribution. We found little benefit of modeling zero-inflation or of modeling the hourly distribution of migration counts. For the rare species, trends analyzed using daily totals and an NB or ZINB data distribution resulted in a higher probability of detecting an accurate and precise trend. In contrast, trends of the common and overdispersed species benefited from aggregation to annual totals, and for the overdispersed species in particular, trends estimated using annual totals were more precise and resulted in lower probabilities of estimating a trend (1) in the wrong direction, or (2) with credible intervals that excluded the true trend, as compared with hourly and daily counts.
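    The basic simulation design can be reproduced in a few lines: generate replicate counts with a known log-linear trend, aggregate them to annual totals, and check how well the trend is recovered. A minimal sketch using ordinary least squares on log totals, not the hierarchical Bayesian models used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(20)
true_trend = -0.03            # known rate of change: 3% annual decline (log scale)

# Hypothetical 60-count migration season per year; replicate counts are Poisson,
# then aggregated to a single annual total per year
annual = np.array([
    rng.poisson(5.0 * np.exp(true_trend * y), size=60).sum()
    for y in years
])

# Recover the trend from a log-linear fit to the aggregated annual totals
slope, intercept = np.polyfit(years, np.log(annual), 1)
```

    With a commonly detected species (large totals, as here) the aggregated series recovers the trend closely; the study's point is that for rare or overdispersed species the best aggregation level and count distribution differ.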

  3. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA

    Energy Technology Data Exchange (ETDEWEB)

    Mareuil, Fabien [Institut Pasteur, Cellule d'Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)

    2015-08-15

    In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for the distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting; (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.

  4. Using quality scores and longer reads improves accuracy of Solexa read mapping

    Directory of Open Access Journals (Sweden)

    Xuan Zhenyu

    2008-02-01

    Full Text Available Background Second-generation sequencing has the potential to revolutionize genomics and impact all areas of biomedical science. New technologies will make re-sequencing widely available for such applications as identifying genome variations or interrogating the oligonucleotide content of a large sample (e.g. ChIP-sequencing). The increase in speed, sensitivity and availability of sequencing technology brings demand for advances in computational technology to perform associated analysis tasks. The Solexa/Illumina 1G sequencer can produce tens of millions of reads, ranging in length from ~25–50 nt, in a single experiment. Accurately mapping the reads back to a reference genome is a critical task in almost all applications. Two sources of information that are often ignored when mapping reads from the Solexa technology are the 3' ends of longer reads, which contain a much higher frequency of sequencing errors, and the base-call quality scores. Results To investigate whether these sources of information can be used to improve accuracy when mapping reads, we developed the RMAP tool, which can map reads having a wide range of lengths and allows base-call quality scores to determine which positions in each read are more important when mapping. We applied RMAP to analyze data re-sequenced from two human BAC regions for varying read lengths, and varying criteria for use of quality scores. RMAP is freely available for downloading at http://rulai.cshl.edu/rmap/. Conclusion Our results indicate that significant gains in Solexa read mapping performance can be achieved by considering the information in 3' ends of longer reads, and appropriately using the base-call quality scores. The RMAP tool we have developed will enable researchers to effectively exploit this information in targeted re-sequencing projects.
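    The principle of letting base-call quality scores down-weight mismatches at error-prone 3' ends can be illustrated with a toy mapper. RMAP's actual scoring is more sophisticated, and all sequences and thresholds below are hypothetical:

```python
def map_read(read, quals, genome, q_cut=10):
    """Toy quality-aware mapper: slide the read along the genome and penalize
    mismatches in proportion to base-call confidence (Phred-style quality),
    so mismatches at low-quality positions, typically near the error-prone
    3' end, count less toward the mapping penalty."""
    best_pos, best_pen = -1, float("inf")
    for pos in range(len(genome) - len(read) + 1):
        pen = 0.0
        for b, q, g in zip(read, quals, genome[pos:pos + len(read)]):
            if b != g:
                pen += min(q, q_cut) / q_cut   # confidence-weighted mismatch
        if pen < best_pen:
            best_pos, best_pen = pos, pen
    return best_pos, best_pen

genome = "ACGTACGTTAGCCGATAG"
read   = "TAGCCGAAAG"                 # one disagreement, at a low-quality base
quals  = [30, 30, 30, 30, 30, 30, 30, 2, 30, 30]
pos, penalty = map_read(read, quals, genome)
```

    An unweighted mapper would charge the read a full mismatch at the dubious position; weighting by quality lets the true locus win even when the 3' end is noisy.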

  5. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    Directory of Open Access Journals (Sweden)

    Kyuman Lee

    2016-08-01

    Full Text Available The airborne relay-based positioning system (ARPS, which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. 
    Furthermore, the service coverage is expanded by using direct measurements of reference stations.
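    User positioning from relayed and direct range measurements ultimately reduces to nonlinear least squares. A generic Gauss-Newton sketch with hypothetical anchor positions standing in for airborne relays and ground reference stations (not the paper's algorithm):

```python
import numpy as np

def gauss_newton_fix(anchors, ranges, x0, iters=50):
    """Gauss-Newton least-squares position fix from range measurements
    to known anchor positions."""
    x = np.asarray(x0, float)
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    for _ in range(iters):
        diff = x - anchors                    # (n, 3) vectors anchor -> user
        d = np.linalg.norm(diff, axis=1)      # predicted ranges
        J = diff / d[:, None]                 # Jacobian of each range w.r.t. x
        r = ranges - d                        # measurement residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
    return x

# Hypothetical geometry: four elevated anchors, user near the ground
anchors = np.array([[0.0, 0, 50], [100, 0, 40], [0, 100, 60], [100, 100, 45]])
true_pos = np.array([30.0, 40.0, 0.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)   # noise-free ranges
x_hat = gauss_newton_fix(anchors, ranges, x0=[50, 50, 10])
```

    With anchors clustered in one direction (as the abstract notes for the ARPS) the Jacobian becomes poorly conditioned in the vertical, which is why adding a ground-station measurement improves the fix.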

  6. Wind Regimes in Complex Terrain of the Great Valley of Eastern Tennessee

    Energy Technology Data Exchange (ETDEWEB)

    Birdwell, Kevin R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2011-05-01

    This research was designed to provide an understanding of physical wind mechanisms within the complex terrain of the Great Valley of Eastern Tennessee to assess the impacts of regional air flow with regard to synoptic and mesoscale weather changes, wind direction shifts, and air quality. Meteorological data from 2008–2009 were analyzed from 13 meteorological sites along with associated upper-level data. Up to 15 ancillary sites were used for reference. Two-step complete linkage and K-means cluster analyses, synoptic weather studies, and ambient meteorological comparisons were performed to generate hourly wind classifications. These wind regimes revealed seasonal variations of underlying physical wind mechanisms (forced channeled, vertically coupled, pressure-driven, and thermally-driven winds). Synoptic and ambient meteorological analysis (mixing depth, pressure gradient, pressure gradient ratio, atmospheric and surface stability) suggested up to 93% accuracy for the clustered results. Probabilistic prediction schemes of wind flow and wind class change were developed through characterization of flow change data and wind class succession. Data analysis revealed that wind flow in the Great Valley was dominated by forced channeled winds (45–67%) and vertically coupled flow (22–38%). Down-valley pressure-driven and thermally-driven winds also played significant roles (0–17% and 2–20%, respectively), usually accompanied by convergent wind patterns (15–20%) and large wind direction shifts, especially in the Central/Upper Great Valley. The behavior of most wind regimes was associated with detectable pressure differences between the Lower and Upper Great Valley. Mixing depth and synoptic pressure gradients were significant contributors to wind pattern behavior. Up to 15 wind classes and 10 sub-classes were identified in the Central Great Valley, with 67 joined classes for the Great Valley at-large. Two-thirds of Great Valley at-large flow was defined by 12 classes. Winds
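    The K-means step used to group hourly observations into wind regimes can be sketched on synthetic wind vectors (u, v components). The two regimes below are hypothetical stand-ins for up-valley and down-valley channeled flow, not the study's data:

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain k-means: alternate nearest-center assignment and center update."""
    centers = np.asarray(centers, float).copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) for j in range(len(centers))])
    return labels, centers

# Hypothetical hourly wind vectors (u, v in m/s): two channeled-flow regimes
rng = np.random.default_rng(2)
up = rng.normal([3.0, 3.0], 0.5, size=(50, 2))      # up-valley flow
down = rng.normal([-2.0, -2.0], 0.5, size=(50, 2))  # down-valley flow
X = np.vstack([up, down])

# Seed one center in each regime to keep the toy example deterministic
labels, centers = kmeans(X, X[[0, -1]])
```

    Clustering the (u, v) components rather than wind direction itself avoids the 0/360-degree wrap-around problem, one reason vector components are a common choice for regime classification.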

  7. Consensus-based reporting standards for diagnostic test accuracy studies for paratuberculosis in ruminants

    DEFF Research Database (Denmark)

    Gardner, Ian A.; Nielsen, Søren Saxmose; Whittington, Richard

    2011-01-01

    The Standards for Reporting of Diagnostic Accuracy (STARD) statement (www.stard-statement.org) was developed to encourage complete and transparent reporting of key elements of test accuracy studies in human medicine. The statement was motivated by widespread evidence of bias in test accuracy … studies and the finding that incomplete or absent reporting of items in the STARD checklist was associated with overly optimistic estimates of test performance characteristics. Although STARD principles apply broadly, specific guidelines do not exist to account for unique considerations in livestock … for Reporting of Animal Diagnostic Accuracy Studies for paratuberculosis), should facilitate improved quality of reporting of the design, conduct and results of paratuberculosis test accuracy studies, which were identified as “poor” in a review published in 2008 in Veterinary Microbiology …

  8. Geometric modeling in the problem of ball bearing accuracy

    Science.gov (United States)

    Glukhov, V. I.; Pushkarev, V. V.; Khomchenko, V. G.

    2017-06-01

    The manufacturing quality of ball bearings is an urgent problem for the machine-building industry. The aim of the research is to improve the accuracy of the geometric specifications of bearings, based on an evidence-based systematic approach and a method for adequately modeling the size, location and form deviations of the rings and assembled ball bearings. The present work addresses the problem of identifying bearing geometric specifications and studying them. The deviations from the plane of symmetry of the rings and of the bearing assembly, and the mounting width, are among these specifications. A systematic approach to normalizing the geometric specification values and tolerances of ball bearings in coordinate systems will improve the quality of bearings by optimizing and minimizing the number of specifications. The introduction of this systematic approach into the international standards on rolling bearings would guarantee a significant increase in the accuracy of bearings and in the quality of the products where they are applied.

  9. A new source difference artificial neural network for enhanced positioning accuracy

    International Nuclear Information System (INIS)

    Bhatt, Deepak; Aggarwal, Priyanka; Devabhaktuni, Vijay; Bhattacharya, Prabir

    2012-01-01

    Integrated inertial navigation system (INS) and global positioning system (GPS) units provide a more reliable navigation solution than standalone INS or GPS. Traditional Kalman filter-based INS/GPS integration schemes have several inadequacies related to the sensor error model and immunity to noise. Alternatively, multi-layer perceptron (MLP) neural networks with three layers have been implemented to improve the position accuracy of the integrated system. However, MLP neural networks show poor accuracy for low-cost INS because of the large inherent sensor errors. For the first time, the paper demonstrates the use of a knowledge-based source difference artificial neural network (SDANN) to improve the navigation performance of low-cost sensors, with or without external aiding sources. Unlike conventional MLP or artificial neural networks (ANN), the structure of SDANN consists of two MLP neural networks, called the coarse model and the difference model. The coarse model learns the input–output data relationship, whereas the difference model adds knowledge to the system and fine-tunes the coarse model output by learning the associated training or estimation error. Our proposed SDANN model illustrated a significant improvement in navigation accuracy of up to 81% over the conventional MLP. The results demonstrate that the proposed SDANN method is effective for GPS/INS integration schemes using low-cost inertial sensors, with and without GPS.
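    The coarse-plus-difference structure of SDANN can be illustrated with linear least-squares models standing in for the two MLPs: a coarse model learns the broad input-output relation, and a difference model is trained on the coarse model's residual. A hypothetical one-dimensional sketch, not the paper's network:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200)
y = 2.0 * x + 0.5 * x**2          # hypothetical sensor error behaviour

# Coarse model: a simple linear fit capturing the broad input-output relation
A_coarse = np.column_stack([x, np.ones_like(x)])
w_coarse, *_ = np.linalg.lstsq(A_coarse, y, rcond=None)
coarse_pred = A_coarse @ w_coarse

# Difference model: trained on the coarse model's residual, with richer features
A_diff = np.column_stack([x**2, x, np.ones_like(x)])
w_diff, *_ = np.linalg.lstsq(A_diff, y - coarse_pred, rcond=None)

# Final estimate = coarse prediction + learned correction
final_pred = coarse_pred + A_diff @ w_diff
```

    The coarse model alone misses the curvature; the difference model only has to learn that small residual, which is the intuition behind letting a second network fine-tune the first.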

  10. The Accuracy and Bias of Single-Step Genomic Prediction for Populations Under Selection

    Directory of Open Access Journals (Sweden)

    Wan-Ling Hsu

    2017-08-01

    Full Text Available In single-step analyses, missing genotypes are explicitly or implicitly imputed, and this requires centering the observed genotypes using the means of the unselected founders. If genotypes are only available for selected individuals, centering on the unselected founder mean is not straightforward. Here, computer simulation is used to study an alternative analysis that does not require centering genotypes but fits the mean μg of unselected individuals as a fixed effect. Starting with observed diplotypes from 721 cattle, a five-generation population was simulated with sire selection to produce 40,000 individuals with phenotypes, of which the 1000 sires had genotypes. The next generation of 8000 genotyped individuals was used for validation. Evaluations were undertaken with (J) or without (N) μg when marker covariates were not centered, and with (JC) or without (C) μg when all observed and imputed marker covariates were centered. Centering did not influence accuracy of genomic prediction, but fitting μg did. Accuracies were improved when the panel comprised only quantitative trait loci (QTL): models JC and J had accuracies of 99.4%, whereas models C and N had accuracies of 90.2%. When only markers were in the panel, the 4 models had accuracies of 80.4%. In panels that included QTL, fitting μg in the model improved accuracy, but had little impact when the panel contained only markers. In populations undergoing selection, fitting μg in the model is recommended to avoid bias and reduction in prediction accuracy due to selection.
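    Centering observed genotypes on founder allele frequencies is the step that becomes problematic when only selected individuals are genotyped. Below is a sketch of the standard VanRaden-style centered genomic relationship matrix, with hypothetical genotypes and founder frequencies; the abstract's alternative is to skip this centering and instead fit μg as a fixed effect:

```python
import numpy as np

def grm(M, p):
    """VanRaden genomic relationship matrix from a genotype matrix M
    (individuals x markers, coded 0/1/2), centered on allele frequencies p,
    ideally those of the unselected founders."""
    Z = M - 2.0 * p                            # center each marker column
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(6, 50)).astype(float)   # 6 animals, 50 markers
p_founder = np.full(50, 0.5)                          # hypothetical founder freqs
G = grm(M, p_founder)
```

    If p is estimated from selected animals instead of founders, the centering (and hence G) is shifted, which is precisely the bias the fixed-effect μg formulation is designed to absorb.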

  11. Theta Neurofeedback Effects on Motor Memory Consolidation and Performance Accuracy: An Apparent Paradox?

    Science.gov (United States)

    Reiner, Miriam; Lev, Dror D; Rosen, Amit

    2018-05-15

    Previous studies have shown that theta neurofeedback enhances motor memory consolidation on an easy-to-learn finger-tapping task. However, the simplicity of the finger-tapping task precludes evaluating the putative effects of elevated theta on performance accuracy. Mastering a motor sequence is classically assumed to entail faster performance with fewer errors. The speed-accuracy tradeoff (SAT) principle states that as action speed increases, motor performance accuracy decreases. The current study investigated whether theta neurofeedback could improve both performance speed and performance accuracy, or would only enhance performance speed at the cost of reduced accuracy. A more complex task was used to study the effects of parietal elevated theta on 45 healthy volunteers. The findings confirmed previous results on the effects of theta neurofeedback on memory consolidation. In contrast to the two control groups, in the theta-neurofeedback group the speed-accuracy tradeoff was reversed. The speed-accuracy tradeoff patterns only stabilized after a night's sleep, implying enhancement in terms of both speed and accuracy. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Compact Intraoperative MRI: Stereotactic Accuracy and Future Directions.

    Science.gov (United States)

    Markowitz, Daniel; Lin, Dishen; Salas, Sussan; Kohn, Nina; Schulder, Michael

    2017-01-01

    Intraoperative imaging must supply data that can be used for accurate stereotactic navigation. This information should be at least as accurate as that acquired from diagnostic imagers. The aim of this study was to compare the stereotactic accuracy of an updated compact intraoperative MRI (iMRI) device based on a 0.15-T magnet with that of standard surgical navigation on a 1.5-T diagnostic MRI, and with navigation using an earlier model of the same system. The accuracy of each system was assessed using a water-filled phantom model of the brain. Data collected with the new system were compared to those obtained in a previous study assessing the older system. The accuracy of the new iMRI was measured against standard surgical navigation on a 1.5-T MRI using T1-weighted (T1W) images. The mean error with the iMRI using T1W images was lower than that based on images from the 1.5-T scan (1.24 vs. 2.43 mm). T2W images from the newer iMRI yielded a lower navigation error than those acquired with the prior model (1.28 vs. 3.15 mm). Improvements in magnet design can yield progressive increases in accuracy, validating the concept of compact, low-field iMRI. Avoiding the need for registration between image and surgical space increases navigation accuracy. © 2017 S. Karger AG, Basel.

  13. GALA: Group Analysis Leads to Accuracy, a novel approach for solving the inverse problem in exploratory analysis of group MEG recordings

    Directory of Open Access Journals (Sweden)

    Vladimir eKozunov

    2015-04-01

    Full Text Available Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose locations and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA), a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity, partially overlap, and have correlated timecourses. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm that solves the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of the spatial extent of the activated regions. This improvement was obtained without using any noise normalization techniques for either solution, and was preserved over a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face

  14. Accuracy of clinical coding for procedures in oral and maxillofacial surgery.

    Science.gov (United States)

    Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I

    2016-10-01

    Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32% - 33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  15. Accuracy of past projections of US energy consumption

    International Nuclear Information System (INIS)

    O'Neill, B.C.; Desai, Mausami

    2005-01-01

    Energy forecasts play a key role in the development of energy and environmental policy. Evaluations of the accuracy of past projections can provide insight into the uncertainty that may be associated with current forecasts. They can also be used to identify sources of inaccuracy, and potentially lead to improvements in projections over time. Here we assess the accuracy of projections of US energy consumption produced by the Energy Information Administration over the period 1982-2000. We find that energy consumption projections have tended to underestimate future consumption. Projections 10-13 years into the future have had an average error of about 4%, and about half that for shorter time horizons. These errors mask much larger, offsetting errors in the projections of GDP and energy intensity (EI). GDP projections have consistently been too high, and EI projections consistently too low, by more than 15% for projections of 10 years or more. Further work on the source of these sizable inaccuracies should be a high priority. Finally, we find no evidence of improvement in projections of consumption, GDP, or EI since 1982.
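    Because energy consumption is the product of GDP and energy intensity, large percentage errors in the two factors largely cancel in the consumption projection, which is how a ~4% consumption error can mask >15% component errors. A minimal illustration with hypothetical numbers (the values are illustrative, not from the study):

    ```python
    def pct_error(projected, actual):
        """Percentage projection error relative to the realized value."""
        return 100.0 * (projected - actual) / actual

    # Hypothetical illustration: a +15% GDP error and a -15% EI error largely cancel
    gdp_actual, ei_actual = 10.0, 8.0      # arbitrary units
    gdp_proj = gdp_actual * 1.15           # GDP projected too high
    ei_proj  = ei_actual * 0.85            # energy intensity projected too low

    consumption_actual = gdp_actual * ei_actual
    consumption_proj = gdp_proj * ei_proj  # consumption = GDP x EI

    print(round(pct_error(gdp_proj, gdp_actual), 1))                  # 15.0
    print(round(pct_error(ei_proj, ei_actual), 1))                    # -15.0
    print(round(pct_error(consumption_proj, consumption_actual), 2))  # -2.25
    ```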

  16. Improving the accuracy of vehicle emissions profiles for urban transportation greenhouse gas and air pollution inventories.

    Science.gov (United States)

    Reyna, Janet L; Chester, Mikhail V; Ahn, Soyoung; Fraser, Andrew M

    2015-01-06

    Metropolitan greenhouse gas and air emissions inventories can better account for the variability in vehicle movement, fleet composition, and infrastructure that exists within and between regions, to develop more accurate information for environmental goals. With emerging access to high quality data, new methods are needed for informing transportation emissions assessment practitioners of the relevant vehicle and infrastructure characteristics that should be prioritized in modeling to improve the accuracy of inventories. The sensitivity of light- and heavy-duty vehicle greenhouse gas (GHG) and conventional air pollutant (CAP) emissions to speed, weight, age, and roadway gradient is examined with second-by-second velocity profiles on freeway and arterial roads under free-flow and congestion scenarios. By creating upper and lower bounds for each factor, the potential variability which could exist in transportation emissions assessments is estimated. When comparing the effects of changes in these characteristics across U.S. cities against average characteristics of the U.S. fleet and infrastructure, significant variability in emissions is found to exist. GHGs from light-duty vehicles could vary by -2%-11% and CAP by -47%-228% when compared to the baseline. For heavy-duty vehicles, the variability is -21%-55% and -32%-174%, respectively. The results show that cities should more aggressively pursue the integration of emerging big data into regional transportation emissions modeling, and the integration of these data is likely to impact GHG and CAP inventories and how aggressively policies should be implemented to meet reductions. A web-tool is developed to aid cities in improving emissions uncertainty.

  17. An evaluation of The Great Escape: can an interactive computer game improve young children's fire safety knowledge and behaviors?

    Science.gov (United States)

    Morrongiello, Barbara A; Schwebel, David C; Bell, Melissa; Stewart, Julia; Davis, Aaron L

    2012-07-01

    Fire is a leading cause of unintentional injury and, although young children are at particularly increased risk, there are very few evidence-based resources available to teach them fire safety knowledge and behaviors. Using a pre-post randomized design, the current study evaluated the effectiveness of a computer game (The Great Escape) for teaching fire safety information to young children (3.5-6 years). Using behavioral enactment procedures, children's knowledge and behaviors related to fire safety were compared to a control group of children before and after receiving the intervention. The results indicated significant improvements in knowledge and fire safety behaviors in the intervention group but not the control. Using computer games can be an effective way to promote young children's understanding of safety and how to react in different hazardous situations.

  18. On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.

    Science.gov (United States)

    Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis

    2016-07-01

    To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient-specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient-specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2%, with a general TG-43 overestimation except for the spine. These differences are consistent, resulting in statistically significant differences in dose volume histogram and radiobiology indices. Absolute differences in these indices are, however, too small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not, however, appear to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Potential Child Abuse Screening in Emergency Department; a Diagnostic Accuracy Study

    Directory of Open Access Journals (Sweden)

    Hossein Dinpanah

    2017-01-01

    Full Text Available Introduction: Designing a tool that can differentiate those at risk of child abuse with great diagnostic accuracy is of great interest. The present study was designed to evaluate the diagnostic accuracy of the Escape instrument in triage of at-risk cases of child abuse presenting to the emergency department (ED). Methods: The present diagnostic accuracy study was performed on 6120 children under 16 years old presenting to the ED during 3 years, using convenience sampling. Confirmation by the child abuse team (a pediatrician, a social worker, and a forensic physician) was considered the gold standard. Screening performance characteristics of Escape were calculated using STATA 21. Results: 6120 children with a mean age of 2.19 ± 1.12 years were screened (52.7% girls). 137 children were suspected victims of child abuse. Based on the child abuse team's opinion, 35 (0.5%) children were confirmed victims of child abuse. Sensitivity, specificity, positive and negative likelihood ratio, and positive and negative predictive values of this test with 95% CI were 100 (87.6-100), 98.3 (97.9-98.6), 25.5 (18.6-33.8), 100 (99.9-100), 0.34 (0.25-0.46), and 0 (0-NaN), respectively. Area under the ROC curve was 99.2 (98.9-99.4). Conclusion: It seems that Escape is a suitable screening instrument for detection of at-risk cases of child abuse presenting to the ED. Based on the results of the present study, the accuracy of this screening tool is 99.2%, which is in the excellent range.
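    The screening metrics reported above follow from the standard 2x2 contingency table. A minimal sketch, reconstructing the counts from the abstract (35 confirmed cases, all among the 137 screen-positives, 6120 children screened; the function name is illustrative):

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Standard 2x2 screening-test performance measures, as percentages."""
        return {
            "sensitivity": 100.0 * tp / (tp + fn),
            "specificity": 100.0 * tn / (tn + fp),
            "ppv": 100.0 * tp / (tp + fp),  # positive predictive value
            "npv": 100.0 * tn / (tn + fn),  # negative predictive value
        }

    # Counts reconstructed from the abstract: 35 true positives, no missed cases,
    # 137 - 35 false positives, and 6120 - 137 screen-negative non-cases.
    m = screening_metrics(tp=35, fp=137 - 35, fn=0, tn=6120 - 137)
    print({k: round(v, 1) for k, v in m.items()})
    # {'sensitivity': 100.0, 'specificity': 98.3, 'ppv': 25.5, 'npv': 100.0}
    ```

    The low positive predictive value despite perfect sensitivity reflects the very low prevalence (0.5%), which is typical of screening instruments.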

  20. Great Lakes Literacy Principles

    Science.gov (United States)

    Fortner, Rosanne W.; Manzo, Lyndsey

    2011-03-01

    Lakes Superior, Huron, Michigan, Ontario, and Erie together form North America's Great Lakes, a region that contains 20% of the world's fresh surface water and is home to roughly one quarter of the U.S. population (Figure 1). Supporting a $4 billion sport fishing industry, plus $16 billion annually in boating, 1.5 million U.S. jobs, and $62 billion in annual wages directly, the Great Lakes form the backbone of a regional economy that is vital to the United States as a whole (see http://www.miseagrant.umich.edu/downloads/economy/11-708-Great-Lakes-Jobs.pdf). Yet the grandeur and importance of this freshwater resource are little understood, not only by people in the rest of the country but also by many in the region itself. To help address this lack of knowledge, the Centers for Ocean Sciences Education Excellence (COSEE) Great Lakes, supported by the U.S. National Science Foundation and the National Oceanic and Atmospheric Administration, developed literacy principles for the Great Lakes to serve as a guide for education of students and the public. These “Great Lakes Literacy Principles” represent an understanding of the Great Lakes' influences on society and society's influences on the Great Lakes.

  1. Improving the Accuracy of the Water Surface Cover Type in the 30 m FROM-GLC Product

    Directory of Open Access Journals (Sweden)

    Luyan Ji

    2015-10-01

    Full Text Available The finer resolution observation and monitoring of the global land cover (FROM-GLC) product is the first 30 m resolution global land cover product from which one can extract a global water mask. However, two major types of misclassification exist in this product, due to spectral similarity and spectral mixing. Mountain and cloud shadows are often incorrectly classified as water, since both have very low reflectance, while water pixels at the boundaries of water bodies tend to be misclassified as land. In this paper, we aim to improve the accuracy of the 30 m FROM-GLC water mask by addressing these two types of errors. For the first, we adopt an object-based method: for every water object in the FROM-GLC water mask we compute a topographical feature, a spectral feature, and the geometrical relation to clouds, and set specific rules to determine whether the water object is misclassified. For the second, we perform a local spectral unmixing using a two-endmember linear mixing model for each pixel falling in the water-land boundary zone, i.e., 8-neighborhood connected to water-land boundary pixels. Pixels with sufficiently large water fractions are labeled as water. The procedure is automatic. Experimental results show that the total area of inland water decreased by 15.83% in the new global water mask compared with the FROM-GLC water mask. Specifically, more than 30% of the FROM-GLC water objects have been relabeled as shadows, and nearly 8% of land pixels in the water-land boundary zone have been relabeled as water, whereas fewer than 2% of water pixels in the same zone have been relabeled as land. As a result, both the user's accuracy and the Kappa coefficient of the new water mask (UA = 88.39%, Kappa = 0.87) are substantially higher than those of the FROM-GLC product (UA = 81.97%, Kappa = 0.81).
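    For a two-endmember linear mixing model, the per-pixel least-squares water fraction has a simple closed form: project the pixel's reflectance onto the line between the land and water endmembers. A minimal sketch with hypothetical band reflectances (names and values are illustrative, not from the paper):

    ```python
    def water_fraction(pixel, water_em, land_em):
        """Least-squares fraction f in pixel = f*water + (1-f)*land
        (two-endmember linear mixing), clipped to [0, 1]."""
        d = [w - l for w, l in zip(water_em, land_em)]  # endmember difference
        num = sum((p - l) * di for p, l, di in zip(pixel, land_em, d))
        den = sum(di * di for di in d)
        return min(1.0, max(0.0, num / den))

    # Hypothetical reflectances in three bands
    water = [0.02, 0.03, 0.01]
    land  = [0.10, 0.20, 0.30]
    mixed = [0.06, 0.115, 0.155]   # exactly half water, half land
    f = water_fraction(mixed, water, land)
    print(round(f, 2))  # 0.5
    is_water = f > 0.5  # label the pixel water if its fraction is large enough
    ```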

  2. Acute imaging does not improve ASTRAL score's accuracy despite having a prognostic value.

    OpenAIRE

    Ntaios, G.; Papavasileiou, V.; Faouzi, M.; Vanacker, P.; Wintermark, M.; Michel, P.

    2014-01-01

    BACKGROUND: The ASTRAL score was recently shown to reliably predict three-month functional outcome in patients with acute ischemic stroke. AIM: The study aims to investigate whether information from multimodal imaging increases ASTRAL score's accuracy. METHODS: All patients registered in the ASTRAL registry until March 2011 were included. In multivariate logistic-regression analyses, we added covariates derived from parenchymal, vascular, and perfusion imaging to the 6-parameter model o...

  3. Improvement in the accuracy of polymer gel dosimeters using scintillating fibers

    International Nuclear Information System (INIS)

    Tremblay, Nicolas M; Hubert-Tremblay, Vincent; Bujold, Rachel; Beaulieu, Luc; Lepage, Martin

    2010-01-01

    We propose a novel method for the absolute calibration of polyacrylamide gel (PAG) dosimeters with one or more reference scintillating fiber dosimeters inserted inside the gel. Four calibrated scintillating fibers were inserted into a cylindrical glass container filled with a PAG dosimeter irradiated with a wedge filtered 6 MV photon beam. Calibration curves using small glass vials containing the same gel as the cylindrical containers were used to obtain a first calibration curve. This calibration curve was then adjusted with the dose measured with one of the scintillating fibers in a low gradient part of the field using different approaches. Among these, it was found that a translation of the gel calibration curve yielded the highest accuracy with PAG dosimeters.
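    The translation adjustment described above amounts to shifting the vial-based calibration curve by a constant dose offset so that it reproduces the fiber-measured dose at a reference point in a low-gradient region. A minimal sketch with hypothetical doses (the function name and values are illustrative):

    ```python
    def translated_dose(dose_from_vials, fiber_dose_ref, vial_dose_ref):
        """Shift (translate) the vial-based calibration curve by a constant so it
        matches the dose measured by a reference scintillating fiber at one point."""
        offset = fiber_dose_ref - vial_dose_ref
        return dose_from_vials + offset

    # Hypothetical: vial calibration reads 1.8 Gy where the fiber measured 2.0 Gy,
    # so every dose read off the vial curve is shifted up by 0.2 Gy.
    print(round(translated_dose(3.5, fiber_dose_ref=2.0, vial_dose_ref=1.8), 2))  # 3.7
    ```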

  4. Pushing the limits of NAA. Accuracy, uncertainty and detection limits

    International Nuclear Information System (INIS)

    Greenberg, R.R.

    2008-01-01

    This paper describes some highlights from the author's efforts to improve neutron activation analysis (NAA) detection limits through development and optimization of radiochemical separations, as well as to improve the overall accuracy of NAA measurements by identifying, quantifying and reducing measurement biases and uncertainties. Efforts to demonstrate the metrological basis of NAA, and to establish it as a 'Primary Method of Measurement' will be discussed. (author)

  5. Geometric accuracy of wax blade models manufactured in silicon moulds

    Directory of Open Access Journals (Sweden)

    G. Budzik

    2010-01-01

    Full Text Available The article presents the test results of the geometric accuracy of wax blade models manufactured in silicon moulds in the Rapid Tooling process, with the application of the Vacuum Casting technology. In batch production, casting waxes are designed for the manufacture of models and components of model sets through injection into a metal die. The objective of the tests was to determine the possibility of using traditional wax for the production of casting models in the rapid prototyping process. Blade models made of five types of casting wax were measured. Determining the geometric accuracy of wax blade models makes it possible to introduce individual modifications aimed at improving their shape, in order to increase the dimensional accuracy of blade models manufactured in the rapid prototyping process.

  6. High-throughput microsatellite genotyping in ecology: improved accuracy, efficiency, standardization and success with low-quantity and degraded DNA.

    Science.gov (United States)

    De Barba, M; Miquel, C; Lobréaux, S; Quenette, P Y; Swenson, J E; Taberlet, P

    2017-05-01

    Microsatellite markers have played a major role in ecological, evolutionary and conservation research during the past 20 years. However, technical constrains related to the use of capillary electrophoresis and a recent technological revolution that has impacted other marker types have brought to question the continued use of microsatellites for certain applications. We present a study for improving microsatellite genotyping in ecology using high-throughput sequencing (HTS). This approach entails selection of short markers suitable for HTS, sequencing PCR-amplified microsatellites on an Illumina platform and bioinformatic treatment of the sequence data to obtain multilocus genotypes. It takes advantage of the fact that HTS gives direct access to microsatellite sequences, allowing unambiguous allele identification and enabling automation of the genotyping process through bioinformatics. In addition, the massive parallel sequencing abilities expand the information content of single experimental runs far beyond capillary electrophoresis. We illustrated the method by genotyping brown bear samples amplified with a multiplex PCR of 13 new microsatellite markers and a sex marker. HTS of microsatellites provided accurate individual identification and parentage assignment and resulted in a significant improvement of genotyping success (84%) of faecal degraded DNA and costs reduction compared to capillary electrophoresis. The HTS approach holds vast potential for improving success, accuracy, efficiency and standardization of microsatellite genotyping in ecological and conservation applications, especially those that rely on profiling of low-quantity/quality DNA and on the construction of genetic databases. We discuss and give perspectives for the implementation of the method in the light of the challenges encountered in wildlife studies. © 2016 John Wiley & Sons Ltd.

  7. STARD-BLCM: Standards for the Reporting of Diagnostic accuracy studies that use Bayesian Latent Class Models

    DEFF Research Database (Denmark)

    Kostoulas, Polychronis; Nielsen, Søren S.; Branscum, Adam J.

    2017-01-01

    The Standards for the Reporting of Diagnostic Accuracy (STARD) statement, which was recently updated to the STARD2015 statement, was developed to encourage complete and transparent reporting of test accuracy studies. Although STARD principles apply broadly, the checklist is limited to studies… STARD-BLCM (Standards for Reporting of Diagnostic accuracy studies that use Bayesian Latent Class Models) will facilitate improved quality of reporting on the design, conduct and results of diagnostic accuracy studies that use Bayesian latent class models.

  8. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    Science.gov (United States)

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with the measurements by two laser displacement sensors. The proposed methodology requires only inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape with sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366
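    Sub-pixel targeting of the kind described above refines an integer-pixel match location to a fractional position. A common, simpler alternative to full least-square image matching is a parabolic fit through the correlation peak and its two neighbours; a minimal 1-D sketch with a hypothetical correlation profile:

    ```python
    def subpixel_peak(scores):
        """Refine an integer-pixel correlation peak to sub-pixel precision by
        fitting a parabola through the peak sample and its two neighbours."""
        i = max(range(len(scores)), key=scores.__getitem__)
        if i == 0 or i == len(scores) - 1:
            return float(i)  # peak at the edge: no neighbours to interpolate with
        ym, y0, yp = scores[i - 1], scores[i], scores[i + 1]
        return i + 0.5 * (ym - yp) / (ym - 2 * y0 + yp)

    # Hypothetical correlation profile whose true peak lies between samples 2 and 3
    scores = [0.1, 0.5, 0.9, 0.9, 0.5, 0.1]
    print(subpixel_peak(scores))  # 2.5
    ```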

  9. Data requirements of GREAT-ER: Modelling and validation using LAS in four UK catchments

    International Nuclear Information System (INIS)

    Price, Oliver R.; Munday, Dawn K.; Whelan, Mick J.; Holt, Martin S.; Fox, Katharine K.; Morris, Gerard; Young, Andrew R.

    2009-01-01

    Higher-tier environmental risk assessments on 'down-the-drain' chemicals in river networks can be conducted using models such as GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers). It is important these models are evaluated and their sensitivities to input variables understood. This study had two primary objectives: evaluate GREAT-ER model performance, comparing simulated modelled predictions for LAS (linear alkylbenzene sulphonate) with measured concentrations, for four rivers in the UK, and investigate model sensitivity to input variables. We demonstrate that the GREAT-ER model is very sensitive to variability in river discharges. However it is insensitive to the form of distributions used to describe chemical usage and removal rate in sewage treatment plants (STPs). It is concluded that more effort should be directed towards improving empirical estimates of effluent load and reducing uncertainty associated with usage and removal rates in STPs. Simulations could be improved by incorporating the effect of river depth on dissipation rates. - Validation of GREAT-ER.

  10. Data requirements of GREAT-ER: Modelling and validation using LAS in four UK catchments

    Energy Technology Data Exchange (ETDEWEB)

    Price, Oliver R., E-mail: oliver.price@unilever.co [Safety and Environmental Assurance Centre, Unilever, Colworth Science Park, Sharnbrook, Bedfordshire MK44 1LQ (United Kingdom); Munday, Dawn K. [Safety and Environmental Assurance Centre, Unilever, Colworth Science Park, Sharnbrook, Bedfordshire MK44 1LQ (United Kingdom); Whelan, Mick J. [Department of Natural Resources, School of Applied Sciences, Cranfield University, College Road, Cranfield, Bedfordshire MK43 0AL (United Kingdom); Holt, Martin S. [ECETOC, Ave van Nieuwenhuyse 4, Box 6, B-1160 Brussels (Belgium); Fox, Katharine K. [85 Park Road West, Birkenhead, Merseyside CH43 8SQ (United Kingdom); Morris, Gerard [Environment Agency, Phoenix House, Global Avenue, Leeds LS11 8PG (United Kingdom); Young, Andrew R. [Wallingford HydroSolutions Ltd, Maclean building, Crowmarsh Gifford, Wallingford, Oxon OX10 8BB (United Kingdom)

    2009-10-15

    Higher-tier environmental risk assessments on 'down-the-drain' chemicals in river networks can be conducted using models such as GREAT-ER (Geography-referenced Regional Exposure Assessment Tool for European Rivers). It is important these models are evaluated and their sensitivities to input variables understood. This study had two primary objectives: evaluate GREAT-ER model performance, comparing simulated modelled predictions for LAS (linear alkylbenzene sulphonate) with measured concentrations, for four rivers in the UK, and investigate model sensitivity to input variables. We demonstrate that the GREAT-ER model is very sensitive to variability in river discharges. However it is insensitive to the form of distributions used to describe chemical usage and removal rate in sewage treatment plants (STPs). It is concluded that more effort should be directed towards improving empirical estimates of effluent load and reducing uncertainty associated with usage and removal rates in STPs. Simulations could be improved by incorporating the effect of river depth on dissipation rates. - Validation of GREAT-ER.

  11. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    Science.gov (United States)

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both raw sequencing data and the reference genome that can yield better quality sequence reads, SNP-calls, variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined utilizing a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) based on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at: http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/. Contact: fabian.menges@nyu.edu.

  12. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Prashant, E-mail: p.patel@bham.ac.uk [University of Birmingham, School of Cancer Sciences, Vincent Drive (United Kingdom); Rangarajan, Balaji; Mangat, Kamarjit, E-mail: kamarjit.mangat@uhb.nhs.uk, E-mail: kamarjit.mangat@nhs.net [University Hospital Birmingham NHS Trust, Department of Radiology (United Kingdom)

    2015-08-15

    Purpose: Various methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease. Materials and Methods: Fifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique with the use of a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open within the stricture, the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively. Results: We report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication. Conclusion: We propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  13. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    International Nuclear Information System (INIS)

    Patel, Prashant; Rangarajan, Balaji; Mangat, Kamarjit

    2015-01-01

    Purpose: Various methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease. Materials and Methods: Fifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique with the use of a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open within the stricture, the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively. Results: We report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication. Conclusion: We propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  14. Short-term Inundation Forecasting for Tsunamis Version 4.0 Brings Forecasting Speed, Accuracy, and Capability Improvements to NOAA's Tsunami Warning Centers

    Science.gov (United States)

    Sterling, K.; Denbo, D. W.; Eble, M. C.

    2016-12-01

    Short-term Inundation Forecasting for Tsunamis (SIFT) software was developed by NOAA's Pacific Marine Environmental Laboratory (PMEL) for use in tsunami forecasting and has been used by both U.S. Tsunami Warning Centers (TWCs) since 2012, when SIFTv3.1 was operationally accepted. Since then, advancements in research and modeling have resulted in several new features being incorporated into SIFT forecasting. Following the priorities and needs of the TWCs, upgrades to SIFT forecasting were implemented in SIFTv4.0, scheduled to become operational in October 2016. Because every minute counts in the early warning process, two major time-saving features were implemented in SIFTv4.0. To increase processing speeds and generate high-resolution flooding forecasts more quickly, the tsunami propagation and inundation codes were modified to run on Graphics Processing Units (GPUs). To reduce time demand on duty scientists during an event, an automated DART inversion (or fitting) process was implemented. To increase forecasting accuracy, the forecasted amplitudes and inundations were adjusted to include dynamic tidal oscillations, thereby reducing the over-estimates of flooding common in SIFTv3.1 due to the static tide stage conservatively set at Mean High Water. Further improvements to forecasts were gained through the assimilation of additional real-time observations. Cabled array measurements from Bottom Pressure Recorders (BPRs) in the Ocean Networks Canada NEPTUNE network are now available to SIFT for use in the inversion process. To better meet the needs of harbor masters and emergency managers, SIFTv4.0 adds a tsunami currents graphical product to the suite of disseminated forecast results. When delivered, these new features in SIFTv4.0 will improve the operational tsunami forecasting speed, accuracy, and capabilities at NOAA's Tsunami Warning Centers.

  15. Measurement improvements of heat flux probes for internal combustion engine; Nainen kikan ni okeru netsuryusokukei no kaihatsu to kento

    Energy Technology Data Exchange (ETDEWEB)

    Tajima, H; Tasaka, H [Miyazaki University, Miyazaki (Japan)

    1997-10-01

    In heat flux measurement in engines, the material properties of the heat flux probe and numerical predictions of their influence have been discussed more often than practical measurement accuracy. This study featured a process for the quantitative examination of heat flux probes. Although the process required direct comparison among all the probes and additional measurements in a constant-volume bomb, the precision of heat flux measurement was greatly improved so that the essential characteristics of heat transfer in engines can be detected. 9 refs., 8 figs., 1 tab.

  16. [In vivo model to evaluate the accuracy of complete-tooth spectrophotometer for dental clinics].

    Science.gov (United States)

    Liu, Feng; Yang, Jian; Xu, Tong-Kai; Xu, Ming-Ming; Ma, Yu

    2011-02-01

    To test the ΔE between values measured with the Crystaleye complete-tooth spectrophotometer and the true values, and to evaluate the accuracy rate of the spectrophotometer. Twenty prosthodontists participated in the study. Each of them used the Vita 3D-Master shade guide to perform shade matching, and used the Crystaleye complete-tooth spectrophotometer (before and after test training) to measure the middle of eight fixed tabs from the shade guide in the dark box. The results of shade matching and spectrophotometer measurement were recorded, and the accuracy rates of shade matching and of the spectrophotometer before and after training were calculated. The average accuracy rate of shade matching was 49%. The average accuracy rate of the spectrophotometer before and after training was 83% and 99%, respectively. The accuracy of the spectrophotometer was significantly higher than that of shade matching, and training can improve the accuracy rate.
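    The ΔE referred to above is a CIELAB color difference. The abstract does not state which formula was used; as the simplest case, the CIE76 definition is the Euclidean distance between two (L*, a*, b*) triples:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical measured vs. reference shade-tab values (illustrative only)
measured = (72.1, 1.8, 18.5)
reference = (71.0, 2.0, 17.9)
print(round(delta_e_cie76(measured, reference), 2))
```

    Later formulas (CIE94, CIEDE2000) weight the three terms differently, but the comparison logic against a tolerance threshold is the same.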

  17. Combining Diffusion Tensor Metrics and DSC Perfusion Imaging: Can It Improve the Diagnostic Accuracy in Differentiating Tumefactive Demyelination from High-Grade Glioma?

    Science.gov (United States)

    Hiremath, S B; Muraleedharan, A; Kumar, S; Nagesh, C; Kesavadas, C; Abraham, M; Kapilamoorthy, T R; Thomas, B

    2017-04-01

    Tumefactive demyelinating lesions with atypical features can mimic high-grade gliomas on conventional imaging sequences. The aim of this study was to assess the role of conventional imaging, DTI metrics (p:q tensor decomposition), and DSC perfusion in differentiating tumefactive demyelinating lesions and high-grade gliomas. Fourteen patients with tumefactive demyelinating lesions and 21 patients with high-grade gliomas underwent brain MR imaging with conventional, DTI, and DSC perfusion imaging. Imaging sequences were assessed for differentiation of the lesions. DTI metrics in the enhancing areas and perilesional hyperintensity were obtained by ROI analysis, and the relative CBV values in enhancing areas were calculated on DSC perfusion imaging. Conventional imaging sequences had a sensitivity of 80.9% and specificity of 57.1% in differentiating high-grade gliomas (P = .049) from tumefactive demyelinating lesions. DTI metrics (p:q tensor decomposition) and DSC perfusion demonstrated a statistically significant difference in the mean values of ADC, the isotropic component of the diffusion tensor, the anisotropic component of the diffusion tensor, the total magnitude of the diffusion tensor, and rCBV among enhancing portions in tumefactive demyelinating lesions and high-grade gliomas (P ≤ .02), with the highest specificity for ADC, the anisotropic component of the diffusion tensor, and relative CBV (92.9%). Mean fractional anisotropy values showed no statistically significant difference between tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI and DSC parameters improved the diagnostic accuracy (area under the curve = 0.901). Addition of a heterogeneous enhancement pattern to DTI and DSC parameters improved it further (area under the curve = 0.966). The sensitivity increased from 71.4% to 85.7% after the addition of the enhancement pattern. DTI and DSC perfusion add profoundly to conventional imaging in differentiating tumefactive demyelinating lesions from high-grade gliomas.
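    The p:q decomposition named above splits the diffusion tensor into an isotropic magnitude p and an anisotropic (deviatoric) magnitude q computed from the tensor's eigenvalues; the total magnitude L satisfies L² = p² + q². A sketch of the standard definitions, assuming eigenvalues (in mm²/s) come from an upstream tensor fit:

```python
import math

def pq_decomposition(l1, l2, l3):
    """Decompose diffusion-tensor eigenvalues into the isotropic component p,
    the anisotropic (deviatoric) component q, the total magnitude L, and FA."""
    md = (l1 + l2 + l3) / 3.0                        # mean diffusivity
    p = math.sqrt(3.0) * md                          # isotropic component
    q = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    L = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)       # total magnitude, L^2 = p^2 + q^2
    fa = math.sqrt(1.5) * q / L                      # fractional anisotropy
    return p, q, L, fa
```

    For a perfectly isotropic voxel q (and hence FA) is zero, which is why FA alone discriminated poorly while q still carried signal.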

  18. The effect of spectral filters on reading speed and accuracy following stroke

    Directory of Open Access Journals (Sweden)

    Ian G. Beasley

    2013-07-01

    Conclusions: The present study has shown that spectral filters can immediately improve reading speed and accuracy following stroke, whereas prolonged use does not increase these benefits significantly.

  19. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD initiative

    International Nuclear Information System (INIS)

    Bossuyt, P.M.; Reitsma, J.B.; Bruns, D.E.; Gatsonis, C.A.; Glasziou, P.P.; Irwig, L.M.; Lijmer, J.G.; Moher, D.; Rennie, D.; Vet, H.C.W. de

    2003-01-01

    AIM: To improve the accuracy and completeness of reporting of studies of diagnostic accuracy in order to allow readers to assess the potential for bias in a study and to evaluate the generalisability of its results. METHODS: The standards for reporting of diagnostic accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a 2-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of diagnostic accuracy. RESULTS: The search for published guidelines about diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. At the consensus meeting, participants shortened the list to a 25-item checklist, by using evidence whenever available. A prototype of a flow diagram provides information about the method of recruitment of patients, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. CONCLUSIONS: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of diagnostic accuracy should improve to the advantage of clinicians, researchers, reviewers, journals, and the public.

  20. How does language model size affect speech recognition accuracy for the Turkish language?

    Directory of Open Access Journals (Sweden)

    Behnam ASEFİSARAY

    2016-05-01

    In this paper we aimed at investigating the effect of Language Model (LM) size on Speech Recognition (SR) accuracy. We also provided details of our approach for obtaining the LM for Turkish. Since the LM is obtained by statistical processing of raw text, we expect that increasing the size of the data available for training the LM will improve SR accuracy. Since this study is based on recognition of Turkish, which is a highly agglutinative language, it is important to find out the appropriate size for the training data: the minimum required data size is expected to be much higher than that needed to train a language model for a language with a low level of agglutination, such as English. In the experiments we also tried to adjust the Language Model Weight (LMW) and Active Token Count (ATC) parameters of the LM, as these are expected to differ for a highly agglutinative language. We showed that increasing the training data size to an appropriate level improved recognition accuracy, whereas changes to LMW and ATC did not have a positive effect on Turkish speech recognition accuracy.

  1. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    Science.gov (United States)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error of an industrial robot, and it plays a significantly more important role than other error factors. A compensation model for kinematic geometric error is proposed in this article. Many methods can be used to test robot accuracy, which raises the question of how to determine which method is better. In this article, a method is used to compare two approaches to robot accuracy testing: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were used to test the robot accuracy according to the standard. Based on the compensation results, the better method, which can improve the robot accuracy appreciably, is identified.

  2. The Accuracy Of Fuzzy Sugeno Method With Antropometry On Determination Natural Patient Status

    Science.gov (United States)

    Syahputra, Dinur; Tulus; Sawaluddin

    2017-12-01

    Anthropometry is one of the processes that can be used to assess nutritional status. In general, anthropometry is defined as the measurement of body size in terms of nutrition, reviewed across various age levels and nutritional levels. Nutritional status is a description of the balance between an individual's nutritional intake and the body's needs. Fuzzy logic is a logic that admits degrees of truth between right and wrong, i.e. between 0 and 1. The Sugeno method is used because the calculation of nutritional status has so far still been done by anthropometry alone. Information technology is now growing in every aspect, including the calculation of data taken from anthropometry. In this case the calculation can use the Fuzzy Sugeno method, in order to determine the accuracy obtained. The results show that Fuzzy Sugeno integrated with anthropometry has an accuracy of 81.48%.
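    A zero-order Sugeno system computes a weighted average of crisp rule outputs, with the weights given by the firing strength of each rule's membership function. A toy sketch with a single BMI input; the membership ranges and consequent scores below are illustrative assumptions, not the paper's actual rule base:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_nutritional_score(bmi):
    """Zero-order Sugeno inference: weighted average of crisp rule outputs.
    Rule ranges and consequent scores are illustrative assumptions."""
    rules = [
        (trimf(bmi, 10, 15, 18.5), 1.0),   # "underweight" -> score 1
        (trimf(bmi, 17, 22, 27), 2.0),     # "normal"      -> score 2
        (trimf(bmi, 25, 30, 40), 3.0),     # "overweight"  -> score 3
    ]
    num = sum(w * z for w, z in rules)     # weighted sum of consequents
    den = sum(w for w, _ in rules)         # total firing strength
    return num / den if den else None
```

    An accuracy figure like the 81.48% above would then come from comparing such defuzzified scores against anthropometric reference classifications.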

  3. Speed and accuracy of visual image discrimination by rats

    Directory of Open Access Journals (Sweden)

    Pamela eReinagel

    2013-12-01

    The trade-off between speed and accuracy of sensory discrimination has most often been studied using sensory stimuli that evolve over time, such as random dot motion discrimination tasks. We previously reported that when rats perform motion discrimination, correct trials have longer reaction times than errors, accuracy increases with reaction time, and reaction time increases with stimulus ambiguity. In such experiments, new sensory information is continually presented, which could partly explain interactions between reaction time and accuracy. The present study shows that a changing physical stimulus is not essential to those findings. Freely behaving rats were trained to discriminate between two static visual images in a self-paced, 2-alternative forced-choice (2AFC) reaction time task. Each trial was initiated by the rat, and the two images were presented simultaneously and persisted until the rat responded, with no time limit. Reaction times were longer in correct trials than in error trials, and accuracy increased with reaction time, comparable to results previously reported for rats performing motion discrimination. In the motion task, coherence has been used to vary discrimination difficulty. Here, morphs between the previously learned images were used to parametrically vary the image similarity. In randomly interleaved trials, rats took more time on average to respond in trials in which they had to discriminate more similar stimuli. For both the motion and image tasks, the dependence of reaction time on ambiguity is weak, as if rats prioritized speed over accuracy. Therefore we asked whether rats can change the priority of speed and accuracy adaptively in response to a change in reward contingencies. For two rats, the penalty delay was increased from two to six seconds. When the penalty was longer, reaction times increased, and accuracy improved. This demonstrates that rats can flexibly adjust their behavioral strategy in response to a change in reward contingencies.

  4. New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting, October 28, 2011.

    Science.gov (United States)

    Walsh, John; Roberts, Ruth; Vigersky, Robert A; Schwartz, Frank

    2012-03-01

    Glucose meters (GMs) are routinely used for self-monitoring of blood glucose by patients and for point-of-care glucose monitoring by health care providers in outpatient and inpatient settings. Although widely assumed to be accurate, numerous reports of inaccuracies with resulting morbidity and mortality have been noted. Insulin dosing errors based on inaccurate GMs are most critical. On October 28, 2011, the Diabetes Technology Society invited 45 diabetes technology clinicians who were attending the 2011 Diabetes Technology Meeting to participate in a closed-door meeting entitled New Criteria for Assessing the Accuracy of Blood Glucose Monitors. This report reflects the opinions of most of the attendees of that meeting. The Food and Drug Administration (FDA), the public, and several medical societies are currently in dialogue to establish a new standard for GM accuracy. This update to the FDA standard is driven by improved meter accuracy, technological advances (pumps, bolus calculators, continuous glucose monitors, and insulin pens), reports of hospital and outpatient deaths, consumer complaints about inaccuracy, and research studies showing that several approved GMs failed to meet FDA or International Organization for Standardization standards in postapproval testing. These circumstances mandate a set of new GM standards that appropriately match the GMs' analytical accuracy to the clinical accuracy required for their intended use, as well as ensuring their ongoing accuracy following approval. The attendees of the New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting proposed a graduated standard and other methods to improve GM performance, which are discussed in this meeting report. © 2012 Diabetes Technology Society.
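    A graduated standard of the kind proposed is, at its core, a rule for how far a meter reading may deviate from a laboratory reference. A sketch of such a check; the thresholds below are illustrative, modeled on ISO 15197:2013-style limits, not the criteria adopted at the meeting:

```python
def within_accuracy_limits(reference, reading, low_cut=100.0,
                           abs_limit=15.0, rel_limit=0.15):
    """True if a meter reading (mg/dL) meets a +/-15 mg/dL (below 100 mg/dL)
    or +/-15% (at or above 100 mg/dL) criterion versus a lab reference.
    Thresholds are illustrative, modeled on ISO 15197:2013-style limits."""
    if reference < low_cut:
        return abs(reading - reference) <= abs_limit
    return abs(reading - reference) <= rel_limit * reference

def pass_rate(pairs):
    """Fraction of (reference, reading) pairs meeting the criterion."""
    hits = sum(within_accuracy_limits(r, m) for r, m in pairs)
    return hits / len(pairs)
```

    A standard then additionally requires a minimum pass rate (e.g. 95% of readings within the limits) in both pre- and post-approval testing.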

  5. Direction of Arrival Estimation Accuracy Enhancement via Using Displacement Invariance Technique

    Directory of Open Access Journals (Sweden)

    Youssef Fayad

    2015-01-01

    A new algorithm for improving Direction of Arrival Estimation (DOAE) accuracy has been developed. Two contributions are introduced. First, the Doppler frequency shift that results from the target movement is estimated using the displacement invariance technique (DIT). Second, the effect of Doppler frequency is modeled and incorporated into the ESPRIT algorithm in order to increase the estimation accuracy. It is worth mentioning that the subspace approach has been employed in the ESPRIT and DIT methods to reduce the computational complexity and the model's nonlinearity effect. The DOAE accuracy has been verified against the closed-form Cramér-Rao bound (CRB). The simulation results of the proposed algorithm are better than those of the previous estimation techniques, leading to enhanced estimator performance.
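    For context, the ESPRIT algorithm into which the Doppler model is incorporated estimates DOA from the rotational invariance between two shifted subarrays. A minimal least-squares ESPRIT sketch for a uniform linear array, without the paper's Doppler compensation or DIT step (the simulated scenario below is an assumption for illustration):

```python
import numpy as np

def esprit_doa(X, d, spacing=0.5):
    """Least-squares ESPRIT for a uniform linear array.
    X: M x N snapshot matrix, d: number of sources,
    spacing: element spacing in wavelengths."""
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    _, V = np.linalg.eigh(R)                      # eigenvalues ascending
    Es = V[:, -d:]                                # signal subspace
    phi = np.linalg.pinv(Es[:-1, :]) @ Es[1:, :]  # rotational operator
    eigs = np.linalg.eigvals(phi)
    return np.degrees(np.arcsin(np.angle(eigs) / (2 * np.pi * spacing)))

# Simulated single source at 20 degrees on an 8-element half-wavelength ULA
rng = np.random.default_rng(0)
M, N = 8, 200
theta = np.radians(20.0)
a = np.exp(1j * 2 * np.pi * 0.5 * np.arange(M) * np.sin(theta))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise
print(np.round(esprit_doa(X, d=1), 1))
```

    Target motion shifts the phase relation between the subarrays, which is exactly why the paper models the Doppler term before forming the rotational operator.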

  6. On the Accuracy of Language Trees

    Science.gov (United States)

    Pompei, Simone; Loreto, Vittorio; Tria, Francesca

    2011-01-01

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information is typically lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete, and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve it. PMID:21674034
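    The standard tree distance that such generalizations start from is the Robinson-Foulds (symmetric-difference) distance, which counts the clades present in one tree but not the other. A sketch for rooted trees given as nested tuples (the paper's own generalized scores are not reproduced here):

```python
def clades(tree):
    """Collect the non-trivial clades (frozensets of leaf names) of a rooted
    tree given as nested tuples, e.g. (("A", "B"), ("C", "D"))."""
    found = set()
    def leaves(node):
        if isinstance(node, tuple):
            s = frozenset().union(*(leaves(child) for child in node))
            found.add(s)
            return s
        return frozenset([node])
    leaves(tree)
    return found

def robinson_foulds(t1, t2):
    """Symmetric-difference (Robinson-Foulds) distance between rooted trees."""
    return len(clades(t1) ^ clades(t2))
```

    Two topologies that group the same leaves differently share only the root clade, so every internal grouping contributes to the distance.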

  7. On the accuracy of language trees.

    Directory of Open Access Journals (Sweden)

    Simone Pompei

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information is typically lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete, and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve it.

  8. Temporary shielding of hot spots in the drainage areas of cutaneous melanoma improves accuracy of lymphoscintigraphic sentinel lymph node diagnostics

    International Nuclear Information System (INIS)

    Maza, S.; Valencia, R.; Geworski, L.; Zander, A.; Munz, D.L.; Draeger, E.; Winter, H.; Sterry, W.

    2002-01-01

    Detection of the "true" sentinel lymph nodes, permitting correct staging of regional lymph nodes, is essential for management and prognostic assessment in malignant melanoma. In this study, it was prospectively evaluated whether simple temporary shielding of hot spots in lymphatic drainage areas could improve the accuracy of sentinel lymph node diagnostics. In 100 consecutive malignant melanoma patients (45 women, 55 men; age 11-91 years), dynamic and static lymphoscintigraphy in various views was performed after strict intracutaneous application of technetium-99m nanocolloid (40-150 MBq; 0.05 ml/deposit) around the tumour (31 patients) or the biopsy scar (69 patients, safety distance 1 cm). The images were acquired with and without temporary lead shielding of the most prominent hot spots in the drainage area. In 33/100 patients, one or two additional sentinel lymph nodes that showed less tracer accumulation or were smaller (<1.5 cm) were detected after shielding. Four of these patients had metastases in the sentinel lymph nodes; the non-sentinel lymph nodes were tumour negative. In 3/100 patients, hot spots in the drainage area proved to be lymph vessels, lymph vessel intersections or lymph vessel ectasias after temporary shielding; hence, a node interpreted as a non-sentinel lymph node at first glance proved to be the real sentinel lymph node. In two of these patients, lymph node metastasis was histologically confirmed; the non-sentinel lymph nodes were tumour free. In 7/100 patients the exact course of lymph vessels could be mapped after shielding. In one of these patients, two additional sentinel lymph nodes (with metastasis) were detected. Overall, in 43/100 patients the temporary shielding yielded additional information, with sentinel lymph node metastases in 7%. In conclusion, when used in combination with dynamic acquisition in various views, temporary shielding of prominent hot spots in the drainage area of a malignant melanoma of the skin leads to an improved accuracy of lymphoscintigraphic sentinel lymph node diagnostics.

  9. Conceptual ecological models to guide integrated landscape monitoring of the Great Basin

    Science.gov (United States)

    Miller, D.M.; Finn, S.P.; Woodward, Andrea; Torregrosa, Alicia; Miller, M.E.; Bedford, D.R.; Brasher, A.M.

    2010-01-01

    The Great Basin Integrated Landscape Monitoring Pilot Project was developed in response to the need for a monitoring and predictive capability that addresses changes in broad landscapes and waterscapes. Human communities and needs are nested within landscapes formed by interactions among the hydrosphere, geosphere, and biosphere. Understanding the complex processes that shape landscapes and deriving ways to manage them sustainably while meeting human needs require sophisticated modeling and monitoring. This document summarizes current understanding of ecosystem structure and function for many of the ecosystems within the Great Basin using conceptual models. The conceptual ecosystem models identify key ecological components and processes, identify external drivers, develop a hierarchical set of models that address both site and landscape attributes, inform regional monitoring strategy, and identify critical gaps in our knowledge of ecosystem function. The report also illustrates an approach for temporal and spatial scaling from site-specific models to landscape models and for understanding cumulative effects. Eventually, conceptual models can provide a structure for designing monitoring programs, interpreting monitoring and other data, and assessing the accuracy of our understanding of ecosystem functions and processes.

  10. A Smartphone-Based Application Improves the Accuracy, Completeness, and Timeliness of Cattle Disease Reporting and Surveillance in Ethiopia

    Directory of Open Access Journals (Sweden)

    Tariku Jibat Beyene

    2018-01-01

    Accurate disease reporting, ideally in near real time, is a prerequisite to detecting disease outbreaks and implementing appropriate measures for their control. This study compared the performance of the traditional paper-based approach to animal disease reporting in Ethiopia to one using an application running on smartphones. In the traditional approach, the total number of cases for each disease or syndrome was aggregated by animal species and reported to each administrative level at monthly intervals; in the case of the smartphone application, demographic information and a detailed list of presenting signs, in addition to the putative disease diagnosis, were immediately available to all administrative levels via a Cloud-based server. While the smartphone-based approach resulted in much more timely reporting, there were delays due to limited connectivity; these ranged on average from 2 days (in well-connected areas) up to 13 days (in more rural locations). We outline the challenges that would likely be associated with any widespread rollout of a smartphone-based approach such as the one described in this study, but demonstrate that in the long run the approach offers significant benefits in terms of timeliness of disease reporting, improved data integrity, and greatly improved animal disease surveillance.

  11. Additional measures do not improve the diagnostic accuracy of the Hospital Admission Risk Profile for detecting downstream quality of life in community-dwelling older people presenting to a hospital emergency department.

    Science.gov (United States)

    Grimmer, K; Milanese, S; Beaton, K; Atlas, A

    2014-01-01

    The Hospital Admission Risk Profile (HARP) instrument is commonly used to assess risk of functional decline when older people are admitted to hospital. HARP has moderate diagnostic accuracy (65%) for downstream decreased scores in activities of daily living. This paper reports the diagnostic accuracy of HARP for downstream quality of life. It also tests whether adding other measures to HARP improves its diagnostic accuracy. One hundred and forty-eight independent community dwelling individuals aged 65 years or older were recruited in the emergency department of one large Australian hospital with a medical problem for which they were discharged without a hospital ward admission. Data, including age, sex, primary language, highest level of education, postcode, living status, requiring care for daily activities, using a gait aid, receiving formal community supports, instrumental activities of daily living in the last week, hospitalization and falls in the last 12 months, and mental state were collected at recruitment. HARP scores were derived from a formula that summed scores assigned to age, activities of daily living, and mental state categories. Physical and mental component scores of a quality of life measure were captured by telephone interview at 1 and 3 months after recruitment. HARP scores are moderately accurate at predicting downstream decline in physical quality of life, but did not predict downstream decline in mental quality of life. The addition of other variables to HARP did not improve its diagnostic accuracy for either measure of quality of life. HARP is a poor predictor of quality of life.
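    Whether added measures improve an instrument like HARP is typically judged by comparing discrimination before and after augmentation, for example via the area under the ROC curve. A sketch of the rank-based AUC estimate on hypothetical risk scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a randomly chosen positive case
    receives a higher risk score than a randomly chosen negative case."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

    If the AUC of the augmented score does not exceed that of HARP alone, the extra variables add no diagnostic value, which is the pattern this study reports.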

  12. Use of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) for improving the accuracy of the risk classification of type 1 diabetes.

    Science.gov (United States)

    Sosenko, Jay M; Skyler, Jay S; Mahon, Jeffrey; Krischer, Jeffrey P; Greenbaum, Carla J; Rafkin, Lisa E; Beam, Craig A; Boulware, David C; Matheson, Della; Cuthbertson, David; Herold, Kevan C; Eisenbarth, George; Palmer, Jerry P

    2014-04-01

    OBJECTIVE We studied the utility of the Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) for improving the accuracy of type 1 diabetes (T1D) risk classification in TrialNet Natural History Study (TNNHS) participants. RESEARCH DESIGN AND METHODS The cumulative incidence of T1D was compared between normoglycemic individuals with DPTRS values >7.00 and dysglycemic individuals in the TNNHS (n = 991). It was also compared between individuals with DPTRS values <7.00 and >7.00 among those with dysglycemia and those with multiple autoantibodies in the TNNHS. DPTRS values >7.00 were compared with dysglycemia for characterizing risk in Diabetes Prevention Trial-Type 1 (DPT-1) (n = 670) and TNNHS participants. The reliability of DPTRS values >7.00 was compared with dysglycemia in the TNNHS. RESULTS The cumulative incidence of T1D for normoglycemic TNNHS participants with DPTRS values >7.00 was comparable to that of participants with dysglycemia. Among those with dysglycemia, the cumulative incidence was much higher for those with DPTRS values >7.00 than for those with values <7.00. Dysglycemic individuals in DPT-1 were at much higher risk for T1D than those with dysglycemia in the TNNHS, a difference related to the higher proportion with DPTRS values >7.00. The proportion in the TNNHS reverting from dysglycemia to normoglycemia at the next visit was higher than the proportion reverting from DPTRS values >7.00 to values <7.00 (36 vs. 23%). CONCLUSIONS DPTRS thresholds can improve T1D risk classification accuracy by identifying high-risk normoglycemic and low-risk dysglycemic individuals. The 7.00 DPTRS threshold characterizes risk more consistently between populations and has greater reliability than dysglycemia.

  13. Accuracy synthesis of T-shaped exit fixed mechanism in a double-crystal monochromator

    International Nuclear Information System (INIS)

    Wang Fengqin; Cao Chongzhen; Wang Jidai; Li Yushan; Gao Xueguan

    2007-01-01

    A fixed exit is a key performance requirement for a double-crystal monochromator. In order to improve the height accuracy of the exit in a T-shaped exit fixed mechanism, an expression relating the height of the exit to the various original errors was derived using a geometrical analysis method. According to the independent action principle of original errors, the accuracy synthesis of the T-shaped exit fixed mechanism was studied using the equal accuracy method, and the tolerance ranges of the original errors were obtained. How to calculate the tolerance ranges of the original errors is explained with an example. (authors)
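    Under the equal accuracy method with linear (worst-case) error summation, each original error is allotted the same contribution to the output tolerance, so a source with influence coefficient k_i receives δ_i = T/(n·|k_i|). A sketch under those assumptions (the coefficient values below are illustrative, not the mechanism's actual parameters):

```python
def equal_accuracy_tolerances(total_tolerance, influence_coeffs):
    """Equal-accuracy allocation: each of the n original errors is allowed
    the same contribution T/n to the output error, so delta_i = T / (n * |k_i|).
    Linear (worst-case) error summation is assumed."""
    n = len(influence_coeffs)
    return [total_tolerance / (n * abs(k)) for k in influence_coeffs]

# Illustrative: output height tolerance 0.06 mm split over three error sources
tolerances = equal_accuracy_tolerances(0.06, [1.0, 2.0, 3.0])
print(tolerances)
```

    Sources with larger influence coefficients receive proportionally tighter tolerances, and the worst-case sum of the contributions recovers the total exactly.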

  14. Artificial reefs and reef restoration in the Laurentian Great Lakes

    Science.gov (United States)

    McLean, Matthew W.; Roseman, Edward; Pritt, Jeremy J.; Kennedy, Gregory W.; Manny, Bruce A.

    2015-01-01

    We reviewed the published literature to provide an inventory of Laurentian Great Lakes artificial reef projects and their purposes. We also sought to characterize physical and biological monitoring for artificial reef projects in the Great Lakes and determine the success of artificial reefs in meeting project objectives. We found records of 6 artificial reefs in Lake Erie, 8 in Lake Michigan, 3 in Lakes Huron and Ontario, and 2 in Lake Superior. We found 9 reefs in Great Lakes connecting channels and 6 reefs in Great Lakes tributaries. Objectives of artificial reef creation have included reducing impacts of currents and waves, providing safe harbors, improving sport-fishing opportunities, and enhancing/restoring fish spawning habitats. Most reefs in the lakes themselves were incidental (not created purposely for fish habitat) or built to improve local sport fishing, whereas reefs in tributaries and connecting channels were more frequently built to benefit fish spawning. Levels of assessment of reef performance varied; but long-term monitoring was uncommon as was assessment of physical attributes. Artificial reefs were often successful at attracting recreational species and spawning fish; however, population-level benefits of artificial reefs are unclear. Stressors such as sedimentation and bio-fouling can limit the effectiveness of artificial reefs as spawning enhancement tools. Our investigation underscores the need to develop standard protocols for monitoring the biological and physical attributes of artificial structures. Further, long-term monitoring is needed to assess the benefits of artificial reefs to fish populations and inform future artificial reef projects.

  15. Concepts for improving the accuracy of gas balance measurement at ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Härtl, T., E-mail: thomas.haertl@ipp.mpg.de; Rohde, V.; Mertens, V.

    2013-10-15

    The ITER fusion reactor, now under construction, will operate with a deuterium–tritium gas mixture. A fraction of this fusion fuel remains inside the machine through various retention mechanisms. Evaluating this retention in present fusion experiments is of crucial importance for estimating the expected tritium inventory in ITER, which must be limited for safety reasons. At ASDEX Upgrade (AUG), measurements with sufficient time resolution are needed to extrapolate from the current 10 s discharges to the 400 s (or longer) discharges intended for ITER. To achieve this, a new measurement system has been designed that achieves an accuracy of better than one percent.

  16. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    Science.gov (United States)

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
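
    The core of any rotation-matrix-and-translation-vector registration step is estimating the rigid transform that maps tracked points into the camera frame. The sketch below is not the paper's algorithm; it shows the standard least-squares (Kabsch/SVD) solution on hypothetical fiducial points, which is one common way such a transform is recovered from point correspondences.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch algorithm via SVD); src and dst are (N, 3) corresponding points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Demo on synthetic data: rotate hypothetical fiducial points 30 degrees
# about z, translate them, then recover the transform from correspondences.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(1).normal(size=(10, 3))
dst = src @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(src, dst)
```

    On noise-free correspondences the estimate reproduces the true transform exactly; with noisy tracker data the residual after applying `R_est` and `t_est` corresponds to the overlay error the paper measures in millimetres.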

  17. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes exhibit nonlinear behavior over some portion of their operating range, and the ability of neural networks to model nonlinear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a nonlinear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that improves the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors produced a more accurate model than scalar inputs, and retraining with a small number of the outlier (x, y) pairs improved generalization. (author)
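
    A minimal version of the setup the abstract studies, i.e. a feed-forward network trained by back-propagation to approximate a nonlinear function, can be written in a few lines of NumPy. The target function, layer width, learning rate, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy feed-forward back-propagation network (one tanh hidden layer)
# trained to approximate the nonlinear function y = sin(pi * x).
rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
Y = np.sin(np.pi * X)

W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, P0 = forward(X)
mse_before = float(((P0 - Y) ** 2).mean())

lr = 0.05
for _ in range(3000):
    H, P = forward(X)
    err = (P - Y) / len(X)            # mean-squared-error gradient at output
    gW2 = H.T @ err; gb2 = err.sum(axis=0)
    dH = err @ W2.T * (1.0 - H ** 2)  # back-propagate through tanh
    gW1 = X.T @ dH; gb1 = dH.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, P = forward(X)
mse_after = float(((P - Y) ** 2).mean())
```

    Comparing `mse_after` against the known target function is exactly the kind of direct accuracy check the abstract argues function approximation allows, in contrast to pattern-recognition tasks where no such ground-truth curve exists.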

  18. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    In form grinding, gear setting error is the main factor limiting grinding accuracy; we propose an effective method to improve form grinding accuracy by correcting this error through control of the machine operations. After establishing the geometric model of form grinding and representing the gear setting errors in homogeneous coordinates, a tooth mathematical model under gear setting error was obtained and simplified. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationships between the gear setting errors and tooth profile deviation, helix deviation, and cumulative pitch deviation were investigated, respectively, under gear eccentricity error, gear inclination error, and gear resultant error. An error compensation method is proposed based on solving the sensitivity coefficient matrix of the setting error on a five-axis CNC form grinding machine; simulation and experimental results demonstrate that the method effectively corrects the gear setting error and thereby further improves form grinding accuracy.
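
    The compensation step, inferring setting errors from measured deviations through a sensitivity coefficient matrix and commanding the opposite motion, can be sketched as a least-squares problem. The matrix entries and deviation values below are invented for illustration; the paper's actual coefficients come from its five-axis machine model.

```python
import numpy as np

# Hypothetical sensitivity coefficient matrix: rows are measured gear
# deviations (profile, helix, cumulative pitch), columns are setting
# errors (eccentricity, inclination). Values are illustrative only.
S = np.array([[0.8, 0.3],
              [0.2, 0.9],
              [0.5, 0.4]])
measured = np.array([12.0, 8.0, 9.0])       # deviations in micrometres

# Least-squares estimate of the setting errors that best explain the
# measured deviations; the compensation commands the opposite motion.
setting_err, *_ = np.linalg.lstsq(S, measured, rcond=None)
correction = -setting_err
residual = measured + S @ correction        # deviations left after correction
```

    Because there are more measured deviation components than setting-error unknowns, the system is overdetermined and the residual is minimized rather than driven exactly to zero.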

  19. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
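
    The idea of combining long-term and short-term link behaviors can be illustrated with a toy estimator. This is only loosely in the spirit of LACE: the conditional-reception definition of correlation, the window size, and the blending weight are all assumptions for illustration, not the paper's actual formulation.

```python
def link_correlation(rx_a, rx_b):
    """Empirical P(link b delivers | link a delivers), computed from
    0/1 packet-reception traces of equal length."""
    both = sum(1 for a, b in zip(rx_a, rx_b) if a and b)
    a_ok = sum(rx_a)
    return both / a_ok if a_ok else 0.0

def blended_estimate(history_a, history_b, window=4, weight=0.5):
    """Blend long-term (whole history) and short-term (recent window)
    correlation; window and weight are illustrative parameters."""
    long_term = link_correlation(history_a, history_b)
    short_term = link_correlation(history_a[-window:], history_b[-window:])
    return weight * long_term + (1.0 - weight) * short_term

# Hypothetical reception traces for two links overhearing the same sender.
rx_a = [1, 1, 1, 1, 0, 1, 1, 1]
rx_b = [1, 0, 1, 1, 0, 1, 1, 1]
estimate = blended_estimate(rx_a, rx_b)
```

    A protocol such as correlation-aware flooding would consult this estimate to decide whether overhearing on one link makes a retransmission on the other redundant.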

  20. Staging cancer of the uterus: A national audit of MRI accuracy

    International Nuclear Information System (INIS)

    Duncan, K.A.; Drinkwater, K.J.; Frost, C.; Remedios, D.; Barter, S.

    2012-01-01

    Aim: To report the results of a nationwide audit of the accuracy of magnetic resonance imaging (MRI) staging in uterine body cancer when staging myometrial invasion, cervical extension, and lymph node spread. Materials and methods: All UK radiology departments were invited to participate using a web-based tool for submitting anonymized data for a 12 month period. MRI staging was compared with histopathological staging using target accuracies of 85, 86, and 70%, respectively. Results: Of the departments performing MRI staging of endometrial cancer, 37/87 departments contributed. Targets for MRI staging were achieved for two of the three standards nationally, with diagnostic accuracy for depth of myometrial invasion, 82%; for cervical extension, 90%; and for pelvic nodal involvement, 94%; the latter two being well above the targets. However, only 13/37 (35%) of individual centres met the target for assessing depth of myometrial invasion, 31/36 (86%) for cervical extension, and 31/34 (91%) for pelvic nodal involvement. Statistical analysis demonstrated no significant difference for the use of intravenous contrast medium, but did show some evidence of increasing accuracy in assessment of depth of myometrial invasion with increasing caseload. Conclusion: Overall performance in the UK was good, with only the target for assessment of depth of myometrial invasion not being met. Inter-departmental variation was seen. One factor that may improve performance in assessment of myometrial invasion is a higher caseload. No other clear factors to improve performance were identified.
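
    The audit's headline comparison, agreement between MRI staging and the histopathological reference against a fixed target, reduces to a simple per-standard accuracy calculation. The sketch below uses invented toy cases, not audit data, and the 85% figure is the myometrial-invasion target quoted in the abstract.

```python
def staging_accuracy(mri, histology):
    """Fraction of cases where the MRI stage matches the
    histopathological reference standard."""
    agree = sum(1 for m, h in zip(mri, histology) if m == h)
    return agree / len(histology)

# Illustrative toy cases only: 1 = deep myometrial invasion,
# 0 = superficial, per MRI and per histopathology.
mri       = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
histology = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]
accuracy = staging_accuracy(mri, histology)
meets_target = accuracy >= 0.85   # audit target for myometrial invasion
```

    In the audit itself this comparison was run per department, which is why national accuracy could meet a target that most individual centres missed.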