WorldWideScience

Sample records for samples improves accuracy

  1. Use the Bar Code System to Improve Accuracy of the Patient and Sample Identification.

    Chuang, Shu-Hsia; Yeh, Huy-Pzu; Chi, Kun-Hung; Ku, Hsueh-Chen

    2018-01-01

    Timely and correct sample collection is closely related to patient safety. From January to April 2016, the sample error rate was 11.1%, caused by mislabeled patient information and incorrect sample containers. We developed a barcode-based "Specimen Identification System" through process reengineering under TRM, using bar code scanners, added sample-container instructions, and a mobile app. In conclusion, the bar code system improved patient safety and created a greener environment.

  2. Detecting representative data and generating synthetic samples to improve learning accuracy with imbalanced data sets.

    Der-Chiang Li

    It is difficult for learning models to achieve high classification performance with imbalanced data sets: when one class is much larger than the others, most machine learning and data mining classifiers are overly influenced by the larger classes and ignore the smaller ones. As a result, classification algorithms often perform poorly due to slow convergence on the smaller classes. To balance such data sets, this paper presents a strategy that reduces the size of the majority class and generates synthetic samples for the minority class. In the reduction step, we use the box-and-whisker plot approach to exclude outliers and the Mega-Trend-Diffusion method to find representative data within the majority class. To generate the synthetic samples, we propose a counterintuitive hypothesis to find the distributed shape of the minority data, and then produce samples according to this distribution. Four real datasets were used to examine the performance of the proposed approach. We used paired t-tests to compare the Accuracy, G-mean, and F-measure scores of the proposed data pre-processing (PPDP) method merged with the D3C method (PPDP+D3C) against those of one-sided selection (OSS), the well-known SMOTEBoost (SB) method, the normal distribution-based oversampling (NDO) approach, and the PPDP method alone. The results indicate that the classification performance of the proposed approach is better than that of the above-mentioned methods.
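
    The majority-reduction and minority-oversampling strategy described above can be sketched as follows. The box-and-whisker trimming follows the abstract directly; the Gaussian jitter is a simplified stand-in for the paper's Mega-Trend-Diffusion and distribution-shape steps, and all function names are illustrative:

```python
import random
import statistics

def trim_outliers(values):
    """Drop majority-class points outside the box-and-whisker fences
    (Q1 - 1.5*IQR, Q3 + 1.5*IQR)."""
    qs = statistics.quantiles(values, n=4)
    q1, q3 = qs[0], qs[2]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if lo <= v <= hi]

def oversample(values, target_n, spread=0.1, seed=0):
    """Generate synthetic minority points by jittering around observed ones
    (a crude stand-in for estimating the minority distribution's shape)."""
    rng = random.Random(seed)
    sd = statistics.stdev(values) * spread
    out = list(values)
    while len(out) < target_n:
        out.append(rng.choice(values) + rng.gauss(0.0, sd))
    return out
```

    Trimming shrinks the majority class toward its representative core, while oversampling raises the minority class to a comparable size before training.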

  3. Apparent density measurement by mercury pycnometry. Improved accuracy. Simplification of handling for possible application to irradiated samples

    Marlet, Bernard

    1978-12-01

    The accuracy of the apparent density measurement on massive samples of any geometrical shape has been improved and the method simplified. A standard deviation of ±1 to 5×10⁻³ g·ml⁻¹, depending on the size and surface state of the sample, was obtained by using a flat ground stopper on a mercury pycnometer that fills itself under vacuum. This method saves considerable time and has been adapted to work in shielded cells for the measurement of radioactive materials, especially sintered uranium dioxide leaving the pile. The different parameters are analysed and criticized.
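
    In essence, mercury pycnometry infers the sample volume from the mass of mercury the sample displaces. A minimal sketch of the arithmetic, with illustrative numbers and names (the temperature dependence of the mercury density is omitted):

```python
RHO_HG = 13.5459  # g/ml, density of mercury near 20 °C (temperature correction omitted)

def apparent_density(m_sample, m_hg_alone, m_hg_with_sample):
    """Apparent density from mercury displacement.

    m_hg_alone:       mass of mercury filling the pycnometer with no sample (g)
    m_hg_with_sample: mass of mercury filling it with the sample inside (g)
    """
    displaced = m_hg_alone - m_hg_with_sample  # mercury expelled by the sample (g)
    volume = displaced / RHO_HG                # sample volume (ml)
    return m_sample / volume                   # apparent density (g/ml)
```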

  4. Improving the Accuracy of the Hyperspectral Model for Apple Canopy Water Content Prediction using the Equidistant Sampling Method.

    Zhao, Huan-San; Zhu, Xi-Cun; Li, Cheng; Wei, Yu; Zhao, Geng-Xing; Jiang, Yuan-Mao

    2017-09-11

    The influence of the equidistant sampling method was explored in a hyperspectral model for the accurate prediction of the water content of the apple tree canopy. The relationship between spectral reflectance and water content was explored using the sample partition methods of equidistant sampling and random sampling, and a stepwise regression model of the apple canopy water content was established. The results showed that the random sampling model was Y = 0.4797 - 721787.3883 × Z_3 - 766567.1103 × Z_5 - 771392.9030 × Z_6, and the equidistant sampling model was Y = 0.4613 - 480610.4213 × Z_2 - 552189.0450 × Z_5 - 1006181.8358 × Z_6. On verification, the equidistant sampling method offered superior prediction ability: its calibration set coefficient of determination of 0.6599 and validation set coefficient of determination of 0.8221 were higher than those of the random sampling model by 9.20% and 10.90%, respectively, and its root mean square error (RMSE) of 0.0365 and relative error (RE) of 0.0626 were lower than those of the random sampling model by 17.23% and 17.09%, respectively. Dividing the calibration and validation sets by the equidistant sampling method can improve the prediction accuracy of the hyperspectral model of apple canopy water content.
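
    The equidistant partition can be illustrated as follows: samples are ranked by the measured property (here, water content) and calibration members are taken at evenly spaced ranks, so both sets span the full range of the property. This is a generic sketch, not the authors' code:

```python
def equidistant_split(samples, targets, n_cal):
    """Sort samples by the target property and pick calibration samples at
    evenly spaced ranks; the remainder form the validation set."""
    order = sorted(range(len(samples)), key=lambda i: targets[i])
    step = len(samples) / n_cal
    cal_idx = {order[int(k * step)] for k in range(n_cal)}
    cal = [samples[i] for i in sorted(cal_idx)]
    val = [samples[i] for i in range(len(samples)) if i not in cal_idx]
    return cal, val
```

    Compared with a purely random split, this guards against a calibration set that misses the extremes of the property range.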

  5. Improving shuffler assay accuracy

    Rinard, P.M.

    1995-01-01

    Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracies of assays from a shuffler are affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But for those cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on the distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving a set of overdetermined linear equations. Other approaches were studied to determine the distributions and are described briefly. Implementation of this correction is anticipated on an existing shuffler next year
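
    The distribution correction rests on solving an overdetermined linear system (more detector-bank count-rate equations than unknown source strengths) in the least-squares sense. A generic two-unknown sketch via the normal equations, not the shuffler's actual solver:

```python
def lstsq_2x(A, b):
    """Least-squares solution of an overdetermined system A x = b with two
    unknowns, via the normal equations A^T A x = A^T b (solved by Cramer's rule)."""
    s11 = sum(r[0] * r[0] for r in A)
    s12 = sum(r[0] * r[1] for r in A)
    s22 = sum(r[1] * r[1] for r in A)
    t1 = sum(r[0] * y for r, y in zip(A, b))
    t2 = sum(r[1] * y for r, y in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

    In the shuffler application each row of A would model one detector bank's response to the candidate source regions; here a trivial line fit stands in.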

  6. Cadastral Database Positional Accuracy Improvement

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates to the absolute position in a specific coordinate system and to the neighbouring features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely merging both datasets distorts the relative geometry, so the improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for positional accuracy improvement of legacy spatial datasets.

  7. Smart Air Sampling Instruments Have the Ability to Improve the Accuracy of Air Monitoring Data Comparisons Among Nuclear Industry Facilities

    Gavila, F. M.

    2008-01-01

    Valid inter-comparisons of operating performance parameters among all members of the nuclear industry are essential for the implementation of continuous improvement and for obtaining credibility among regulators and the general public. It is imperative that the comparison of performances among different industry facilities be as accurate as possible and normalized to industry-accepted reference standards

  8. Application of bias factor method using random sampling technique for prediction accuracy improvement of critical eigenvalue of BWR

    Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi

    2017-01-01

    The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. The validity and applicability of the present method, i.e. correction of calculation results and reduction of uncertainty, are confirmed, in addition to its features and performance. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, the variance-covariance of the cross sections is considered. The calculation results indicate that both the bias between predicted and measured results and the uncertainty owing to the cross sections can be reduced. Extension to other uncertainties, such as thermal-hydraulic properties, will be a future task. (author)
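
    At its core, a bias factor method corrects a new calculation with the measured-to-calculated ratio observed in earlier cycles; the random sampling technique additionally propagates cross-section uncertainty, which this minimal sketch omits (names are illustrative):

```python
def bias_factor(measured, calculated):
    """Multiplicative bias factor from past cycles: the mean ratio of
    measured to calculated eigenvalues."""
    ratios = [m / c for m, c in zip(measured, calculated)]
    return sum(ratios) / len(ratios)

def corrected(calc_new, measured, calculated):
    """Correct a new cycle's calculated eigenvalue with the past bias factor."""
    return calc_new * bias_factor(measured, calculated)
```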

  9. Sampling strategies for improving tree accuracy and phylogenetic analyses: a case study in ciliate protists, with notes on the genus Paramecium.

    Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo

    2014-02-01

    In order to assess how dataset-selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) Using multiple genes improves phylogenetic accuracy, even when their one-gene topologies are in conflict with each other. (2) The impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not number of represented taxa, are the most important criteria of a dataset for inclusion in concatenated analyses. (3) As an example, we tested the three classification models of the genus Paramecium with a multi-gene based approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Improving Accuracy of Processing Through Active Control

    N. N. Barbashov

    2016-01-01

    An important task of modern mathematical statistics, with its methods based on probability theory, is the scientific estimation of measurement results. Control carries certain costs, and ineffective control, in which a customer receives defective products, costs significantly more because of part recalls. When machining parts, errors shift the scatter range of part dimensions towards the tolerance limit. Improving processing accuracy and avoiding defective products means reducing the component errors in machining, i.e. improving the accuracy of the machine and tool, tool life, rigidity of the system, and accuracy of the adjustment; the machine must also be adjusted at given intervals. To improve accuracy and machining rate, in-process gauging devices and controlled machining using adaptive control systems for process monitoring have become increasingly popular. Improving accuracy in this case means compensating the majority of technological errors. In-cycle measuring sensors (active control sensors) improve processing accuracy by one or two quality grades and allow several machines to operate simultaneously. Efficient use of in-cycle measuring sensors requires methods to control accuracy by making the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they incorporate the change in the last few measured values of the controlled parameter.
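
    A moving-average rule of the kind the abstract favours can be sketched as follows: the control decision uses the average of the last few in-cycle measurements rather than a single reading. Thresholds and names are illustrative:

```python
def moving_average(readings, window):
    """Moving average over the last `window` in-cycle measurements."""
    out = []
    for i in range(len(readings)):
        lo = max(0, i - window + 1)
        chunk = readings[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def needs_adjustment(readings, window, nominal, limit):
    """Trigger a tool adjustment when the moving average drifts past the limit."""
    return abs(moving_average(readings, window)[-1] - nominal) > limit
```

    Averaging suppresses single-measurement noise, so adjustments respond to a genuine drift of the dimension toward the tolerance limit rather than to one outlier.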

  11. Accuracy of sampling during mushroom cultivation

    Baars, J.J.P.; Hendrickx, P.M.; Sonnenberg, A.S.M.

    2015-01-01

    Experiments described in this report were performed to increase the accuracy of the analysis of the biological efficiency of Agaricus bisporus strains. Biological efficiency is a measure of the efficiency with which the mushroom strains use dry matter in the compost to produce mushrooms (expressed

  12. Improving coding accuracy in an academic practice.

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementing a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study population: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Outcome measures: overall coding accuracy (compliance) percentage, and coding accuracy per year group for the subjects able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the two intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over time regardless of the modality of the educational intervention. A paired t test comparing coding accuracy at baseline (mean (M)=26.4%, SD=10%) with accuracy after all educational interventions were complete (M=26.8%, SD=12%) gave t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  13. Enhancement of the spectral selectivity of complex samples by measuring them in a frozen state at low temperatures in order to improve accuracy for quantitative analysis. Part II. Determination of viscosity for lube base oils using Raman spectroscopy.

    Kim, Mooeung; Chung, Hoeil

    2013-03-07

    The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples achieved by the spectral collection under frozen conditions at low temperatures was effective for improving accuracy for the determination of the kinematic viscosity at 40 °C (KV@40). A collection of Raman spectra from samples cooled around -160 °C provided the most accurate measurement of KV@40. Components of the LBO samples were mainly long-chain hydrocarbons with molecular structures that were deformable when these were frozen, and the different structural deformabilities of the components enhanced spectral selectivity among the samples. To study the structural variation of components according to the change of sample temperature from cryogenic to ambient condition, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin for the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 of LBO samples to be more accurately assessed.

  14. Audiovisual biofeedback improves motion prediction accuracy.

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-04-01

    The accuracy of motion prediction, used to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities in patients' respiratory patterns. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. An AV biofeedback system combined with real-time respiratory data acquisition and MR imaging was implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then used in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE), calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by Student's t-test. Prediction accuracy was considerably improved by AV biofeedback: of the 150 respiratory predictions performed, accuracy was improved 69% (103/150) of the time for abdominal wall data and 78% (117/150) of the time for diaphragm data, and the average reduction in RMSE due to AV biofeedback over unguided respiration was 26%. AV biofeedback thus improves prediction accuracy, which would increase the efficiency of motion management techniques affected by system latencies in radiotherapy.
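
    The error metric quantified above is the standard root mean square error between the real and predicted signals:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between real and predicted respiratory signals."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
```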

  15. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level). When spatial interpolation is used to fill in missing values in non-sampled areas, accuracy improves remarkably; this holds especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).
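
    Inverse distance weighting is one simple interpolator of the kind that could fill in non-sampled areas; the abstract does not prescribe a particular method, so this is a hedged, generic sketch:

```python
def idw(known, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from (x, y, value) points:
    nearer observations get larger weights 1/d^power."""
    num = den = 0.0
    for x, y, v in known:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v  # query coincides with an observation
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```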

  16. Improving the accuracy of dynamic mass calculation

    Oleksandr F. Dashchenko

    2015-06-01

    With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. Apart from time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.

  17. Two-step multiplex polymerase chain reaction improves the speed and accuracy of genotyping using DNA from noninvasive and museum samples.

    Arandjelovic, M; Guschanski, K; Schubert, G; Harris, T R; Thalmann, O; Siedel, H; Vigilant, L

    2009-01-01

    Many studies in molecular ecology rely upon the genotyping of large numbers of low-quantity DNA extracts derived from noninvasive or museum specimens. To overcome low amplification success rates and avoid genotyping errors such as allelic dropout and false alleles, multiple polymerase chain reaction (PCR) replicates for each sample are typically used. Recently, two-step multiplex procedures have been introduced which drastically increase the success rate and efficiency of genotyping. However, controversy still exists concerning the amount of replication needed for suitable control of error. Here we describe the use of a two-step multiplex PCR procedure that allows rapid genotyping using at least 19 different microsatellite loci. We applied this approach to quantified amounts of noninvasive DNAs from western chimpanzee, western gorilla, mountain gorilla and black and white colobus faecal samples, as well as to DNA from ~100-year-old gorilla teeth from museums. Analysis of over 45 000 PCRs revealed average success rates of > 90% using faecal DNAs and 74% using museum specimen DNAs. Average allelic dropout rates were substantially reduced compared to those obtained using conventional singleplex PCR protocols, and reliable genotyping using low (< 25 pg) amounts of template DNA was possible. However, four to five replicates of apparently homozygous results are needed to avoid allelic dropout when using the lowest concentration DNAs (< 50 pg/reaction), suggesting that use of protocols allowing routine acceptance of homozygous genotypes after as few as three replicates may lead to unanticipated errors when applied to low-concentration DNAs. © 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd.
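
    The replicate requirement follows from simple probability: if each PCR independently drops one of a heterozygote's two alleles with probability d, the chance that all k replicates show the same single allele shrinks geometrically with k. A back-of-envelope sketch of that model (a simplification of the authors' error analysis; names are illustrative):

```python
def homozygote_error_rate(dropout, replicates):
    """Probability that a true heterozygote shows the same single allele in
    every replicate, assuming each replicate independently drops one of the
    two alleles (chosen at random) with probability `dropout`. The factor 2
    counts the two alleles that could be consistently retained."""
    return 2 * (dropout / 2) ** replicates
```

    With a 30% per-reaction dropout rate, for example, three consistent homozygous replicates still leave a non-negligible residual error, which is why the authors recommend four to five replicates for the lowest-concentration extracts.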

  18. Can Translation Improve EFL Students' Grammatical Accuracy?

    Carol Ebbert-Hübner

    2018-01-01

    This report focuses on research results from a project completed at Trier University in December 2015 that provides insight into whether a monolingual group of learners can improve their grammatical accuracy and reduce interference mistakes in their English via contrastive analysis and translation instruction and activities. Contrastive analysis and translation (CAT) instruction in this setting focuses on comparing grammatical differences between the students' dominant language (German) and English, and on practice activities in which sentences or short texts are translated from German into English. The results of a pre- and post-test administered in the first and final week of a translation class were compared to two other class types: a grammar class which consisted of form-focused instruction but no translation, and a process-approach essay writing class where students received feedback on their written work throughout the semester. The results of our study indicate that with C1-level EAP students, more improvement in grammatical accuracy is seen through teaching with CAT than through explicit grammar instruction or language feedback on written work alone. These results indicate that CAT does indeed have a place in modern language classes.

  19. Improving sample recovery

    Blanchard, R.J.

    1995-09-01

    This Engineering Task Plan (ETP) describes the tasks, i.e. tests, studies, external support and modifications, planned to increase recovery of the waste tank contents using combinations of improved techniques, equipment, knowledge, experience and testing to better the recovery rates presently being experienced.

  20. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing has low measurement performance and a large timing error, and can no longer meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal in multiple stages. Then, the timing information is sampled and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multiple amplification of the received signal and parameter fitting, and a timing accuracy of 4.63 ps is achieved.
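
    Leading-edge discrimination reduces to finding where the rising signal first crosses a fixed threshold; interpolating between the two samples that bracket the crossing gives a sub-sample timing estimate. A minimal sketch, not the authors' MATLAB code:

```python
def threshold_crossing(samples, dt, threshold):
    """Leading-edge discrimination: return the time of the first upward
    threshold crossing, linearly interpolated between bracketing samples.
    `dt` is the sampling interval; returns None if no crossing occurs."""
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    return None
```

    Amplifying the received signal steepens the leading edge at the threshold, which is what moves the timing point forward and shrinks the walk error the abstract describes.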

  1. Improving treatment planning accuracy through multimodality imaging

    Sailer, Scott L.; Rosenman, Julian G.; Soltys, Mitchel; Cullip, Tim J.; Chen, Jun

    1996-01-01

    the patient's initial fields and boost, respectively. Case illustrations are shown. Conclusions: We have successfully integrated multimodality imaging into our treatment-planning system, and its routine use is increasing. Multimodality imaging holds out the promise of improving treatment planning accuracy and, thus, takes maximum advantage of three dimensional treatment planning systems.

  2. Content in Context Improves Deception Detection Accuracy

    Blair, J. Pete; Levine, Timothy R.; Shaw, Allison S.

    2010-01-01

    Past research has shown that people are only slightly better than chance at distinguishing truths from lies. Higher accuracy rates, however, are possible when contextual knowledge is used to judge the veracity of situated message content. The utility of content in context was shown in a series of experiments with students (N = 26, 45, 51, 25, 127)…

  3. An Investigation to Improve Classifier Accuracy for Myo Collected Data

    2017-02-01

    Table-of-contents and figure-caption fragments (no abstract in this record): 5. Bad Samples Effect on Classification Accuracy; 5.1 Naïve Bayes (NB) Classifier Accuracy; 5.2 Logistic Model Tree (LMT); 5.3 K-Nearest Neighbor. Fig. A-1: Come gesture, pitch feature, user 06 (all samples exhibit reversed movement). Fig. A-2: Come gesture, pitch feature, user 14 (all samples exhibit reversed movement).

  4. Improving calibration accuracy in gel dosimetry

    Oldham, M.; McJury, M.; Webb, S.; Baustert, I.B.; Leach, M.O.

    1998-01-01

    A new method of calibrating gel dosimeters (applicable to both Fricke and polyacrylamide gels) is presented which has intrinsically higher accuracy than current methods and requires less gel. Two test-tubes of gel (inner diameter 2.5 cm, length 20 cm) are irradiated separately with a 10×10 cm² field end-on in a water bath, such that the characteristic depth-dose curve is recorded in the gel. The calibration is then determined by fitting the depth-dose measured in water against the measured change in relaxivity with depth in the gel. Increased accuracy is achieved in this simple depth-dose geometry by averaging the relaxivity at each depth. A large number of calibration data points, each with relatively high accuracy, are obtained. Calibration data over the full range of dose (1.6-10 Gy) are obtained by irradiating one test-tube to 10 Gy at dose maximum (D_max) and the other to 4.5 Gy at D_max. The new calibration method is compared with a 'standard method' in which five identical test-tubes of gel were irradiated to different known doses between 2 and 10 Gy. The percentage uncertainties in the slope and intercept of the calibration fit are found to be lower with the new method by factors of about 4 and 10, respectively, when compared with the standard method and with published values. The gel was found to respond linearly within the error bars up to doses of 7 Gy, with a slope of 0.233±0.001 s⁻¹·Gy⁻¹ and an intercept of 1.106±0.005 Gy. For higher doses, nonlinear behaviour was observed. (author)
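
    The calibration step amounts to an ordinary least-squares line relating the measured relaxivity change to the known depth-dose curve; averaging relaxivity at each depth before fitting is what buys the extra accuracy. A generic sketch of such a fit (names are illustrative):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line ys ≈ a + b*xs; returns (intercept, slope).
    In the gel-calibration context, xs would be depth-averaged relaxivity
    and ys the dose known from the water depth-dose curve."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```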

  5. Assessment of Sr-90 in water samples: precision and accuracy

    Nisti, Marcelo B.; Saueia, Cátia H.R.; Castilho, Bruna; Mazzilli, Barbara P., E-mail: mbnisti@ipen.br, E-mail: chsaueia@ipen.br, E-mail: bcastilho@ipen.br, E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-11-01

    The study of the dispersion of artificial radionuclides into the environment is very important for controlling nuclear waste discharges, nuclear accidents and nuclear weapons testing. The accidents at the Fukushima Daiichi and Chernobyl nuclear power plants released several radionuclides into the environment by aerial deposition and liquid discharge, with various levels of radioactivity. ⁹⁰Sr was one of the elements released into the environment. ⁹⁰Sr is produced by nuclear fission, with a physical half-life of 28.79 years and a decay energy of 0.546 MeV. The aims of this study are to evaluate the precision and accuracy of three methodologies for the determination of ⁹⁰Sr in water samples: Cerenkov counting, direct LSC, and LSC with radiochemical separation. The performance of the methodologies was evaluated using two scintillation counters (Quantulus and Hidex). The Minimum Detectable Activity (MDA) and Figure Of Merit (FOM) were determined for each method, and the precision and accuracy were checked using ⁹⁰Sr standard solutions. (author)

  7. Improving the accuracy of walking piezo motors.

    den Heijer, M; Fokkema, V; Saedi, A; Schakel, P; Rost, M J

    2014-05-01

    Many application areas require ultraprecise, stiff, and compact actuator systems with a high positioning resolution in combination with a large range as well as a high holding and pushing force. One promising solution to meet these conflicting requirements is a walking piezo motor that works with two pairs of piezo elements such that the movement is taken over by one pair, once the other pair reaches its maximum travel distance. A resolution in the pm-range can be achieved, if operating the motor within the travel range of one piezo pair. However, applying the typical walking drive signals, we measure jumps in the displacement up to 2.4 μm, when the movement is given over from one piezo pair to the other. We analyze the reason for these large jumps and propose improved drive signals. The implementation of our new drive signals reduces the jumps to less than 42 nm and makes the motor ideally suitable to operate as a coarse approach motor in an ultra-high vacuum scanning tunneling microscope. The rigidity of the motor is reflected in its high pushing force of 6.4 N.

  8. Does a Structured Data Collection Form Improve The Accuracy of ...

    and multiple etiologies for similar presentation. Standardized forms may harmonize the initial assessment, improve accuracy of diagnosis and enhance outcomes. Objectives: To determine the extent to which use of a structured data collection form (SDCF) affected the diagnostic accuracy of AAP. Methodology: A before and ...

  9. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  10. Accuracy and Effort of Interpolation and Sampling: Can GIS Help Lower Field Costs?

    Greg Simpson

    2014-12-01

Full Text Available Sedimentation is a problem for all reservoirs in the Black Hills of South Dakota. Before working on sediment removal, a survey of the extent and distribution of the sediment is needed. Two sample lakes were used to determine which of three interpolation methods gave the most accurate volume results. A secondary goal was to see if fewer samples could be taken while still providing similar results; smaller samples would mean less field time and thus lower costs. Subsamples of 50%, 33% and 25% were taken from the total samples and evaluated for the lowest Root Mean Squared Error values. Throughout the trials, the larger sample sizes generally showed better accuracy than smaller samples. Graphing the sediment volume estimates of the full sample and the 50%, 33% and 25% subsamples showed little improvement beyond a sample of approximately 40%-50%, when comparing the asymptotes of the separate samples. When we used smaller subsamples, the predicted sediment volumes were normally greater than the full-sample volumes. It is suggested that when planning future sediment surveys, workers plan on gathering data approximately every 5.21 meters. These sample sizes can be cut in half and still retain relative accuracy if time savings are needed. Volume estimates may suffer slightly with these reduced sample sizes, but the field work savings can be of benefit. Results from these surveys are used in prioritization of available funds for reclamation efforts.
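The subsampling experiment described above can be mimicked in a few lines: interpolate a synthetic "sediment" surface with inverse-distance weighting, then compare the RMSE of progressively smaller random subsamples against a hold-out set. Everything here (the surface, the point counts, IDW as the interpolator) is an illustrative assumption, not the survey's actual data or method.

```python
import math
import random

def idw(x, y, pts, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (px, py, value) points."""
    num = den = 0.0
    for px, py, v in pts:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:
            return v
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

def rmse_of_subsample(samples, fraction, holdout, rng):
    """RMSE of IDW predictions on a hold-out set, using a random subsample."""
    keep = rng.sample(samples, max(3, int(len(samples) * fraction)))
    errs = [(idw(x, y, keep) - v) ** 2 for x, y, v in holdout]
    return math.sqrt(sum(errs) / len(errs))

def true_depth(x, y):
    return 2.0 + 0.05 * x + 0.03 * y  # smooth synthetic sediment surface

rng = random.Random(42)
samples = [(x, y, true_depth(x, y)) for x, y in
           ((rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(200))]
holdout = [(x, y, true_depth(x, y)) for x, y in
           ((rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(50))]

for frac in (1.0, 0.5, 0.33, 0.25):
    print(frac, round(rmse_of_subsample(samples, frac, holdout, rng), 3))
```

Plotting RMSE against subsample fraction on real survey data is how the asymptote mentioned in the abstract would be located.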

  11. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclei. In order to meet this requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  12. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    Zamonsky, O.M.

    2000-01-01

The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid.

  13. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Kim Seon-Young

    2009-05-01

Full Text Available Background: The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results: A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced on the same microarray platform, and this data set was used to investigate the effects of varying sample sizes on several performance measures of a prognostic gene signature. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction likewise increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion: Increasing sample size generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement from the increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  14. Efficiency and accuracy of Monte Carlo (importance) sampling

    Waarts, P.H.

    2003-01-01

Monte Carlo analysis is often regarded as the simplest and most accurate reliability method; it is also the most transparent. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
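The trade-off this abstract describes, plain Monte Carlo failing on very low probabilities, is exactly what importance sampling addresses. A minimal sketch, estimating the standard normal tail probability P(X > 4) by shifting the sampling density into the rare region and reweighting each hit by the likelihood ratio (the target event and the shift are illustrative choices, not taken from the paper):

```python
import math
import random

def tail_prob_naive(n, rng):
    """Plain Monte Carlo: count how often a standard normal draw exceeds 4."""
    return sum(rng.gauss(0.0, 1.0) > 4.0 for _ in range(n)) / n

def tail_prob_importance(n, rng, shift=4.0):
    """Sample from N(shift, 1) and reweight hits by the N(0,1)/N(shift,1) ratio."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > 4.0:
            total += math.exp(-x * x / 2 + (x - shift) ** 2 / 2)
    return total / n

rng = random.Random(1)
print(tail_prob_naive(100_000, rng))       # very noisy: only a few hits in 1e5 draws
print(tail_prob_importance(100_000, rng))  # stable estimate near the true ~3.17e-5
```

The naive estimator wastes nearly all its samples on the non-failure region, while the shifted sampler concentrates them where the event occurs, which is why importance sampling restores efficiency at low probabilities.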

  15. Improving Accuracy of Processing by Adaptive Control Techniques

    N. N. Barbashov

    2016-01-01

Full Text Available When machining work-pieces, the scatter range of the work-piece dimensions is displaced toward the tolerance limit in response to errors. To improve machining accuracy and prevent defective products, it is necessary to reduce the machining error components, i.e. to improve the accuracy of the machine tool, the tool life, the rigidity of the system, and the accuracy of adjustment. It is also necessary to provide on-machine adjustment after a certain time. However, an increasing number of readjustments reduces performance, and high machine and tool requirements lead to a significant increase in machining cost. To improve accuracy and machining rate, various devices for active control (in-process gaging devices), as well as controlled machining through adaptive systems for technological process control, are now widely used. The accuracy improvement in this case is reached by compensating a majority of the technological errors. Sensors for active control can improve the accuracy of processing by one or two quality classes and allow simultaneous operation of several machines. For efficient use of active-control sensors it is necessary to develop accuracy control methods that introduce the appropriate adjustments. Methods based on the moving average appear to be the most promising for accuracy control, since they contain information on the change in the last several measured values of the parameter under control. When using the proposed method, the first three members of the sequence of deviations remain unchanged, so x'_1 = x_1, x'_2 = x_2, x'_3 = x_3. Then each i-th member of the sequence is corrected as x'_i = x_i - k*x̄_i, where x̄_i is calculated as the average of the three previous corrected members: x̄_i = (x'_{i-1} + x'_{i-2} + x'_{i-3})/3. As a criterion for the estimate of the control
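A minimal sketch of the moving-average correction as reconstructed from the (garbled) formulas in this abstract: the first three deviations pass through unchanged, and each later term is adjusted by k times the mean of the three previous corrected terms. The sign of the correction and the gain k = 0.5 are assumptions for illustration, not values stated in the paper.

```python
def smooth_deviations(x, k=0.5):
    """Moving-average correction of a sequence of measured deviations.

    The first three members are kept as-is; each subsequent member is
    corrected by k times the average of the three previous corrected values.
    """
    xp = list(x[:3])  # x'_1..x'_3 = x_1..x_3
    for i in range(3, len(x)):
        avg = (xp[i - 1] + xp[i - 2] + xp[i - 3]) / 3
        xp.append(x[i] - k * avg)  # assumed sign of the correction
    return xp

# constant deviation of 1.0: the correction progressively pulls it toward zero
print(smooth_deviations([1.0, 1.0, 1.0, 1.0, 1.0], k=0.5))
```

In an active-control setting, the corrected sequence would drive the tool adjustment instead of the raw measurements, damping noise while tracking drift.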

  16. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Thomaz C. e C. da Costa

    2004-12-01

Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is commonly defined by a binomial approximation without the use of a pilot sample; in this way the accuracy is not estimated but fixed a priori. In case of divergence between the estimated and the a priori accuracy, the sampling error will deviate from the expected error. Sizing the reference sample with a pilot sample (the theoretically correct procedure) is justified when no accuracy estimate is available for the work area, considering the utility of the remote sensing product.
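The binomial sample-size approximation this abstract refers to has the standard form n = z²·p(1-p)/d². A short sketch, with the expected map accuracy p and the half-width d chosen purely for illustration:

```python
import math

def reference_sample_size(p_expected, half_width, z=1.96):
    """Binomial approximation for the number of reference samples needed to
    estimate a map accuracy of p_expected within +/- half_width at ~95%
    confidence (z = 1.96)."""
    return math.ceil(z * z * p_expected * (1 - p_expected) / half_width ** 2)

# e.g. expected accuracy 85%, tolerated half-width 5 percentage points
print(reference_sample_size(0.85, 0.05))  # -> 196
```

Fixing p a priori, as the abstract notes, is exactly the step a pilot sample replaces with an estimate: the formula is most conservative at p = 0.5, where n is largest.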

  17. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Caroline Ewertsen

    2012-08-01

Full Text Available Image fusion involving real-time ultrasound (US) is a technique in which previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  18. Improvement of Diagnostic Accuracy by Standardization in Diuretic Renal Scan

    Hyun, In Young; Lee, Dong Soo; Lee, Kyung Han; Chung, June Key; Lee, Myung Chul; Koh, Chang Soon; Kim, Kwang Myung; Choi, Hwang; Choi, Yong

    1995-01-01

We evaluated the diagnostic accuracy of the diuretic renal scan with standardization in 45 children (107 hydronephrotic kidneys) with 91 diuretic assessments. With standardization, sensitivity was 100%, specificity was 78%, and accuracy was 84% in 49 hydronephrotic kidneys. Without standardization, sensitivity was 100%, specificity was 38%, and accuracy was 57% in 58 hydronephrotic kidneys. False-positive results were observed in 25 cases without standardization and in 8 cases with standardization. In diuretic renal scans without standardization, the causes of false-positive results were early injection of Lasix before mixing of the radioactivity (10 cases), extrarenal pelvis (6), and immature kidneys (3); with standardization, the causes of false-positive results were markedly dilated systems post-pyeloplasty (2), extrarenal pelvis (2), immature kidney of a neonate (1), severe renal dysfunction (2), and vesicoureteral reflux (1). In diuretic renal scans without standardization, false-positive results due to inadequate studies were common, but such results were not found after standardization. False-positive results due to dilated pelvocalyceal systems post-pyeloplasty, extrarenal pelvis, and immature kidneys of neonates were not resolved by standardization. In conclusion, standardization of the diuretic renal scan was useful in children with renal outflow tract obstruction, significantly improving specificity and diagnostic accuracy.

  19. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    Seoung-Hyeon Lee

    2016-01-01

    Full Text Available Beacons using bluetooth low-energy (BLE technology have emerged as a new paradigm of indoor positioning service (IPS because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, the beacon performance is poor in terms of the indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technology, which uses Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, it was assumed that the beacon signal is nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone’s indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m from the average x- and y-coordinates, respectively, based solely on the beacon signal.
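The filter in this abstract can be illustrated in simplified form. Since the paper's full extended Kalman filter model is not given here, the sketch below is the scalar (linear) special case applied to a noisy RSSI-like beacon signal; the noise variances and the measurement values are invented for illustration.

```python
def kalman_1d(measurements, q=0.05, r=4.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a noisy, slowly varying signal (e.g. beacon RSSI).

    q : process noise variance (how fast the true signal may drift)
    r : measurement noise variance (how noisy each reading is)
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                 # predict: uncertainty grows between readings
        k = p / (p + r)        # Kalman gain: trust in the new measurement
        x += k * (z - x)       # update state with the measurement residual
        p *= (1 - k)           # update uncertainty
        estimates.append(x)
    return estimates

noisy_rssi = [-60, -63, -58, -61, -59, -62, -60, -61]  # invented readings (dBm)
print([round(v, 1) for v in kalman_1d(noisy_rssi, x0=-60.0)])
```

The smoothed estimates vary far less than the raw readings, which is the property the positioning stage relies on before converting signal strength to distance.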

  20. Algorithms and parameters for improved accuracy in physics data libraries

    Batič, M; Hoff, G; Pia, M G; Saracco, P; Han, M; Kim, C H; Hauf, S; Kuster, M; Seo, H

    2012-01-01

    Recent efforts for the improvement of the accuracy of physics data libraries used in particle transport are summarized. Results are reported about a large scale validation analysis of atomic parameters used by major Monte Carlo systems (Geant4, EGS, MCNP, Penelope etc.); their contribution to the accuracy of simulation observables is documented. The results of this study motivated the development of a new atomic data management software package, which optimizes the provision of state-of-the-art atomic parameters to physics models. The effect of atomic parameters on the simulation of radioactive decay is illustrated. Ideas and methods to deal with physics models applicable to different energy ranges in the production of data libraries, rather than at runtime, are discussed.

  1. Three-dimensional display improves observer speed and accuracy

    Nelson, J.A.; Rowberg, A.H.; Kuyper, S.; Choi, H.S.

    1989-01-01

In an effort to evaluate the potential cost-effectiveness of three-dimensional (3D) display equipment, we compared the speed and accuracy of experienced radiologists identifying sliced uppercase letters from CT scans with 2D and pseudo-3D display. CT scans of six capital letters were obtained and printed as a 2D display or as a synthesized pseudo-3D display (Pixar). Six observers performed a timed identification task. Radiologists read the 3D display an average of 16 times faster than the 2D, and the average error rate of 2/6 (± 0.6/6) for 2D interpretations was totally eliminated. This degree of improvement in speed and accuracy suggests that the expense of 3D display may be cost-effective in a defined clinical setting

  2. How social information can improve estimation accuracy in human groups.

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.

  3. Impact Of Tissue Sampling On Accuracy Of Ki67 Immunohistochemistry Evaluation In Breast Cancer

    Justinas Besusparis

    2016-06-01

The sampling requirements were dependent on the heterogeneity of the biomarker expression. To achieve a coefficient of error of 10%, 5-6 cores were needed for homogeneous cases and 11-12 cores for heterogeneous cases; in a mixed tumor population, 8 TMA cores were required. Similarly, to achieve the same accuracy, approximately 4,000 nuclei must be counted when the intra-tumor heterogeneity is mixed/unknown. Tumors at the lower end of the proliferative activity scale would require larger sampling (10-12 TMA cores, or 5,000 nuclei) to achieve the same error measurement results as for highly proliferative tumors. Our data show that optimal tissue sampling for IHC biomarker evaluation is dependent on the heterogeneity of the tissue under study and needs to be determined on a per-use basis. We propose a method that can be applied to determine the TMA sampling strategy for specific biomarkers, tissues and study targets. In addition, our findings highlight the importance of high-capacity computer-based IHC measurement techniques to improve the accuracy of the testing.

  4. An improved selective sampling method

    Miyahara, Hiroshi; Iida, Nobuyuki; Watanabe, Tamaki

    1986-01-01

The coincidence methods currently used for the accurate activity standardisation of radionuclides require dead-time and resolving-time corrections, which tend to become increasingly uncertain as count rates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method, using a fast multichannel analyser (50 ns ch -1 ) for measuring the count rates. It is, in many ways, more convenient and potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Results of comparisons using our new method and the coincidence method for measuring the activity of 60 Co sources yielded agreement within statistical uncertainties. (author)

  5. How patients can improve the accuracy of their medical records.

    Dullabh, Prashila M; Sondheimer, Norman K; Katsh, Ethan; Evans, Michael A

    2014-01-01

Assess (1) whether patients can improve their medical records' accuracy if effectively engaged using a networked Personal Health Record; (2) workflow efficiency and reliability for receiving and processing patient feedback; and (3) the impact of patient feedback on medical record accuracy. The challenges of improving medical records' accuracy have been documented extensively. Providing patients with useful access to their records through information technology gives them new opportunities to improve their records' accuracy and completeness. A new approach supporting online contributions to their medication lists by patients of Geisinger Health Systems, an online patient-engagement advocate, revealed this can be done successfully. In late 2011, Geisinger launched an online process for patients to provide electronic feedback on their medication lists' accuracy before a doctor visit. Patient feedback was routed to a Geisinger pharmacist, who reviewed it and followed up with the patient before changing the medication list shared by the patient and the clinicians. The evaluation employed mixed methods and consisted of patient focus groups (users, nonusers, and partial users of the feedback form), semi-structured interviews with providers and pharmacists, user observations with patients, and quantitative analysis of patient feedback data and pharmacists' medication reconciliation logs. (1) Patients were eager to provide feedback on their medications and saw numerous advantages. Thirty percent of patient feedback forms (457 of 1,500) were completed and submitted to Geisinger. Patients requested changes to the shared medication lists in 89 percent of cases (369 of 414 forms). These included frequency or dosage changes to existing prescriptions and requests for new medications (prescriptions and over-the-counter). (2) Patients provided useful and accurate online feedback. In a subsample of 107 forms, pharmacists responded positively to 68 percent of patient requests for

  6. Improving orbit prediction accuracy through supervised machine learning

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.

  7. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
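The idea of this paper, using cheap sensitivity derivatives to reduce sampling variance, can be sketched as a first-order Taylor control variate: subtract the linearisation g(x) = f(μ) + f′(μ)(x − μ), whose mean is known exactly, then add that known mean back. The test function exp and the input distribution below are illustrative choices, not the paper's aircraft-wing example.

```python
import math
import random

def mc_plain(f, mu, sigma, n, rng):
    """Plain Monte Carlo estimate of E[f(X)], X ~ N(mu, sigma^2)."""
    return sum(f(rng.gauss(mu, sigma)) for _ in range(n)) / n

def mc_control_variate(f, dfdx, mu, sigma, n, rng):
    """Variance-reduced estimate using the sensitivity derivative dfdx.

    The linear surrogate g(x) = f(mu) + f'(mu)(x - mu) has exact mean f(mu),
    so we average the residual f(x) - g(x) and add f(mu) back."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        g = f(mu) + dfdx(mu) * (x - mu)
        total += f(x) - g
    return total / n + f(mu)

f = math.exp  # derivative of exp is exp, so dfdx = f here
est_plain = mc_plain(f, 0.0, 0.3, 2000, random.Random(7))
est_cv = mc_control_variate(f, f, 0.0, 0.3, 2000, random.Random(7))
true_val = math.exp(0.3 ** 2 / 2)  # closed form: E[exp(X)] for X ~ N(0, 0.09)
print(round(est_plain, 4), round(est_cv, 4), round(true_val, 4))
```

The residual f(x) − g(x) carries only the nonlinear part of f, so its variance, and hence the estimator's error, is far smaller than that of f itself, which is the order-of-magnitude improvement the abstract reports.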

  8. Precision and Accuracy of k0-NAA Method for Analysis of Multi Elements in Reference Samples

    Sri-Wardani

    2004-01-01

The accuracy and precision of the k0-NAA method were determined in the analysis of multiple elements contained in reference samples. The analysis of multiple elements in the SRM 1633b sample gave results with a bias of up to 20%, but with good accuracy and precision. The analysis of As, Cd and Zn in the CCQM-P29 rice flour sample gave very good results, with a bias of 0.5 - 5.6%. (author)

  9. Improving the Accuracy of Cloud Detection Using Machine Learning

    Craddock, M. E.; Alliss, R. J.; Mason, M.

    2017-12-01

    show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results is testament to improved computer technology and the use of machine learning as a more efficient and accurate methodology of cloud detection.

  10. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

The article considers achieving machining accuracy on CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resilience of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have made it possible to reduce considerably the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min-1 and improve machining accuracy.

  11. CADASTRAL POSITIONING ACCURACY IMPROVEMENT: A CASE STUDY IN MALAYSIA

    N. M. Hashim

    2016-09-01

Full Text Available A cadastral map is parcel-based information specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatially based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in the new cadastral database no longer being based on single and static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected queries that remain essential to be addressed. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The result of this review will be applied as a foundation for investigating a systematic and effective method for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  12. Selecting fillers on emotional appearance improves lineup identification accuracy.

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is parcel-based information specifically designed to define the limits of property boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatially based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in the new cadastral database no longer being based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected issues that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The result of this review will serve as a foundation for investigating a systematic and effective method for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  14. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza–associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and use increased from 65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children 65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
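
    The adjustment described above reduces to a simple calculation: an imperfect test detects only a fraction (its sensitivity) of true cases, so the observed rate underestimates the true rate by that factor. A minimal sketch with invented numbers; the study's actual adjustment also accounts for the mix of test types used by age group and season.

```python
# Sketch of the sensitivity adjustment for laboratory-confirmed
# hospitalization rates.  Numbers are invented for illustration.

def adjust_rate(observed_rate, test_sensitivity):
    """Correct an observed rate for true cases missed by an imperfect test."""
    if not 0.0 < test_sensitivity <= 1.0:
        raise ValueError("sensitivity must be in (0, 1]")
    return observed_rate / test_sensitivity

# An observed rate of 50 hospitalizations per 100,000, found with a test
# that detects only 80% of true cases, implies a higher true rate:
adjusted = adjust_rate(50.0, 0.80)  # 62.5 per 100,000
```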

  15. Improving Odometric Accuracy for an Autonomous Electric Cart

    Jonay Toledo

    2018-01-01

    Full Text Available In this paper, a study of the odometric system of the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's movement equations and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameters and the distance between the wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated to include a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to those of the mathematical model, showing that the neural network can outperform the first proposed model.

  16. Improving Odometric Accuracy for an Autonomous Electric Cart.

    Toledo, Jonay; Piñeiro, Jose D; Arnay, Rafael; Acosta, Daniel; Acosta, Leopoldo

    2018-01-12

    In this paper, a study of the odometric system of the autonomous cart Verdino, an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from the cart's movement equations and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheel diameters and the distance between the wheels as parameters. With this model, a least-squares minimization is performed to obtain the best nominal parameters. The model is then updated to include a real-time wheel diameter measurement, improving the accuracy of the results. A neural network model is used to learn the odometric model from data. Tests are made using this neural network in several configurations, and the results are compared to those of the mathematical model, showing that the neural network can outperform the first proposed model.
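
    The odometric model described in the two records above can be sketched as a standard differential-drive dead-reckoning update. The function below is a minimal illustration, not the paper's code; the parameter names (wheel diameters, wheel base, encoder ticks per revolution) mirror the quantities the authors fit by least-squares minimization.

```python
import math

def odometry_step(x, y, theta, ticks_l, ticks_r,
                  ticks_per_rev, d_left, d_right, wheel_base):
    """One dead-reckoning update from encoder ticks.

    d_left / d_right are the wheel diameters and wheel_base the distance
    between the wheels.  The midpoint-heading update used here is a common
    textbook approximation, assumed for illustration.
    """
    s_l = math.pi * d_left * ticks_l / ticks_per_rev    # left wheel travel
    s_r = math.pi * d_right * ticks_r / ticks_per_rev   # right wheel travel
    ds = (s_l + s_r) / 2.0               # distance moved by the midpoint
    dtheta = (s_r - s_l) / wheel_base    # change of heading
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Straight-line sanity check: equal tick counts on identical wheels move
# the cart forward one wheel circumference without turning.
x, y, th = odometry_step(0.0, 0.0, 0.0, 1000, 1000, 1000, 0.4, 0.4, 1.0)
```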

  17. Improvement of vision measurement accuracy using Zernike moment based edge location error compensation model

    Cui, J W; Tan, J B; Zhou, Y; Zhang, H

    2007-01-01

    This paper presents a Zernike-moment-based model developed to compensate edge location errors and further improve vision measurement accuracy. The model compensates for the slight changes resulting from sampling and establishes mathematical expressions for the subpixel location of theoretical and actual edges that are either perpendicular to or at an angle to the X-axis. Experimental results show that the proposed model can be used to achieve a vision measurement accuracy of up to 0.08 pixel, with a measurement uncertainty of less than 0.36 μm. It is therefore concluded that the proposed model offers a significant improvement in vision measurement accuracy and is especially suitable for edge location in low-contrast images.

  18. Modern methods to improve the accuracy in fast neutron dosimetry

    Baers, B.; Karnani, H.; Seren, T.

    1985-01-01

    In order to improve the quality of fast neutron dose estimates at the reactor pressure vessel (PV), some modern methods are presented. In addition to basic principles, some error reduction procedures are presented based on the combined use of relative measurements, direct sampling from the pressure vessel, and the use of iron and niobium as dosimeters. The influence of large systematic errors can be significantly reduced by carrying out relative measurements. This report also presents the successful use of niobium as a dosimeter by destructive treatment of PV samples. (author)

  19. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
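
    Method (2) above, clustering the spectra and training a separate regression per cluster, can be sketched in a few lines. The sketch below uses a minimal k-means and ordinary least squares as a dependency-free stand-in for the PLS2 models (a deliberate swap); the toy "spectra" and compositions are invented.

```python
import numpy as np

def kmeans(X, init_centers, iters=50):
    """Minimal k-means: assign each spectrum to its nearest center, then
    move each center to the mean of its members."""
    centers = np.array(init_centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# Toy "spectra": two well-separated groups obeying different linear
# composition models (stand-ins for distinct rock types).
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(5, 1, (40, 5))])
y = np.where(np.arange(80) < 40, X @ [1, 0, 0, 0, 0], X @ [0, 0, 0, 0, 1])

labels = kmeans(X, init_centers=X[[0, -1]])
# One regression model per cluster (OLS standing in for PLS2).
models = {j: np.linalg.lstsq(X[labels == j], y[labels == j], rcond=None)[0]
          for j in range(2)}
```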

  20. The effect of sampling frequency on the accuracy of estimates of milk ...

    The results of this study support the five-weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional bulking of individual morning and evening samples with a single evening milk sample would not compromise accuracy provided ...

  1. Improving accuracy and capabilities of X-ray fluorescence method using intensity ratios

    Garmay, Andrey V., E-mail: andrew-garmay@yandex.ru; Oskolok, Kirill V.

    2017-04-15

    An X-ray fluorescence analysis algorithm is proposed which is based on the use of ratios of X-ray fluorescence line intensities. Such an analytical signal is more stable and leads to improved accuracy. Novel calibration equations are proposed which are suitable for analysis over a broad range of matrix compositions. To apply the algorithm to samples containing a significant amount of undetectable elements, the use of the dependence of the Rayleigh-to-Compton intensity ratio on the total content of these elements is suggested. The technique's validity is shown by analysis of standard steel samples, a model metal oxide mixture, and iron ore samples.
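
    The benefit of ratio-based signals can be shown with a toy linear calibration: a drift in source intensity scales the analyte and reference lines together, so the ratio, and hence the predicted concentration, is unchanged. All numbers below are invented for illustration; the paper's actual calibration equations are more elaborate.

```python
# Calibrating on an analyte/reference intensity ratio instead of a raw
# intensity cancels instrument drift that scales all lines together.

def linear_fit(xs, ys):
    """Least-squares fit ys = slope * xs + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented standards: concentration (wt.%) vs. analyte/reference ratio.
conc = [1.0, 2.0, 4.0, 8.0]
ratio = [0.11, 0.20, 0.42, 0.79]
slope, intercept = linear_fit(ratio, conc)

# The same sample measured twice with 20% drift in source intensity:
# both lines scale together, so the predicted concentration is unchanged.
i_line, i_ref = 300.0, 1000.0
pred_1 = slope * (i_line / i_ref) + intercept
pred_2 = slope * (1.2 * i_line / (1.2 * i_ref)) + intercept
```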

  2. Improvement of fuel sampling device for STACY and TRACY

    Hirose, Hideyuki; Sakuraba, Koichi; Onodera, Seiji

    1998-05-01

    STACY and TRACY, static and transient experiment facilities in NUCEF, use solution fuel. It is important to accurately analyze the fuel composition (uranium enrichment, uranium concentration, nitric acid molarity, amount of impurities, radioactivity of FP) for safe operation and improved experimental accuracy. Both STACY and TRACY have sampling devices to sample the fuel solution for that purpose. The previous sampling devices of STACY and TRACY had been designed to dilute the fuel sample with nitric acid: their sampling mechanism poured the fuel sample into the sampling vessel by a piston drive of nitric acid in the burette. It was, however, sometimes found that the sampled fuel solution was diluted by mixing with nitric acid in the burette. Therefore, the sampling mechanism was changed to a fixed-quantity pump drive which does not use nitric acid. The authors confirmed that the performance of the new sampling device was improved by this change of sampling mechanism. It was confirmed through function tests that the uncertainty in uranium concentration measurement using the improved sampling device was 0.14%, less than the design value of 0.2% (coefficient of variation). (author)

  3. Judgment sampling: a health care improvement perspective.

    Perla, Rocco J; Provost, Lloyd P

    2012-01-01

    Sampling plays a major role in quality improvement work. Random sampling (assumed by most traditional statistical methods) is the exception in improvement situations. In most cases, some type of "judgment sample" is used to collect data from a system. Unfortunately, judgment sampling is not well understood. Judgment sampling relies upon those with process and subject matter knowledge to select useful samples for learning about process performance and the impact of changes over time. In many cases, where the goal is to learn about or improve a specific process or system, judgment samples are not merely the most convenient and economical approach; they are technically and conceptually the most appropriate approach. This is because improvement work is done in the real world, in complex situations involving specific areas of concern and focus; in these situations, the assumptions of classical measurement theory neither can be met nor should an attempt be made to meet them. The purpose of this article is to describe judgment sampling and its importance in quality improvement work and studies, with a focus on health care settings.

  4. Forecasting space weather: Can new econometric methods improve accuracy?

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.

  5. Improving accuracy of breast cancer biomarker testing in India

    Tanuja Shet

    2017-01-01

    Full Text Available There is a global mandate, even in countries with low resources, to improve the accuracy of testing biomarkers in breast cancer, viz. oestrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2neu), given their critical impact on the management of patients. The steps taken include compulsory participation in an external quality assurance (EQA) programme, centralized testing, and regular performance audits for laboratories. This review addresses the status of ER/PR and HER2neu testing in India and possible reasons for the delay in the development of guidelines and a mandate for testing in the country. The chief cause of erroneous ER and PR testing in India continues to be easily correctable issues such as fixation and antigen retrieval, while for HER2neu testing, it is the use of low-cost non-validated antibodies and interpretative errors. These deficiencies can, however, be rectified by (i) distributing accountability and responsibility to surgeons and oncologists, (ii) certification of centres for testing in oncology, and (iii) initiation of a national EQA system (EQAS) programme that will help with economical solutions, identify centres of excellence, and instill a system for reprimanding poorly performing laboratories.

  6. Stratified computed tomography findings improve diagnostic accuracy for appendicitis

    Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June

    2014-01-01

    AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for the suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features as described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter, and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%; P appendicitis group. Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454; P appendicitis with equivocal CT findings. PMID:25320531

  7. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    MIHAELA BRATU (SIMIONESCU)

    2012-12-01

    Full Text Available In this study, alternative forecasts of the USA unemployment rate made by four institutions (the International Monetary Fund (IMF), the Organization for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO) and Blue Chips (BC)) are evaluated regarding accuracy and bias. The most accurate predictions over the forecasting horizon 201-2011 were provided by the IMF, followed by the OECD, CBO and BC. These results were obtained using Theil's U1 statistic and a new method that has not previously been used in the literature in this context. Multi-criteria ranking was applied to build a hierarchy of the institutions regarding accuracy, taking five important accuracy measures into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combining the institutions' predictions is a suitable strategy for improving the accuracy of the IMF and OECD forecasts under all combination schemes, the INV scheme being the best. Filtering and smoothing the original predictions, based on the Hodrick-Prescott filter and the Holt-Winters technique respectively, is a good strategy for improving only the BC expectations. The proposed strategies for improving accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to the quality of the decision-making process.
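
    The accuracy measures named above are all short formulas. A minimal sketch using the common textbook definitions follows; Theil's U statistics exist in several variants, so the paper's exact forms may differ slightly.

```python
import math

def accuracy_measures(actual, forecast):
    """Mean error, MSE, RMSE, and Theil's U1 and U2 statistics."""
    errors = [f - a for a, f in zip(actual, forecast)]
    n = len(errors)
    me = sum(errors) / n
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    # U1: RMSE scaled by the quadratic means of the two series (bounded in [0, 1]).
    u1 = rmse / (math.sqrt(sum(a * a for a in actual) / n)
                 + math.sqrt(sum(f * f for f in forecast) / n))
    # U2: forecast errors relative to a naive "no change" forecast (< 1 beats naive).
    u2 = math.sqrt(sum(((f1 - a1) / a0) ** 2
                       for a0, a1, f1 in zip(actual, actual[1:], forecast[1:]))
                   / sum(((a1 - a0) / a0) ** 2
                         for a0, a1 in zip(actual, actual[1:])))
    return me, mse, rmse, u1, u2

# Invented unemployment-rate series (%, quarterly) and forecasts:
me, mse, rmse, u1, u2 = accuracy_measures([9.3, 9.6, 8.9, 8.1],
                                          [9.0, 9.5, 9.0, 8.3])
```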

  8. An Improved Seabed Surface Sand Sampling Device

    Luo, X.

    2017-12-01

    In marine geology research it is necessary to obtain a sufficient quantity of seabed surface samples, while also ensuring that the samples are in their original state. Currently, there are a number of seabed surface sampling devices available, but we find it very difficult to obtain sand samples using these devices, particularly when dealing with fine sand. Machine-controlled seabed surface sampling devices are also available, but they are generally unable to dive into deeper regions of water. To obtain larger quantities of seabed surface sand samples in their original states, many researchers have tried to improve upon sampling devices, but these efforts have generally produced ambiguous results, in our opinion. To resolve this issue, we have designed an improved and highly effective seabed surface sand sampling device that incorporates the strengths of a variety of sampling devices. It is capable of diving into deep water to obtain fine sand samples and is also suited for use in streams, rivers, lakes and seas with varying levels of depth (up to 100 m). This device can be used for geological mapping, underwater prospecting, geological engineering, and ecological and environmental studies in both marine and terrestrial waters.

  9. Error analysis to improve the speech recognition accuracy on ...

    dictionary plays a key role in the speech recognition accuracy. .... Sophisticated microphone is used for the recording speech corpus in a noise free environment. .... values, word error rate (WER) and error-rate will be calculated as follows:.
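
    The snippet above is cut off before the formula, so here is the standard definition as a hedged stand-in: word error rate (WER) is the word-level Levenshtein distance (substitutions + deletions + insertions) divided by the number of reference words. The exact variant used in the source may differ.

```python
def word_error_rate(reference, hypothesis):
    """WER via the standard Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# Two words of the reference were dropped: 2 errors / 6 reference words.
wer = word_error_rate("the cat sat on the mat", "the cat sat mat")
```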

  10. Statewide Quality Improvement Initiative to Reduce Early Elective Deliveries and Improve Birth Registry Accuracy.

    Kaplan, Heather C; King, Eileen; White, Beth E; Ford, Susan E; Fuller, Sandra; Krew, Michael A; Marcotte, Michael P; Iams, Jay D; Bailit, Jennifer L; Bouchard, Jo M; Friar, Kelly; Lannon, Carole M

    2018-04-01

    To evaluate the success of a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data accuracy rapidly and at scale in Ohio. Between February 2013 and March 2014, participating hospitals were involved in a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data. This initiative was designed as a learning collaborative model (group webinars and a single face-to-face meeting) and included individual quality improvement coaching. It was implemented using a stepped wedge design with hospitals divided into three balanced groups (waves) participating in the initiative sequentially. Birth registry data were used to assess hospital rates of nonmedically indicated inductions at less than 39 weeks of gestation. Comparisons were made between groups participating and those not participating in the initiative at two time points. To measure birth registry accuracy, hospitals conducted monthly audits comparing birth registry data with the medical record. Associations were assessed using generalized linear repeated measures models accounting for time effects. Seventy of 72 (97%) eligible hospitals participated. Based on birth registry data, nonmedically indicated inductions at less than 39 weeks of gestation declined in all groups with implementation (wave 1: 6.2-3.2%, Pinitiative, they saw significant decreases in rates of early elective deliveries as compared with wave 3 (control; P=.018). All waves had significant improvement in birth registry accuracy (wave 1: 80-90%, P=.017; wave 2: 80-100%, P=.002; wave 3: 75-100%, Pinitiative enabled statewide spread of change strategies to decrease early elective deliveries and improve birth registry accuracy over 14 months and could be used for rapid dissemination of other evidence-based obstetric care practices across states or hospital systems.

  11. A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error; high-precision time interval data cannot be obtained with these methods. In order to obtain higher quality remote sensing cloud images based on the time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor and simultaneously sampling the change of the capacitor voltage. Firstly, an approximate model of the capacitor voltage curve over the time of flight of the pulse is fitted from the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20 %.
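
    The fitting idea can be illustrated with the simplest possible model: a constant-current charge gives a linear voltage ramp, so a least-squares line through the sampled points recovers the full charging time. This is an invented toy; the paper fits an approximate model to the real capacitor curve and works at picosecond scales.

```python
# Estimate a time interval by fitting the sampled capacitor voltage and
# reading the charging time off the fit (linear ramp assumed).

def fit_line(ts, vs):
    """Least-squares fit v = slope * t + intercept."""
    n = len(ts)
    mt, mv = sum(ts) / n, sum(vs) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(ts, vs))
             / sum((t - mt) ** 2 for t in ts))
    return slope, mv - slope * mt

def charging_time(ts, vs, v_final):
    """Time to ramp from 0 V to v_final, inferred from sampled points."""
    slope, _ = fit_line(ts, vs)
    return v_final / slope

# A 2 V/ns ramp sampled at four points mid-flight; full charge is 10 V:
t_flight = charging_time([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0], 10.0)  # 5.0 ns
```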

  12. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
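
    The core point, that a stratified (balanced) validation set estimates a different quantity than a random one when classes differ in abundance, is plain arithmetic: overall accuracy is a prevalence-weighted mean of per-class accuracies. The per-class accuracies and prevalences below are invented.

```python
# Overall accuracy depends on how the validation set weights the classes.

per_class_accuracy = {"abundant": 0.95, "rare": 0.60}
true_prevalence = {"abundant": 0.90, "rare": 0.10}

# Random validation sample: classes appear at their true abundance.
random_estimate = sum(per_class_accuracy[c] * true_prevalence[c]
                      for c in per_class_accuracy)           # 0.915

# Stratified validation sample: classes forced to be balanced.
stratified_estimate = sum(per_class_accuracy.values()) / 2   # 0.775
```

The two estimates differ by 14 percentage points for the same classifier, which is why the paper recommends matching the validation design to the deployment prevalences.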

  13. Significant improvement of accuracy and precision in the determination of trace rare earths by fluorescence analysis

    Ozawa, L.; Hersh, H.N.

    1976-01-01

    Most of the rare earths in yttrium, gadolinium and lanthanum oxides emit characteristic fluorescent line spectra under irradiation with photons, electrons and x rays. The sensitivity and selectivity of the rare earth fluorescences are high enough to determine the trace amounts (0.01 to 100 ppM) of rare earths. The absolute fluorescent intensities of solids, however, are markedly affected by the synthesis procedure, level of contamination and crystal perfection, resulting in poor accuracy and low precision for the method (larger than 50 percent error). Special care in preparation of the samples is required to obtain good accuracy and precision. It is found that the accuracy and precision for the determination of trace (less than 10 ppM) rare earths by fluorescence analysis improved significantly, while still maintaining the sensitivity, when the determination is made by comparing the ratio of the fluorescent intensities of the trace rare earths to that of a deliberately added rare earth as reference. The variation in the absolute fluorescent intensity remains, but is compensated for by measuring the fluorescent line intensity ratio. Consequently, the determination of trace rare earths (with less than 3 percent error) is easily made by a photoluminescence technique in which the rare earths are excited directly by photons. Accuracy is still maintained when the absolute fluorescent intensity is reduced by 50 percent through contamination by Ni, Fe, Mn or Pb (about 100 ppM). Determination accuracy is also improved for fluorescence analysis by electron excitation and x-ray excitation. For some rare earths, however, accuracy by these techniques is reduced because indirect excitation mechanisms are involved. The excitation mechanisms and the interferences between rare earths are also reported

  14. Antibiotic Resistome: Improving Detection and Quantification Accuracy for Comparative Metagenomics.

    Elbehery, Ali H A; Aziz, Ramy K; Siam, Rania

    2016-04-01

    The unprecedented rise of life-threatening antibiotic resistance (AR), combined with the unparalleled advances in DNA sequencing of genomes and metagenomes, has pushed the need for in silico detection of the resistance potential of clinical and environmental metagenomic samples through the quantification of AR genes (i.e., genes conferring antibiotic resistance). Therefore, determining an optimal methodology to quantitatively and accurately assess AR genes in a given environment is pivotal. Here, we optimized and improved existing AR detection methodologies from metagenomic datasets to properly consider AR-generating mutations in antibiotic target genes. Through comparative metagenomic analysis of previously published AR gene abundance in three publicly available metagenomes, we illustrate how mutation-generated resistance genes are either falsely assigned or neglected, which alters the detection and quantitation of the antibiotic resistome. In addition, we inspected factors influencing the outcome of AR gene quantification using metagenome simulation experiments, and identified that genome size, AR gene length, total number of metagenomics reads and selected sequencing platforms had pronounced effects on the level of detected AR. In conclusion, our proposed improvements in the current methodologies for accurate AR detection and resistome assessment show reliable results when tested on real and simulated metagenomic datasets.

  15. Diagnostic accuracy of language sample measures with Persian-speaking preschool children.

    Kazemi, Yalda; Klee, Thomas; Stringer, Helen

    2015-04-01

    This study examined the diagnostic accuracy of selected language sample measures (LSMs) with Persian-speaking children. A pre-accuracy study followed by phase I and II studies are reported. Twenty-four Persian-speaking children, aged 42 to 54 months, with primary language impairment (PLI) were compared to 27 age-matched children without PLI on a set of measures derived from play-based, conversational language samples. Results showed that correlations between age and LSMs were not statistically significant in either group of children. However, a majority of LSMs differentiated children with and without PLI at the group level (phase I), while three of the measures exhibited good diagnostic accuracy at the level of the individual (phase II). We conclude that general LSMs are promising for distinguishing between children with and without PLI. Persian-specific measures are mainly helpful in identifying children without language impairment while their ability to identify children with PLI is poor.

  16. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  17. Improving blood sample logistics using simulation

    Jørgensen, Pelle Morten Thomas; Jacobsen, Peter

    2012-01-01

    Using simulation as an approach to display and improve internal logistics and handling at hospitals has great potential. This research will show how a simulation model can be used to evaluate changes made to two different cases of transportation of blood samples at a hospital, by evaluating...

  18. Machine learning improves the accuracy of myocardial perfusion scintigraphy results

    Groselj, C.; Kukar, M.

    2002-01-01

    Objective: Machine learning (ML), an artificial intelligence method, has in the last decade proved to be a useful tool in many fields of decision making, including some fields of medicine. According to published reports, its decision accuracy usually exceeds that of humans. Aim: To assess the applicability of ML in interpreting stress myocardial perfusion scintigraphy results in the coronary artery disease diagnostic process. Patients and methods: The data of 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. As ML can simultaneously handle any number of attributes, all available disease-related data (regarding history, habitus, risk factors and stress results) were added, and the sensitivity, specificity and accuracy of scintigraphy were recomputed in this way. The results of both decision procedures were compared. Conclusion: Using the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could thus be an important tool for myocardial perfusion scintigraphy decision making.
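
    The "Naive Bayesian classifier" used in the study is a standard algorithm. A toy version with categorical features and Laplace smoothing, together with the sensitivity/specificity/accuracy computation, is sketched below; the data and features are invented, and a real evaluation would use held-out cases rather than the training set.

```python
from collections import Counter, defaultdict

def train_naive_bayes(samples, labels):
    """Minimal categorical naive Bayes with Laplace smoothing."""
    priors = Counter(labels)
    counts = defaultdict(Counter)  # (feature index, label) -> value counts
    for x, y_ in zip(samples, labels):
        for i, v in enumerate(x):
            counts[(i, y_)][v] += 1

    def predict(x):
        def score(y_):
            p = priors[y_] / len(labels)
            for i, v in enumerate(x):
                # +1 / +2: Laplace smoothing for binary feature values.
                p *= (counts[(i, y_)][v] + 1) / (priors[y_] + 2)
            return p
        return max(priors, key=score)

    return predict

# Invented data: label 1 = disease present; features, e.g., (scan positive, risk factor).
X = [(1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (0, 0)]
y = [1, 1, 1, 0, 0, 0]
predict = train_naive_bayes(X, y)

tp = sum(predict(x) == 1 and t == 1 for x, t in zip(X, y))
tn = sum(predict(x) == 0 and t == 0 for x, t in zip(X, y))
sensitivity = tp / y.count(1)
specificity = tn / y.count(0)
accuracy = (tp + tn) / len(y)
```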

  19. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.

  20. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ~3 wt.%. The statistical significance of these improvements was ~85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and
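    The cluster-then-calibrate strategy of method (2) can be sketched on synthetic data. This is a toy illustration, not the authors' pipeline: scipy's k-means stands in for the clustering step, and ordinary least squares stands in for PLS2.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# synthetic "spectra": two compositional groups with distinct baselines
X = np.vstack([rng.normal(0.0, 1.0, (40, 30)),
               rng.normal(5.0, 1.0, (40, 30))])
# element abundance follows a different linear law in each group
y = np.concatenate([X[:40, :5].sum(axis=1),
                    2.0 * X[40:, :5].sum(axis=1)])

# step 1: group similar spectra without knowing their composition
centroids, labels = kmeans2(X, 2, minit='++', seed=0)

# step 2: fit one regression model per cluster (OLS stands in for PLS2)
models = {}
for c in range(2):
    idx = labels == c
    A = np.column_stack([X[idx], np.ones(idx.sum())])
    models[c], *_ = np.linalg.lstsq(A, y[idx], rcond=None)
```

    A new spectrum would first be assigned to the nearest centroid and then quantified with that cluster's model.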

  1. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher height threshold were required to obtain accurate corn LAI estimation compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
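    The point-density and height-threshold sensitivity experiment described above can be mimicked on synthetic data. The point cloud, the thinning fraction, and the 95th-percentile height metric below are all illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic LiDAR returns over a 10 m x 10 m corn plot: x, y, z in meters
n = 732  # roughly 7.3 points/m^2, similar to the density quoted above
pts = np.column_stack([rng.uniform(0, 10, n),
                       rng.uniform(0, 10, n),
                       rng.gamma(4.0, 0.5, n)])  # toy height distribution

def canopy_height(points, density_fraction=1.0, height_threshold=0.3):
    """95th-percentile height after random thinning and ground filtering."""
    keep = rng.random(len(points)) < density_fraction  # simulate lower density
    z = points[keep, 2]
    z = z[z > height_threshold]  # drop near-ground returns
    return np.percentile(z, 95)

full = canopy_height(pts, 1.0)      # original point density
thinned = canopy_height(pts, 0.25)  # 75% of the returns removed
```

    Repeating the thinning across many density fractions and thresholds reproduces the kind of sensitivity curves the study reports: moderate thinning shifts the metric only slightly.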

  2. Preoperative Measurement of Tibial Resection in Total Knee Arthroplasty Improves Accuracy of Postoperative Limb Alignment Restoration

    Pei-Hui Wu

    2016-01-01

    Conclusions: Using conventional surgical instruments, preoperative measurement of resection thickness of the tibial plateau on radiographs could improve the accuracy of conventional surgical techniques.

  3. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically varied during the measurement. Our group has previously studied the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on MTCS and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on random sampling and on the proposed methods. Experimental results showed that prediction performance was improved by the proposed methods; MTCS and DTCS are therefore alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  4. A simple method to improve the quantification accuracy of energy-dispersive X-ray microanalysis

    Walther, T

    2008-01-01

    Energy-dispersive X-ray spectroscopy in a transmission electron microscope is a standard tool for chemical microanalysis and routinely provides qualitative information on the presence of all major elements above Z=5 (boron) in a sample. Spectrum quantification relies on suitable corrections for absorption and fluorescence, in particular for thick samples and soft X-rays. A brief presentation is given of an easy way to improve quantification accuracy by evaluating the intensity ratio of two measurements acquired at different detector take-off angles. As the take-off angle determines the effective sample thickness seen by the detector, this method corresponds to taking two measurements from the same position at two different thicknesses, which allows absorption and fluorescence to be corrected more reliably. An analytical solution for determining the depth of a feature embedded in the specimen foil is also provided.

  5. Improvement of CD-SEM mark position measurement accuracy

    Kasa, Kentaro; Fukuhara, Kazuya

    2014-04-01

    CD-SEM is now attracting attention as a tool that can accurately measure positional error of device patterns. However, the measurement accuracy can degrade due to pattern asymmetry, as in the case of image-based overlay (IBO) and diffraction-based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias has been proposed, how to correct the inaccuracy arising from pattern asymmetry has not been addressed. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.

  6. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases, that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the more accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  7. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. 
Alternatively, we discuss how


  9. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
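    The G-BLUP baseline that the study stress-tests can be written, for illustration, as its equivalent ridge regression on centered marker counts. The heritability-based penalty and the simulated data are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def gblup_predict(M_train, y_train, M_test, h2=0.5):
    """Additive genomic prediction via ridge regression on markers
    (equivalent to G-BLUP up to the scaling of the penalty)."""
    p = M_train.mean(axis=0) / 2.0          # training allele frequencies
    Z_tr = M_train - 2.0 * p                # centered marker counts (0/1/2)
    Z_te = M_test - 2.0 * p
    m = M_train.shape[1]
    lam = m * (1.0 - h2) / h2               # crude penalty implied by h2
    beta = np.linalg.solve(Z_tr.T @ Z_tr + lam * np.eye(m),
                           Z_tr.T @ (y_train - y_train.mean()))
    return y_train.mean() + Z_te @ beta
```

    An explicitly epistatic model, as the abstract suggests, would extend the feature matrix with products of interacting markers.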

  10. Decision aids for improved accuracy and standardization of mammographic diagnosis

    D'Orsi, C.J.; Getty, D.J.; Swets, J.A.; Pickett, R.M.; Seltzer, S.E.; McNeil, B.J.

    1990-01-01

    This paper examines the gains in the accuracy of mammographic diagnosis of breast cancer achievable from a pair of decision aids. Twenty-three potentially relevant perceptual features of mammograms were identified through interviews, psychometric tests, and consensus meetings with mammography specialists. Statistical analyses determined the 12 independent features that were most informative diagnostically and assigned a weight to each according to its importance. Two decision aids were developed: a checklist that solicits a scale value from the radiologist for each feature and a computer program that merges those values optimally into an advisory estimate of the probability of malignancy. Six radiologists read a set of 150 cases, first in their usual way and later with the aids.

  11. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
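    The chirp Z-transform step that provides the arbitrary frequency resolution described above can be illustrated with a compact Bluestein implementation in NumPy. This is a generic sketch; the paper's cubic-interpolation correction is not reproduced here.

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp Z-transform: X_k = sum_n x[n] * (a * w**-k)**-n, k = 0..m-1,
    evaluated via Bluestein's convolution trick."""
    n = len(x)
    k = np.arange(max(m, n))
    wk2 = w ** (k.astype(float) ** 2 / 2.0)    # chirp factors w^(k^2/2)
    nfft = 1 << int(np.ceil(np.log2(n + m - 1)))
    y = np.fft.fft(x * a ** -np.arange(n) * wk2[:n], nfft)
    v = np.zeros(nfft, dtype=complex)          # chirp filter w^(-j^2/2)
    v[:m] = 1.0 / wk2[:m]
    v[nfft - n + 1:] = 1.0 / wk2[n - 1:0:-1]
    return np.fft.ifft(y * np.fft.fft(v))[:m] * wk2[:m]

# with w at the DFT spacing, czt reduces to the ordinary DFT; a denser
# spacing zooms into a narrow frequency band at arbitrary resolution
x = np.cos(2 * np.pi * 0.21 * np.arange(64))
fine = czt(x, 128, np.exp(-2j * np.pi * 0.005), 1.0)
```

    Libraries such as SciPy also provide a chirp Z-transform; the hand-rolled version is shown only to make the convolution structure explicit.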

  12. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    Kasviki, K. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece); Stamatelatos, I.E. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece)], E-mail: ion@ipta.demokritos.gr; Yannakopoulou, E. [Institute of Physical Chemistry, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Papadopoulou, P. [Institute of Technology of Agricultural Products, NAGREF, Lycovrissi, Attikis 14123 (Greece); Kalef-Ezra, J. [Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece)

    2007-10-15

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen, and thus total protein, in large-volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA was examined in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.


  14. Quality control on the accuracy of the total Beta activity index in different sample matrices water

    Pujol, L.; Pablo, M. A. de; Payeras, J.

    2013-01-01

    The standard ISO/IEC 17025:2005 on general requirements for the competence of testing and calibration laboratories provides that a laboratory shall have quality control procedures for monitoring the validity of the tests and calibrations undertaken. In this paper, the experience of the Isotopic Applications Laboratory (CEDEX) in controlling the accuracy of the total beta activity index in samples of drinking water, inland waters and marine waters is presented. (Author)

  15. A Novel Reporting System to Improve Accuracy in Appendicitis Imaging

    Godwin, Benjamin D.; Drake, Frederick T.; Simianu, Vlad V.; Shriki, Jabi E.; Hippe, Daniel S.; Dighe, Manjiri; Bastawrous, Sarah; Cuevas, Carlos; Flum, David; Bhargava, Puneet

    2015-01-01

    OBJECTIVE The purpose of this study was to ascertain whether standardized radiologic reporting for appendicitis imaging increases diagnostic accuracy. MATERIALS AND METHODS We developed a standardized appendicitis reporting system that includes objective imaging findings common in appendicitis and a certainty score ranging from 1 (definitely not appendicitis) through 5 (definitely appendicitis). Four radiologists retrospectively reviewed the preoperative CT scans of 96 appendectomy patients using our reporting system. The presence of appendicitis-specific imaging findings and certainty scores were compared with final pathology. These comparisons were summarized using odds ratios (ORs) and the AUC. RESULTS The appendix was visualized on CT in 89 patients, of whom 71 (80%) had pathologically proven appendicitis. Imaging findings associated with appendicitis included an enlarged appendiceal diameter (OR = 14 for diameters > 10 mm); a subset of scans had originally been read as indeterminate for appendicitis. In this initially indeterminate group, using the standardized reporting system, radiologists assigned higher certainty scores (4 or 5) in 21 of the 28 patients with appendicitis (75%) and lower scores (1 or 2) in five of the seven patients without appendicitis (71%) (AUC = 0.90; p = 0.001). CONCLUSION Standardized reporting and grading of objective imaging findings correlated well with postoperative pathology and may decrease the number of CT findings reported as indeterminate for appendicitis. Prospective evaluation of this reporting system on a cohort of patients with clinically suspected appendicitis is currently under way. PMID:26001230

  16. Improvement in reliability and accuracy of heater tube eddy current testing by integration with an appropriate destructive test

    Giovanelli, F.; Gabiccini, S.; Tarli, R.; Motta, P.

    1988-01-01

    A specially developed destructive test is described showing how the reliability and accuracy of a non-destructive technique can be improved if it is suitably accompanied by an appropriate destructive test. The experiment was carried out on samples of AISI 304L tubes from the low-pressure (LP) preheaters of a BWR 900 MW nuclear plant. (author)

  17. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Francisco J. Ariza-López

    2018-05-01

    In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using a polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sample sizes (which range from 5 km up to 100 km) is the length of the polygons' perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, comparisons between the estimated distribution function for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size is increased from 5 km to 100 km.
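    The core effect reported here (estimation variability shrinking with sample size, checked with Kolmogorov–Smirnov comparisons) can be reproduced on synthetic positional errors. The lognormal error model below is an assumption of this sketch, not the paper's data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# synthetic "population" of positional errors for homologous polygon pairs
population = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def spread_of_estimates(sample_size, n_sim=200):
    """Variability of the mean-error estimate across simulated samples."""
    means = [rng.choice(population, sample_size, replace=False).mean()
             for _ in range(n_sim)]
    return float(np.std(means))

small, large = spread_of_estimates(50), spread_of_estimates(2000)

# a single large sample is also close to the population in distribution
stat, pvalue = ks_2samp(rng.choice(population, 2000, replace=False),
                        population)
```

    Larger samples yield a markedly tighter spread of estimates, mirroring the reduction in variability the study observes between the 5 km and 100 km sample sizes.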

  18. Singing Video Games May Help Improve Pitch-Matching Accuracy

    Paney, Andrew S.

    2015-01-01

    The purpose of this study was to investigate the effect of singing video games on the pitch-matching skills of undergraduate students. Popular games like "Rock Band" and "Karaoke Revolutions" rate players' singing based on the correctness of the frequency of their sung response. Players are motivated to improve their…

  19. Improved DORIS accuracy for precise orbit determination and geodesy

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, three more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, now making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  20. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length, and consequently the strongest impact on selection gain, is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training, and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.

  1. Fundamentals of modern statistical methods substantially improving power and accuracy

    Wilcox, Rand R

    2001-01-01

    Conventional statistical methods have a very serious flaw: they routinely miss differences among groups or associations among variables that are detected by more modern techniques, even under very small departures from normality. Hundreds of journal articles have described the reasons standard techniques can be unsatisfactory, but simple, intuitive explanations are generally unavailable. Improved methods have been derived, but they are far from obvious or intuitive based on the training most researchers receive. Situations arise where even highly nonsignificant results become significant when analyzed with more modern methods. Without assuming any prior training in statistics, Part I of this book describes basic statistical principles from a point of view that makes their shortcomings intuitive and easy to understand. The emphasis is on verbal and graphical descriptions of concepts. Part II describes modern methods that address the problems covered in Part I. Using data from actual studies, many examples are included...

  2. Accuracy of micro four-point probe measurements on inhomogeneous samples: A probe spacing dependence study

    Wang, Fei; Petersen, Dirch Hjorth; Østerberg, Frederik Westergaard

    2009-01-01

    In this paper, we discuss a probe spacing dependence study in order to estimate the accuracy of micro four-point probe measurements on inhomogeneous samples. Based on sensitivity calculations, both sheet resistance and Hall effect measurements are studied for samples (e.g. laser annealed samples) with periodic variations of sheet resistance, sheet carrier density, and carrier mobility. With a variation wavelength of λ, probe spacings from 0.001λ to 100λ have been applied to characterize the local variations. The calculations show that the measurement error is highly dependent on the probe spacing. When the probe spacing is smaller than 1/40 of the variation wavelength, micro four-point probes can provide an accurate record of local properties with less than 1% measurement error. All the calculations agree well with previous experimental results.

  3. Musical Scales in Tone Sequences Improve Temporal Accuracy.

    Li, Min S; Di Luca, Massimiliano

    2018-01-01

    Predicting the time of stimulus onset is a key component in perception. Previous investigations of perceived timing have focused on the effect of stimulus properties such as rhythm and temporal irregularity, but the influence of non-temporal properties and their role in predicting stimulus timing has not been exhaustively considered. The present study aims to understand how a non-temporal pattern in a sequence of regularly timed stimuli could improve or bias the detection of temporal deviations. We presented interspersed sequences of 3, 4, 5, and 6 auditory tones where only the timing of the last stimulus could slightly deviate from isochrony. Participants reported whether the last tone was 'earlier' or 'later' relative to the expected regular timing. In two conditions, the tones composing the sequence were either organized into musical scales or they were random tones. In one experiment, all sequences ended with the same tone; in the other experiment, each sequence ended with a different tone. Results indicate higher discriminability of anisochrony with musical scales and with longer sequences, irrespective of the knowledge of the final tone. Such an outcome suggests that the predictability of non-temporal properties, as enabled by the musical scale pattern, can be a factor in determining the sensitivity of time judgments.

  4. Calibrating EASY-Care independence scale to improve accuracy

    Jotheeswaran, A. T.; Dias, Amit; Philp, Ian; Patel, Vikram; Prince, Martin

    2016-01-01

    Background: there is currently limited support for the reliability and validity of the EASY-Care independence scale, with little work carried out in low- or middle-income countries. Objective: we assessed the internal construct validity and the hierarchical and classical scaling properties of the scale among frail dependent older people in the community. Methods: three primary care physicians administered the EASY-Care comprehensive geriatric assessment for 150 frail and/or dependent older people in the primary care setting. A Mokken model was applied to investigate the hierarchical scaling properties of the EASY-Care independence scale, and the internal consistency (Cronbach's alpha) of the scale was also examined. Results: we found that the EASY-Care independence scale is highly internally consistent and is a strong hierarchical scale, hence providing strong evidence for unidimensionality. However, two items in the scale (unable to use telephone and manage finances) had much lower item Loevinger H coefficients than the others. Exclusion of these two items improved the overall internal consistency of the scale. Conclusions: the strong performance of the EASY-Care independence scale among community-dwelling frail older people is encouraging. This study confirms that the EASY-Care independence scale is highly internally consistent and a strong hierarchical scale. PMID:27496925
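    The internal-consistency statistic reported above (Cronbach's alpha) can be computed directly from an item-score matrix. A minimal sketch, not the authors' analysis pipeline; the item scores below are invented for illustration:

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
        total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Perfectly correlated items give the maximum alpha of 1.0
    scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]], dtype=float)
    print(round(cronbach_alpha(scores), 3))  # 1.0
    ```

    Dropping an item column and recomputing alpha mirrors the study's check that excluding the two weakly scaling items improves internal consistency.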

  5. Improved backward ray tracing with stochastic sampling

    Ryu, Seung Taek; Yoon, Kyung-Hyun

    1999-03-01

    This paper presents a new technique that enhances diffuse interreflection with the concepts of backward ray tracing. In this research, we have modeled the diffuse rays under the following conditions. First, as reflection from diffuse surfaces occurs in all directions, it is impossible to trace all of the reflected rays; we therefore confined the diffuse rays by sampling the spherical angle of the reflected rays around the normal vector. Second, the distance traveled by energy reflected from a diffuse surface differs according to the object's properties and is comparatively short. Considering the fact that rays created on diffuse surfaces affect a relatively small area, it is very inefficient to trace all of the sampled diffuse rays. Therefore, we set a fixed distance as the critical distance, and all rays beyond this distance are ignored. The result of this research is that, as the improved backward ray tracing can model illumination effects such as color bleeding, it can replace the radiosity algorithm in limited environments.
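    The two restrictions described (sampling reflected directions about the surface normal, and discarding diffuse rays past a critical distance) can be sketched as follows. This is an illustrative reconstruction, not the authors' renderer; the cutoff value is an arbitrary placeholder:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_hemisphere(normal, n):
        """Uniformly sample n unit directions in the hemisphere around `normal`."""
        v = rng.normal(size=(n, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform on the unit sphere
        v[(v @ normal) < 0.0] *= -1.0                   # reflect into the hemisphere
        return v

    CRITICAL_DISTANCE = 2.5  # arbitrary cutoff: diffuse rays beyond this are ignored

    def trace_diffuse(normal, hit_distances):
        """Keep only sampled diffuse rays whose hits lie within the critical distance."""
        dirs = sample_hemisphere(normal, len(hit_distances))
        return dirs[np.asarray(hit_distances) <= CRITICAL_DISTANCE]

    dirs = sample_hemisphere(np.array([0.0, 0.0, 1.0]), 1000)
    ```

    The distance cutoff is what keeps the sampled diffuse rays from being traced through the whole scene, matching the paper's efficiency argument.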

  6. Accuracy of self-reported height, weight and waist circumference in a Japanese sample.

    Okamoto, N; Hosono, A; Shibata, K; Tsujimura, S; Oka, K; Fujita, H; Kamiya, M; Kondo, F; Wakabayashi, R; Yamada, T; Suzuki, S

    2017-12-01

    Inconsistent results have been found in prior studies investigating the accuracy of self-reported waist circumference, and no study has investigated the validity of self-reported waist circumference among Japanese individuals. This study used the diagnostic standard of metabolic syndrome to assess the accuracy of individuals' self-reported height, weight and waist circumference in a Japanese sample. Study participants included 7,443 Japanese men and women aged 35-79 years. They participated in a cohort study's baseline survey between 2007 and 2011. Participants' height, weight and waist circumference were measured, and their body mass index was calculated. Self-reported values were collected through a questionnaire before the examination. Strong correlations between measured and self-reported values for height, weight and body mass index were detected. The correlation was lowest for waist circumference (men, 0.87; women, 0.73). Men significantly overestimated their waist circumference (mean difference, 0.8 cm), whereas women significantly underestimated theirs (mean difference, 5.1 cm). The sensitivity of self-reported waist circumference using the cut-off value of metabolic syndrome was 0.83 for men and 0.57 for women. Due to systematic and random errors, the accuracy of self-reported waist circumference was low. Therefore, waist circumference should be measured without relying on self-reported values, particularly in the case of women.

  7. Improving the accuracy of effect-directed analysis: the role of bioavailability.

    You, Jing; Li, Huizhen

    2017-12-13

    Aquatic ecosystems have been suffering from contamination by multiple stressors. Traditional chemical-based risk assessment usually fails to explain the toxicity contributions from contaminants that are not regularly monitored or that have an unknown identity. Diagnosing the causes of noted adverse outcomes in the environment is of great importance in ecological risk assessment and in this regard effect-directed analysis (EDA) has been designed to fulfill this purpose. The EDA approach is now increasingly used in aquatic risk assessment owing to its specialty in achieving effect-directed nontarget analysis; however, a lack of environmental relevance makes conventional EDA less favorable. In particular, ignoring the bioavailability in EDA may cause a biased and even erroneous identification of causative toxicants in a mixture. Taking bioavailability into consideration is therefore of great importance to improve the accuracy of EDA diagnosis. The present article reviews the current status and applications of EDA practices that incorporate bioavailability. The use of biological samples is the most obvious way to include bioavailability into EDA applications, but its development is limited due to the small sample size and lack of evidence for metabolizable compounds. Bioavailability/bioaccessibility-based extraction (bioaccessibility-directed and partitioning-based extraction) and passive-dosing techniques are recommended to be used to integrate bioavailability into EDA diagnosis in abiotic samples. Lastly, the future perspectives of expanding and standardizing the use of biological samples and bioavailability-based techniques in EDA are discussed.

  8. Improved sample size determination for attributes and variables sampling

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs
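    For the attributes part of the problem, the exact (hypergeometric) detection probability can be computed without the conservative approximations the paper refers to. A minimal sketch with made-up population numbers, not the authors' simulation:

    ```python
    from math import comb

    def detection_probability(N: int, d: int, n: int) -> float:
        """P(a sample of n from N items contains at least 1 of the d falsified items)."""
        if n > N - d:
            return 1.0
        return 1.0 - comb(N - d, n) / comb(N, n)

    def required_sample_size(N: int, d: int, target: float) -> int:
        """Smallest sample size n giving detection probability >= target."""
        for n in range(1, N + 1):
            if detection_probability(N, d, n) >= target:
                return n
        return N

    # Hypothetical example: 500 items, 20 falsified, 95% required detection probability
    n_min = required_sample_size(N=500, d=20, target=0.95)
    print(n_min, round(detection_probability(500, 20, n_min), 3))
    ```

    This illustrates the paper's first conclusion: the exact calculation yields a smaller (cheaper) sample size than a conservative bound would.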

  9. Measurement facilities and accuracy limits of sampling digital interferometers. Meresi lehetoesegek es hibaanalizis digitalis mintavetelezoe interferometeren

    Czitrovszky, A.; Jani, P.; Szoter, L.

    1990-12-15

    We discuss the measurement facilities of a recently developed sampling digital interferometer for machine tool testing. As opposed to conventional interferometers, the present device provides the possibility of digital storage, at sampling rates up to 4 kHz, of the complete information of the motion, so that displacement, velocity, acceleration and power density spectrum measurements can be performed. An estimate is given for the truncation, round-off, jitter and frequency-aliasing sources of error in the reconstructed motion parameters. On the basis of the Shannon sampling theory, optimal measurement conditions are defined for the case when the accuracy of the reconstructed motion and vibration equals the resolution of a conventional interferometer. 7 refs., 3 figs., 1 tab.

  10. Estimates of laboratory accuracy and precision on Hanford waste tank samples

    Dodd, D.A.

    1995-01-01

    A review was performed on three sets of analyses generated by Battelle, Pacific Northwest Laboratories and three sets generated by the Westinghouse Hanford Company 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The data set used to generate these estimates is of limited size, but it does include the physical forms, liquid and solid, that are representative of samples from the tanks to be characterized. The estimate was published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates in the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel when establishing bounding conditions. Programmatic requirements different from those listed may be reliably met on specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.

  11. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    Sophie Marchal

    Full Text Available Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample during the task is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.

  12. An improved sampling method of complex network

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling subnets is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
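    The combination of random seeding and snowball expansion can be illustrated with a plain breadth-first snowball sample. This is a generic sketch, not the authors' RMSC procedure (their Cohen-style modification is omitted):

    ```python
    import random
    from collections import deque

    def snowball_sample(adj, n_seeds, max_nodes, rng):
        """Snowball-sample up to max_nodes nodes: random seeds, then BFS expansion."""
        seeds = rng.sample(sorted(adj), n_seeds)     # random-sampling stage
        visited, queue = set(seeds), deque(seeds)
        while queue and len(visited) < max_nodes:    # snowball stage
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in visited and len(visited) < max_nodes:
                    visited.add(nbr)
                    queue.append(nbr)
        return visited

    # Toy undirected graph as an adjacency dict (invented for illustration)
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
    sub = snowball_sample(adj, n_seeds=1, max_nodes=4, rng=random.Random(42))
    ```

    Comparing the degree distribution of `sub`'s induced subgraph with that of `adj` is the kind of similarity check the paper's experiments perform.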

  13. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course corr...

  14. Establishment of laser-induced breakdown spectroscopy in a vacuum atmosphere for accuracy improvement

    Kim, Seung Hyun; Kim, H. D.; Shin, H. S.

    2009-06-01

    This report describes the fundamentals of laser-induced breakdown spectroscopy (LIBS) and the quantitative analysis method used in vacuum conditions to obtain high measurement accuracy. The LIBS system employs the following major components: a pulsed laser, a gas chamber, an emission spectrometer, a detector, and a computer. When the output from a pulsed laser is focused onto a small spot on a sample, an optically induced plasma, called a laser-induced plasma (LIP), is formed at the surface. LIBS is a laser-based, sensitive optical technique used to detect certain atomic and molecular species by monitoring the emission signals from a LIP. This report presents the fundamentals of LIBS and the current state of research, and describes the optimization of the measurement conditions and the characteristic analysis of a LIP through measurements of the fundamental metals. The LIBS system shows measurement errors of about 0.63-5.82% on the calibration curves for Cu, Cr and Ni, and errors of less than about 5% on the calibration curves for Nd and Sm. As a result, the LIBS accuracy was somewhat improved over previous work under the optimized conditions.

  15. Rotary Mode Core Sample System availability improvement

    Jenkins, W.W.; Bennett, K.L.; Potter, J.D.; Cross, B.T.; Burkes, J.M.; Rogers, A.C.

    1995-01-01

    The Rotary Mode Core Sample System (RMCSS) is used to obtain stratified samples of the waste deposits in single-shell and double-shell waste tanks at the Hanford Site. The samples are used to characterize the waste in support of ongoing and future waste remediation efforts. Four sampling trucks have been developed to obtain these samples. Truck 1 was the first in operation and is currently being used to obtain samples where the push mode is appropriate (i.e., no rotation of the drill). Truck 2 is similar to Truck 1, except for added safety features, and is in operation to obtain samples using either the push mode or the rotary drill mode. Trucks 3 and 4 are now being fabricated to be essentially identical to Truck 2.

  16. The effect of sampling frequency on the accuracy of estimates of milk ...

    Unknown

    1ARC-Animal Improvement Institute, Private Bag X5013, Stellenbosch 7599, South Africa; 2Department of Animal. Science, University of Stellenbosch, Stellenbosch, ... weekly sampling procedure currently used by the South African National Dairy Cattle Performance Testing Scheme. However, replacement of proportional ...

  17. Sensitivity and accuracy of atomic absorption spectrophotometry for trace elements in marine biological samples

    Fukai, R.; Oregioni, B.

    1976-01-01

    During the course of 1974-75, atomic absorption spectrophotometry (AAS) was used extensively in our laboratory for measuring various trace elements in marine biological materials, in order to conduct homogeneity tests on the intercalibration samples for trace metal analysis as well as to obtain baseline data for trace elements in various kinds of marine organisms collected from different locations in the Mediterranean Sea. Several series of test experiments were conducted on the methodology currently in use in our laboratory to ensure satisfactory analytical performance in measuring a number of trace elements for which analytical problems have not been completely solved. The sensitivities of the techniques used were repeatedly checked for various elements, and the accuracy of the analyses was always critically evaluated by analyzing standard reference materials. The results of these test experiments have uncovered critical points relevant to the application of AAS to routine analysis.

  18. An improved ashing procedure for biologic sample

    Zongmei, Wu [Zhejiang Province Enviromental Radiation Monitoring Centre (China)

    1992-07-01

    The classical ashing procedure in a muffle furnace was modified for biologic samples. In the modified procedure, the door of the furnace was kept open for the duration of the ashing process; the ashing was accelerated, and the ash quality was comparable to that of the classical procedure. The modified procedure is suitable for ashing biologic samples in large batches.

  20. The Improvement of Behavior Recognition Accuracy of Micro Inertial Accelerometer by Secondary Recognition Algorithm

    Yu Liu

    2014-05-01

    Full Text Available Behaviors of “still”, “walking”, “running”, “jumping”, “upstairs” and “downstairs” can be recognized by a low-cost micro inertial accelerometer. By using the extracted features as inputs to a well-trained BP artificial neural network selected as the classifier, those behaviors can be recognized, but the experimental results show that the recognition accuracy is not satisfactory. This paper presents a secondary recognition algorithm and combines it with the BP artificial neural network to improve the recognition accuracy. The algorithm is verified on the Android mobile platform, and the recognition accuracy is improved by more than 8%. Extensive testing and statistical analysis show that the recognition accuracy can reach 95% through the BP artificial neural network plus secondary recognition, which is a reasonably good result from a practical point of view.

  1. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Ota, Keishin, E-mail: ota@microphase.co.jp [Microphase Co., Ltd., Onigakubo 1147-9, Tsukuba, Ibaragi 300-2651 (Japan); Matsuda, Tsuyoshi [Japan Science and Technology Agency, Kawaguchi-shi, Saitama 332-0012 (Japan); Tonomura, Akira [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Okinawa Institute of Science and Technology, Graduate University, Kunigami, Okinawa 904-0495 (Japan); Central Research Laboratory, Hitachi, Ltd., Hatoyama, Saitama 350-0395 (Japan)

    2012-07-15

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5. -- Highlights: ► A modified phase-shifting electron holography was proposed. ► The time variations of mean intensity and contrast of holograms were corrected. ► These corrections lead to a great improvement of the resultant phase accuracy. ► A phase accuracy of about 1/4000 rad was achieved from experimental results.
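    The effect of correcting per-hologram mean intensity and contrast can be demonstrated numerically. The sketch below uses a synthetic four-step phase-shifting sequence with drifting mean and contrast; it is an illustrative model with invented drift values, not the authors' reconstruction code:

    ```python
    import numpy as np

    N = 1024
    x = np.arange(N)
    phi = 2 * np.pi * 5 * x / N                 # true phase: 5 full fringes
    shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
    gain = np.array([1.0, 0.9, 1.1, 0.95])      # drifting contrast/intensity scale
    offset = np.array([0.0, 0.1, -0.05, 0.2])   # drifting background level

    # Simulated holograms I_k = gain_k * (A + B cos(phi + delta_k)) + offset_k
    frames = np.array([g * (1.0 + 0.5 * np.cos(phi + d)) + o
                       for g, d, o in zip(gain, shifts, offset)])

    def four_step_phase(I):
        return np.arctan2(I[3] - I[1], I[0] - I[2])

    def wrapped_error(est, true):
        return np.abs(np.angle(np.exp(1j * (est - true))))

    raw_err = wrapped_error(four_step_phase(frames), phi).max()

    # Correction: normalize each hologram's mean and contrast before phase extraction
    norm = (frames - frames.mean(axis=1, keepdims=True)) / frames.std(axis=1, keepdims=True)
    corr_err = wrapped_error(four_step_phase(norm), phi).max()
    print(raw_err, corr_err)   # normalization removes the drift-induced phase error
    ```

    With the drift left in, the extracted phase is visibly distorted; after normalizing each frame, the four-step formula recovers the phase essentially exactly.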

  3. Accuracy Improvement of Real-Time Location Tracking for Construction Workers

    Hyunsoo Kim

    2018-05-01

    Full Text Available Extensive research has been conducted on real-time locating systems (RTLS) for tracking construction components, including workers, equipment, and materials, in order to improve construction performance (e.g., productivity improvement or accident prevention). In order to prevent safety accidents and make construction job sites more sustainable, higher RTLS accuracy is required. To improve the accuracy of RTLS in construction projects, this paper presents a RTLS using radio frequency identification (RFID). To this end, this paper develops a location tracking error mitigation algorithm and presents the concept of using assistant tags. The applicability and effectiveness of the developed RTLS are tested under eight different construction environments, and the test results confirm the system's strong potential for improving the accuracy of real-time location tracking in construction projects, thus enhancing construction performance.

  4. Improvement of the accuracy of noise measurements by the two-amplifier correlation method.

    Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P

    2013-10-01

    We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.
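    The core of the two-amplifier cross-correlation idea (channel-private amplifier noise averages out of the cross product, leaving only the common DUT noise) can be sketched with synthetic data. A minimal illustration with invented noise levels, not the authors' instrument chain:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    dut_noise = rng.normal(0.0, 1.0, n)          # common signal: DUT noise, variance 1
    ch1 = dut_noise + rng.normal(0.0, 1.0, n)    # channel 1 adds its own amplifier noise
    ch2 = dut_noise + rng.normal(0.0, 1.0, n)    # channel 2 adds independent noise

    direct_estimate = np.mean(ch1 ** 2)          # biased: includes amplifier noise (~2)
    cross_estimate = np.mean(ch1 * ch2)          # uncorrelated noise averages out (~1)
    print(round(direct_estimate, 2), round(cross_estimate, 2))
    ```

    A single-channel power estimate is biased upward by the amplifier's own noise, while the cross estimate converges to the DUT noise power alone as the record length grows.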

  5. Just add water: Accuracy of analysis of diluted human milk samples using mid-infrared spectroscopy.

    Smith, R W; Adamkin, D H; Farris, A; Radmacher, P G

    2017-01-01

    To determine the maximum dilution of human milk (HM) that yields reliable results for protein, fat and lactose when analyzed by mid-infrared spectroscopy. De-identified samples of frozen HM were obtained. Milk was thawed and warmed (40°C) prior to analysis. Undiluted (native) HM was analyzed by mid-infrared spectroscopy for macronutrient composition: total protein (P), fat (F), carbohydrate (C); Energy (E) was calculated from the macronutrient results. Subsequent analyses were done with 1 : 2, 1 : 3, 1 : 5 and 1 : 10 dilutions of each sample with distilled water. Additional samples were sent to a certified lab for external validation. Quantitatively, F and P showed statistically significant but clinically non-critical differences in 1 : 2 and 1 : 3 dilutions. Differences at higher dilutions were statistically significant and deviated from native values enough to render those dilutions unreliable. External validation studies also showed statistically significant but clinically unimportant differences at 1 : 2 and 1 : 3 dilutions. The Calais Human Milk Analyzer can be used with HM samples diluted 1 : 2 and 1 : 3 and return results within 5% of values from undiluted HM. At a 1 : 5 or 1 : 10 dilution, however, results vary as much as 10%, especially with P and F. At the 1 : 2 and 1 : 3 dilutions these differences appear to be insignificant in the context of nutritional management. However, the accuracy and reliability of the 1 : 5 and 1 : 10 dilutions are questionable.

  6. Accuracy of Endometrial Sampling in Endometrial Carcinoma: A Systematic Review and Meta-analysis.

    Visser, Nicole C M; Reijnen, Casper; Massuger, Leon F A G; Nagtegaal, Iris D; Bulten, Johan; Pijnenborg, Johanna M A

    2017-10-01

    To assess the agreement between preoperative endometrial sampling and final diagnosis for tumor grade and subtype in patients with endometrial carcinoma. MEDLINE, EMBASE, ClinicalTrials.gov, and the Cochrane library were searched from inception to January 1, 2017, for studies that compared tumor grade and histologic subtype in preoperative endometrial samples and hysterectomy specimens. In eligible studies, the index test included office endometrial biopsy, hysteroscopic biopsy, or dilatation and curettage; the reference standard was hysterectomy. Outcome measures included tumor grade, histologic subtype, or both. Two independent reviewers assessed the eligibility of the studies. Risk of bias was assessed (Quality Assessment of Diagnostic Accuracy Studies). A total of 45 studies (12,459 patients) met the inclusion criteria. The pooled agreement rate on tumor grade was 0.67 (95% CI 0.60-0.75) and Cohen's κ was 0.45 (95% CI 0.34-0.55). Agreement between hysteroscopic biopsy and final diagnosis was higher (0.89, 95% CI 0.80-0.98) than for dilatation and curettage (0.70, 95% CI 0.60-0.79; P=.02); however, it was not significantly higher than for office endometrial biopsy (0.73, 95% CI 0.60-0.86; P=.08). The lowest agreement rate was found for grade 2 carcinomas (0.61, 95% CI 0.53-0.69). Downgrading was found in 25% and upgrading was found in 21% of the endometrial samples. Agreement on histologic subtypes was 0.95 (95% CI 0.94-0.97) and 0.81 (95% CI 0.69-0.92) for preoperative endometrioid and nonendometrioid carcinomas, respectively. Overall there is only moderate agreement on tumor grade between preoperative endometrial sampling and final diagnosis with the lowest agreement for grade 2 carcinomas.
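    The agreement statistic pooled in this review (Cohen's κ) corrects raw agreement for chance. A small worked sketch with an invented 2×2 grading table, not data from the review:

    ```python
    import numpy as np

    def cohens_kappa(table: np.ndarray) -> float:
        """Cohen's kappa from a square rater-agreement contingency table."""
        table = table.astype(float)
        n = table.sum()
        p_observed = np.trace(table) / n
        p_expected = (table.sum(axis=1) @ table.sum(axis=0)) / n**2
        return (p_observed - p_expected) / (1.0 - p_expected)

    # Rows: grade on preoperative sample, columns: grade at hysterectomy (invented counts)
    table = np.array([[20, 5],
                      [10, 15]])
    print(round(cohens_kappa(table), 3))  # 0.4
    ```

    Here raw agreement is 70%, yet κ is only 0.4, which illustrates why the review reports both the pooled agreement rate (0.67) and the lower Cohen's κ (0.45).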

  7. An improved sampling system installed for reprocessing

    Finsterwalder, L.; Zeh, H.

    1979-03-01

    Sampling devices are needed for taking representative samples from individual process containers during the reprocessing of irradiated fuel. The aqueous process stream in a reprocessing plant frequently contains, in addition to the dissolved radioactive materials, more or less small quantities of solid matter: a fraction of fuel material still remaining undissolved; insoluble fission, corrosion, or degradation products; and, in exceptional cases, ion exchange resin or silica gel. The solid matter is deposited partly on the upper surfaces of the sampling system, and the resulting radiation makes maintenance and repair of the sampler more difficult. The purpose of the development work was to reduce the chance of accidents, reduce maintenance costs, and lower the radiation exposure of the personnel. A new sampling system was developed and is described. (author)

  8. Coval: improving alignment quality and variant calling accuracy for next-generation sequencing data.

    Shunichi Kosugi

    Full Text Available Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in 'targeted' alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/.

  9. IMPROVED MOTOR-TIMING: EFFECTS OF SYNCHRONIZED METRO-NOME TRAINING ON GOLF SHOT ACCURACY

    Louise Rönnqvist

    2009-12-01

    Full Text Available This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers (mean age 27 years; mean golf handicap 12.6) participated in this study. Pre- and post-test investigations of golf shots made with three different clubs were conducted using a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing; the golfers in the Control group merely trained their golf swings during the same time period. No differences between the two groups were found in the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in performance. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study's results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf shot accuracy.

  10. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T such that A = QTQ^T. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and to improve or compute the eigenvector x in O(n^2) work, where n is the order of the matrix A.
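    The flavor of iterative eigenpair refinement can be shown with a few steps of Rayleigh-quotient iteration in NumPy. This is a simplified stand-in, not a port of SICEDR (which works from the Schur factors Q and T rather than solving with A directly):

    ```python
    import numpy as np

    def refine_eigenpair(A, lam, x, iters=3):
        """Refine an approximate eigenpair (lam, x) of A by Rayleigh-quotient iteration."""
        for _ in range(iters):
            y = np.linalg.solve(A - lam * np.eye(len(A)), x)  # inverse-iteration step
            x = y / np.linalg.norm(y)
            lam = x @ A @ x                                   # Rayleigh-quotient update
        return lam, x

    # Symmetric test matrix with eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])

    # Start from a crude approximation of the smallest eigenpair and refine it
    lam, x = refine_eigenpair(A, lam=1.3, x=np.array([1.0, 0.0, 0.0]))
    print(abs(lam - (3 - np.sqrt(3))))
    ```

    Each pass costs one linear solve, which echoes the subroutine's analogy with iterative improvement for linear systems.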

  11. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Clarissa Ann Thompson

    2016-01-01

Full Text Available Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effects of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.

  12. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
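The planning loop for the first method (choose the smallest n whose expected confidence interval width is at most W) can be sketched as follows. As a labeled assumption, the sketch uses Feldt's F-based interval for coefficient alpha, evaluated at a hypothesized reliability, as a stand-in for the composite reliability interval developed in the paper; the names `feldt_width` and `plan_n` are hypothetical.

```python
from scipy.stats import f

def feldt_width(alpha_hat, n, k, conf=0.95):
    # Width of Feldt's F-based confidence interval for coefficient
    # alpha with n subjects and k items, evaluated at alpha_hat.
    df1, df2 = n - 1, (n - 1) * (k - 1)
    lo_q = f.ppf((1 - conf) / 2, df1, df2)
    hi_q = f.ppf(1 - (1 - conf) / 2, df1, df2)
    lower = 1 - (1 - alpha_hat) * hi_q
    upper = 1 - (1 - alpha_hat) * lo_q
    return upper - lower

def plan_n(alpha_hat, k, target_width, conf=0.95):
    # Smallest n whose expected CI width does not exceed target_width.
    n = k + 2
    while feldt_width(alpha_hat, n, k, conf) > target_width:
        n += 1
    return n
```

For example, with a hypothesized alpha of 0.80 on a 10-item composite, `plan_n(0.8, 10, 0.2)` returns the smallest sample size whose expected 95% interval is no wider than 0.2; the assurance variant in the paper would additionally require that the width be met with, say, 99% probability.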

  13. Accuracy improvement of irradiation data by combining ground and satellite measurements

    Betcke, J. [Energy and Semiconductor Research Laboratory, Carl von Ossietzky University, Oldenburg (Germany); Beyer, H.G. [Department of Electrical Engineering, University of Applied Science (F.H.) Magdeburg-Stendal, Magdeburg (Germany)

    2004-07-01

Accurate and site-specific irradiation data are essential input for optimal planning, monitoring and operation of solar energy technologies. A concrete example is the performance check of grid-connected PV systems with the PVSAT-2 procedure. This procedure detects system faults at an early stage by a daily comparison of an individual reference yield with the actual yield. Calculation of the reference yield requires hourly irradiation data with a known accuracy. A field test of the preceding PVSAT-1 procedure showed that the accuracy of the irradiation input is the determining factor for the overall accuracy of the yield calculation. In this paper we will investigate whether it is possible to improve the accuracy of site-specific irradiation data by combining accurate localized pyranometer data with semi-continuous satellite data. We will therefore introduce the "Kriging of Differences" data fusion method. Kriging of Differences also offers the possibility to estimate its own accuracy. The obtainable accuracy gain and the effectiveness of the accuracy prediction will be investigated by validation on monthly and daily irradiation datasets. Results will be compared with the Heliosat method and interpolation of ground data. (orig.)
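The fusion idea can be sketched as follows: compute the ground-minus-satellite residuals at the pyranometer stations, spatially interpolate them to the site of interest, and correct the satellite estimate there. For brevity this sketch replaces the kriging interpolator with inverse-distance weighting, so it is only a stand-in for the actual Kriging of Differences method (and omits its accuracy estimate); all names are illustrative.

```python
import numpy as np

def fuse_irradiation(sat_at_target, station_xy, ground_obs, sat_at_stations,
                     target_xy, power=2.0):
    # Residuals between accurate ground measurements and the satellite
    # product at the station locations.
    diffs = ground_obs - sat_at_stations
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    if np.any(d == 0):                    # target coincides with a station
        return ground_obs[int(np.argmin(d))]
    w = 1.0 / d ** power                  # IDW weights (kriging stand-in)
    correction = np.sum(w * diffs) / np.sum(w)
    return sat_at_target + correction
```

If, for instance, the satellite product systematically overestimates by 10 W/m^2 at all surrounding stations, the fused value at a nearby site is pulled down by roughly that bias.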

  14. Training readers to improve their accuracy in grading Crohn's disease activity on MRI

    Tielbeek, Jeroen A.W.; Bipat, Shandra; Boellaard, Thierry N.; Nio, C.Y.; Stoker, Jaap

    2014-01-01

    To prospectively evaluate if training with direct feedback improves grading accuracy of inexperienced readers for Crohn's disease activity on magnetic resonance imaging (MRI). Thirty-one inexperienced readers assessed 25 cases as a baseline set. Subsequently, all readers received training and assessed 100 cases with direct feedback per case, randomly assigned to four sets of 25 cases. The cases in set 4 were identical to the baseline set. Grading accuracy, understaging, overstaging, mean reading times and confidence scores (scale 0-10) were compared between baseline and set 4, and between the four consecutive sets with feedback. Proportions of grading accuracy, understaging and overstaging per set were compared using logistic regression analyses. Mean reading times and confidence scores were compared by t-tests. Grading accuracy increased from 66 % (95 % CI, 56-74 %) at baseline to 75 % (95 % CI, 66-81 %) in set 4 (P = 0.003). Understaging decreased from 15 % (95 % CI, 9-23 %) to 7 % (95 % CI, 3-14 %) (P < 0.001). Overstaging did not change significantly (20 % vs 19 %). Mean reading time decreased from 6 min 37 s to 4 min 35 s (P < 0.001). Mean confidence increased from 6.90 to 7.65 (P < 0.001). During training, overall grading accuracy, understaging, mean reading times and confidence scores improved gradually. Inexperienced readers need training with at least 100 cases to achieve the literature reported grading accuracy of 75 %. (orig.)

  15. Optical vector network analyzer with improved accuracy based on polarization modulation and polarization pulling.

    Li, Wei; Liu, Jian Guo; Zhu, Ning Hua

    2015-04-15

    We report a novel optical vector network analyzer (OVNA) with improved accuracy based on polarization modulation and stimulated Brillouin scattering (SBS) assisted polarization pulling. The beating between adjacent higher-order optical sidebands which are generated because of the nonlinearity of an electro-optic modulator (EOM) introduces considerable error to the OVNA. In our scheme, the measurement error is significantly reduced by removing the even-order optical sidebands using polarization discrimination. The proposed approach is theoretically analyzed and experimentally verified. The experimental results show that the accuracy of the OVNA is greatly improved compared to a conventional OVNA.

  16. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
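A single-sided acceptance-sampling-by-variables plan accepts a lot when the sample mean sits at least k sample standard deviations inside the specification limit, and empirically estimating a plan's probability of acceptance is the kind of check used to test such calculators. A minimal sketch (the "k-method" form and all names are illustrative, not the NESC calculators themselves):

```python
import random
import statistics

def accepts(sample, usl, k):
    # Variables sampling, k-method: accept if (USL - xbar) / s >= k.
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    return (usl - xbar) / s >= k

def acceptance_probability(mu, sigma, n, usl, k, trials=20000, seed=1):
    # Monte Carlo estimate of the probability of acceptance for a
    # normally distributed quality characteristic.
    rng = random.Random(seed)
    hits = sum(
        accepts([rng.gauss(mu, sigma) for _ in range(n)], usl, k)
        for _ in range(trials)
    )
    return hits / trials
```

Sweeping the lot mean over a range of values traces out the plan's operating characteristic curve, which is what an empirical test compares against a calculator's output.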

  17. A model to improve the accuracy of US Poison Center data collection.

    Krenzelok, E P; Reynolds, K M; Dart, R C; Green, J L

    2014-01-01

Over 2 million human exposure calls are reported annually to United States regional poison information centers. All exposures are documented electronically and submitted to the American Association of Poison Control Centers' National Poison Data System. This database represents the largest data source available on the epidemiology of pharmaceutical and non-pharmaceutical poisoning exposures. The accuracy of these data is critical; however, research has demonstrated that inconsistencies and inaccuracies exist. This study outlines the methods and results of a training program that was developed and implemented to enhance the quality of data collection, using acetaminophen exposures as a model. Eleven poison centers were assigned randomly to receive either passive or interactive education to improve medical record documentation. A task force provided recommendations on educational and training strategies and on the development of a quality-measurement scorecard to serve as a data collection tool for assessing poison center data quality. Poison centers were recruited to participate in the study, and clinical researchers scored the documentation of each exposure record for accuracy. Two thousand two hundred cases were reviewed and assessed for accuracy of data collection. After training, the overall mean quality scores were higher for both the passive (95.3%; +1.6% change) and interactive intervention groups (95.3%; +0.9% change). Data collection accuracy improved modestly for the overall accuracy score and significantly for the substance identification component. There was little difference in accuracy measures between the different training methods. Despite the diversity of poison centers, data accuracy, specifically in substance identification data fields, can be improved by developing a standardized, systematic, targeted, and mandatory training process. This process should be considered for training on other important topics, thus enhancing the value of these data in

  18. Evaluation of precision and accuracy of neutron activation analysis method of environmental samples analysis

    Wardani, Sri; Rina M, Th.; L, Dyah

    2000-01-01

Evaluation of the precision and accuracy of the Neutron Activation Analysis (NAA) method used by P2TRR was performed by analyzing standard reference materials from the National Institute for Environmental Studies of Japan (NIES CRM No. 10, rice flour) and the National Bureau of Standards of the USA (NBS SRM 1573a, tomato leaves) by the NAA method. Qualitative NAA could identify multiple elements in the contents, namely: Br, Ca, Co, Cl, Cs, Gd, I, K, La, Mg, Mn, Na, Pa, Sb, Sm, Sr, Ta, Th, and Zn (19 elements) for SRM 1573a; As, Br, Cr, Cl, Ce, Co, Cs, Fe, Ga, Hg, K, Mn, Mg, Mo, Na, Ni, Pb, Rb, Sr, Se, Sc, Sb, Ti, and Zn (25 elements) for CRM No. 10a; Ag, As, Br, Cr, Cl, Ce, Cd, Co, Cs, Eu, Fe, Ga, Hg, K, Mg, Mn, Mo, Na, Nb, Pb, Rb, Sb, Sc, Th, Tl, and Zn (26 elements) for CRM No. 10b; and As, Br, Co, Cl, Ce, Cd, Ga, Hg, K, Mn, Mg, Mo, Na, Nb, Pb, Rb, Sb, Se, Tl, and Zn (20 elements) for CRM No. 10c. In the quantitative analysis, only some elements of the sample contents could be determined, namely As, Co, Cd, Mo, Mn, and Zn. The results, compared with the NIES or NBS values, agreed within deviations of 3% to 15%. Overall, the results show that the method and facilities have good capability, but the irradiation facility and the gamma-ray spectrometry software require further development.

  19. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    HosseiniAliabadi S. J.

    2015-06-01

Full Text Available Background: The angular dependence of the response of TLD cards may cause deviations from the true value in environmental dosimetry results, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed at right angles in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result: The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with higher accuracy.

  20. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.

  1. The accuracy of endometrial sampling in women with postmenopausal bleeding: a systematic review and meta-analysis

    van Hanegem, Nehalennia; Prins, Marileen M. C.; Bongers, Marlies Y.; Opmeer, Brent C.; Sahota, Daljit Singh; Mol, Ben Willem J.; Timmermans, Anne

    2016-01-01

    Postmenopausal bleeding (PMB) can be the first sign of endometrial cancer. In case of thickened endometrium, endometrial sampling is often used in these women. In this systematic review, we studied the accuracy of endometrial sampling for the diagnoses of endometrial cancer, atypical hyperplasia and

  2. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    Saghafi, Mahdi; Ghofrani, Mohammad Bagher; D’Auria, Francesco

    2016-01-01

Highlights: • FFTBM-SM is an improved Fast Fourier Transform Base Method by signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was the Station Black-Out accident in the PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Base Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insights into how the accuracy of the MELCOR code in predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of a Station Black-Out (SBO) accident in the PSB-VVER integral test facility by the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time-intervals, based on phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time-intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time-intervals, and the results are compared to those with the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated and correction factors calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time-intervals by a series of increasing time-intervals provides deeper insights into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or

  3. Application of FFTBM with signal mirroring to improve accuracy assessment of MELCOR code

    Saghafi, Mahdi [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); Ghofrani, Mohammad Bagher, E-mail: ghofrani@sharif.edu [Department of Energy Engineering, Sharif University of Technology, Azadi Avenue, Tehran (Iran, Islamic Republic of); D’Auria, Francesco [San Piero a Grado Nuclear Research Group (GRNSPG), University of Pisa, Via Livornese 1291, San Piero a Grado, Pisa (Italy)

    2016-11-15

Highlights: • FFTBM-SM is an improved Fast Fourier Transform Base Method by signal mirroring. • FFTBM-SM has been applied to accuracy assessment of MELCOR code predictions. • The case studied was the Station Black-Out accident in the PSB-VVER integral test facility. • FFTBM-SM eliminates fluctuations of accuracy indices when signals sharply change. • Accuracy assessment is performed in a more realistic and consistent way by FFTBM-SM. - Abstract: This paper deals with the application of the Fast Fourier Transform Base Method (FFTBM) with signal mirroring (FFTBM-SM) to assess the accuracy of the MELCOR code. This provides deeper insights into how the accuracy of the MELCOR code in predictions of thermal-hydraulic parameters varies during transients. The case studied was the modeling of a Station Black-Out (SBO) accident in the PSB-VVER integral test facility by the MELCOR code. The accuracy of this thermal-hydraulic modeling was previously quantified using the original FFTBM in a small number of time-intervals, based on phenomenological windows of the SBO accident. Accuracy indices calculated by the original FFTBM in a series of time-intervals fluctuate unreasonably when the investigated signals sharply increase or decrease. In the current study, the accuracy of the MELCOR code is quantified using FFTBM-SM in a series of increasing time-intervals, and the results are compared to those with the original FFTBM. Also, differences between the accuracy indices of the original FFTBM and FFTBM-SM are investigated and correction factors calculated to eliminate unphysical effects in the original FFTBM. The main findings are: (1) replacing a limited number of phenomena-based time-intervals by a series of increasing time-intervals provides deeper insights into the accuracy variation of the MELCOR calculations, and (2) application of FFTBM-SM for accuracy evaluation of the MELCOR predictions provides more reliable results than the original FFTBM by eliminating the fluctuations of accuracy indices when experimental signals sharply increase or
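The core FFTBM quantity and the effect of mirroring can be sketched in a few lines. The average-amplitude (AA) index below follows the common FFTBM definition (ratio of the error-spectrum magnitude to the experimental-spectrum magnitude); mirroring appends the time-reversed signal before the transform, removing the artificial jump between the last and first samples that makes the original indices fluctuate. Details differ across FFTBM implementations, so treat this as an illustration only.

```python
import numpy as np

def fftbm_aa(exp_signal, calc_signal, mirror=True):
    # Average amplitude: lower AA means a more accurate calculation.
    exp_signal = np.asarray(exp_signal, dtype=float)
    err = np.asarray(calc_signal, dtype=float) - exp_signal
    if mirror:  # FFTBM-SM: extend both signals with their mirror image
        err = np.concatenate([err[::-1], err])
        exp_signal = np.concatenate([exp_signal[::-1], exp_signal])
    err_amp = np.abs(np.fft.rfft(err)).sum()
    exp_amp = np.abs(np.fft.rfft(exp_signal)).sum()
    return err_amp / exp_amp
```

Evaluating AA over a series of increasing time-intervals (`fftbm_aa(exp[:i], calc[:i])` for growing i) reproduces the accuracy-versus-time view used in the paper.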

  4. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets relies either on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured, as well as a loss of information intrinsic to band selection and the use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application to the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in the classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples.
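The dimensionality-reduction-then-classification backbone of the approach can be sketched with scikit-learn: LDA projects the many spectral bands onto a few discriminative image bands, and a Random Forest classifies the result. The texture features extracted from integral-image representations in the paper are omitted, so this is a simplified pixel-wise sketch, not the authors' full pipeline.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def build_classifier(n_classes=3, n_trees=100):
    # LDA yields at most n_classes - 1 discriminative bands.
    return make_pipeline(
        LinearDiscriminantAnalysis(n_components=n_classes - 1),
        RandomForestClassifier(n_estimators=n_trees, random_state=0),
    )
```

Fed with per-pixel spectra labeled healthy / infected / severely diseased, the pipeline mirrors the three-class discrimination reported above; as observed in the paper, accuracy tends to improve with the number of trees.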

  5. Improved sample holders for the PMMA dosimeters

    Kobayashi, Toshikazu; Sone, Koji; Iso, Katsuaki

    1994-01-01

PMMA dosimeters are widely used for high-dose dosimetry. Dose is determined by measuring the change in optical density of the irradiated PMMA dosimeter element. Measurement precision depends on how the dosimeter element is mounted in the sample compartment of the spectrophotometer. We prepared three types of holders (holders A, B and C in Figs. 1-3), according to the shape of the PMMA dosimeter elements, and measured the optical density of the irradiated PMMA dosimeter elements using the three types of holders. The type A holder gives more precise results for the Red 4034 or Gammachrome YR dosimeter than the type B holder. Measurements with a spectrophotometer using the type C holder give better results for the Red acrylic dosimeter than measurements with the dedicated reader. (author)

  6. On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.

    Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis

    2016-07-01

    To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2% with a general TG-43 overestimation except for the spine. These differences are consistent resulting in statistically significant differences of dose volume histogram and radiobiology indices. Absolute differences of these indices are however small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not appear however to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Transthoracic CT-guided biopsy with multiplanar reconstruction image improves diagnostic accuracy of solitary pulmonary nodules

    Ohno, Yoshiharu; Hatabu, Hiroto; Takenaka, Daisuke; Imai, Masatake; Ohbayashi, Chiho; Sugimura, Kazuro

    2004-01-01

Objective: To evaluate the utility of multiplanar reconstruction (MPR) images for CT-guided biopsy and to determine factors influencing diagnostic accuracy and the pneumothorax rate. Materials and methods: 390 patients with 396 pulmonary nodules underwent transthoracic CT-guided aspiration biopsy (TNAB) and transthoracic CT-guided cutting needle core biopsy (TCNB) as follows: 250 solitary pulmonary nodules (SPNs) underwent conventional CT-guided biopsy (conventional method), 81 underwent CT-fluoroscopic biopsy (CT-fluoroscopic method) and 65 underwent conventional CT-guided biopsy in combination with MPR images (MPR method). Success rate, overall diagnostic accuracy, pneumothorax rate and total procedure time were compared for each method. Factors affecting the diagnostic accuracy and pneumothorax rate of CT-guided biopsy were statistically evaluated. Results: Success rates (TNAB: 100.0%, TCNB: 100.0%) and overall diagnostic accuracies (TNAB: 96.9%, TCNB: 97.0%) of the MPR method were significantly higher than those of the conventional method (TNAB: 87.6 and 82.4%, TCNB: 86.3 and 81.3%) (P<0.05). Diagnostic accuracy was influenced by biopsy method, lesion size, and needle path length (P<0.05). Pneumothorax rate was influenced by pathological diagnostic method, lesion size, number of punctures and FEV1.0% (P<0.05). Conclusion: The use of MPR images for CT-guided lung biopsy improves diagnostic accuracy with no significant increase in pneumothorax rate or total procedure time.

  8. Accuracy Improvement of Boron Meter Adopting New Fitting Function and Multi-Detector

    Chidong Kong

    2016-12-01

Full Text Available This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor boron concentration in reactor coolant. However, it is difficult to use the boron meters in practice because the measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first approach uses a new fitting function and the second approach uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. With the multi-detector, the count signals were contaminated with noise, as in field measurement data, and analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.

  9. Accuracy improvement of boron meter adopting new fitting function and multi-detector

    Kong, Chidong; Lee, Hyun Suk; Tak, Tae Woo; Lee, Deok Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); KIm, Si Hwan; Lyou, Seok Jean [Users Incorporated Company, Hansin S-MECA, Daejeon (Korea, Republic of)

    2016-12-15

This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor boron concentration in reactor coolant. However, it is difficult to use the boron meters in practice because the measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first approach uses a new fitting function and the second approach uses a multi-detector. With the new fitting function, the boron concentration error was decreased from 3.30 ppm to 0.73 ppm. With the multi-detector, the count signals were contaminated with noise, as in field measurement data, and analyses were repeated 1,000 times to obtain averages and standard deviations of the boron concentration errors. Finally, using the new fitting function and multi-detector together, the average error was decreased from 5.95 ppm to 1.83 ppm and its standard deviation was decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.

  10. The contribution of educational class in improving accuracy of cardiovascular risk prediction across European regions

    Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E

    2014-01-01

    OBJECTIVE: To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. METHODS: In a pooled analysis of 68 455 40-64-year-old men and women, free from coronary heart disease at baseline, from 47...

  11. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  12. New polymorphic tetranucleotide microsatellites improve scoring accuracy in the bottlenose dolphin Tursiops aduncus

    Nater, Alexander; Kopps, Anna M.; Kruetzen, Michael

    We isolated and characterized 19 novel tetranucleotide microsatellite markers in the Indo-Pacific bottlenose dolphin (Tursiops aduncus) in order to improve genotyping accuracy in applications like large-scale population-wide paternity and relatedness assessments. One hundred T. aduncus from Shark

  13. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  14. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    Mingjun Deng

    2017-12-01

    Full Text Available The Chinese Gaofen-3 (GF-3 mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method.

  15. Evaluation of scanning 2D barcoded vaccines to improve data accuracy of vaccines administered.

    Daily, Ashley; Kennedy, Erin D; Fierro, Leslie A; Reed, Jenica Huddleston; Greene, Michael; Williams, Warren W; Evanson, Heather V; Cox, Regina; Koeppl, Patrick; Gerlach, Ken

    2016-11-11

    Accurately recording vaccine lot number, expiration date, and product identifiers in patient records is an important step in improving supply chain management and patient safety in the event of a recall. These data are being encoded in two-dimensional (2D) barcodes on most vaccine vials and syringes. Using electronic vaccine administration records, we evaluated the accuracy of lot number and expiration date entered using 2D barcode scanning compared to traditional manual or drop-down list entry methods. We analyzed 128,573 electronic records of vaccines administered at 32 facilities. We compared the accuracy of records entered using 2D barcode scanning with those entered using traditional methods using chi-square tests and multilevel logistic regression. When 2D barcodes were scanned, lot number data accuracy was 1.8 percentage points higher (94.3% vs. 96.1%). Manufacturer, month vaccine was administered, and vaccine type were associated with variation in accuracy for both lot number and expiration date. Two-dimensional barcode scanning shows promise for improving data accuracy of vaccine lot number and expiration date records. Adapting systems to further integrate with 2D barcoding could help increase adoption of 2D barcode scanning technology. Published by Elsevier Ltd.
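
    The chi-square comparison of two accuracy proportions described above can be sketched as follows. The counts are hypothetical, scaled to the reported 94.3% vs. 96.1% accuracy figures, not the study's actual record counts.

```python
def chi2_2x2(correct_a, total_a, correct_b, total_b):
    """Pearson chi-square statistic for a 2x2 table of correct/incorrect counts."""
    table = [[correct_a, total_a - correct_a],
             [correct_b, total_b - correct_b]]
    n = total_a + total_b
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# hypothetical counts matching the reported accuracy rates
stat = chi2_2x2(9610, 10000, 9430, 10000)   # scanned vs. manual entry
print(round(stat, 1))  # 35.5 — well above 3.84, so p < 0.05 at 1 df
```

    With one degree of freedom, a statistic above 3.84 corresponds to p < 0.05, so even a 1.8-percentage-point gap is detectable at these sample sizes.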

  16. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R

    2010-09-29

    Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Colonies allowed to explore a high-quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single-choice emigrations and higher colony cohesion in binary-choice emigrations. Additionally, colonies allowed to explore both high- and low-quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to simultaneously improve all aspects of the decision-making process (i.e. speed, accuracy and cohesion) and partly circumvent the speed-accuracy trade-off classically observed during emigrations. These findings should be taken into account

  17. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    Rodriguez, A. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Ranallo, F. N. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Judy, P. F. [Brigham and Women’s Hospital, Boston, Massachusetts 02115 (United States); Gierada, D. S. [Department of Radiology, Washington University, St. Louis, Missouri 63110 (United States); Fain, S. B., E-mail: sfain@wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering,University of Wisconsin School of Engineering, Madison, Wisconsin 53706 (United States)

    2014-11-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel.

  18. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-01-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel

  19. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    Kieran A Monaghan

    Full Text Available Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body-size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices applying the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes are a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data
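
    A minimal sketch of a BMWP-style weighted-average index illustrates how the derived value depends on which indicator taxa are actually collected. The tolerance scores below are illustrative stand-ins, not the official BMWP table.

```python
# Illustrative tolerance scores (NOT the official BMWP table):
# higher values indicate pollution-intolerant families.
TOLERANCE = {"Heptageniidae": 10, "Gammaridae": 6, "Baetidae": 4,
             "Asellidae": 3, "Chironomidae": 2}

def bmwp_style_index(families_present):
    scores = [TOLERANCE[f] for f in families_present if f in TOLERANCE]
    site_score = sum(scores)            # richness-sensitive total
    aspt = site_score / len(scores)     # Average Score Per Taxon
    return site_score, aspt

# Failing to collect a large, intolerant family (e.g. through sampler
# mesh-size bias) pulls both the site score and the ASPT downward.
print(bmwp_style_index(["Heptageniidae", "Gammaridae", "Baetidae"]))
print(bmwp_style_index(["Gammaridae", "Baetidae", "Asellidae"]))
```

    The same site thus scores very differently depending on sampling bias, which is exactly the risk the abstract describes.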

  20. Improvement on the accuracy of beam bugs in linear induction accelerator

    Xie Yutong; Dai Zhiyong; Han Qing

    2002-01-01

    In linear induction accelerators, the resistive wall monitors known as 'beam bugs' have been used as essential diagnostics of beam current and location. The authors present a new method that can improve the accuracy of these beam bugs when used for beam position measurements. With a fine beam simulation set, this method locates the beam position with an accuracy of 0.02 mm and thus can calibrate the beam bugs very well. Experimental results show that the precision of beam position measurements can reach the submillimeter level

  1. Improve accuracy and sensibility in glycan structure prediction by matching glycan isotope abundance

    Xu Guang; Liu Xin; Liu Qingyan; Zhou Yanhong; Li Jianjun

    2012-01-01

    Highlights: ► A glycan isotope pattern recognition strategy for glycomics. ► A new data preprocessing procedure to detect ion peaks in a given MS spectrum. ► A linear soft margin SVM classification for isotope pattern recognition. - Abstract: Mass Spectrometry (MS) is a powerful technique for the determination of glycan structures and is capable of providing qualitative and quantitative information. Recent developments in computational methods offer an opportunity to use glycan structure databases and de novo algorithms for extracting valuable information from MS or MS/MS data. However, detecting low-intensity peaks that are buried in noisy data sets is still a challenge, and an algorithm for accurate prediction and annotation of glycan structures from MS data is highly desirable. The present study describes a novel algorithm for glycan structure prediction by matching glycan isotope abundance (mGIA), which takes isotope masses, abundances, and spacing into account. We constructed a comprehensive database containing 808 glycan compositions and their corresponding isotope abundances. Unlike most previously reported methods, we took into account not only the m/z values of the peaks but also the logarithmic Euclidean distance between the calculated and detected isotope vectors. Evaluation against a linear classifier, obtained by training the mGIA algorithm with datasets of three different human tissue samples from the Consortium for Functional Glycomics (CFG) in association with a Support Vector Machine (SVM), was proposed to improve the accuracy of automatic glycan structure annotation. In addition, an effective data preprocessing procedure, including baseline subtraction, smoothing, peak centroiding and composition matching for extracting correct isotope profiles from MS data, was incorporated. The algorithm was validated by analyzing the mouse kidney MS data from the CFG, resulting in the identification of 6 more glycan compositions than the previous annotation
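
    The isotope-pattern matching step can be illustrated with a small sketch: compare a calculated isotope-abundance vector against a detected one using a Euclidean distance on a log scale. The normalization and the envelope values here are hypothetical, not the mGIA paper's exact formulation.

```python
from math import log, sqrt

def log_euclidean_distance(calc, detected):
    """Distance between two isotope-abundance envelopes on a log scale
    (a hypothetical form of the mGIA matching score)."""
    c = [v / max(calc) for v in calc]          # normalize to base peak
    d = [v / max(detected) for v in detected]
    return sqrt(sum((log(a) - log(b)) ** 2
                    for a, b in zip(c, d) if a > 0 and b > 0))

theoretical   = [1.00, 0.55, 0.18, 0.04]   # hypothetical calculated envelope
observed_good = [1.00, 0.53, 0.19, 0.05]   # plausible match
observed_bad  = [1.00, 0.90, 0.60, 0.30]   # distorted, e.g. overlapping peak
print(log_euclidean_distance(theoretical, observed_good))
print(log_euclidean_distance(theoretical, observed_bad))
```

    A candidate composition would then be accepted only when this distance falls below a threshold learned by the classifier, which is where the SVM step comes in.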

  2. Relationship between accuracy and number of samples on statistical quantity and contour map of environmental gamma-ray dose rate. Example of random sampling

    Matsuda, Hideharu; Minato, Susumu

    2002-01-01

    The accuracy of statistical quantities, such as the mean value and the contour map, obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling from 5 model distribution maps generated with the mean slope, -1.3, of the power spectra calculated from actually measured values. The models were derived from 58 natural gamma dose-rate data sets reported worldwide, with means in the range of 10-100 nGy/h and areas of 10⁻³-10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent numbers), and the standard deviation had an accuracy of less than 1/4-1/3 of the mean. The correlation coefficient of the frequency distribution was 0.860 or more for 200-400 samplings (the most frequent numbers), but that of the contour map was only 0.502-0.770. (K.H.)
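
    The dependence of sample-mean accuracy on the number of samplings can be reproduced with a quick Monte Carlo sketch on a synthetic dose-rate field; the field's mean and spread here are hypothetical, not the paper's model maps.

```python
import random

random.seed(0)

# synthetic "dose-rate map": 10,000 grid values (hypothetical units)
field = [50 + random.gauss(0, 15) for _ in range(10_000)]
true_mean = sum(field) / len(field)

def relative_error_of_sample_mean(n, trials=500):
    """Mean relative error of the sample mean for n random samplings."""
    errs = []
    for _ in range(trials):
        sample = random.sample(field, n)
        m = sum(sample) / n
        errs.append(abs(m - true_mean) / true_mean)
    return sum(errs) / trials

# accuracy improves roughly as 1/sqrt(n)
for n in (20, 80, 320):
    print(n, round(100 * relative_error_of_sample_mean(n), 1), "%")
```

    Quadrupling the number of samplings roughly halves the relative error, which is why the ±7% figure above is tied to a specific sampling count.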

  3. Accuracy of reported food intake in a sample of 7-10 year-old children in Serbia.

    Šumonja, S; Jevtić, M

    2016-09-01

    Children's ability to recall and report dietary intake is affected by age and cognitive skills. Dietary intake reporting accuracy in children is associated with age, weight status, cognitive, behavioural and social factors, and dietary assessment techniques. This study analysed the accuracy of 7-10-year-old children's reported food intake for one day. Validation study. The sample included 94 children aged 7-10 years (median = 9 years) from two elementary schools in a local community in Serbia. The 'My meals for one day' questionnaire was a combination of a 24-h recall and a food recognition form. It included recalls for five meals: breakfast at home; snack at home; lunch at home; snack at school and dinner at home. Parental reports were used as reference information about children's food intake for meals obtained at home, and observation was used to gain reference information for the school meal. Observed and reported amounts were used to calculate the omission rate, intrusion rate, corresponding, over-reported and unreported amounts of energy, correspondence rate and inflation ratio. The overall omission rate (37.5%) was higher than the overall intrusion rate (36.7%). The same food item (bread) was both the most often correctly reported and the most often omitted food item for breakfast, lunch and dinner. Snack at school had the greatest mean correspondence rate (79.6%) and snack at home the highest mean inflation ratio (90.7%). Most errors in children's recalls were incorrectly reported amounts rather than food items. The questionnaire should be improved to facilitate accurate reporting of amounts. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
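
    Omission and intrusion rates like those above can be sketched at the item level. The definitions and the food items below are simplified assumptions for illustration; the study also weighted these measures by energy amounts.

```python
def recall_accuracy(reference, reported):
    """Item-level omission and intrusion rates (one common formulation)."""
    ref, rep = set(reference), set(reported)
    omission_rate = len(ref - rep) / len(ref)    # eaten but not reported
    intrusion_rate = len(rep - ref) / len(rep)   # reported but not eaten
    return omission_rate, intrusion_rate

eaten = {"bread", "milk", "apple", "cheese"}     # reference (parent/observer)
reported = {"bread", "milk", "cereal"}           # child's recall
om, intr = recall_accuracy(eaten, reported)
print(f"omission {om:.0%}, intrusion {intr:.0%}")  # omission 50%, intrusion 33%
```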

  4. Improving substructure identification accuracy of shear structures using virtual control system

    Zhang, Dongyu; Yang, Yang; Wang, Tingqiang; Li, Hui

    2018-02-01

    Substructure identification is a powerful tool to identify the parameters of a complex structure. Previously, the authors developed an inductive substructure identification method for shear structures. The identification error analysis showed that the identification accuracy of this method is significantly influenced by the magnitudes of two key structural responses near a certain frequency; if these responses are unfavorable, the method cannot provide accurate estimation results. In this paper, a novel method is proposed to improve the substructure identification accuracy by introducing a virtual control system (VCS) into the structure. A virtual control system is a self-balanced system, which consists of some control devices and a set of self-balanced forces. The self-balanced forces counterbalance the forces that the control devices apply on the structure. The control devices are combined with the structure to form a controlled structure used to replace the original structure in the substructure identification, and the self-balanced forces are treated as known external excitations to the controlled structure. By optimally tuning the VCS's parameters, the dynamic characteristics of the controlled structure can be changed such that the original structural responses become more favorable for the substructure identification and, thus, the identification accuracy is improved. A numerical example of a 6-story shear structure is utilized to verify the effectiveness of the VCS-based controlled substructure identification method. Finally, shake table tests are conducted on a 3-story structural model to verify the efficacy of the VCS in enhancing the identification accuracy of the structural parameters.

  5. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes

    Ding, Quan; Besio, Walter G.

    2015-01-01

    Conventional electroencephalography with disc electrodes has major drawbacks, including poor spatial resolution, poor selectivity and low signal-to-noise ratio, that critically limit its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with the potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications, demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, finite element method model analysis was performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results suggest the significance of the increase in Laplacian accuracy with increasing n. PMID:26693200
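
    The gain from adding rings can be seen in a small numerical sketch. The mean of a potential over a ring of radius r expands as M(r) = f(0) + (r²/4)Δf + O(r⁴), so a bipolar estimate uses one ring, while combining rings at r and 2r cancels the fourth-order truncation term. This is the standard textbook construction for a tripolar estimate, not necessarily the paper's exact (4n + 1)-point coefficients.

```python
from math import cos, sin, pi

def ring_mean(f, r, samples=720):
    """Numerical mean of f over a circle of radius r around the origin."""
    return sum(f(r * cos(2 * pi * k / samples), r * sin(2 * pi * k / samples))
               for k in range(samples)) / samples

def laplacian_bipolar(f, r):
    # M(r) ≈ f(0) + (r²/4)·Δf  →  leading-order (one-ring) estimate
    return 4 * (ring_mean(f, r) - f(0, 0)) / r**2

def laplacian_tripolar(f, r):
    # combine rings at r and 2r so the r⁴ truncation term cancels
    d1 = ring_mean(f, r) - f(0, 0)
    d2 = ring_mean(f, 2 * r) - f(0, 0)
    return (16 * d1 - d2) / (3 * r**2)

f = lambda x, y: x**4              # Δf = 12x², exactly 0 at the origin
print(laplacian_bipolar(f, 0.5))   # biased by the 4th-order term
print(laplacian_tripolar(f, 0.5))  # bias cancelled
```

    Extending the same idea to n rings gives the Vandermonde system the abstract refers to: the coefficients are chosen so that all truncation terms up to order 2n vanish.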

  6. Geometric Positioning Accuracy Improvement of ZY-3 Satellite Imagery Based on Statistical Learning Theory

    Niangang Jiao

    2018-05-01

    Full Text Available With the increasing demand for high-resolution remote sensing images for mapping and monitoring the Earth's environment, geometric positioning accuracy improvement plays a significant role in the image preprocessing step. Based on statistical learning theory, we propose a new method to improve the geometric positioning accuracy without ground control points (GCPs). Multi-temporal images from the ZY-3 satellite are tested, and the bias-compensated rational function model (RFM) is applied as the block adjustment model in our experiment. An easy and stable weight strategy and the fast iterative shrinkage-thresholding algorithm (FISTA), which is widely used in the field of compressive sensing, are improved and utilized to define the normal equation matrix and solve it. Then, the residual errors after traditional block adjustment are acquired and tested with the newly proposed inherent error compensation model based on statistical learning theory. The final results indicate that the geometric positioning accuracy of ZY-3 satellite imagery can be improved greatly with our proposed method.
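
    FISTA itself is easy to sketch on a toy problem: minimizing ½(x − b)² + λ|x|, whose exact solution is the soft-threshold of b. The scalar setting below is purely illustrative; the paper applies the scheme to a large block-adjustment normal equation system.

```python
def soft_threshold(x, t):
    """Proximal operator of t·|x| (shrinkage)."""
    return max(abs(x) - t, 0.0) * (1 if x > 0 else -1)

def fista(b, lam, step=1.0, iters=50):
    """FISTA for min_x 0.5*(x - b)**2 + lam*|x| (scalar toy problem)."""
    x_prev, y, t = 0.0, 0.0, 1.0
    for _ in range(iters):
        # gradient step on the smooth part, then proximal (shrinkage) step
        x = soft_threshold(y - step * (y - b), step * lam)
        t_next = (1 + (1 + 4 * t * t) ** 0.5) / 2
        y = x + ((t - 1) / t_next) * (x - x_prev)   # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev

print(fista(3.0, 1.0))  # closed form: soft_threshold(3.0, 1.0) = 2.0
```

    The momentum extrapolation is what distinguishes FISTA from plain ISTA and gives its faster O(1/k²) convergence rate.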

  7. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    A high-dimensional feature space generally degrades classification performance in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary-encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
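
    A gene-masking-style search can be sketched as a binary-encoded genetic algorithm wrapped around a simple classifier. The synthetic data, the nearest-centroid classifier and the GA settings below are hypothetical stand-ins for the paper's actual setup.

```python
import random

random.seed(1)

# synthetic data: features 0-1 are informative, features 2-9 are noise
def make_sample(label):
    informative = [random.gauss(2.0 * label, 0.5) for _ in range(2)]
    noise = [random.gauss(0.0, 1.0) for _ in range(8)]
    return informative + noise, label

data = [make_sample(label) for label in (0, 1) for _ in range(40)]

def accuracy(mask):
    """Nearest-centroid training accuracy using only unmasked features."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cents = {l: [sum(x[i] for x, xl in data if xl == l) / 40 for i in feats]
             for l in (0, 1)}
    correct = sum(
        min((sum((x[i] - cents[l][k]) ** 2 for k, i in enumerate(feats)), l)
            for l in (0, 1))[1] == label
        for x, label in data)
    return correct / len(data)

# binary-encoded GA: truncation selection, one-point crossover, point mutation
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(12)]
for _ in range(15):
    pop.sort(key=accuracy, reverse=True)
    parents = pop[:6]
    children = []
    for _ in range(6):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 10)
        child = a[:cut] + b[cut:]          # one-point crossover
        child[random.randrange(10)] ^= 1   # point mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=accuracy)
print(best, accuracy(best))
```

    The surviving mask tends to keep the informative dimensions and drop the noise, which is the "isolating significant features" effect the abstract describes.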

  8. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.

  9. Using quality scores and longer reads improves accuracy of Solexa read mapping

    Xuan Zhenyu

    2008-02-01

    Full Text Available Abstract Background Second-generation sequencing has the potential to revolutionize genomics and impact all areas of biomedical science. New technologies will make re-sequencing widely available for such applications as identifying genome variations or interrogating the oligonucleotide content of a large sample (e.g. ChIP-sequencing). The increase in speed, sensitivity and availability of sequencing technology brings demand for advances in computational technology to perform associated analysis tasks. The Solexa/Illumina 1G sequencer can produce tens of millions of reads, ranging in length from ~25 to 50 nt, in a single experiment. Accurately mapping the reads back to a reference genome is a critical task in almost all applications. Two sources of information that are often ignored when mapping reads from the Solexa technology are the 3' ends of longer reads, which contain a much higher frequency of sequencing errors, and the base-call quality scores. Results To investigate whether these sources of information can be used to improve accuracy when mapping reads, we developed the RMAP tool, which can map reads having a wide range of lengths and allows base-call quality scores to determine which positions in each read are more important when mapping. We applied RMAP to analyze data re-sequenced from two human BAC regions for varying read lengths and varying criteria for the use of quality scores. RMAP is freely available for download at http://rulai.cshl.edu/rmap/. Conclusion Our results indicate that significant gains in Solexa read mapping performance can be achieved by considering the information in the 3' ends of longer reads, and by appropriately using the base-call quality scores. The RMAP tool we have developed will enable researchers to effectively exploit this information in targeted re-sequencing projects.
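
    The core idea of weighting mismatches by base-call quality can be sketched as below. The scoring scheme, weights and thresholds are simplified assumptions for illustration, not RMAP's actual algorithm.

```python
def map_read(read, quals, genome, max_mismatch_weight=1.0, q_min=20):
    """Slide a read along the genome; a mismatch counts fully only when
    the base call is confident (a simplified quality-aware scheme)."""
    best_pos, best_score = -1, float("inf")
    for pos in range(len(genome) - len(read) + 1):
        score = 0.0
        for i, base in enumerate(read):
            if genome[pos + i] != base:
                # low-quality calls (common at the 3' end) count less
                score += 1.0 if quals[i] >= q_min else 0.25
        if score < best_score:
            best_pos, best_score = pos, score
    return best_pos if best_score <= max_mismatch_weight else -1

genome = "ACGTACGTTTGACCAGT"
read   = "TTGACCAGA"             # last base is a 3'-end sequencing error
quals  = [38] * 8 + [5]          # ...flagged by its low quality score
print(map_read(read, quals, genome))  # maps to position 8 despite the error
```

    Treating every mismatch equally would instead penalize the erroneous 3' base as heavily as a confident one, which is exactly the information loss the paper addresses.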

  10. Accuracy and Precision in Elemental Analysis of Environmental Samples using Inductively Coupled Plasma-Atomic Emission Spectrometry

    Quraishi, Shamsad Begum; Chung, Yong-Sam; Choi, Kwang Soon

    2005-01-01

    Inductively coupled plasma-atomic emission spectrometry (ICP-AES), preceded by microwave digestion, was performed on different environmental Certified Reference Materials (CRMs). The analytical results show that the accuracy and precision of the ICP-AES analysis were acceptable and satisfactory for the soil and hair CRM samples. The relative error of most of the elements in these two CRMs is within 10%, with a few exceptions, and the coefficient of variation is also less than 10%. The z-score, as a measure of analytical performance, was also within the acceptable range (±2). ICP-AES was found to be an inadequate method for the air filter CRM due to incomplete dissolution, the low concentration of elements and the very low mass of the sample. However, real air filter samples could be analyzed with high accuracy and precision by increasing the sample mass during collection. (author)
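
    The figures of merit quoted above (relative error, coefficient of variation, and z-score) can be computed as in this sketch; the replicate values and the certified uncertainty are hypothetical.

```python
def crm_performance(measured, certified, certified_sd):
    """Relative error, coefficient of variation, and z-score for one
    element measured repeatedly against a certified reference value."""
    n = len(measured)
    mean = sum(measured) / n
    sd = (sum((x - mean) ** 2 for x in measured) / (n - 1)) ** 0.5
    rel_error = 100 * (mean - certified) / certified       # % bias (accuracy)
    cv = 100 * sd / mean                                   # % spread (precision)
    z = (mean - certified) / certified_sd                  # |z| <= 2 acceptable
    return rel_error, cv, z

# hypothetical replicate results for one element (mg/kg)
rel, cv, z = crm_performance([101.2, 98.7, 100.5, 99.1],
                             certified=100.0, certified_sd=1.5)
print(f"relative error {rel:+.1f}%, CV {cv:.1f}%, z-score {z:+.2f}")
```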

  11. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a method of background noise estimation based on tensor product expansion with a median and a Monte Carlo simulation. We have previously shown that tensor product expansion with the absolute error method is effective for estimating background noise; however, the conventional method may not estimate the background noise properly. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.

  12. Analysis of prostate cancer localization toward improved diagnostic accuracy of transperineal prostate biopsy

    Yoshiro Sakamoto

    2014-09-01

    Conclusions: The concordance of prostate cancer between prostatectomy specimens and biopsies is comparatively favorable. According to our study, the diagnostic accuracy of transperineal prostate biopsy can be improved in our institute by including the anterior portion of the Apex-Mid and Mid regions in the 12-core biopsy or 16-core biopsy, such that a 4-core biopsy of the anterior portion is included.

  13. Accuracy of the improved quasistatic space-time method checked with experiment

    Kugler, G.; Dastur, A.R.

    1976-10-01

    Recent experiments performed at the Savannah River Laboratory have made it possible to check the accuracy of numerical methods developed to simulate space-dependent neutron transients. The experiments were specifically designed to emphasize delayed neutron holdback. The CERBERUS code using the IQS (Improved Quasistatic) method has been developed to provide a practical yet accurate tool for spatial kinetics calculations of CANDU reactors. The code was tested on the Savannah River experiments and excellent agreement was obtained. (author)

  14. Improving the accuracy of protein secondary structure prediction using structural alignment

    Gallin Warren J

    2006-06-01

    Full Text Available Abstract Background The accuracy of protein secondary structure prediction has steadily improved over the past 30 years. Now many secondary structure prediction methods routinely achieve an accuracy (Q3) of about 75%. We believe this accuracy could be further improved by including structure (as opposed to sequence) database comparisons as part of the prediction process. Indeed, given the large size of the Protein Data Bank (>35,000 sequences), the probability of a newly identified sequence having a structural homologue is actually quite high. Results We have developed a method that performs structure-based sequence alignments as part of the secondary structure prediction process. By mapping the structure of a known homologue (sequence ID >25%) onto the query protein's sequence, it is possible to predict at least a portion of that query protein's secondary structure. By integrating this structural alignment approach with conventional (sequence-based) secondary structure methods and then combining it with a "jury-of-experts" system to generate a consensus result, it is possible to attain very high prediction accuracy. Using a sequence-unique test set of 1644 proteins from EVA, this new method achieves an average Q3 score of 81.3%. Extensive testing indicates this is approximately 4-5% better than any other method currently available. Assessments using non-sequence-unique test sets (typical of those used in proteome annotation or structural genomics) indicate that this new method can achieve a Q3 score approaching 88%. Conclusion By using both sequence and structure databases and by exploiting the latest techniques in machine learning it is possible to routinely predict protein secondary structure with an accuracy well above 80%. A program and web server, called PROTEUS, that performs these secondary structure predictions is accessible at http://wishart.biology.ualberta.ca/proteus. For high throughput or batch sequence analyses, the PROTEUS programs
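
    The "jury-of-experts" consensus and the Q3 score can be sketched directly; the three-state strings below are invented examples, with H = helix, E = strand, C = coil.

```python
from collections import Counter

def consensus(predictions):
    """Per-residue majority vote over several predictors ('jury of experts')."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*predictions))

def q3(predicted, observed):
    """Q3: fraction of residues assigned the correct H/E/C state."""
    return sum(p == o for p, o in zip(predicted, observed)) / len(observed)

observed = "HHHHCCEEEECC"
experts = ["HHHHCCEEEECC",   # structure-based alignment transfer
           "HHHCCCEEEECC",   # sequence-based predictor A
           "HHHHCCEEECCC"]   # sequence-based predictor B
pred = consensus(experts)
print(pred, round(q3(pred, observed), 2))  # HHHHCCEEEECC 1.0
```

    In this toy case each expert alone scores 11/12 while the vote recovers the observed string exactly, illustrating how combining predictors can exceed any individual one.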

  15. Diagnostic Accuracy of the Posttraumatic Stress Disorder Checklist–Civilian Version in a Representative Military Sample

    Karstoft, Karen-Inge; Andersen, Søren B.; Bertelsen, Mette

    2014-01-01

    This study aimed to assess the diagnostic accuracy of the Posttraumatic Stress Disorder Checklist-Civilian Version (PCL-C; Weathers, Litz, Herman, Huska, & Keane, 1993) and to establish the most accurate cutoff for prevalence estimation of posttraumatic stress disorder (PTSD) in a representative...

  16. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Ahmed Elsaadany

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of an artillery projectile, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  17. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of an artillery projectile, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  18. Effectiveness of blood pressure educational and evaluation program for the improvement of measurement accuracy among nurses.

    Rabbia, Franco; Testa, Elisa; Rabbia, Silvia; Praticò, Santina; Colasanto, Claudia; Montersino, Federica; Berra, Elena; Covella, Michele; Fulcheri, Chiara; Di Monaco, Silvia; Buffolo, Fabrizio; Totaro, Silvia; Veglio, Franco

    2013-06-01

    To assess the procedure for measuring blood pressure (BP) among hospital nurses and to determine whether a training program would improve technique and accuracy. 160 nurses from Molinette Hospital were included in the study. The program was based upon theoretical and practical lessons; it lasted one day and was held by trained nurses and physicians who practise in the Hypertension Unit. An evaluation of the nurses' measuring technique and accuracy was performed before and after the program, using a 9-item checklist. Moreover, we calculated the differences between measured and effective BP values before and after the training program. At the baseline evaluation, we observed inadequate performance on some points of clinical BP measurement technique: only 10% of nurses inspected the arm diameter before placing the cuff, 4% measured BP in both arms, 80% placed the head of the stethoscope under the cuff, and 43% did not remove all clothing that covered the location of cuff placement or did not have the patient sit comfortably with legs uncrossed and with back and arms supported. After the training we found a significant improvement in technique for all items. We did not observe any significant difference in measurement knowledge between nurses working in different settings such as medical or surgical departments. Periodic education in BP measurement may be required, and this may significantly improve the technique and consequently the accuracy.

  19. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Qingzhou Mao

    2015-06-01

    In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. This method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The experimental results indicate that this method represents a significant improvement in the accuracy of the MLS in environments that are hostile to GNSS and is also effective for the calibration of misalignment.

  20. Contrast-enhanced spectral mammography improves diagnostic accuracy in the symptomatic setting.

    Tennant, S L; James, J J; Cornford, E J; Chen, Y; Burrell, H C; Hamilton, L J; Girio-Fragkoulakis, C

    2016-11-01

    To assess the diagnostic accuracy of contrast-enhanced spectral mammography (CESM), and gauge its "added value" in the symptomatic setting. A retrospective multi-reader review of 100 consecutive CESM examinations was performed. Anonymised low-energy (LE) images were reviewed and given a score for malignancy. At least 3 weeks later, the entire examination (LE and recombined images) was reviewed. Histopathology data were obtained for all cases. Differences in performance were assessed using receiver operator characteristic (ROC) analysis. Sensitivity, specificity, and lesion size (versus MRI or histopathology) differences were calculated. Seventy-three percent of cases were malignant at final histology, 27% were benign following standard triple assessment. ROC analysis showed improved overall performance of CESM over LE alone, with area under the curve of 0.93 versus 0.83 (p<0.025). CESM showed increased sensitivity (95% versus 84%, p<0.025) and specificity (81% versus 63%, p<0.025) compared to LE alone, with all five readers showing improved accuracy. Tumour size estimation at CESM was significantly more accurate than LE alone, the latter tending to undersize lesions. In 75% of cases, CESM was deemed a useful or significant aid to diagnosis. CESM provides immediately available, clinically useful information in the symptomatic clinic in patients with suspicious palpable abnormalities. Radiologist sensitivity, specificity, and size accuracy for breast cancer detection and staging are all improved using CESM as the primary mammographic investigation. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  1. Remifentanil maintains lower initial delayed nonmatching-to-sample accuracy compared to food pellets in male rhesus monkeys.

    Hutsell, Blake A; Banks, Matthew L

    2017-12-01

    Emerging human laboratory and preclinical drug self-administration data suggest that a history of contingent abused drug exposure impairs performance in operant discrimination procedures, such as delayed nonmatching-to-sample (DNMTS), that are hypothesized to assess components of executive function. However, these preclinical discrimination studies have exclusively used food as the reinforcer and the effects of drugs as reinforcers in these operant procedures are unknown. The present study determined effects of contingent intravenous remifentanil injections on DNMTS performance hypothesized to assess 1 aspect of executive function, working memory. Daily behavioral sessions consisted of 2 components with sequential intravenous remifentanil (0, 0.01-1.0 μg/kg/injection) or food (0, 1-10 pellets) availability in nonopioid dependent male rhesus monkeys (n = 3). Remifentanil functioned as a reinforcer in the DNMTS procedure. Similar delay-dependent DNMTS accuracy was observed under both remifentanil- and food-maintained components, such that higher accuracies were maintained at shorter (0.1-1.0 s) delays and lower accuracies approaching chance performance were maintained at longer (10-32 s) delays. Remifentanil maintained significantly lower initial DNMTS accuracy compared to food. Reinforcer magnitude was not an important determinant of DNMTS accuracy for either remifentanil or food. These results extend the range of experimental procedures under which drugs function as reinforcers. Furthermore, the selective remifentanil-induced decrease in initial DNMTS accuracy is consistent with a selective impairment of attentional, but not memorial, processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Nathalie Stroeymeyt

    BACKGROUND: Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. METHODOLOGY/PRINCIPAL FINDINGS: Colonies allowed to explore a high-quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single-choice emigrations and higher colony cohesion in binary-choice emigrations. Additionally, colonies allowed to explore both high- and low-quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. CONCLUSIONS/SIGNIFICANCE: These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e., speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off.

  3. New technology in dietary assessment: a review of digital methods in improving food record accuracy.

    Stumbo, Phyllis J

    2013-02-01

    Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment encompassed dietary records, written and spoken dietary recalls, FFQ using pencil and paper and more recently computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving accuracy of food records.

  4. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not remove irrelevant features, which leads to a major problem in decision tree classification: over-fitting resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in building a classification model; it is intended to remove irrelevant data in order to improve accuracy. A feature reduction framework is used to simplify high-dimensional data into low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We use principal component analysis (PCA) for feature reduction, to obtain non-correlated features, and the Decision Tree C4.5 algorithm for the classification. In experiments conducted on the UCI cervical cancer data set (858 instances, 36 attributes), we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that the proposed framework enhances classification accuracy, achieving an accuracy rate of 90.70%.
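    The pipeline described above (project onto principal components, then train a decision tree) is easy to sketch with scikit-learn. Two caveats: scikit-learn's `DecisionTreeClassifier` implements CART rather than C4.5, and the synthetic data below merely stands in for the 858 × 36 UCI cervical cancer set, so this is an illustration of the technique, not a reproduction of the paper's result.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 858 samples, 36 features, many of them redundant
X, y = make_classification(n_samples=858, n_features=36, n_informative=8,
                           n_redundant=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline tree on all 36 features vs. tree on 8 PCA components
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
pca_tree = make_pipeline(StandardScaler(), PCA(n_components=8),
                         DecisionTreeClassifier(random_state=0)).fit(X_tr, y_tr)

print("raw tree accuracy:", tree.score(X_te, y_te))
print("PCA tree accuracy:", pca_tree.score(X_te, y_te))
```

Whether PCA actually helps depends on the data; the point of the sketch is the shape of the pipeline, with the component count playing the role of the feature-reduction knob.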

  5. Interactive dedicated training curriculum improves accuracy in the interpretation of MR imaging of prostate cancer

    Akin, Oguz; Zhang, Jingbo; Hricak, Hedvig; Riedl, Christopher C.; Ishill, Nicole M.; Moskowitz, Chaya S.

    2010-01-01

    To assess the effect of interactive dedicated training on radiology fellows' accuracy in assessing prostate cancer on MRI. Eleven radiology fellows, blinded to clinical and pathological data, independently interpreted preoperative prostate MRI studies, scoring the likelihood of tumour in the peripheral and transition zones and extracapsular extension. Each fellow interpreted 15 studies before dedicated training (to supply baseline interpretation accuracy) and 200 studies (10/week) after attending didactic lectures. Expert radiologists led weekly interactive tutorials comparing fellows' interpretations to pathological tumour maps. To assess interpretation accuracy, receiver operating characteristic (ROC) analysis was conducted, using pathological findings as the reference standard. In identifying peripheral zone tumour, fellows' average area under the ROC curve (AUC) increased from 0.52 to 0.66 (after didactic lectures; p < 0.0001) and remained at 0.66 (end of training; p < 0.0001); in the transition zone, their average AUC increased from 0.49 to 0.64 (after didactic lectures; p = 0.01) and to 0.68 (end of training; p = 0.001). In detecting extracapsular extension, their average AUC increased from 0.50 to 0.67 (after didactic lectures; p = 0.003) and to 0.81 (end of training; p < 0.0001). Interactive dedicated training significantly improved accuracy in tumour localization and especially in detecting extracapsular extension on prostate MRI. (orig.)

  7. Improved orientation sampling for indexing diffraction patterns of polycrystalline materials

    Larsen, Peter Mahler; Schmidt, Søren

    2017-01-01

    Orientation mapping is a widely used technique for revealing the microstructure of a polycrystalline sample. The crystalline orientation at each point in the sample is determined by analysis of the diffraction pattern, a process known as pattern indexing. A recent development in pattern indexing … in the presence of noise, it has very high computational requirements. In this article, the computational burden is reduced by developing a method for nearly optimal sampling of orientations. By using the quaternion representation of orientations, it is shown that the optimal sampling problem is equivalent to that of optimally distributing points on a four-dimensional sphere. In doing so, the number of orientation samples needed to achieve a desired indexing accuracy is significantly reduced. Orientation sets at a range of sizes are generated in this way for all Laue groups and are made available online for easy use.

  8. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracies of the various categories on classified remotely sensed images are usually evaluated by two different measures, namely producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent pixels of other categories are not misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed in order to determine whether the UA of the various categories improves on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using IRS LISS III images of two different years, 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years, pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results of the investigation demonstrated that there is significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of spectral information from twice the number of bands from two different years. On the other hand, the classified image of the principal components did not show any improvement in UA compared to the original images.
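    Both measures come straight from the error (confusion) matrix of a classified image: with rows as classified categories and columns as reference categories, producer's accuracy divides each diagonal entry by its column total and user's accuracy divides it by its row total. A small sketch with an invented 3-class matrix:

```python
import numpy as np

# Rows: classified categories; columns: reference categories (made-up counts)
cm = np.array([[50,  5,  2],
               [ 4, 40,  6],
               [ 1,  5, 37]])

producers_accuracy = np.diag(cm) / cm.sum(axis=0)  # per reference category
users_accuracy = np.diag(cm) / cm.sum(axis=1)      # per classified category

print("PA:", producers_accuracy.round(3))
print("UA:", users_accuracy.round(3))
```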

  9. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in the time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. The strain of 3 μϵ within the spatial resolution of 1 cm at the position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
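    The core idea (zero-padding before an inverse FFT to interpolate a cross-correlation and refine its peak position) can be demonstrated on a toy delay-estimation problem. This sketch pads the cross-spectrum symmetrically rather than padding the windowed spatial-domain data on one side as the paper does, so it illustrates the principle rather than the authors' exact scheme:

```python
import numpy as np

def estimate_shift(x, y, pad=32):
    """Estimate the shift of y relative to x with sub-sample resolution:
    zero-pad the cross-spectrum before the inverse FFT, which interpolates
    the cross-correlation and refines the location of its peak."""
    n = len(x)
    cc = np.conj(np.fft.fft(x)) * np.fft.fft(y)   # cross-spectrum
    big = np.zeros(n * pad, dtype=complex)
    big[:n // 2] = cc[:n // 2]                    # positive frequencies
    big[-(n // 2):] = cc[n // 2:]                 # negative frequencies
    corr = np.fft.ifft(big).real                  # interpolated correlation
    k = int(np.argmax(corr))
    if k >= n * pad // 2:                         # unwrap negative lags
        k -= n * pad
    return k / pad

# Band-limited test signal, delayed by 0.3 samples via a spectral phase ramp
n, d = 256, 0.3
t = np.arange(n)
x = np.cos(2 * np.pi * 10 * t / n) + 0.5 * np.sin(2 * np.pi * 23 * t / n)
y = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * np.fft.fftfreq(n) * d)).real

print(estimate_shift(x, y))          # close to 0.3
print(estimate_shift(x, y, pad=1))   # plain FFT grid: rounds to 0.0
```

With `pad=1` the routine degenerates to the ordinary circular correlation and can only return integer lags, which is exactly the limitation the interpolation removes.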

  10. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
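    The behaviour of the two key parameters can be illustrated with a bare-bones sequential sampling simulation. This is a generic random-walk accumulator in the spirit of the models described, not the authors' fitted model; the `threshold` argument plays the role of the Threshold parameter, and a nonzero `start` point is one simple way to mimic a response-bias Criterion:

```python
import math
import random

def trial(drift, threshold, start=0.0, dt=0.01, noise=1.0, rng=random):
    """One trial of a simple sequential-sampling decision: accumulate noisy
    evidence from `start` until it crosses +threshold or -threshold."""
    evidence, t = start, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return evidence > 0, t          # (choice, response time)

def summarize(threshold, n_trials=2000, seed=1):
    """Accuracy and mean RT over many trials with positive drift,
    so the 'correct' response is the upper boundary."""
    rng = random.Random(seed)
    results = [trial(drift=1.0, threshold=threshold, rng=rng)
               for _ in range(n_trials)]
    accuracy = sum(choice for choice, _ in results) / n_trials
    mean_rt = sum(rt for _, rt in results) / n_trials
    return accuracy, mean_rt

print("low threshold :", summarize(0.5))   # faster, less accurate
print("high threshold:", summarize(2.0))   # slower, more accurate
```

Raising the threshold reproduces the speed-accuracy trade-off manipulated by the instructions in the study: responses slow down and accuracy rises.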

  11. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction in convergence time.

  12. Improving Accuracy of Intrusion Detection Model Using PCA and optimized SVM

    Sumaiya Thaseen Ikram

    2016-06-01

    Intrusion detection is essential for providing security to different network domains and is mostly used for locating and tracing intruders. There are many problems with traditional intrusion detection models (IDS), such as low detection capability against unknown network attacks, high false alarm rates and insufficient analysis capability. Hence the major scope of research in this domain is to develop an intrusion detection model with improved accuracy and reduced training time. This paper proposes a hybrid intrusion detection model that integrates principal component analysis (PCA) and a support vector machine (SVM). The novelty of the paper is the optimization of the kernel parameters of the SVM classifier using an automatic parameter selection technique. This technique optimizes the punishment factor (C) and kernel parameter gamma (γ), thereby improving the accuracy of the classifier and reducing the training and testing time. The experimental results obtained on the NSL-KDD and gurekddcup datasets show that the proposed technique performs better, with higher accuracy, faster convergence speed and better generalization. Minimum resources are consumed, as the classifier input requires a reduced feature set for optimum classification. A comparative analysis of hybrid models with the proposed model is also performed.

  13. Improved quantification accuracy for duplex real-time PCR detection of genetically modified soybean and maize in heat processed foods

    CHENG Fang

    2013-04-01

    The real-time PCR technique has been widely used for quantitative GMO detection in recent years. However, the accuracy of GMO quantification based on real-time PCR methods remains a difficult problem, especially for highly processed samples. To develop a suitable and accurate real-time PCR system for highly processed GM samples, we made ameliorations to several real-time PCR parameters, including a re-designed shorter target DNA fragment, similar lengths of the amplified endogenous and exogenous gene targets, and similar GC contents and melting temperatures of the PCR primers and TaqMan probes. Also, a Heat-Treatment Processing Model (HTPM) was established using soybean flour samples containing GM soybean GTS 40-3-2 to validate the effectiveness of the improved real-time PCR system. The test results showed that the quantitative bias of GM content in heat-processed samples was lowered using the new PCR system. The improved duplex real-time PCR was further validated using processed foods derived from GM soybean, and more accurate GM content values in these foods were also achieved. These results demonstrate that the improved duplex real-time PCR is quite suitable for quantitative detection of highly processed food products.

  14. Improved mass resolution and mass accuracy in TOF-SIMS spectra and images using argon gas cluster ion beams.

    Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol

    2016-06-09

    The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic- and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior quality of the mass resolution in a TOF-SIMS spectrum obtained by using Ar-GCIB compared to the one obtained by a bismuth liquid metal cluster ion beam and others makes it difficult to identify unknown peaks because of the mass interference from the neighboring peaks. However, in this study, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks with low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectrum and image generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. By producing spectra analyses of a peptide mixture and bovine serum albumin protein digested with trypsin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration.

  15. Quality systems for radiotherapy: Impact by a central authority for improved accuracy, safety and accident prevention

    Jaervinen, H.; Sipilae, P.; Parkkinen, R.; Kosunen, A.; Jokelainen, I.

    2001-01-01

    High accuracy in radiotherapy is required for the good outcome of the treatments, which in turn implies the need to develop comprehensive Quality Systems for the operation of the clinic. The legal requirements as well as the recommendation by professional societies support this modern approach for improved accuracy, safety and accident prevention. The actions of a national radiation protection authority can play an important role in this development. In this paper, the actions of the authority in Finland (STUK) for the control of the implementation of the new requirements are reviewed. It is concluded that the role of the authorities should not be limited to simple control actions, but comprehensive practical support for the development of the Quality Systems should be provided. (author)

  16. A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction

    Sompop Moonchai

    2015-01-01

    This paper presents a modified grey model GMC(1,n) for use in systems that involve one dependent system behavior and n-1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the system of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified with two case studies: forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrated that the modified GMC(1,n) model achieved higher fitting and prediction accuracy than the conventional GMC(1,n) and D-GMC(1,n) models.

  17. Improving the Acquisition and Management of Sample Curation Data

    Todd, Nancy S.; Evans, Cindy A.; Labasse, Dan

    2011-01-01

    This paper discusses the current sample documentation processes used during and after a mission, examines the challenges and special considerations needed for designing effective sample curation data systems, and looks at the results of a simulated sample return mission and the lessons learned from this simulation. In addition, it introduces a new data architecture for an integrated sample curation data system being implemented at the NASA Astromaterials Acquisition and Curation department and discusses how it improves on existing data management systems.

  18. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Heinz Gallaun

    2015-09-01

    Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.
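The bootstrapped confidence intervals mentioned in the abstract can be sketched with a percentile bootstrap over interpreted sampling units; the unit labels, change rate, and helper names below are invented for illustration and do not reproduce the study's design:

```python
import random

def bootstrap_ci(units, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for `stat` computed over a
    list of sampling units (an illustrative helper, not the paper's code)."""
    rng = random.Random(seed)
    n = len(units)
    reps = []
    for _ in range(n_boot):
        resample = [units[rng.randrange(n)] for _ in range(n)]
        reps.append(stat(resample))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical rare-change stratum: 12 of 200 interpreted units show change.
labels = [1] * 12 + [0] * 188

def change_fraction(units):
    return sum(units) / len(units)

low, high = bootstrap_ci(labels, change_fraction)
```

The interval (low, high) brackets the 6% point estimate; in a stratified design the resampling would be done within each stratum before combining the strata estimates.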

  19. Accuracy assessment of the National Forest Inventory map of Mexico: sampling designs and the fuzzy characterization of landscapes

    Stéphane Couturier

    2009-10-01

    There is no record so far in the literature of a comprehensive method to assess the accuracy of regional-scale Land Cover/Land Use (LCLU) maps in the sub-tropical belt. The elevated biodiversity and the presence of highly fragmented classes hamper the use of sampling designs commonly employed in previous assessments, mainly of temperate zones. A sampling design for assessing the accuracy of the Mexican National Forest Inventory (NFI) map at community level is presented. A pilot study was conducted on the Cuitzeo Lake watershed region covering 400 000 ha of the 2000 Landsat-derived map. Various sampling designs were tested in order to find a trade-off between operational costs, a good spatial distribution of the sample, and the inclusion of all scarcely distributed classes (‘rare classes’). A two-stage sampling design in which the selection of Primary Sampling Units (PSUs) was done under separate schemes for commonly and scarcely distributed classes showed the best characteristics. A total of 2 023 point secondary sampling units were verified against their NFI map label. Issues regarding the assessment strategy and trends of class confusions are discussed.
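The two-stage selection with separate schemes for common and rare classes can be sketched as follows; the strata, PSU identifiers, and allocation numbers are invented for illustration and do not reproduce the NFI pilot design:

```python
import random

def select_two_stage(psus_by_stratum, psu_alloc, n_ssu_per_psu, seed=7):
    """Two-stage stratified selection sketch: stage 1 draws PSUs within each
    stratum (rare-class strata get their own allocation); stage 2 draws
    SSU point locations inside every selected PSU. Ids are hypothetical."""
    rng = random.Random(seed)
    sample = {}
    for stratum, psus in psus_by_stratum.items():
        k = min(psu_alloc[stratum], len(psus))
        chosen = rng.sample(psus, k)                       # stage 1: PSUs
        sample[stratum] = {
            psu: rng.sample(range(100), n_ssu_per_psu)     # stage 2: SSU indices
            for psu in chosen
        }
    return sample

strata = {"common_classes": [f"psu{i}" for i in range(50)],
          "rare_classes":   [f"psu{i}" for i in range(50, 58)]}
alloc = {"common_classes": 10, "rare_classes": 8}          # oversample rare stratum
sample = select_two_stage(strata, alloc, n_ssu_per_psu=5)
```

Oversampling the rare stratum at stage 1 is what guarantees the scarcely distributed classes enough verification points despite their small map area.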

  20. Improved precision and accuracy for microarrays using updated probe set definitions

    Larsson Ola

    2007-02-01

    Abstract Background Microarrays enable high-throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improvements in the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.

  1. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    Hassan, Afifa Afifi

    1981-01-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400°C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  2. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Wang, Yongbo; Wu, Huapeng; Handroos, Heikki

    2013-01-01

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POEs) formula, we extended the POE-based calibration method from serial robot to redundant serial–parallel hybrid robot. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level of the given external measurement device.
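The DE-based identification step can be illustrated with a generic DE/rand/1/bin optimizer fitted to a toy calibration problem; the two-link planar arm, its nominal link lengths, and the chosen "true" errors below are stand-ins for the 10-DOF hybrid robot, not the authors' model:

```python
import math
import random

def de_minimize(cost, bounds, pop_size=30, F=0.7, CR=0.9, iters=200, seed=1):
    """Minimal DE/rand/1/bin optimizer (a generic sketch, not the authors' code)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            tc = cost(trial)
            if tc < costs[i]:                 # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Toy calibration stand-in: recover link-length errors of a 2-link planar
# arm from end-effector positions "measured" under hidden true errors.
TRUE_ERR = (0.03, -0.02)                         # hidden length errors (m)
POSES = [(0.3, 0.8), (1.1, 0.4), (0.7, 1.9)]     # joint angles (rad)

def fk(q1, q2, e1, e2):
    l1, l2 = 1.0 + e1, 0.8 + e2                  # nominal lengths 1.0 m, 0.8 m
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

MEAS = [fk(q1, q2, *TRUE_ERR) for q1, q2 in POSES]

def cost(e):
    return sum((mx - x) ** 2 + (my - y) ** 2
               for (q1, q2), (mx, my) in zip(POSES, MEAS)
               for x, y in [fk(q1, q2, e[0], e[1])])

est, residual = de_minimize(cost, [(-0.1, 0.1), (-0.1, 0.1)])
```

Because DE needs only cost evaluations, no Jacobian, the same loop applies unchanged when the cost is the pose residual of a highly nonlinear hybrid-robot error model.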

  3. Improving the accuracy of acetabular cup implantation using a bulls-eye spirit level.

    Macdonald, Duncan; Gupta, Sanjay; Ohly, Nicholas E; Patil, Sanjeev; Meek, R; Mohammed, Aslam

    2011-01-01

    Acetabular introducers have a built-in inclination of 45 degrees to the handle shaft. With patients in the lateral position, surgeons aim to align the introducer shaft vertical to the floor to implant the acetabulum at 45 degrees. We aimed to determine if a bulls-eye spirit level attached to an introducer improved the accuracy of implantation. A small circular bulls-eye spirit level was attached to the handle of an acetabular introducer. A saw bone hemipelvis was fixed to a horizontal, flat surface. A cement substitute was placed in the acetabulum and subjects were asked to implant a polyethylene cup, aiming to obtain an angle of inclination of 45 degrees. Two attempts were made with the spirit level masked and two with it unmasked. The distance of the air bubble from the spirit level's center was recorded by a single assessor. The angle of inclination of the acetabular component was then calculated. Subjects included both orthopedic consultants and trainees. Twenty-five subjects completed the study. Accuracy of acetabular implantation when using the unmasked spirit level improved significantly in all grades of surgeon. With the spirit level masked, 12 out of 50 attempts were accurate at 45 degrees inclination; 11 out of 50 attempts were "open," with greater than 45 degrees of inclination, and 27 were "closed," with less than 45 degrees. With the spirit level visible, all subjects achieved an inclination angle of exactly 45 degrees. A simple device attached to the handle of an acetabular introducer can significantly improve the accuracy of implantation of a cemented cup into a saw bone pelvis in the lateral position.

  4. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort.

    Mah, K; Danjoux, C E; Manship, S; Makhani, N; Cardoso, M; Sixel, K E

    1998-07-15

    To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac is enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.

  5. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort

    Mah, Katherine; Danjoux, Cyril E.; Manship, Sharan; Makhani, Nadiya; Cardoso, Marlene; Sixel, Katharina E.

    1998-01-01

    Purpose: To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. Methods and Materials: A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac is enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Results: Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Conclusions: Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.

  6. Improving the accuracy of energy baseline models for commercial buildings with occupancy data

    Liang, Xin; Hong, Tianzhen; Shen, Geoffrey Qiping

    2016-01-01

    Highlights: • We evaluated several baseline models predicting energy use in buildings. • Including occupancy data improved accuracy of baseline model prediction. • Occupancy is highly correlated with energy use in buildings. • This simple approach can be used in decision making for energy retrofit projects. - Abstract: More than 80% of energy is consumed during the operation phase of a building’s life cycle, so energy efficiency retrofit for existing buildings is considered a promising way to reduce energy use in buildings. The investment strategies of retrofit depend on the ability to quantify energy savings by “measurement and verification” (M&V), which compares actual energy consumption to how much energy would have been used without retrofit (called the “baseline” of energy use). Although numerous models exist for predicting baseline of energy use, a critical limitation is that occupancy has not been included as a variable. However, occupancy rate strongly influences energy consumption, as emphasized by previous studies. This study develops a new baseline model which is built upon the Lawrence Berkeley National Laboratory (LBNL) model but includes the use of building occupancy data. The study also proposes metrics to quantify the accuracy of prediction and the impacts of variables. However, the results show that including occupancy data does not significantly improve the accuracy of the baseline model, especially for HVAC load. The reasons are discussed further. In addition, sensitivity analysis is conducted to show the influence of parameters in baseline models. The results from this study can help us understand the influence of occupancy on energy use, improve energy baseline prediction by including the occupancy factor, reduce risks of M&V and facilitate investment strategies of energy efficiency retrofit.
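The baseline comparison described above, fitting a regression with and without an occupancy predictor and comparing prediction error, can be sketched with ordinary least squares; the synthetic daily data and coefficients below are invented, and (unlike the study's mixed findings) occupancy helps here by construction:

```python
import random

def ols_fit(X, y):
    """Least squares via normal equations and a small Gaussian solve
    (illustrative only; a real M&V study would use a statistics package)."""
    rows = [[1.0] + list(r) for r in X]              # intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):                             # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def rmse(X, y, beta):
    pred = [beta[0] + sum(w * x for w, x in zip(beta[1:], r)) for r in X]
    return (sum((p - yi) ** 2 for p, yi in zip(pred, y)) / len(y)) ** 0.5

# Synthetic daily data where occupancy genuinely drives the load.
rng = random.Random(0)
temp = [rng.uniform(10, 35) for _ in range(200)]      # outdoor temperature
occ = [rng.uniform(0.2, 1.0) for _ in range(200)]     # occupancy rate
energy = [50 + 2.0 * t + 30.0 * o + rng.gauss(0, 1) for t, o in zip(temp, occ)]

beta_t = ols_fit([[t] for t in temp], energy)                    # temperature only
beta_to = ols_fit([[t, o] for t, o in zip(temp, occ)], energy)   # + occupancy
```

Comparing `rmse` with and without the occupancy column mirrors the paper's accuracy metrics: whether the extra term pays off is an empirical question for each building's data.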

  7. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POEs) formula, we extended the POE-based calibration method from serial robot to redundant serial–parallel hybrid robot. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level of the given external measurement device.

  8. Does PACS improve diagnostic accuracy in chest radiograph interpretations in clinical practice?

    Hurlen, Petter; Borthne, Arne; Dahl, Fredrik A.; Østbye, Truls; Gulbrandsen, Pål

    2012-01-01

    Objectives: To assess the impact of a Picture Archiving and Communication System (PACS) on the diagnostic accuracy of the interpretation of chest radiology examinations in a “real life” radiology setting. Materials and methods: During a period before PACS was introduced to radiologists, when images were still interpreted on film and reported on paper, images and reports were also digitally stored in an image database. The same database was used after the PACS introduction. This provided a unique opportunity to conduct a blinded retrospective study, comparing sensitivity (the main outcome parameter) in the pre and post-PACS periods. We selected 56 digitally stored chest radiograph examinations that were originally read and reported on film, and 66 examinations that were read and reported on screen 2 years after the PACS introduction. Each examination was assigned a random number, and both reports and images were scored independently for pathological findings. The blinded retrospective score for the original reports were then compared with the score for the images (the gold standard). Results: Sensitivity was improved after the PACS introduction. When both certain and uncertain findings were included, this improvement was statistically significant. There were no other statistically significant changes. Conclusion: The result is consistent with prospective studies concluding that diagnostic accuracy is at least not reduced after PACS introduction. The sensitivity may even be improved.

  9. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Mario Muñoz-Organero

    2016-01-01

    Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. Comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a few meters resolution. However, training the system with already located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to consume large amounts of memory and CPU in such cases, as does obtaining the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations for the training fingerprints which reduces the amount of required points while improving the accuracy of the algorithms when locating points at room level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with better results. We do a comparison with other systems in the literature and draw conclusions about the improvements obtained in our proposal. Moreover, some techniques such as filtering nonstable access points for improving accuracy are introduced, studied, and validated.
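A room-level fingerprinting decision step of the kind discussed above can be sketched as a k-nearest-neighbours vote over RSSI vectors; the access-point ids, RSSI values, and the −100 dBm floor assigned to access points missing from a scan are assumptions for the demo, not the paper's configuration:

```python
import math
from collections import Counter

FLOOR_DBM = -100.0   # assumed RSSI for access points missing from a scan

def knn_room(fingerprint, training, k=3):
    """Room-level k-NN over RSSI fingerprints (an illustrative sketch).
    A fingerprint maps access-point id -> RSSI in dBm."""
    def dist(a, b):
        aps = set(a) | set(b)
        return math.sqrt(sum((a.get(ap, FLOOR_DBM) - b.get(ap, FLOOR_DBM)) ** 2
                             for ap in aps))
    nearest = sorted(training, key=lambda rec: dist(fingerprint, rec[0]))[:k]
    return Counter(room for _, room in nearest).most_common(1)[0][0]

# Hypothetical training fingerprints labelled at room level:
train = [({"ap1": -40, "ap2": -70}, "room_A"),
         ({"ap1": -42, "ap2": -68}, "room_A"),
         ({"ap1": -75, "ap2": -35}, "room_B"),
         ({"ap1": -78, "ap2": -38}, "room_B"),
         ({"ap1": -60, "ap2": -60}, "hall")]

room = knn_room({"ap1": -44, "ap2": -66}, train)
```

The paper's training-point selection question shows up here directly: every entry in `train` costs a site survey, so choosing which locations to record determines both accuracy and effort.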

  10. Secondary Signs May Improve the Diagnostic Accuracy of Equivocal Ultrasounds for Suspected Appendicitis in Children

    Partain, Kristin N.; Patel, Adarsh; Travers, Curtis; McCracken, Courtney; Loewen, Jonathan; Braithwaite, Kiery; Heiss, Kurt F.; Raval, Mehul V.

    2016-01-01

    Introduction Ultrasound (US) is the preferred imaging modality for evaluating appendicitis. Our purpose was to determine if including secondary signs (SS) improves diagnostic accuracy in equivocal US studies. Methods Retrospective review identified 825 children presenting with concern for appendicitis and with a right lower quadrant (RLQ) US. Regression models identified which SS were associated with appendicitis. Test characteristics were demonstrated. Results 530 patients (64%) had equivocal US reports. Of the 114 (22%) patients with equivocal US undergoing CT, those with SS were more likely to have appendicitis (48.6% vs 14.6%); the association held in a second comparison (61.0% vs 33.6%). SS associated with appendicitis included fluid collection (adjusted odds ratio (OR) 13.3, 95% Confidence Interval (CI) 2.1–82.8), hyperemia (OR=2.0, 95%CI 1.5–95.5), free fluid (OR=9.8, 95%CI 3.8–25.4), and appendicolith (OR=7.9, 95%CI 1.7–37.2). Wall thickness, bowel peristalsis, and echogenic fat were not associated with appendicitis. Equivocal US that included hyperemia, a fluid collection, or an appendicolith had 96% specificity and 88% accuracy. Conclusion Use of SS in RLQ US assists in the diagnostic accuracy of appendicitis. SS may guide clinicians and reduce unnecessary CT scans and admissions. PMID:27039121

  11. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of spectral reflectance by removing the spectral broadening effect caused by the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
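The iterative de-broadening idea can be illustrated with a Van Cittert-style iteration, a generic scheme in the same spirit as (but not necessarily identical to) the authors' interpolation-plus-iteration algorithm; the broadening kernel and reflectance edge below are invented:

```python
def convolve(signal, kernel):
    """Same-length convolution with replicated edges (kernel sums to 1)."""
    n, kh = len(signal), len(kernel) // 2
    return [sum(kernel[j] * signal[min(max(i + j - kh, 0), n - 1)]
                for j in range(len(kernel)))
            for i in range(n)]

def van_cittert(measured, kernel, iters=100):
    """Iterative de-broadening: est <- est + (measured - K * est).
    Converges to the unbroadened signal when K is well conditioned."""
    est = list(measured)
    for _ in range(iters):
        blurred = convolve(est, kernel)
        est = [e + (m - b) for e, m, b in zip(est, measured, blurred)]
    return est

KERNEL = [0.2, 0.6, 0.2]            # assumed LED broadening kernel
truth = [0.1] * 8 + [0.9] * 8       # a sharp reflectance edge
measured = convolve(truth, KERNEL)  # what the finite bandwidth would record
recovered = van_cittert(measured, KERNEL)
```

Each iteration adds back the residual between the measurement and the re-broadened estimate, so the blurred edge sharpens toward the underlying reflectance step.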

  12. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  13. The Single-Molecule Centroid Localization Algorithm Improves the Accuracy of Fluorescence Binding Assays.

    Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip

    2018-03-13

    Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.
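The centroid localization step at the heart of the method above amounts to an intensity-weighted average over a background-subtracted pixel patch; a minimal sketch (the patch values are illustrative, not real imaging data):

```python
def centroid(patch, background=0.0):
    """Intensity-weighted centroid of a background-subtracted pixel patch
    (a minimal stand-in for the single-molecule localization step)."""
    total = cx = cy = 0.0
    for y, row in enumerate(patch):
        for x, value in enumerate(row):
            w = max(value - background, 0.0)   # clip sub-background pixels
            total += w
            cx += w * x
            cy += w * y
    if total == 0:
        raise ValueError("patch has no signal above background")
    return cx / total, cy / total

# A symmetric spot whose center lies between pixel columns 1 and 2:
patch = [[0, 1, 1, 0],
         [0, 3, 3, 0],
         [0, 1, 1, 0]]
x, y = centroid(patch)
```

Sub-pixel positions of this kind are what let the assay distinguish genuine binding at a receptor from nonspecific events or from two optically overlapping receptors.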

  14. Does imprint cytology improve the accuracy of transrectal prostate needle biopsy?

    Sayar, Hamide; Bulut, Burak Besir; Bahar, Abdulkadir Yasir; Bahar, Mustafa Remzi; Seringec, Nurten; Resim, Sefa; Çıralık, Harun

    2015-02-01

    To evaluate the accuracy of imprint cytology of core needle biopsy specimens in the diagnosis of prostate cancer. Between December 24, 2011 and May 9, 2013, patients with an abnormal digital rectal examination (DRE) and/or serum PSA level of >2.5 ng/mL underwent transrectal prostate needle biopsy. Samples with positive imprint cytology but negative initial histologic exam underwent repeat sectioning and histological examination. 1,262 transrectal prostate needle biopsy specimens were evaluated from 100 patients. Malignant imprint cytology was found in 236 specimens (18.7%), 197 (15.6%) of which were confirmed by histologic examination, giving an initial 3.1% (n = 39) rate of discrepant results by imprint cytology. Upon repeat sectioning and histologic examination of these 39 biopsy samples, 14 (1.1% of the original specimens) were then diagnosed as malignant, 3 (0.2%) as atypical small acinar proliferation (ASAP), and 5 (0.4%) as high-grade prostatic intraepithelial neoplasia (HGPIN). Overall, 964 (76.4%) specimens were negative for malignancy by imprint cytology. Seven (0.6%) specimens were benign by cytology but malignant cells were found on histological evaluation. On imprint cytology examination, nonmalignant but abnormal findings were seen in 62 specimens (4.9%). These were all due to benign processes. After reexamination, the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, false-positive rate, and false-negative rate of imprint preparations were 98.1, 96.9, 98.4, 92.8, 99.3, 1.6, and 3.1%, respectively. Imprint cytology is a valuable tool for evaluating TRUS-guided core needle biopsy specimens from the prostate. Use of imprint cytology in combination with histopathology increases diagnostic accuracy when compared with histopathologic assessment alone. © 2014 Wiley Periodicals, Inc.
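The test characteristics quoted above all derive from a single 2×2 confusion table; a small helper makes the relationships explicit (the counts below are hypothetical, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard test characteristics from a 2x2 confusion table."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                 # positive predictive value
        "npv": tn / (tn + fn),                 # negative predictive value
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Hypothetical counts for illustration (not the study's raw data):
m = diagnostic_metrics(tp=190, fp=15, tn=1050, fn=6)
```

Note the built-in identities: sensitivity and false-negative rate sum to one, as do specificity and false-positive rate, which matches the paired values reported in the abstract (96.9/3.1% and 98.4/1.6%).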

  15. Inverse modeling applied to Scanning Capacitance Microscopy for improved spatial resolution and accuracy

    McMurray, J. S.; Williams, C. C.

    1998-01-01

    Scanning Capacitance Microscopy (SCM) is capable of providing two-dimensional information about dopant and carrier concentrations in semiconducting devices. This information can be used to calibrate models used in the simulation of these devices prior to manufacturing and to develop and optimize the manufacturing processes. To provide information for future generations of devices, ultra-high spatial accuracy (<10 nm) will be required. One method, which potentially provides a means to obtain these goals, is inverse modeling of SCM data. Current semiconducting devices have large dopant gradients. As a consequence, the capacitance probe signal represents an average over the local dopant gradient. Conversion of the SCM signal to dopant density has previously been accomplished with a physical model which assumes that no dopant gradient exists in the sampling area of the tip. The conversion of data using this model produces results for abrupt profiles which do not have adequate resolution and accuracy. A new inverse model and iterative method has been developed to obtain higher resolution and accuracy from the same SCM data. This model has been used to simulate the capacitance signal obtained from one and two-dimensional ideal abrupt profiles. This simulated data has been input to a new iterative conversion algorithm, which has recovered the original profiles in both one and two dimensions. In addition, it is found that the shape of the tip can significantly impact resolution. Currently SCM tips are found to degrade very rapidly. Initially the apex of the tip is approximately hemispherical, but quickly becomes flat. This flat region often has a radius of about the original hemispherical radius. This change in geometry causes the silicon directly under the disk to be sampled with approximately equal weight. In contrast, a hemispherical geometry samples most strongly the silicon centered under the SCM tip and falls off quickly with distance from the tip's apex. 

  16. Improved accuracy of quantitative parameter estimates in dynamic contrast-enhanced CT study with low temporal resolution

    Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Haider, Masoom A. [Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, Ontario M4N 3M5, Canada and Department of Medical Imaging, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Jaffray, David A. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Yeung, Ivan W. T. [Radiation Medicine Program, Princess Margaret Hospital/University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Physics, Stronach Regional Cancer Centre, Southlake Regional Health Centre, Newmarket, Ontario L3Y 2P9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2016-01-15

    Purpose: A previously proposed method to reduce radiation dose to patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. The kinetic analyses using the modified Tofts’ model and singular value decomposition method, then, were carried out for each of the down-sampling schemes between the intervals from 2 to 15 s. The results were compared with analyses done with the measured data in high temporal resolution (i.e., original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy of the kinetic parameter estimates at low temporal resolution.
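The PCA filtering step described above, projecting each tissue curve onto its leading principal components to raise SNR, can be sketched in pure Python with power iteration; the curve shape, amplitudes, component count, and noise level are invented for the demo and do not reproduce the study's five-component setting:

```python
import random

def pca_filter(curves, n_components):
    """Project curves onto their leading principal components (power
    iteration with deflation; a stdlib-only sketch of PCA filtering)."""
    n, t = len(curves), len(curves[0])
    mean = [sum(c[j] for c in curves) / n for j in range(t)]
    X = [[c[j] - mean[j] for j in range(t)] for c in curves]
    C = [[sum(x[i] * x[j] for x in X) / n for j in range(t)] for i in range(t)]
    rng = random.Random(0)
    comps = []
    for _ in range(n_components):
        v = [rng.gauss(0, 1) for _ in range(t)]
        for _ in range(300):                     # power iteration
            w = [sum(C[i][j] * v[j] for j in range(t)) for i in range(t)]
            norm = sum(x * x for x in w) ** 0.5
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(t)) for i in range(t))
        comps.append(v)
        C = [[C[i][j] - lam * v[i] * v[j] for j in range(t)] for i in range(t)]  # deflate
    out = []
    for x in X:
        proj = [0.0] * t
        for v in comps:
            s = sum(xi * vi for xi, vi in zip(x, v))
            proj = [p + s * vi for p, vi in zip(proj, v)]
        out.append([p + m for p, m in zip(proj, mean)])
    return out

# Synthetic tissue curves: one common shape at different amplitudes + noise.
SHAPE = [0, 1, 3, 6, 5, 4, 3, 2, 1, 1]
rng = random.Random(1)
clean = [[a * s for s in SHAPE] for a in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0)]
noisy = [[v + rng.gauss(0, 0.3) for v in c] for c in clean]
smooth = pca_filter(noisy, n_components=1)
```

Discarding the trailing components removes most of the uncorrelated noise while keeping the shared contrast-kinetics signal, which is what lets the kinetic fit stay accurate at reduced sampling.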

  17. Improving the accuracy of admitted subacute clinical costing: an action research approach.

    Hakkennes, Sharon; Arblaster, Ross; Lim, Kim

    2017-08-01

    Objective: The aim of the present study was to determine whether action research could be used to improve the breadth and accuracy of clinical costing data in an admitted subacute setting. Methods: The setting was a 100-bed in-patient rehabilitation centre. Using a pre-post study design, all admitted subacute separations during the 2011-12 financial year were eligible for inclusion. An action research framework aimed at improving clinical costing methodology was developed and implemented. Results: In all, 1499 separations were included in the study. A medical record audit of a random selection of 80 separations demonstrated that the use of an action research framework was effective in improving the breadth and accuracy of the costing data. This was evidenced by a significant increase in the average number of activities costed, a reduction in the average number of activities incorrectly costed and a reduction in the average number of activities missing from the costing, per episode of care. Conclusions: Engaging clinicians and cost centre managers was effective in facilitating the development of robust clinical costing data in an admitted subacute setting. Further investigation into the value of this approach across other care types and healthcare services is warranted. What is known about this topic? Accurate clinical costing data are essential for informing price models used in activity-based funding. In Australia, there is currently a lack of robust admitted subacute cost data to inform the price model for this care type. What does this paper add? The action research framework presented in this study was effective in improving the breadth and accuracy of clinical costing data in an admitted subacute setting. What are the implications for practitioners? To improve clinical costing practices, health services should consider engaging key stakeholders, including clinicians and cost centre managers, in reviewing clinical costing methodology.
Robust clinical costing data has

  18. An Improvement to Interval Estimation for Small Samples

    SUN Hui-Ling

    2017-02-01

    Full Text Available Because it is difficult and complex to determine the probability distribution of a small sample, it is improper to use traditional probability theory for parameter estimation with small samples. The Bayes Bootstrap method is often used in engineering practice, but it has its own limitations. In this article an improvement to the Bayes Bootstrap method is given: the method extends the sample by numerical simulation without changing the circumstances of the original small sample, and it can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is used to model specific small-sample problems. The effectiveness and practicability of the Improved-Bootstrap method are demonstrated.
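As a point of reference for the bootstrap approach discussed above, a plain percentile bootstrap interval for a small sample can be sketched as follows. This is the standard bootstrap, not the paper's Improved-Bootstrap (which additionally extends the sample by numerical simulation); the data values are illustrative.

```python
import numpy as np

def bootstrap_ci(sample, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of a small sample."""
    rng = np.random.default_rng(seed)
    x = np.asarray(sample, dtype=float)
    # Resample with replacement and recompute the mean for each replicate
    idx = rng.integers(0, x.size, size=(n_boot, x.size))
    means = x[idx].mean(axis=1)
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

small_sample = [9.8, 10.2, 10.1, 9.9, 10.4, 9.7]
lo, hi = bootstrap_ci(small_sample)   # 95% interval for the mean
```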

  19. Improvements to the Chebyshev expansion of attenuation correction factors for cylindrical samples

    Mildner, D.F.R.; Carpenter, J.M.

    1990-01-01

    The accuracy of the Chebyshev expansion coefficients used for the calculation of attenuation correction factors for cylindrical samples has been improved. An increased order of expansion allows the method to be useful over a greater range of attenuation. It is shown that many of these coefficients are exactly zero, others are rational numbers, and others are rational fractions of π⁻¹. The assumptions of Sears in his asymptotic expression of the attenuation correction factor are also examined. (orig.)

  20. Testing the accuracy of a Bayesian central-dose model for single-grain OSL, using known-age samples

    Guerin, Guillaume; Combès, Benoit; Lahaye, Christelle

    2015-01-01

    on multi-grain OSL age estimates, these samples are presumed to have been both well-bleached at burial, and unaffected by mixing after deposition. Two ways of estimating single-grain ages are then compared: the standard approach on the one hand, consisting of applying the Central Age Model to De values...... for well-bleached samples; (ii) dose recovery experiments do not seem to be a very reliable tool to estimate the accuracy of a SAR measurement protocol for age determination....

  1. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts
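The HU-replacement idea above, predicting correct Hounsfield units in a corrupted region from paired MRI intensities and HU values on a nearby artifact-free slice, can be sketched as below. The paper performs a more comprehensive analysis of the paired intensities; a simple linear fit on synthetic data is used here purely for illustration.

```python
import numpy as np

def predict_hu_from_mri(mri_clean, hu_clean, mri_corrupt):
    """Fit HU as a linear function of MRI intensity on an artifact-free
    slice, then predict HU for pixels in the corrupted region."""
    a, b = np.polyfit(mri_clean.ravel(), hu_clean.ravel(), deg=1)
    return a * mri_corrupt + b

# Synthetic data: HU roughly linear in MRI intensity plus noise
rng = np.random.default_rng(1)
mri_clean = rng.uniform(100, 900, size=(64, 64))
hu_clean = 0.5 * mri_clean - 200 + rng.normal(0, 5, size=(64, 64))
mri_corrupt = rng.uniform(100, 900, size=(16, 16))
hu_pred = predict_hu_from_mri(mri_clean, hu_clean, mri_corrupt)
```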

  2. Accuracy of subcutaneous continuous glucose monitoring in critically ill adults: improved sensor performance with enhanced calibrations.

    Leelarathna, Lalantha; English, Shane W; Thabit, Hood; Caldwell, Karen; Allen, Janet M; Kumareswaran, Kavita; Wilinska, Malgorzata E; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L; Burnstein, Rowan; Hovorka, Roman

    2014-02-01

    Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) measurements were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m²; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6-19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1-6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (at 1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. In total, 1,060 CGM-ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration at a median (interquartile range) interval of 169 (122-213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in Clarke error grid Zone A was higher (87.8% vs. 70.2%). Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non-critical care settings. Further significant improvements in accuracy may be obtained by frequent calibrations with ABG measurements.
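The absolute relative deviation reported above is computed per CGM-reference pair; a minimal sketch, with hypothetical glucose values, is:

```python
import numpy as np

def absolute_relative_deviation(cgm, reference):
    """Absolute relative deviation (%) of CGM readings against reference
    blood glucose; the study reports its median over all paired points."""
    cgm = np.asarray(cgm, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.abs(cgm - reference) / reference

# Hypothetical paired values in mmol/L
cgm = [6.1, 8.4, 10.2, 7.0, 12.5]
ref = [6.0, 8.0, 11.0, 7.5, 12.0]
ard = absolute_relative_deviation(cgm, ref)
median_ard = float(np.median(ard))
```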

  3. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages, where the number of record pairs produced may be very large, often running into the millions.
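The sampling-based estimation scheme described above can be sketched as follows on synthetic data with known ground truth. The bin structure, sample size, and `review` callable (standing in for clerical review) are illustrative assumptions, not the paper's exact procedure.

```python
import random

def estimate_precision_recall(bins, review, sample_size=200, seed=0):
    """Estimate linkage precision and recall by reviewing a sample of
    record-pairs from each similarity-threshold bin, including bins below
    the acceptance cut-off, and extrapolating to the full bins.

    bins:   list of (pairs, accepted) tuples; `accepted` marks bins above
            the cut-off.
    review: callable returning True when a pair is a true match.
    """
    rng = random.Random(seed)
    tp = fp = fn = 0.0
    for pairs, accepted in bins:
        sample = rng.sample(pairs, min(sample_size, len(pairs)))
        match_rate = sum(review(p) for p in sample) / len(sample)
        if accepted:
            tp += match_rate * len(pairs)          # estimated true positives
            fp += (1 - match_rate) * len(pairs)    # estimated false positives
        else:
            fn += match_rate * len(pairs)          # matches below the cut-off
    return tp / (tp + fp), tp / (tp + fn)

# Synthetic pairs: (id, true_match) tuples with known ground truth
above = [(i, i % 10 != 0) for i in range(1000)]   # ~90% of accepted pairs match
below = [(i, i % 5 == 0) for i in range(500)]     # ~20% of rejected pairs match
precision, recall = estimate_precision_recall(
    [(above, True), (below, False)], review=lambda p: p[1])
```

The estimates should land close to the true precision (0.9) and recall (0.9) of this synthetic linkage, up to sampling error.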

  4. Improvement of diagnostic accuracy, and clinical evaluation of computed tomography and ultrasonography for deep seated cancers

    Arimizu, Noboru

    1980-01-01

    Cancers of the liver, gallbladder, and pancreas which were difficult to be detected at an early stage were studied. Diagnostic accuracy of CT and ultrasonography for resectable small cancers was investigated by the project team and coworkers. Only a few cases of hepatocellular carcinoma, cancer of the common bile duct, and cancer of the pancreas head, with the maximum diameter of 1 - 2 cm, were able to be diagnosed by CT. There seemed to be more false negative cases with small cancers of that size. The limit of the size which could be detected by CT was thought to be 2 - 3 cm. Similar results were obtained by ultrasonography. Cancer of the pancreas body with the maximum diameter of less than 3.5 cm could not be detected by either CT or ultrasonography. Diagnostic accuracy of CT for liver cancer was improved by selective intraarterial injection of contrast medium. Improvement of the quality of ultrasonograms was achieved through this study. Merits and demerits of CT and ultrasonography were also compared. (Tsunoda, M.)

  5. Improvement of diagnostic accuracy, and clinical evaluation of computed tomography and ultrasonography for deep seated cancers

    Arimizu, N [Chiba Univ. (Japan). School of Medicine

    1980-06-01

    Cancers of the liver, gallbladder, and pancreas which were difficult to be detected at an early stage were studied. Diagnostic accuracy of CT and ultrasonography for resectable small cancers was investigated by the project team and co-workers. Only a few cases of hepatocellular carcinoma, cancer of the common bile duct, and cancer of the pancreas head, with the maximum diameter of 1 - 2 cm, were able to be diagnosed by CT. There seemed to be more false negative cases with small cancers of that size. The limit of the size which could be detected by CT was thought to be 2 - 3 cm. Similar results were obtained by ultrasonography. Cancer of the pancreas body with the maximum diameter of less than 3.5 cm could not be detected by both CT and ultrasonography. Diagnostic accuracy of CT for liver cancer was improved by selective intraarterial injection of contrast medium. Improvement of the quality of ultrasonograms was achieved through this study. Merits and demerits of CT and ultrasonography were also compared.

  6. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Oleksandr Makeyev

    2016-06-01

    Full Text Available Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.

  7. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  8. Improvement of Measurement Accuracy of Coolant Flow in a Test Loop

    Hong, Jintae; Kim, Jong-Bum; Joung, Chang-Young; Ahn, Sung-Ho; Heo, Sung-Ho; Jang, Seoyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this study, to improve the measurement accuracy of coolant flow in a coolant flow simulator, suppression of external noise is enhanced by adding a ground pattern in the control panel and earthing around the signal cables. In addition, a heating unit is added to strengthen the fluctuation signal by heating the coolant, because the source of the signals is heat energy. Experimental results using the improved system show good agreement with the reference flow rate. The measurement error is reduced dramatically compared with the previous measurement accuracy, and this will help in analyzing the performance of nuclear fuels. For further work, an out-of-pile test will be carried out by fabricating a test-rig mockup and inspecting the feasibility of the developed system. To verify the performance of a newly developed nuclear fuel, an irradiation test needs to be carried out in the research reactor to measure irradiation behavior such as fuel temperature, fission gas release, neutron dose, coolant temperature, and coolant flow rate. In particular, the heat generation rate of nuclear fuels can be measured indirectly by measuring the temperature variation of the coolant that passes the fuel rod, together with its flow rate. However, it is very difficult to measure the flow rate of coolant at the fuel rod owing to the narrow gap between components of the test rig. In nuclear fields, noise analysis using thermocouples in the test rig has been applied to measure the flow velocity of coolant circulating through the test loop.

  9. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method, and the results were compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other off-set errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 dioptres with PCI using the 5-variable ACD prediction method, a statistically significant difference. The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest-generation ACD prediction algorithms.

  10. A novel method for improved accuracy of transcription factor binding site prediction

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.

  11. Improving Intensity-Based Lung CT Registration Accuracy Utilizing Vascular Information

    Kunlin Cao

    2012-01-01

    Full Text Available Accurate pulmonary image registration is a challenging problem when the lungs undergo large deformations. In this work, we present a nonrigid volumetric registration algorithm to track lung motion between a pair of intrasubject CT images acquired at different inflation levels and introduce a new vesselness similarity cost that improves intensity-only registration. Volumetric CT datasets from six human subjects were used in this study. The performance of four intensity-only registration algorithms was compared with and without adding the vesselness similarity cost function. Matching accuracy was evaluated using landmarks, vessel tree, and fissure planes. The Jacobian determinant of the transformation was used to reveal the deformation pattern of local parenchymal tissue. The average matching error for intensity-only registration methods was on the order of 1 mm at landmarks and 1.5 mm on fissure planes. After adding the vesselness preserving cost function, the landmark and fissure positioning errors decreased approximately by 25% and 30%, respectively. The vesselness cost function effectively helped improve the registration accuracy in regions near the thoracic cage and near the diaphragm for all the intensity-only registration algorithms tested and also helped produce more consistent and more reliable patterns of regional tissue deformation.
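The idea of augmenting an intensity similarity cost with a vesselness similarity term can be sketched as below; the sum-of-squared-differences form, the weight, and the toy vesselness maps are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def registration_cost(fixed, moving, fixed_ves, moving_ves, weight=0.5):
    """Intensity SSD plus a weighted vesselness SSD term, so that
    misaligned vascular structures are penalized even where plain
    intensities largely agree."""
    ssd_intensity = np.mean((fixed - moving) ** 2)
    ssd_vesselness = np.mean((fixed_ves - moving_ves) ** 2)
    return ssd_intensity + weight * ssd_vesselness

# Toy images: intensities nearly identical, but the "vessel" is shifted
rng = np.random.default_rng(2)
fixed = rng.normal(size=(32, 32))
moving = fixed + 0.1                                   # small intensity offset
fixed_ves = np.zeros((32, 32)); fixed_ves[16] = 1.0    # vessel on row 16
moving_ves = np.zeros((32, 32)); moving_ves[18] = 1.0  # vessel shifted to row 18
cost = registration_cost(fixed, moving, fixed_ves, moving_ves)
```

The vesselness term adds a penalty for the shifted vessel that the intensity term alone barely registers, which is the mechanism by which the added cost steers the optimizer toward vessel alignment.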

  12. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees.

    Keller, Alexander; Förster, Frank; Müller, Tobias; Dandekar, Thomas; Schultz, Jörg; Wolf, Matthias

    2010-01-15

    In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, is lacking. This is the first study to counter this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergency levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. Simulation integrated the coevolution process of sequences with secondary structures. Additionally, the phylogenetic power of marker size duplication was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence only data, whereas a doubled marker size only accounts for robustness. Individual secondary structures of ribosomal RNA sequences provide a valuable gain of information content that is useful for phylogenetics. Thus, the usage of ITS2 sequence together with secondary structure for taxonomic inferences is recommended. Other reconstruction methods as maximum likelihood, bayesian inference or maximum parsimony may equally profit from secondary structure inclusion. This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  13. A novel method for improved accuracy of transcription factor binding site prediction

    Khamis, Abdullah M.

    2018-03-20

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.

  14. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-03-27

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.

  15. Accuracy of an improved device for remote measuring of tree-trunk diameters

    Matsushita, T.; Kato, S.; Komiyama, A.

    2000-01-01

    For measuring the diameters of tree trunks from a distant position, a device using a laser beam was recently developed by Kantou. We improved this device to serve our own practical purposes. The improved device consists of a 1-m-long metal caliper and a small telescope that slides smoothly along it. Using the cross hairs in the scope, one can sight both edges of an object against the caliper and calculate its length. The laser beam is used just for guiding the telescopic sights to the correct positions on the object. In this study, the accuracy of this new device was examined by varying the length of the measured object, the distance from the object, and the angle of elevation to the object. Since every result of the experiment showed an absolute measurement error of less than 3 mm, this new device will be suitable for the measurement of trunk diameters in the field.

  16. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  17. Pigeons exhibit higher accuracy for chosen memory tests than for forced memory tests in duration matching-to-sample.

    Adams, Allison; Santi, Angelo

    2011-03-01

    Following training to match 2- and 8-sec durations of feederlight to red and green comparisons with a 0-sec baseline delay, pigeons were allowed to choose to take a memory test or to escape the memory test. The effects of sample omission, increases in retention interval, and variation in trial spacing on selection of the escape option and accuracy were studied. During initial testing, escaping the test did not increase as the task became more difficult, and there was no difference in accuracy between chosen and forced memory tests. However, with extended training, accuracy for chosen tests was significantly greater than for forced tests. In addition, two pigeons exhibited higher accuracy on chosen tests than on forced tests at the short retention interval and greater escape rates at the long retention interval. These results have not been obtained in previous studies with pigeons when the choice to take the test or to escape the test is given before test stimuli are presented. It appears that task-specific methodological factors may determine whether a particular species will exhibit the two behavioral effects that were initially proposed as potentially indicative of metacognition.

  18. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from the measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
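The correction algorithm described above (fit a parabola to the deviation between theoretical and linearly calibrated mass for the calibrants, then apply the predicted deviation to unknown masses) can be sketched as follows. The synthetic calibrant masses and the miscalibration model are illustrative.

```python
import numpy as np

def quadratic_mass_correction(cal_measured, cal_theoretical, unknown):
    """Fit a parabola to dm = theoretical - measured for calibrant peaks
    and correct unknown masses by applying the predicted deviation
    (equivalently, subtracting the measured-minus-theoretical residual
    of the linear calibration)."""
    cal_measured = np.asarray(cal_measured, dtype=float)
    dm = np.asarray(cal_theoretical, dtype=float) - cal_measured
    coeffs = np.polyfit(cal_measured, dm, deg=2)   # parabola through dm vs mass
    return np.asarray(unknown, dtype=float) + np.polyval(coeffs, unknown)

# Synthetic calibrants with a smooth quadratic miscalibration
true_masses = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0])
def miscal(m):
    return 2e-8 * m**2 + 1e-4 * m + 0.05           # illustrative error model
measured = true_masses + miscal(true_masses)
corrected = quadratic_mass_correction(measured, true_masses,
                                      1800.0 + miscal(1800.0))
```

For a miscalibrated peak at a true mass of 1800, the residual error after correction is far below the raw deviation of roughly 0.3 mass units.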

  19. The Accuracy of Inference in Small Samples of Dynamic Panel Data Models

    Bun, M.J.G.; Kiviet, J.F.

    2001-01-01

    Through Monte Carlo experiments, the small-sample behavior of various inference techniques for dynamic panel data models is examined when both the time-series and cross-section dimensions of the data set are small. The LSDV technique and corrected versions of it are compared with IV and GMM

  20. Sampling and assessment accuracy in mate choice: a random-walk model of information processing in mating decision.

    Castellano, Sergio; Cermelli, Paolo

    2011-04-07

    Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength.
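
    The stopping-rule mechanism the abstract describes (integrate noisy evidence until a criterion is reached) is the classic random-walk/diffusion model, and the speed-accuracy tradeoff falls out of a toy simulation. The drift, noise, and threshold values below are hypothetical, not taken from the paper:

```python
import random

def noisy_accumulation(drift, threshold, dt=0.01, noise=1.0, rng=random):
    """Integrate noisy evidence until the stopping rule |x| >= threshold fires;
    return (correct?, decision time). Positive drift marks the better option."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return x > 0, t

def run(threshold, n=400):
    trials = [noisy_accumulation(0.5, threshold) for _ in range(n)]
    acc = sum(c for c, _ in trials) / n
    mean_t = sum(t for _, t in trials) / n
    return acc, mean_t

random.seed(7)
acc_low, t_low = run(threshold=0.5)    # hasty criterion: fast, error-prone
acc_high, t_high = run(threshold=2.0)  # cautious criterion: slow, accurate
```

    Raising the threshold buys assessment accuracy at the cost of time per mate, which is exactly the tradeoff the model explores at the individual level.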

  1. Enhanced systems for measuring and monitoring REDD+: Opportunities to improve the accuracy of emission factor and activity data in Indonesia

    Solichin

    The importance of accurate measurement of forest biomass in Indonesia has been growing ever since climate change mitigation schemes, particularly the reduction of emissions from deforestation and forest degradation scheme (known as REDD+), were constitutionally accepted by the government of Indonesia. The need for an accurate system of historical and actual forest monitoring has also become more pronounced, as such a system would afford a better understanding of the role of forests in climate change and allow for the quantification of the impact of activities implemented to reduce greenhouse gas emissions. The aim of this study was to enhance the accuracy of estimations of carbon stocks and to monitor emissions in tropical forests. The research encompassed various scales (from trees and stands to landscape-sized scales) and a wide range of aspects, from evaluation and development of allometric equations to exploration of the potential of existing forest inventory databases and evaluation of cutting-edge technology for non-destructive sampling and accurate forest biomass mapping over large areas. In this study, I explored whether accuracy--especially regarding the identification and reduction of bias--of forest aboveground biomass (AGB) estimates in Indonesia could be improved through (1) development and refinement of allometric equations for major forest types, (2) integration of existing large forest inventory datasets, (3) assessment of nondestructive sampling techniques for tree AGB measurement, and (4) landscape-scale mapping of AGB and forest cover using lidar. This thesis provides essential foundations to improve the estimation of forest AGB at tree scale through development of new AGB equations for several major forest types in Indonesia. I successfully developed new allometric equations using large datasets from various forest types that enable us to estimate tree aboveground biomass with both forest-type-specific and generic equations. My models outperformed

  2. Review: Diagnostic accuracy of PCR-based detection tests for Helicobacter pylori in stool samples.

    Khadangi, Fatemeh; Yassi, Maryam; Kerachian, Mohammad Amin

    2017-12-01

    Although different methods have been established to detect Helicobacter pylori (H. pylori) infection, identifying infected patients is an ongoing challenge. The aim of this meta-analysis was to provide pooled diagnostic accuracy measures for the stool PCR test in the diagnosis of H. pylori infection. In this study, a systematic review and meta-analysis were carried out on various sources, including MEDLINE, Web of Sciences, and the Cochrane Library from April 1, 1999, to May 1, 2016. This meta-analysis adheres to the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses report (PRISMA Statement). The clinical value of the DNA stool PCR test was based on the pooled false positive, false negative, true positive, and true negative of different genes. Twenty-six of 328 studies identified met the eligibility criteria. The stool PCR test had a performance of 71% (95% CI: 68-73) sensitivity, 96% (95% CI: 94-97) specificity, and 65.6 (95% CI: 30.2-142.5) diagnostic odds ratio (DOR) in the diagnosis of H. pylori. The DOR of the genes which showed the highest performance of stool PCR tests was as follows: 23S rRNA 152.5 (95% CI: 55.5-418.9), 16S rRNA 67.9 (95%CI: 6.4-714.3), and glmM 68.1 (95%CI: 20.1-231.7). The sensitivity and specificity of the stool PCR test are broadly in the same range as other diagnostic methods for the detection of H. pylori infection. In descending order of significance, the most diagnostic candidate genes using PCR detection were 23S rRNA, 16S rRNA, and glmM. PCR for the 23S rRNA gene, which has the highest performance, could be applicable to detect H. pylori infection.
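
    The diagnostic measures pooled above come from 2x2 confusion counts. As a sketch, the counts below are hypothetical, chosen only to mirror the reported 71% sensitivity and 96% specificity; note that a single-table DOR (about 59 here) differs from the pooled 65.6, which is aggregated across studies:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN): odds of a positive test in the infected
    group versus the uninfected group."""
    return (tp / fn) / (fp / tn)

# Hypothetical counts: 100 infected, 100 uninfected subjects.
sens, spec = sensitivity_specificity(71, 4, 29, 96)
dor = diagnostic_odds_ratio(71, 4, 29, 96)
```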

  3. Iterative metal artifact reduction improves dose calculation accuracy. Phantom study with dental implants

    Maerz, Manuel; Mittermair, Pia; Koelbl, Oliver; Dobler, Barbara [Regensburg University Medical Center, Department of Radiotherapy, Regensburg (Germany); Krauss, Andreas [Siemens Healthcare GmbH, Forchheim (Germany)

    2016-06-15

    Metallic dental implants cause severe streaking artifacts in computed tomography (CT) data, which affect the accuracy of dose calculations in radiation therapy. The aim of this study was to investigate the benefit of the iterative metal artifact reduction (iMAR) algorithm in terms of correct representation of Hounsfield units (HU) and dose calculation accuracy. Heterogeneous phantoms consisting of different types of tissue-equivalent material surrounding metallic dental implants were designed. Artifact-containing CT data of the phantoms were corrected using iMAR. Corrected and uncorrected CT data were compared to synthetic CT data to evaluate the accuracy of HU reproduction. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated in Oncentra v4.3 on corrected and uncorrected CT data and compared to Gafchromic™ EBT3 films to assess the accuracy of dose calculation. The use of iMAR increased the accuracy of HU reproduction. The average deviation of HU decreased from 1006 HU to 408 HU in areas including metal and from 283 HU to 33 HU in tissue areas excluding metal. Dose calculation accuracy could be significantly improved for all phantoms and plans: the mean passing rate for gamma evaluation with 3% dose tolerance and 3 mm distance-to-agreement increased from 90.6% to 96.2% if artifacts were corrected by iMAR. The application of iMAR allows metal artifacts to be removed to a great extent, which leads to a significant increase in dose calculation accuracy. (orig.)
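
    The 3%/3 mm gamma evaluation used above to compare calculated dose with film can be illustrated with a one-dimensional sketch (the real evaluation is 2D and the profiles below are invented): a measured point passes if some reference point keeps the combined dose-difference/distance metric at or below one.

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dta_mm=3.0):
    """1D global gamma analysis: a measured point passes if some reference
    point gives sqrt((dose diff / 3%)^2 + (distance / 3 mm)^2) <= 1."""
    ref, meas = np.asarray(ref, float), np.asarray(meas, float)
    norm = dose_tol * ref.max()                    # global 3% dose criterion
    passed = 0
    for i, d in enumerate(meas):
        dd = (d - ref) / norm                      # dose-difference term
        dx = (positions[i] - positions) / dta_mm   # distance-to-agreement term
        if np.sqrt(dd ** 2 + dx ** 2).min() <= 1.0:
            passed += 1
    return passed / meas.size

x = np.arange(0.0, 50.0, 1.0)                      # detector positions (mm)
reference = np.exp(-((x - 25.0) ** 2) / 50.0)      # smooth dose profile
shifted = np.exp(-((x - 26.0) ** 2) / 50.0)        # 1 mm positional error

rate_shifted = gamma_pass_rate(reference, shifted, x)       # passes via DTA
rate_scaled = gamma_pass_rate(reference, 1.3 * reference, x)  # dose error fails
```

    A small spatial shift is absorbed by the distance-to-agreement term, while a 30% dose error near the peak cannot be rescued by either term.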

  4. Improving the accuracy of Møller-Plesset perturbation theory with neural networks

    McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.

    2017-10-01

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol⁻¹ (root-mean-square error 0.09 kcal mol⁻¹), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.

  5. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
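
    The sub-model idea (train on limited composition ranges, route by a preliminary full-range estimate, blend in the overlap) can be sketched with ordinary least squares standing in for PLS. Everything below is synthetic: a one-channel "signal" with a nonlinear concentration response, invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic targets: concentration c and a nonlinear spectral signal s,
# standing in for an emission line whose response varies with composition.
c = rng.uniform(0, 100, 400)
s = 0.5 * c + 0.003 * c ** 2 + rng.normal(0, 0.5, 400)

def fit(sig, conc):
    # ordinary least squares: conc ~ a * sig + b (stand-in for PLS)
    A = np.vstack([sig, np.ones_like(sig)]).T
    return np.linalg.lstsq(A, conc, rcond=None)[0]

def predict(coef, sig):
    return coef[0] * sig + coef[1]

full = fit(s, c)                          # model trained on the full range
low = fit(s[c < 60], c[c < 60])           # sub-model: low compositions
high = fit(s[c >= 40], c[c >= 40])        # sub-model: high (40-60 overlap)

def blended(sig):
    """Route by the full model's preliminary estimate; average in the overlap."""
    prelim = predict(full, sig)
    lo, hi = predict(low, sig), predict(high, sig)
    return np.where(prelim < 40, lo, np.where(prelim > 60, hi, 0.5 * (lo + hi)))

# Held-out test targets
ct = rng.uniform(0, 100, 400)
st = 0.5 * ct + 0.003 * ct ** 2 + rng.normal(0, 0.5, 400)
rmse_full = np.sqrt(np.mean((predict(full, st) - ct) ** 2))
rmse_blend = np.sqrt(np.mean((blended(st) - ct) ** 2))
```

    Because each sub-model only has to fit a locally near-linear response, the blended prediction beats the single full-range model, mirroring the RMSEP improvement the abstract reports.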

  6. Impact of sampling interval in training data acquisition on intrafractional predictive accuracy of indirect dynamic tumor-tracking radiotherapy.

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-08-01

    To explore the effect of the sampling interval of training data acquisition on the intrafractional prediction error of surrogate signal-based dynamic tumor-tracking using a gimbal-mounted linac. Twenty pairs of respiratory motions were acquired from 20 patients (ten lung, five liver, and five pancreatic cancer patients) who underwent dynamic tumor-tracking with the Vero4DRT. First, respiratory motions were acquired as training data for an initial construction of the prediction model before the irradiation. Next, additional respiratory motions were acquired for an update of the prediction model due to the change of the respiratory pattern during the irradiation. The time elapsed prior to the second acquisition of the respiratory motion was 12.6 ± 3.1 min. A four-axis moving phantom reproduced patients' three-dimensional (3D) target motions and one-dimensional surrogate motions. To predict the future internal target motion from the external surrogate motion, prediction models were constructed by minimizing residual prediction errors for training data acquired at 80 and 320 ms sampling intervals for 20 s, and at 500, 1,000, and 2,000 ms sampling intervals for 60 s, using orthogonal kV x-ray imaging systems. The accuracies of prediction models trained with various sampling intervals were estimated based on training data with each sampling interval during the training process. The intrafractional prediction errors for the various prediction models were then calculated on intrafractional monitoring images taken for 30 s at a constant sampling interval of 500 ms, to evaluate the prediction accuracy fairly for the same motion pattern. In addition, the first respiratory motion was used for the training and the second respiratory motion was used for the evaluation of the intrafractional prediction errors for the changed respiratory motion, to evaluate the robustness of the prediction models. The training error of the prediction model was 1.7 ± 0.7 mm in 3D for all sampling
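
    The core manipulation (fit a surrogate-to-target correlation model on training data thinned to different sampling intervals, then evaluate on densely sampled motion) can be sketched as follows. The quadratic model, motion traces, and noise level are all hypothetical, not the Vero4DRT model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 20 s of motion logged every 10 ms: an external surrogate
# signal and an internal target position related by a quadratic model.
t = np.arange(0.0, 20.0, 0.01)
surrogate = np.sin(2 * np.pi * t / 3.7)
internal = 8.0 * surrogate + 1.5 * surrogate ** 2 + rng.normal(0.0, 0.5, t.size)

def train(interval_ms):
    """Fit the quadratic surrogate-to-target model on data thinned to the
    given training sampling interval (base log rate: 10 ms)."""
    step = interval_ms // 10
    return np.polyfit(surrogate[::step], internal[::step], 2)

def rmse(coeffs):
    # evaluate every model on the same densely sampled motion, for fairness
    return np.sqrt(np.mean((np.polyval(coeffs, surrogate) - internal) ** 2))

err_80 = rmse(train(80))      # dense training interval
err_2000 = rmse(train(2000))  # sparse training interval (fewer samples)
```

    With a dense interval the model error sits at the measurement-noise floor; a sparse interval leaves fewer training points and a noisier model fit.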

  7. On the maximal use of Monte Carlo samples: re-weighting events at NLO accuracy

    Mattelaer, Olivier [Durham University, Institute for Particle Physics Phenomenology (IPPP), Durham (United Kingdom)

    2016-12-15

    Accurate Monte Carlo simulations for high-energy events at CERN's Large Hadron Collider are very expensive, both from the computing and storage points of view. We describe a method that allows one to consistently re-use parton-level samples accurate up to NLO in QCD under different theoretical hypotheses. We implement it in MadGraph5_aMC@NLO and show its validation by applying it to several cases of practical interest for the search for new physics at the LHC. (orig.)
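
    The essence of re-using a sample under a new hypothesis is to re-weight each event by the ratio of the event densities (in practice, squared matrix elements) rather than regenerating it. A minimal sketch with toy exponential spectra, invented purely to show the mechanics:

```python
import math
import random

random.seed(3)

# Toy densities standing in for squared matrix elements under the original
# and alternative theoretical hypotheses (shapes are invented).
def density_old(x):
    return math.exp(-x)               # Exp(1) spectrum

def density_new(x):
    return 2.0 * math.exp(-2.0 * x)   # Exp(2) spectrum

# Events generated once under the old hypothesis with unit weights...
events = [random.expovariate(1.0) for _ in range(200_000)]
# ...re-used under the new one by re-weighting instead of re-generating.
weights = [density_new(x) / density_old(x) for x in events]

# Weighted mean reproduces the mean of the new spectrum (Exp(2) has mean 0.5).
mean_new = sum(w * x for w, x in zip(weights, events)) / sum(weights)
```

    The re-weighted sample reproduces expectation values under the new hypothesis at a fraction of the cost of a fresh simulation, which is the economy the abstract exploits at NLO accuracy.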

  8. Attitude Modeling Using Kalman Filter Approach for Improving the Geometric Accuracy of Cartosat-1 Data Products

    Nita H. SHAH

    2010-07-01

    This paper deals with a rigorous photogrammetric solution for modeling the uncertainty in the orientation parameters of the Indian Remote Sensing Satellite IRS-P5 (Cartosat-1). Cartosat-1 is a three-axis-stabilized spacecraft launched into a polar sun-synchronous circular orbit at an altitude of 618 km. The satellite has two panchromatic (PAN) cameras with a nominal resolution of ~2.5 m: the forward-looking camera (FORE) is mounted at +26 deg and the near-nadir camera (AFT) at -5 deg in the along-track direction. The Data Product Generation Software (DPGS) system uses the rigorous photogrammetric collinearity model in order to utilize the full system information, together with payload geometry and control points, for estimating the uncertainty in attitude parameters. Initial orbit and attitude knowledge is obtained from GPS-bound orbit measurement, star trackers, and gyros. The variations in satellite attitude with time are modelled using a simple linear polynomial model. Based on this model, a Kalman filter approach is studied and applied to improve the uncertainty in the orientation of the spacecraft with high-quality ground control points (GCPs). The sequential estimator (Kalman filter) is used in an iterative process which corrects the parameters at each time of observation rather than at epoch time. Results are presented for three stereo data sets. The accuracy of the model depends on the accuracy of the control points.
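
    The sequential-estimator idea (correct the state at each observation time rather than once at epoch) is the defining feature of the Kalman filter. A scalar sketch, with a hypothetical constant attitude bias and invented noise levels standing in for GCP observations:

```python
import random

def kalman_bias_track(observations, q=1e-4, r=0.25):
    """Scalar Kalman filter: sequentially correct the attitude-bias estimate
    at each observation instead of solving once at epoch time."""
    x, p = 0.0, 1.0                  # state estimate and its variance
    history = []
    for z in observations:
        p += q                       # predict: process noise inflates variance
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with the innovation
        p *= 1.0 - k
        history.append(x)
    return history

random.seed(4)
true_bias = 0.8                      # hypothetical attitude bias (deg)
obs = [true_bias + random.gauss(0.0, 0.5) for _ in range(300)]
est = kalman_bias_track(obs)

# After a burn-in, the filtered estimate is far less noisy than raw GCPs.
raw_rmse = (sum((z - true_bias) ** 2 for z in obs[100:]) / 200) ** 0.5
filt_rmse = (sum((e - true_bias) ** 2 for e in est[100:]) / 200) ** 0.5
```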

  9. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    Guillaume P. Ramstein

    2016-04-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass and to meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, accounting for linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  10. METACOGNITIVE SCAFFOLDS IMPROVE SELF-JUDGMENTS OF ACCURACY IN A MEDICAL INTELLIGENT TUTORING SYSTEM.

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S

    2014-03-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were selected from the domain of Nodular and Diffuse Dermatitides. Both groups worked with a version of SlideTutor that provided immediate feedback on their actions for two hours before proceeding to solve cases in either the Considering Alternatives or Playback condition. No immediate feedback was provided on actions performed by participants in the scaffolding mode. Measurements included learning gains (pre-test and post-test), as well as metacognitive performance, including Goodman-Kruskal Gamma correlation, bias, and discrimination. Results showed that participants in both conditions improved significantly in terms of their diagnostic scores from pre-test to post-test. More importantly, participants in the Considering Alternatives condition outperformed those in the Playback condition in the accuracy of their confidence judgments and the discrimination of the correctness of their assertions while solving cases. The results suggested that presenting participants with their diagnostic decision paths and highlighting correct and incorrect paths helps them to become more metacognitively accurate in their confidence judgments.
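
    The Goodman-Kruskal Gamma correlation used above measures how well confidence tracks correctness: it counts concordant versus discordant pairs of (confidence, correctness) ratings and ignores ties. A small self-contained implementation with invented ratings:

```python
def goodman_kruskal_gamma(confidence, correctness):
    """Gamma = (C - D) / (C + D): concordant minus discordant pairs of
    (confidence, correctness) ratings, ignoring tied pairs."""
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            prod = (confidence[i] - confidence[j]) * (correctness[i] - correctness[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Perfectly calibrated judge: high confidence exactly on correct answers.
calibrated = goodman_kruskal_gamma([90, 80, 60, 40], [1, 1, 0, 0])
# Anti-calibrated judge: confident exactly when wrong.
inverted = goodman_kruskal_gamma([90, 40], [0, 1])
```

    Gamma runs from +1 (confidence perfectly tracks correctness) to -1 (confidence is highest when wrong), which is why it serves as the accuracy measure for confidence judgments here.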

  11. Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    Deborah A. Kerr

    2015-05-01

    The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess whether adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years who lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05), and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05, quick list; p < 0.01, forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents’ accuracy in recalling food items declined with a second 24-h recall when repeated over two non-consecutive days.

  12. Open magnetic resonance imaging using titanium-zirconium needles: improved accuracy for interstitial brachytherapy implants?

    Popowski, Youri; Hiltbrand, Emile; Joliat, Dominique; Rouzaud, Michel

    2000-01-01

    Purpose: To evaluate the benefit of using an open magnetic resonance (MR) machine and new MR-compatible needles to improve the accuracy of brachytherapy implants in pelvic tumors. Methods and Materials: The open MR machine, foreseen for interventional procedures, allows direct visualization of the pelvic structures that are to be implanted. For that purpose, we have developed MR- and CT-compatible titanium-zirconium (Ti-Zr) brachytherapy needles that allow implantations to be carried out under the magnetic field. In order to test the technical feasibility of this new approach, stainless steel (SS) and Ti-Zr needles were first compared in a tissue-equivalent phantom. In a second step, two patients implanted with Ti-Zr needles in the brachytherapy operating room were scanned in the open MR machine. In a third phase, four patients were implanted directly under open MR control. Results: The artifacts induced by both materials were significantly different, strongly favoring the Ti-Zr needles. The implantation in both first patients confirmed the excellent quality of the pictures obtained with the needles in vivo and showed suboptimal implant geometry in both patients. In the next 4 patients, the tumor could be punctured with excellent accuracy, and the adjacent structures could be easily avoided. Conclusion: We conclude that open MR using MR-compatible needles is a very promising tool in brachytherapy, especially for pelvic tumors

  13. Improving accuracy of protein-protein interaction prediction by considering the converse problem for sequence representation

    Wang Yong

    2011-10-01

    Background: With the development of genome-sequencing technologies, protein sequences are readily obtained by translating the measured mRNAs, so predicting protein-protein interactions from the sequences is in great demand. The reason lies in the fact that identifying protein-protein interactions is becoming a bottleneck for eventually understanding the functions of proteins, especially for those organisms barely characterized. Although a few methods have been proposed, the converse problem, whether the features used extract sufficient and unbiased information from protein sequences, is almost untouched. Results: In this study, we interrogate this problem theoretically by an optimization scheme. Motivated by the theoretical investigation, we find novel encoding methods for both protein sequences and protein pairs. Our new methods exploit the information of protein sequences sufficiently and reduce artificial bias and computational cost. Thus, the approach significantly outperforms the available methods regarding sensitivity, specificity, precision, and recall in cross-validation evaluation, reaching ~80% and ~90% accuracy in Escherichia coli and Saccharomyces cerevisiae, respectively. Our findings hold important implications for other sequence-based prediction tasks, because representation of a biological sequence is always the first step in computational biology. Conclusions: By considering the converse problem, we propose new representation methods for both protein sequences and protein pairs. The results show that our method significantly improves the accuracy of protein-protein interaction predictions.

  14. Accuracy improvement in the TDR-based localization of water leaks

    Andrea Cataldo

    A time domain reflectometry (TDR)-based system for the localization of water leaks has recently been developed by the authors. This system, which employs wire-like sensing elements installed along the underground pipes, has proven immune to the limitations that affect traditional, acoustic leak-detection systems. Starting from the positive results obtained thus far, in this work an improvement of this TDR-based system is proposed. More specifically, the possibility of employing a low-cost, water-absorbing sponge placed around the sensing element to enhance the accuracy of leak localization is addressed. To this purpose, laboratory experiments were carried out mimicking a water leakage condition, and two sensing elements (one embedded in a sponge and one without a sponge) were comparatively used to identify the position of the leak through TDR measurements. Results showed that, thanks to the water retention capability of the sponge (which keeps the leaked water more localized), the sensing element embedded in the sponge leads to a higher accuracy in the evaluation of the position of the leak.

  15. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Its powerful nondestructive characteristics are attracting more and more research into computed tomography (CT) for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology, due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by a simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility. (paper)

  16. Accuracy Improvement of Discharge Measurement with Modification of Distance Made Good Heading

    Jongkook Lee

    2016-01-01

    Remote control boats equipped with an Acoustic Doppler Current Profiler (ADCP) are widely accepted and have been welcomed by many hydrologists for water discharge, velocity profile, and bathymetry measurements. The advantages of this technique include high productivity, fast measurements, operator safety, and high accuracy. However, there are concerns about controlling and operating a remote boat to achieve measurement goals, especially during extreme events such as floods. When performing river discharge measurements, the main error source stems from the boat path: due to the rapid flow in flood conditions, the boat path is not regular, which can cause errors in discharge measurements. Therefore, improvement of discharge measurements requires modification of the boat path. In this study, a distance made good (DMG) modification of the boat path is applied. As a result, the measurement errors in flood flow conditions are 12.3–21.8% before the modification of the boat path, but 1.2–3.7% after the DMG modification, and the modified discharges are very close to the observed discharges in flood flow conditions. Through the DMG modification of the boat path, a comprehensive discharge measurement with high accuracy can be achieved.

  17. Improved mesh based photon sampling techniques for neutron activation analysis

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-01-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log n) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling - particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied. (authors)
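
    The alias method mentioned above builds two tables in O(n) time and then draws from the discrete distribution with one uniform index and one coin flip, i.e., O(1) per sample. A standard (Vose-style) sketch, with hypothetical voxel source strengths in place of the MCNP source data:

```python
import random

def build_alias(probs):
    """Vose's alias method: O(n) table construction, O(1) per sample."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, v in enumerate(scaled) if v < 1.0]
    large = [i for i, v in enumerate(scaled) if v >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftover columns are full
        prob[i] = 1.0
    return prob, alias

def draw(prob, alias, rng=random):
    i = rng.randrange(len(prob))              # uniform column: O(1)
    return i if rng.random() < prob[i] else alias[i]

random.seed(5)
probs = [0.5, 0.3, 0.2]                       # hypothetical voxel strengths
prob, alias = build_alias(probs)
counts = [0, 0, 0]
for _ in range(100_000):
    counts[draw(prob, alias)] += 1
```

    Each draw costs the same regardless of how many voxels there are, which is what makes voxel sampling competitive with uniformly sampling the problem volume.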

  18. Accuracy improvement of the laplace transformation method for determination of the bremsstrahlung spectra in clinical accelerators

    Scheithauer, M.; Schwedas, M.; Wiezorek, T.; Wendt, T.

    2003-01-01

    The present study focused on the reconstruction of the bremsstrahlung spectrum of a clinical linear accelerator from the measured transmission curve, with the aim of improving the accuracy of this method. The essence of the method was the analytic inverse Laplace transform of a parameter function fitted to the measured transmission curve. We tested known fitting functions, however they resulted in considerable fitting inaccuracy, leading to inaccuracies of the bremsstrahlung spectrum. In order to minimise the fitting errors, we employed a linear combination of n equations with 2n-1 parameters. The fitting errors are now considerably smaller. The measurement of the transmission function requires that the energy-dependent detector response is taken into account. We analysed the underlying physical context and developed a function that corrects for the energy-dependent detector response. The factors of this function were experimentally determined or calculated from tabulated values. (orig.)

  19. Investigation of CFRP in aerospace field and improvement of the molding accuracy by using autoclave

    Minamisawa, Takunori

    2017-07-01

    In recent years, CFRP (Carbon Fiber Reinforced Plastic) has come to be used in a wide range of industries, such as sporting goods, fishing tackle and cars, because it offers a large number of advantages. The passenger aircraft industry has also turned its attention to the material. CFRP is an ideal material for airplanes because of advantages such as light weight, high strength, chemical resistance and corrosion resistance. Generally, an autoclave is used for molding CFRP in the field of aerospace engineering; it is a machine that molds a product by heating and pressurizing material in an evacuated bag. In this paper, handmade CFRP specimens are examined with a polarizing microscope, and their mechanical characteristics are investigated. Furthermore, an improvement of accuracy in CFRP molding using an autoclave is suggested from the viewpoint of thermodynamics.

  20. Combining Ground-Truthing and Technology to Improve Accuracy in Establishing Children's Food Purchasing Behaviors.

    Coakley, Hannah Lee; Steeves, Elizabeth Anderson; Jones-Smith, Jessica C; Hopkins, Laura; Braunstein, Nadine; Mui, Yeeli; Gittelsohn, Joel

    Developing nutrition-focused environmental interventions for youth requires accurate assessment of where they purchase food. We have developed an innovative, technology-based method to improve the accuracy of food source recall among children using a tablet PC and ground-truthing methodologies. As part of the B'more Healthy Communities for Kids study, we mapped and digitally photographed every food source within a half-mile radius of 14 Baltimore City recreation centers. This food source database was then used with children from the surrounding neighborhoods to search for and identify the food sources they frequent. This novel integration of traditional data collection and technology enables researchers to gather highly accurate information on food source usage among children in Baltimore City. Funding is provided by the NICHD U-54 Grant #1U54HD070725-02.

  1. The use of imprecise processing to improve accuracy in weather and climate prediction

    Düben, Peter D., E-mail: dueben@atm.ox.ac.uk [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom); McNamara, Hugh [University of Oxford, Mathematical Institute (United Kingdom); Palmer, T.N. [University of Oxford, Atmospheric, Oceanic and Planetary Physics (United Kingdom)

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce
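The two kinds of inexactness described above can be emulated in software along the following lines. This is a simplified sketch (the study's own emulator and fault model are more elaborate): mantissa truncation stands in for low-precision arithmetic, and a random mantissa bit flip stands in for a stochastic-processor fault.

```python
import random
import struct

import numpy as np

def round_to_bits(x, mantissa_bits):
    """Emulate low-precision arithmetic by rounding the float64 mantissa
    to a reduced number of bits."""
    m, e = np.frexp(x)                       # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)

def flip_random_bit(x, rng):
    """Emulate a stochastic-processor fault: flip one random mantissa bit
    of a float64 value (sign and exponent are left intact)."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    bits ^= 1 << rng.randrange(0, 52)        # the 52 mantissa bits are bits 0..51
    (y,) = struct.unpack('<d', struct.pack('<Q', bits))
    return y
```

In a scale-separated model, these operators would be applied only to the small-scale variables, leaving the large scales in full deterministic precision, mirroring the restriction described in the abstract.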

  2. Microbiological data, but not procalcitonin, improve the accuracy of the clinical pulmonary infection score.

    Jung, Boris; Embriaco, Nathalie; Roux, François; Forel, Jean-Marie; Demory, Didier; Allardet-Servent, Jérôme; Jaber, Samir; La Scola, Bernard; Papazian, Laurent

    2010-05-01

    Early and adequate treatment of ventilator-associated pneumonia (VAP) is mandatory to improve the outcome. The aim of this study was to evaluate, in medical ICU patients, the respective and combined impact of the Clinical Pulmonary Infection Score (CPIS), broncho-alveolar lavage (BAL) Gram staining, endotracheal aspirates and a biomarker (procalcitonin) for the early diagnosis of VAP. This was a prospective, observational study conducted in a medical intensive care unit of a teaching hospital. Over an 8-month period, we prospectively included 57 patients suspected of having 86 episodes of VAP. On the day of suspicion, a BAL was performed, alveolar and serum procalcitonin were determined, and the CPIS was evaluated. Of the 86 BALs performed, 48 were considered positive (cut-off of 10⁴ cfu ml⁻¹). We found no differences in alveolar or serum procalcitonin between VAP and non-VAP patients. Including procalcitonin in the CPIS score did not increase its accuracy (55%) for the diagnosis of VAP. The best tests to predict VAP were the modified CPIS (threshold at 6) combined with microbiological data. Indeed, both routine twice-weekly endotracheal aspiration at a threshold of 10⁵ cfu ml⁻¹ and BAL Gram staining improved the pre-test diagnostic accuracy of VAP (77 and 66%, respectively). This study showed that alveolar procalcitonin obtained by BAL does not help the clinician to identify VAP. It confirmed that serum procalcitonin is not an accurate marker of VAP. In contrast, microbiological resources available at the time of VAP suspicion (BAL Gram staining, last available endotracheal aspirate), combined or not with the CPIS, are helpful in distinguishing patients with VAP diagnosed by BAL from patients with a negative BAL.

  3. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy

    Peña, M. A.; Liao, R.; Brenning, A.

    2017-06-01

    This study assesses the potential of spectrotemporal indices derived from satellite image time series (SITS) to improve the classification accuracy of fruit-tree crops. Six major fruit-tree crop types in the Aconcagua Valley, Chile, were classified by applying various linear discriminant analysis (LDA) techniques on a Landsat-8 time series of nine images corresponding to the 2014-15 growing season. As features we not only used the complete spectral resolution of the SITS, but also all possible normalized difference indices (NDIs) that can be constructed from any two bands of the time series, a novel approach to derive features from SITS. Due to the high dimensionality of this "enhanced" feature set we used the lasso and ridge penalized variants of LDA (PLDA). Although classification accuracies yielded by the standard LDA applied on the full-band SITS were good (misclassification error rate, MER = 0.13), they were further improved by 23% (MER = 0.10) with ridge PLDA using the enhanced feature set. The most important bands to discriminate the crops of interest were mainly concentrated on the first two image dates of the time series, corresponding to the crops' greenup stage. Despite the high predictor weights provided by the red and near infrared bands, typically used to construct greenness spectral indices, other spectral regions were also found important for the discrimination, such as the shortwave infrared band at 2.11-2.19 μm, sensitive to foliar water changes. These findings support the usefulness of spectrotemporal indices in the context of SITS-based crop type classifications, which until now have been mainly constructed by the arithmetic combination of two bands of the same image date in order to derive greenness temporal profiles like those from the normalized difference vegetation index.
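The "enhanced" feature set of all pairwise normalized difference indices can be generated mechanically from the stacked bands of the time series. A small sketch (the array layout and function name are our own assumptions):

```python
from itertools import combinations

import numpy as np

def all_ndis(stack):
    """Compute every normalized difference index (b_i - b_j) / (b_i + b_j)
    from a (n_bands, height, width) image stack, e.g. all bands of a
    satellite image time series concatenated along the first axis."""
    names, ndis = [], []
    for i, j in combinations(range(stack.shape[0]), 2):
        bi, bj = stack[i].astype(float), stack[j].astype(float)
        with np.errstate(divide='ignore', invalid='ignore'):
            # guard against zero-sum pixels before normalising
            ndi = np.where(bi + bj != 0, (bi - bj) / (bi + bj), 0.0)
        names.append(f'NDI_{i}_{j}')
        ndis.append(ndi)
    return names, np.stack(ndis)
```

For n stacked bands this yields n(n-1)/2 index images, which is exactly why the resulting feature set is high-dimensional and calls for penalized classifiers such as the lasso or ridge PLDA used in the study.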

  4. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models have difficulty with displacement errors. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for real-time updating and accuracy improvement of flood forecasts that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then processed by the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. This additional ensemble information is then fed into a hydrologic model for post-processed flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the ensemble mean of flood forecasts. Our study is carried out and verified using the largest flood event, caused by Typhoon Talas in 2011, over two catchments, the Futatsuno (356.1 km²) and Nanairo (182.1 km²) dam catchments of the Shingu river basin (2360 km²), located on the Kii peninsula, Japan.
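A toy version of the transposition and recombination steps might look as follows. Note that np.roll wraps rainfall around the domain edges, whereas the actual scheme bounds the displacements, so this is only a schematic; the shift set and function names are hypothetical:

```python
import numpy as np

def transpose_rain_fields(non_orographic, shifts):
    """Generate additional ensemble members by spatially shifting the
    non-orographic rainfall field by (dy, dx) grid cells per member."""
    members = []
    for dy, dx in shifts:
        shifted = np.roll(np.roll(non_orographic, dy, axis=0), dx, axis=1)
        members.append(shifted)
    return np.stack(members)

def recombine(orographic, transposed_members):
    """Recombine each transposed non-orographic field with the fixed
    (terrain-locked) orographic component."""
    return transposed_members + orographic[None, :, :]
```

The orographic component stays anchored to the terrain while only the advected, non-orographic part is displaced, which is the core idea behind separating the two fields before transposition.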

  5. Going Vertical To Improve the Accuracy of Atomic Force Microscopy Based Single-Molecule Force Spectroscopy.

    Walder, Robert; Van Patten, William J; Adhikari, Ayush; Perkins, Thomas T

    2018-01-23

    Single-molecule force spectroscopy (SMFS) is a powerful technique to characterize the energy landscape of individual proteins, the mechanical properties of nucleic acids, and the strength of receptor-ligand interactions. Atomic force microscopy (AFM)-based SMFS benefits from ongoing progress in improving the precision and stability of cantilevers and the AFM itself. Underappreciated is that the accuracy of such AFM studies remains hindered by inadvertently stretching molecules at an angle while measuring only the vertical component of the force and extension, degrading both measurements. This inaccuracy is particularly problematic in AFM studies using double-stranded DNA and RNA due to their large persistence length (p ≈ 50 nm), often limiting such studies to other SMFS platforms (e.g., custom-built optical and magnetic tweezers). Here, we developed an automated algorithm that aligns the AFM tip above the DNA's attachment point to a coverslip. Importantly, this algorithm was performed at low force (10-20 pN) and relatively fast (15-25 s), preserving the connection between the tip and the target molecule. Our data revealed large uncorrected lateral offsets for 100 and 650 nm DNA molecules [24 ± 18 nm (mean ± standard deviation) and 180 ± 110 nm, respectively]. Correcting this offset yielded a 3-fold improvement in accuracy and precision when characterizing DNA's overstretching transition. We also demonstrated high throughput by acquiring 88 geometrically corrected force-extension curves of a single individual 100 nm DNA molecule in ∼40 min and versatility by aligning polyprotein- and PEG-based protein-ligand assays. Importantly, our software-based algorithm was implemented on a commercial AFM, so it can be broadly adopted. More generally, this work illustrates how to enhance AFM-based SMFS by developing more sophisticated data-acquisition protocols.
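The geometric correction at the heart of the method follows from the right triangle formed by the measured vertical extension and the lateral tip-anchor offset. A minimal sketch, assuming a rigid, straight tether (function and variable names are our own):

```python
import math

def correct_for_lateral_offset(z_ext, f_vert, d_offset):
    """Recover the along-molecule extension and force from the measured
    vertical extension and vertical force, given the lateral offset between
    the tip and the molecule's surface attachment point (consistent units,
    e.g. nm and pN). Only the vertical components are measured by the AFM."""
    ext = math.hypot(z_ext, d_offset)   # true end-to-end extension
    force = f_vert * ext / z_ext        # tension directed along the molecule
    return ext, force
```

For example, an uncorrected 60 nm lateral offset on an 80 nm vertical extension understates the true 100 nm extension by 20% and the tension by a matching factor, which is the scale of error the alignment algorithm removes.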

  6. Including RNA secondary structures improves accuracy and robustness in reconstruction of phylogenetic trees

    Dandekar Thomas

    2010-01-01

    Background: In several studies, secondary structures of ribosomal genes have been used to improve the quality of phylogenetic reconstructions. An extensive evaluation of the benefits of secondary structure, however, is lacking. Results: This is the first study to counter this deficiency. We inspected the accuracy and robustness of phylogenetics with individual secondary structures by simulation experiments for artificial tree topologies with up to 18 taxa and for divergence levels in the range of typical phylogenetic studies. We chose the internal transcribed spacer 2 of the ribosomal cistron as an exemplary marker region. The simulation integrated the coevolution process of sequences with secondary structures. Additionally, the phylogenetic power of marker size duplication was investigated and compared with sequence and sequence-structure reconstruction methods. The results clearly show that the accuracy and robustness of Neighbor Joining trees are largely improved by structural information in contrast to sequence-only data, whereas a doubled marker size only accounts for robustness. Conclusions: Individual secondary structures of ribosomal RNA sequences provide a valuable gain in information content that is useful for phylogenetics. Thus, the use of the ITS2 sequence together with its secondary structure for taxonomic inferences is recommended. Other reconstruction methods such as maximum likelihood, Bayesian inference or maximum parsimony may equally profit from secondary structure inclusion. Reviewers: This article was reviewed by Shamil Sunyaev, Andrea Tanzer (nominated by Frank Eisenhaber) and Eugene V. Koonin. For the full reviews, please go to the Reviewers' comments section.

  7. The use of imprecise processing to improve accuracy in weather and climate prediction

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-01-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and

  8. Improvements to robotics-inspired conformational sampling in Rosetta.

    Amelie Stein

    To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion- and omega-angle sampling, and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.

  9. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    Patel, Prashant, E-mail: p.patel@bham.ac.uk [University of Birmingham, School of Cancer Sciences, Vincent Drive (United Kingdom); Rangarajan, Balaji; Mangat, Kamarjit, E-mail: kamarjit.mangat@uhb.nhs.uk, E-mail: kamarjit.mangat@nhs.net [University Hospital Birmingham NHS Trust, Department of Radiology (United Kingdom)

    2015-08-15

    Purpose: Various methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease. Materials and Methods: Fifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique with the use of a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open within the stricture, the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively. Results: We report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication. Conclusion: We propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  10. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    Patel, Prashant; Rangarajan, Balaji; Mangat, Kamarjit

    2015-01-01

    Purpose: Various methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease. Materials and Methods: Fifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique with the use of a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open within the stricture, the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively. Results: We report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication. Conclusion: We propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  11. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    The objective was to evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). We obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants, and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). The study used a convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed, and the impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had a CLD percentage error within ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
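The estimation scheme itself is simple: scale the mean of the sampled daily counts to the length of the month, then compare against the true total. A sketch (function names are our own):

```python
def estimate_cld(daily_cld, sample_days):
    """Estimate monthly central line-days from counts taken only on the
    sampled days: mean sampled count scaled to the number of days in the
    month. daily_cld holds one central-line count per calendar day."""
    sampled = [daily_cld[d] for d in sample_days]
    return sum(sampled) / len(sampled) * len(daily_cld)

def percent_error(actual, estimate):
    """Signed percentage error of the estimate relative to the actual CLD."""
    return 100.0 * (estimate - actual) / actual
```

A day-pair sample simply passes two indices to `sample_days`; averaging two days damps day-of-week fluctuations, which is consistent with the finding that day-pair samples were more accurate than single-day samples.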

  12. Improving Accuracy of Dempster-Shafer Theory Based Anomaly Detection Systems

    Ling Zou

    2014-07-01

    While the Dempster-Shafer theory of evidence has been widely used in anomaly detection, it has some shortcomings. The theory trusts all evidence equally, which does not hold in a distributed-sensor anomaly detection system (ADS). Moreover, pieces of evidence are sometimes mutually dependent, which can lead to false alerts. We propose an improvement that incorporates two algorithms. A feature-selection algorithm employs Gaussian Graphical Models to discover correlations among candidate features; a group of suitable ADSs is then selected to perform detection, and the detection results are sent to the fusion engine. A weight-estimation algorithm applies information gain to set a weight for every feature. A weighted Dempster-Shafer combination of the detection results then achieves better accuracy. We evaluate our detection prototype through a set of experiments conducted with the standard benchmark Wisconsin Breast Cancer Dataset and real Internet traffic. Evaluations on the Wisconsin Breast Cancer Dataset show that our prototype can find the correlation among nine features and improve the detection rate without affecting the false positive rate. Evaluations on Internet traffic show that the weight-estimation algorithm can improve the detection performance significantly.
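A minimal sketch of the weighted fusion step, using classical Shafer discounting to encode per-sensor trust before applying Dempster's rule. The two-class frame and the function names are our own assumptions, not the paper's implementation:

```python
from itertools import product

# Frame of discernment for a two-class anomaly detector
FRAME = frozenset({'normal', 'attack'})

def discount(mass, alpha):
    """Shafer discounting: scale each focal mass by the source weight alpha
    and move the remaining belief onto the whole frame (ignorance)."""
    out = {a: alpha * m for a, m in mass.items() if a != FRAME}
    out[FRAME] = 1 - alpha + alpha * mass.get(FRAME, 0.0)
    return out

def combine(m1, m2):
    """Dempster's rule of combination with conflict normalisation."""
    raw, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to the empty set
    k = 1.0 - conflict
    return {a: m / k for a, m in raw.items()}
```

Less trusted sensors are discounted with a smaller alpha, so their verdicts pull the combined belief toward ignorance rather than toward a hard classification, which is the intended effect of unequal evidence weights.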

  13. Improvement in Interobserver Accuracy in Delineation of the Lumpectomy Cavity Using Fiducial Markers

    Shaikh, Talha; Chen Ting; Khan, Atif; Yue, Ning J.; Kearney, Thomas; Cohler, Alan; Haffty, Bruce G.; Goyal, Sharad

    2010-01-01

    Purpose: To determine whether the presence of gold fiducial markers would improve the inter- and intraphysician accuracy in the delineation of the surgical cavity compared with a matched group of patients who did not receive gold fiducial markers in the setting of accelerated partial-breast irradiation (APBI). Methods and Materials: Planning CT images of 22 lumpectomy cavities were reviewed in a cohort of 22 patients; 11 patients received four to six gold fiducial markers placed at the time of surgery. Three physicians categorized each seroma cavity according to cavity visualization score criteria and delineated each of the 22 seroma cavities and the clinical target volume. Distance between centers of mass, percentage overlap, and average surface distance for all patients were assessed. Results: The mean seroma volume was 36.9 cm³ for fiducial patients and 34.2 cm³ for non-fiducial patients (p = ns). Fiducial markers improved the mean cavity visualization score from 2.5 ± 1.3 to 3.6 ± 1.0 (p < 0.05). The mean distance between centers of mass, average surface distance, and percentage overlap for the seroma and clinical target volume were significantly improved in the fiducial marker patients as compared with the non-fiducial marker patients (p < 0.001). Conclusions: The placement of gold fiducial markers at the time of lumpectomy improves interphysician identification and delineation of the seroma cavity and clinical target volume. This has implications in radiotherapy treatment planning for accelerated partial-breast irradiation and for boost after whole-breast irradiation.

  14. Psychometric Properties and Diagnostic Accuracy of the Edinburgh Postnatal Depression Scale in a Sample of Iranian Women

    Gholam Reza Kheirabadi

    2012-03-01

    Background: The Edinburgh Postnatal Depression Scale (EPDS) has been used as a reliable screening tool for postpartum depression in many countries. This study aimed to assess the psychometric properties and diagnostic accuracy of the EPDS in a sample of Iranian women. Methods: Using stratified sampling, 262 postpartum women (2 weeks to 3 months after delivery) were selected from urban and rural health centers in the city of Isfahan. They were interviewed using the EPDS and the Hamilton depression rating scale (HDRS). Data were assessed using factor analysis, receiver operating characteristic (ROC) curve analysis, Cronbach's alpha and the Pearson correlation coefficient. Results: The age of the participants ranged from 18 to 45 years (mean 26.6±5.1). Based on a cut-off point of >13 for the HDRS, 18.3% of the participants were classified as depressed. The overall reliability (Cronbach's alpha) of the EPDS was 0.79. There was a significant correlation (r²=0.60, P<0.01) between the EPDS and the HDRS. Factor analysis showed that anhedonia and depression were the two explanatory factors. At a cut-off point of 12, the sensitivity of the questionnaire was 78% (95% CI: 73%-83%) and its specificity was 75% (95% CI: 72%-78%). Conclusion: The Persian version of the EPDS showed appropriate psychometric properties and diagnostic accuracy. It can be used by health system professionals for the detection, assessment and treatment of mothers with postpartum depression.
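The sensitivity and specificity reported at a given cut-off follow directly from the 2x2 table of screening results against the reference diagnosis. A small sketch (function name and convention "score >= cutoff is screen-positive" are our own):

```python
def sens_spec(scores, is_depressed, cutoff):
    """Sensitivity and specificity of a screening score against a reference
    diagnosis, treating score >= cutoff as a positive screen."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, is_depressed))
    fn = sum(s < cutoff and d for s, d in zip(scores, is_depressed))
    tn = sum(s < cutoff and not d for s, d in zip(scores, is_depressed))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, is_depressed))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cutoff over its range and plotting sensitivity against 1 - specificity yields the ROC curve used in the study to select the operating point.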

  15. Accuracy improvement of CT reconstruction using tree-structured filter bank

    Ueda, Kazuhiro; Morimoto, Hiroaki; Morikawa, Yoshitaka; Murakami, Junichi

    2009-01-01

    An accuracy improvement is proposed for the high-speed CT reconstruction algorithm based on the tree-structured filter bank (TSFB). The TSFB method greatly reduces the amount of computation compared with the convolution backprojection (CB) method, but artifacts appear in the reconstructed image because signals outside the reconstruction domain are ignored during the stage processing. In addition, the all-band filter that forms part of the two-dimensional synthesis filter is an IIR filter, which causes artifacts at the edges of the reconstructed image. To suppress these artifacts, the proposed method extends the TSFB processing range into the region outside the reconstruction domain by controlling the width of the sample lines and adding lines beyond the reconstruction domain. Furthermore, to avoid an increase in the amount of computation, an algorithm is proposed that determines the required processing range from the number of TSFB processing stages and the slope of the filter, and then updates the position and width of the sample lines so that only the required range is processed. Simulations show that the proposed method achieves fast, highly accurate CT reconstruction: the quality of the reconstructed image is improved compared with the plain TSFB method and matches that of the CB method. (T. Tanaka)

  16. Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF

    Hutson, Malcolm; Reiners, Dirk

    2010-01-01

    Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.
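A common minimal way to fuse a gyroscope with an accelerometer for orientation is a complementary filter; the sketch below illustrates that principle only and is far simpler than the SCAAT filtering JanusVF uses, with all names and the blend factor being our own assumptions:

```python
import math

def accel_to_pitch(ax, az):
    """Pitch angle (rad) from the gravity direction measured by the
    accelerometer -- noisy, but free of long-term drift."""
    return math.atan2(ax, az)

def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One pitch update: integrate the gyroscope rate (smooth but drifting)
    and pull it gently toward the accelerometer estimate (noisy but
    drift-free). alpha sets the crossover between the two sensors."""
    gyro_est = pitch_prev + gyro_rate * dt   # dead-reckoned orientation
    return alpha * gyro_est + (1 - alpha) * accel_pitch
```

The high-rate inertial estimate smooths motion between camera fixes, while the absolute reference (here the accelerometer; in JanusVF, the fiducial observations) bounds the accumulated drift.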

  17. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. Published by Elsevier Inc.
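    The external calibrations described act, to first order, as multiplicative scale corrections on the measured sedimentation coefficient. A minimal sketch of how such factors combine (the factor values below are hypothetical, not the paper's; real radial corrections involve remapping, simplified here):

```python
def corrected_s(s_measured, f_temperature=1.0, f_radial=1.0, f_time=1.0):
    """Apply multiplicative external-calibration factors to a measured
    sedimentation coefficient (illustrative first-order model)."""
    return s_measured * f_temperature * f_radial * f_time

# e.g., a 2% scan-time error alone biases s by 2%:
s = corrected_s(4.30, f_time=1.02)
```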

  18. Improving diagnostic accuracy using agent-based distributed data mining system.

    Sridhar, S

    2013-09-01

    The use of data mining techniques to improve diagnostic system accuracy is investigated in this paper. Data mining algorithms aim to discover patterns and extract useful knowledge from facts recorded in databases. Generally, expert systems are constructed to automate diagnostic procedures. The learning component uses data mining algorithms to extract the expert system rules from the database automatically, and such learning algorithms can assist clinicians by extracting knowledge automatically. As the number and variety of data sources are dramatically increasing, another way to acquire knowledge from databases is to apply various data mining algorithms that extract knowledge from data. As data sets are inherently distributed, the distributed system uses agents to transport the trained classifiers and uses meta-learning to combine their knowledge. Commonsense reasoning is also used in association with distributed data mining to obtain better results. Combining human expert knowledge with data mining knowledge improves the performance of the diagnostic system. This work suggests a framework for combining human knowledge and knowledge gained by better data mining algorithms on a renal and gallstone data set.
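    The agent-transported classifiers are combined by meta-learning. As a minimal stand-in for that combination step (a simple majority vote over site-local predictions; labels are invented, and the paper's meta-learner may be more elaborate):

```python
from collections import Counter

def meta_combine(predictions):
    """Majority vote over class labels returned by classifiers trained
    at distributed sites (ties broken by first-seen label)."""
    return Counter(predictions).most_common(1)[0][0]

# Three site-local classifiers disagree; the ensemble takes the majority.
label = meta_combine(["stone", "stone", "no-stone"])
```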

  19. Improving protein fold recognition and structural class prediction accuracies using physicochemical properties of amino acids.

    Raicar, Gaurav; Saini, Harsh; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2016-08-07

    Predicting the three-dimensional (3-D) structure of a protein is an important task in the field of bioinformatics and biological sciences. However, directly predicting the 3-D structure from the primary structure is hard to achieve. Therefore, predicting the fold or structural class of a protein sequence is generally used as an intermediate step in determining the protein's 3-D structure. For protein fold recognition (PFR) and structural class prediction (SCP), two steps are required - feature extraction step and classification step. Feature extraction techniques generally utilize syntactical-based information, evolutionary-based information and physicochemical-based information to extract features. In this study, we explore the importance of utilizing the physicochemical properties of amino acids for improving PFR and SCP accuracies. For this, we propose a Forward Consecutive Search (FCS) scheme which aims to strategically select physicochemical attributes that will supplement the existing feature extraction techniques for PFR and SCP. An exhaustive search is conducted on all the existing 544 physicochemical attributes using the proposed FCS scheme and a subset of physicochemical attributes is identified. Features extracted from these selected attributes are then combined with existing syntactical-based and evolutionary-based features, to show an improvement in the recognition and prediction performance on benchmark datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
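    The FCS scheme strategically selects physicochemical attributes to supplement existing features. The sketch below is generic greedy forward selection with a pluggable scoring function, a simplified stand-in for FCS (attribute names and scores are invented; the paper's search and evaluation criteria differ in detail):

```python
def forward_select(attributes, score, k):
    """Greedy forward search: repeatedly add the attribute that most
    improves the score of the current subset."""
    chosen = []
    for _ in range(k):
        best = max((a for a in attributes if a not in chosen),
                   key=lambda a: score(chosen + [a]))
        chosen.append(best)
    return chosen

# Toy score: two attributes are informative, one is not.
useful = {"hydrophobicity": 2.0, "polarity": 1.5, "bulkiness": 0.1}
picked = forward_select(list(useful), lambda s: sum(useful[a] for a in s), k=2)
```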

  20. Can use of an administrative database improve accuracy of hospital-reported readmission rates?

    Edgerton, James R; Herbert, Morley A; Hamman, Baron L; Ring, W Steves

    2018-05-01

    Readmission rates after cardiac surgery are being used as a quality indicator; they are also being collected by Medicare and are tied to reimbursement. Accurate knowledge of readmission rates may be difficult to achieve because patients may be readmitted to different hospitals. In our area, 81 hospitals share administrative claims data; 28 of these hospitals (from 5 different hospital systems) do cardiac surgery and share Society of Thoracic Surgeons (STS) clinical data. We used these 2 sources to compare the readmissions data for accuracy. A total of 45,539 STS records from January 2008 to December 2016 were matched with the hospital billing data records. Using the index visit as the start date, the billing records were queried for any subsequent in-patient visits for that patient. The billing records included date of readmission and hospital of readmission data and were compared with the data captured in the STS record. We found 1153 (2.5%) patients who had STS records that were marked "No" or "missing," but there were billing records that showed a readmission. The reported STS readmission rate of 4796 (10.5%) underreported the readmission rate by 2.5 actual percentage points. The true rate should have been 13.0%. Actual readmission rate was 23.8% higher than reported by the clinical database. Approximately 36% of readmissions were to a hospital that was a part of a different hospital system. It is important to know accurate readmission rates for quality improvement processes and institutional financial planning. Matching patient records to an administrative database showed that the clinical database may fail to capture many readmissions. Combining data with an administrative database can enhance accuracy of reporting. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
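    The correction arithmetic in this abstract can be reproduced directly from the reported counts; recomputing from the raw numbers gives 13.1%, matching the published 13.0% up to rounding of the intermediate percentages.

```python
total = 45539      # matched STS records, Jan 2008 - Dec 2016
reported = 4796    # readmissions captured by the clinical (STS) database
missed = 1153      # readmissions found only in administrative billing data

reported_rate = reported / total           # ~10.5%
true_rate = (reported + missed) / total    # ~13.1%
relative_underreport = missed / reported   # ~24% more readmissions than reported
```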

  1. The design of visible system for improving the measurement accuracy of imaging points

    Shan, Qiu-sha; Li, Gang; Zeng, Luan; Liu, Kai; Yan, Pei-pei; Duan, Jing; Jiang, Kai

    2018-02-01

    It has a widely applications in robot vision and 3D measurement for binocular stereoscopic measurement technology. And the measure precision is an very important factor, especially in 3D coordination measurement, high measurement accuracy is more stringent to the distortion of the optical system. In order to improving the measurement accuracy of imaging points, to reducing the distortion of the imaging points, the optical system must be satisfied the requirement of extra low distortion value less than 0.1#65285;, a transmission visible optical lens was design, which has characteristic of telecentric beam path in image space, adopted the imaging model of binocular stereo vision, and imaged the drone at the finity distance. The optical system was adopted complex double Gauss structure, and put the pupil stop on the focal plane of the latter groups, maked the system exit pupil on the infinity distance, and realized telecentric beam path in image space. The system mainly optical parameter as follows: the system spectrum rangement is visible light wave band, the optical effective length is f '=30mm, the relative aperture is 1/3, and the fields of view is 21°. The final design results show that the RMS value of the spread spots of the optical lens in the maximum fields of view is 2.3μm, which is less than one pixel(3.45μm) the distortion value is less than 0.1%, the system has the advantage of extra low distortion value and avoids the latter image distortion correction; the proposed modulation transfer function of the optical lens is 0.58(@145 lp/mm), the imaging quality of the system is closed to the diffraction limited; the system has simply structure, and can satisfies the requirements of the optical indexes. Ultimately, based on the imaging model of binocular stereo vision was achieved to measuring the drone at the finity distance.

  2. The accuracy of endometrial sampling in women with postmenopausal bleeding: a systematic review and meta-analysis.

    van Hanegem, Nehalennia; Prins, Marileen M C; Bongers, Marlies Y; Opmeer, Brent C; Sahota, Daljit Singh; Mol, Ben Willem J; Timmermans, Anne

    2016-02-01

    Postmenopausal bleeding (PMB) can be the first sign of endometrial cancer. In cases of a thickened endometrium, endometrial sampling is often used in these women. In this systematic review, we studied the accuracy of endometrial sampling for the diagnosis of endometrial cancer, atypical hyperplasia and endometrial disease (endometrial pathology, including benign polyps). We systematically searched the literature for studies comparing the results of endometrial sampling in women with postmenopausal bleeding with two different reference standards: blind dilatation and curettage (D&C) and hysteroscopy with histology. We assessed the quality of the detected studies by the QUADAS-2 tool. For each included study, we calculated the fraction of women in whom endometrial sampling failed. Furthermore, we extracted numbers of cases of endometrial cancer, atypical hyperplasia and endometrial disease that were identified or missed by endometrial sampling. We detected 12 studies reporting on 1029 women with postmenopausal bleeding: five studies with dilatation and curettage (D&C) and seven studies with hysteroscopy as a reference test. The weighted sensitivity of endometrial sampling with D&C as a reference for the diagnosis of endometrial cancer was 100% (range 100-100%) and 92% (71-100) for the diagnosis of atypical hyperplasia. Only one study reported sensitivity for endometrial disease, which was 76%. When hysteroscopy was used as a reference, weighted sensitivities of endometrial sampling were 90% (range 50-100), 82% (range 56-94) and 39% (21-69) for the diagnosis of endometrial cancer, atypical hyperplasia and endometrial disease, respectively. For all diagnoses studied and both reference tests, specificity was 98-100%. The weighted failure rate of endometrial sampling was 11% (range 1-53%), while insufficient samples were found in 31% (range 7-76%). In these women with insufficient or failed samples, an endometrial (pre)cancer was found in 7% (range 0-18%). 
In women with
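    The sensitivities and specificities pooled in the review reduce to 2x2 counts of the index test against the reference standard. A minimal sketch with invented counts (not data from the review):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from the 2x2 table of an index
    test against the reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: 9 of 10 cancers detected, 196 of 200 without
# cancer correctly negative.
sensitivity, specificity = sens_spec(tp=9, fn=1, tn=196, fp=4)
```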

  3. Improving the accuracy of ionization chamber dosimetry in small megavoltage x-ray fields

    McNiven, Andrea L.

    The dosimetry of small x-ray fields is difficult, but important, in many radiation therapy delivery methods. The accuracy of ion chambers for small field applications, however, is limited due to the relatively large size of the chamber with respect to the field size, leading to partial volume effects, lateral electronic disequilibrium and calibration difficulties. The goal of this dissertation was to investigate the use of ionization chambers for the purpose of dosimetry in small megavoltage photon beams with the aim of improving clinical dose measurements in stereotactic radiotherapy and helical tomotherapy. A new method for the direct determination of the sensitive volume of small-volume ion chambers using micro computed tomography (μCT) was investigated using four nominally identical small-volume (0.56 cm3) cylindrical ion chambers. Agreement between their measured relative volume and ionization measurements (within 2%) demonstrated the feasibility of volume determination through μCT. Cavity-gas calibration coefficients were also determined, demonstrating the promise for accurate ion chamber calibration based partially on μCT. The accuracy of relative dose factor measurements in 6MV stereotactic x-ray fields (5 to 40mm diameter) was investigated using a set of prototype plane-parallel ionization chambers (diameters of 2, 4, 10 and 20mm). Chamber and field size specific correction factors (CSFQ), which account for perturbation of the secondary electron fluence, were calculated using Monte Carlo simulation methods (BEAM/EGSnrc simulations). These correction factors (e.g., CSFQ = 1.76 for a 2mm chamber in a 5mm field) allow for accurate relative dose factor (RDF) measurement when applied to ionization readings, under conditions of electronic disequilibrium. With respect to the dosimetry of helical tomotherapy, a novel application of the ion chambers was developed to characterize the fan beam size and effective dose rate. 
Characterization was based on an adaptation of the

  4. Matrix effect on the detection limit and accuracy in total reflection X-ray fluorescence analysis of trace elements in environmental and biological samples

    Karjou, J.

    2007-01-01

    The effect of matrix contents on the detection limit of total reflection X-ray fluorescence analysis was experimentally investigated using a set of multielement standard solutions (500 ng/mL of each element) in variable concentrations of NH4NO3. It was found that high matrix concentrations, i.e. 0.1-10% NH4NO3, had a strong effect on the detection limits for all investigated elements, whereas no effect was observed at lower matrix concentrations, i.e. 0-0.1% NH4NO3. The effect of soil and blood sample mass on the detection limit was also studied. The results showed that the detection limit in concentration units (μg/g) decreased with increasing sample mass, whereas the detection limit in absolute mass units (ng) increased with increasing sample mass. An optimal blood sample mass of ca. 200 μg was sufficient to improve the detection limit for Se determination by total reflection X-ray fluorescence. The capability of total reflection X-ray fluorescence to analyze different kinds of samples was discussed with respect to accuracy and detection limits based on certified and reference materials. Direct analysis of unknown water samples from several sources was also presented in this work
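    The opposite trends in the two detection-limit units follow from counting statistics: if the background grows with sample mass, the absolute limit (ng) rises while the relative limit (per unit mass) falls. The toy model below is illustrative only (the constant k and the square-root scaling are assumptions, not fitted to the paper's data):

```python
import math

def detection_limits(mass_ug, k=10.0):
    """Toy counting-statistics model: the absolute detection limit (ng)
    grows like sqrt(mass), so the relative limit (ng per ug of sample)
    falls like 1/sqrt(mass). k is an arbitrary instrument constant."""
    dl_ng = k * math.sqrt(mass_ug)
    dl_relative = dl_ng / mass_ug
    return dl_ng, dl_relative

small = detection_limits(50.0)
large = detection_limits(200.0)
# larger mass: absolute limit up, relative limit down
```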

  5. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    Multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight representing its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from the low-likelihood region to the high-likelihood region, and this evolution proceeds iteratively via a local sampling procedure; the efficiency of NSE is therefore dominated by the strength of that local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling step. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
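    A minimal nested-sampling estimator illustrates the algorithm's structure on a toy problem. Here the local sampling step is brute-force constrained rejection from the prior, viable only for toys; the paper's point is precisely to replace this step with an efficient sampler such as DREAMzs.

```python
import math, random

def logaddexp(a, b):
    m = max(a, b)
    return m if m == -math.inf else m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_evidence(loglike, prior_sample, n_live=100, n_iter=600, seed=0):
    """Minimal nested-sampling estimate of the log marginal likelihood."""
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    log_z = -math.inf
    for i in range(1, n_iter + 1):
        worst = min(live, key=loglike)
        log_l_min = loglike(worst)
        # prior-mass shell weight: X_{i-1} - X_i with X_i = exp(-i / n_live)
        log_w = math.log(math.exp(-(i - 1) / n_live) - math.exp(-i / n_live))
        log_z = logaddexp(log_z, log_l_min + log_w)
        live.remove(worst)
        while True:  # replace the worst point: prior draw subject to L > L_min
            theta = prior_sample(rng)
            if loglike(theta) > log_l_min:
                live.append(theta)
                break
    return log_z

# Toy problem: uniform prior on [0, 1], Gaussian likelihood bump at 0.5.
ll = lambda t: -0.5 * ((t - 0.5) / 0.1) ** 2
log_z = nested_evidence(ll, lambda rng: rng.random())
# analytic log-evidence: log(0.1 * sqrt(2*pi)), about -1.384
```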

  6. Improved optimum condition for recovery and measurement of 210Po in environmental samples

    Zal Uyun Wan Mahmood; Norfaizal Mohamed; Nik Azlin Nik Ariffin; Abdul Kadir Ishak

    2012-01-01

    An improved laboratory technique for the measurement of polonium-210 (210Po) in environmental samples has been developed in the Radiochemistry and Environmental Laboratory (RAS), Malaysian Nuclear Agency. To further improve this technique, a study was carried out with the objectives of determining the optimum conditions for 210Po deposition and evaluating the accuracy and precision of 210Po determination in environmental samples. Polonium-210, an alpha emitter obtained in acidic solution through total digestion and dissolution of the samples, was efficiently plated onto one side of a silver disc in a spontaneous plating process for measurement of its alpha activity. The optimum conditions for deposition of 210Po were achieved using hydrochloric acid (HCl) media at an acidity of 0.5 M, in the presence of 1.0 gram of hydroxylammonium chloride, at a plating temperature of 90 degrees Celsius. The plating was carried out in 80 ml of HCl solution (0.5 M) for 4 hours. The recoveries recorded using 209Po tracers in the CRM IAEA-385 and environmental samples were 85%-98%, so the efficiency of the new technique is a distinct advantage over existing techniques. Optimization of the deposition parameters is therefore of prime importance for achieving accurate and precise results as well as savings in cost and time. (author)

  7. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from a sphere, are widely used in various optical systems. Compared with spherical surfaces, they offer many advantages, such as improving image quality, correcting aberrations, expanding the field of view, increasing the effective distance, and making the optical system compact and lightweight. In particular, with the rapid development of space optics, space sensors require higher resolution and larger viewing angles, and aspheric surfaces are becoming essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns [1]. To achieve the final surface-accuracy requirement, the aspheric surface must be corrected quickly, and high-precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing [2]: geometric ray detection, Hartmann testing, the Ronchi test, the knife-edge method, direct profilometry, and interferometry, but each has its disadvantages [6]. In recent years, aspheric surface measurement has become one of the major factors restricting the development of aspheric surface fabrication. A two-meter-aperture industrial coordinate measuring machine (CMM) is available, but it suffers from large detection errors and low repeatability when measuring aspheric surfaces during coarse grinding, which seriously affects convergence efficiency during aspheric mirror processing. To solve these problems, this paper presents an effective error control, calibration, and removal method based on real-time monitoring of the calibration mirror position and other means of error control, together with probe correction, selection of the measurement mode, and development of a measurement-point distribution program. Verified on real engineering examples, this method increases the original industrial

  8. Accuracy Enhancement of Raman Spectroscopy Using Complementary Laser-Induced Breakdown Spectroscopy (LIBS) with Geologically Mixed Samples.

    Choi, Soojin; Kim, Dongyoung; Yang, Junho; Yoh, Jack J

    2017-04-01

    Quantitative Raman analysis was carried out with geologically mixed samples that have various matrices. In order to compensate the matrix effect in Raman shift, laser-induced breakdown spectroscopy (LIBS) analysis was performed. Raman spectroscopy revealed the geological materials contained in the mixed samples. However, the analysis of a mixture containing different matrices was inaccurate due to the weak signal of the Raman shift, interference, and the strong matrix effect. On the other hand, the LIBS quantitative analysis of atomic carbon and calcium in mixed samples showed high accuracy. In the case of the calcite and gypsum mixture, the coefficient of determination of atomic carbon using LIBS was 0.99, while that for the Raman signal was less than 0.9. Therefore, the geological composition of the mixed samples is first obtained using Raman and the LIBS-based quantitative analysis is then applied to the Raman outcome in order to construct highly accurate univariate calibration curves. The study also focuses on a method to overcome matrix effects through the two complementary spectroscopic techniques of Raman spectroscopy and LIBS.
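    The univariate calibration curves and their coefficients of determination reduce to an ordinary least-squares line fit. A self-contained sketch with hypothetical intensity data (not the study's measurements):

```python
def linfit_r2(x, y):
    """Least-squares line y = a*x + b and coefficient of determination,
    as used for a univariate calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical emission-line intensities vs. known concentration (wt%):
conc = [0.0, 1.0, 2.0, 3.0, 4.0]
intensity = [0.02, 1.05, 1.98, 3.01, 4.02]
a, b, r2 = linfit_r2(conc, intensity)
```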

  9. Evaluation of a new tear osmometer for repeatability and accuracy, using 0.5-microL (500-nanoliter) samples.

    Yildiz, Elvin H; Fan, Vincent C; Banday, Hina; Ramanathan, Lakshmi V; Bitra, Ratna K; Garry, Eileen; Asbell, Penny A

    2009-07-01

    To evaluate the repeatability and accuracy of a new tear osmometer that measures the osmolality of 0.5-microL (500-nanoliter) samples. Four standardized solutions were tested with 0.5-microL (500-nanoliter) samples for repeatability of measurements and comparability to standardized technique. Two known standard salt solutions (290 mOsm/kg H2O, 304 mOsm/kg H2O), a normal artificial tear matrix sample (306 mOsm/kg H2O), and an abnormal artificial tear matrix sample (336 mOsm/kg H2O) were repeatedly tested (n = 20 each) for osmolality with use of the Advanced Instruments Model 3100 Tear Osmometer (0.5-microL [500-nanoliter] sample size) and the FDA-approved Advanced Instruments Model 3D2 Clinical Osmometer (250-microL sample size). Four standard solutions were used, with osmolality values of 290, 304, 306, and 336 mOsm/kg H2O. The respective precision data, including the mean and standard deviation, were: 291.8 +/- 4.4, 305.6 +/- 2.4, 305.1 +/- 2.3, and 336.4 +/- 2.2 mOsm/kg H2O. The percent recoveries for the 290 mOsm/kg H2O standard solution, the 304 mOsm/kg H2O reference solution, the normal value-assigned 306 mOsm/kg H2O sample, and the abnormal value-assigned 336 mOsm/kg H2O sample were 100.3%, 100.2%, 99.8%, and 100.3%, respectively. The repeatability data are in accordance with data obtained on clinical osmometers with use of larger sample sizes. All 4 samples tested on the tear osmometer have osmolality values that correlate well to the clinical instrument method. The tear osmometer is a suitable instrument for testing the osmolality of microliter-sized samples, such as tears, and therefore may be useful in diagnosing, monitoring, and classifying tear abnormalities such as the severity of dry eye disease.

  10. Hematocrit correction does not improve glucose monitor accuracy in the assessment of neonatal hypoglycemia.

    Wang, Li; Sievenpiper, John L; de Souza, Russell J; Thomaz, Michele; Blatz, Susan; Grey, Vijaylaxmi; Fusch, Christoph; Balion, Cynthia

    2013-08-01

    The lack of accuracy of point of care (POC) glucose monitors has limited their use in the diagnosis of neonatal hypoglycemia. Hematocrit plays an important role in explaining discordant results. The objective of this study was to assess the effect of hematocrit on the diagnostic performance of Abbott Precision Xceed Pro (PXP) and Nova StatStrip (StatStrip) monitors in neonates. All blood samples ordered for laboratory glucose measurement were analyzed using the PXP and StatStrip and compared with the laboratory analyzer (ABL 800 Blood Gas analyzer [ABL]). Acceptable error targets were ±15% for glucose monitoring and ±5% for diagnosis. A total of 307 samples from 176 neonates were analyzed. Overall, 90% of StatStrip and 75% of PXP values met the 15% error limit and 45% of StatStrip and 32% of PXP values met the 5% error limit. At glucose concentrations ≤4 mmol/L, 83% of StatStrip and 79% of PXP values met the 15% error limit, while 37% of StatStrip and 38% of PXP values met the 5% error limit. Hematocrit explained 7.4% of the difference between the PXP and ABL whereas it accounted for only 0.09% of the difference between the StatStrip and ABL. The ROC analysis showed the screening cut point with the best performance for identifying neonatal hypoglycemia was 3.2 mmol/L for StatStrip and 3.3 mmol/L for PXP. Despite a negligible hematocrit effect for the StatStrip, it did not achieve recommended error limits. The StatStrip and PXP glucose monitors remain suitable only for neonatal hypoglycemia screening with confirmation required from a laboratory analyzer.
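    The error-limit statistics quoted above are fractions of paired meter/laboratory readings falling within a relative tolerance. A sketch with invented glucose pairs (not the study's data):

```python
def within_limit(meter, reference, limit):
    """Fraction of meter readings within ±limit (relative) of the
    laboratory reference values."""
    ok = sum(1 for m, r in zip(meter, reference) if abs(m - r) / r <= limit)
    return ok / len(meter)

# Illustrative paired glucose values (mmol/L):
meter = [2.9, 3.4, 4.1, 5.6, 7.9]
lab = [3.0, 3.3, 4.0, 5.0, 8.0]
frac_15 = within_limit(meter, lab, 0.15)  # fraction inside the ±15% target
frac_05 = within_limit(meter, lab, 0.05)  # fraction inside the ±5% target
```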

  11. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.

  12. How could the replica method improve accuracy of performance assessment of channel coding?

    Kabashima, Yoshiyuki [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)], E-mail: kaba@dis.titech.ac.jp

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.

  13. Improving the surface metrology accuracy of optical profilers by using multiple measurements

    Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan

    2016-10-01

    The performance of high-resolution optical systems is affected by small angle scattering at the mid-spatial-frequency irregularities of the optical surface. Characterizing these irregularities is, therefore, important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable on their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise. The intensity of white noise decreases as the number of times of multiple measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, with the analysis of both the residual reference error after calibration, and the random errors appearing in the range of measured spatial frequencies. The resulting insights on the effects of white noise in optical profiler measurements and the methods to mitigate them may prove invaluable to improve the quality of surface metrology with optical profilers.
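    Averaging N independent scans reduces the additive white-noise variance by 1/N (the standard deviation by 1/sqrt(N)), which is the core of the multiple-measurement method. A simulation sketch of that effect on a flat surface (all parameters illustrative):

```python
import random, statistics

def averaged_scan(n_avg, n_points=2000, noise_sd=1.0, seed=1):
    """Average n_avg simulated profiler scans of a flat surface with
    additive Gaussian white noise."""
    rng = random.Random(seed)
    scans = [[rng.gauss(0.0, noise_sd) for _ in range(n_points)]
             for _ in range(n_avg)]
    return [sum(col) / n_avg for col in zip(*scans)]

sd_single = statistics.pstdev(averaged_scan(1))
sd_16 = statistics.pstdev(averaged_scan(16))
# averaging 16 scans should shrink the noise SD by roughly a factor of 4
```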

  14. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    D’Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-01-01

    A procedure is described in this paper for the accuracy improvement of calibration of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out, in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performances of the vision system, showing a satisfactory behavior if the uncertainty measurement is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.


  16. EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.

    Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin

    2018-04-24

    Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of the existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression-weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed the individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and then experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple-negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. Supplementary data are available at Bioinformatics online.
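
    The abstract does not give the exact EWCos formula, but the idea of down-weighting uninformative expression changes can be sketched as a weighted cosine similarity; the gene values and weights below are purely illustrative:

```python
import math

def weighted_cosine(query, reference, weights):
    """Cosine similarity with per-gene weights (larger weight = more informative)."""
    num = sum(w * q * r for w, q, r in zip(weights, query, reference))
    nq = math.sqrt(sum(w * q * q for w, q in zip(weights, query)))
    nr = math.sqrt(sum(w * r * r for w, r in zip(weights, reference)))
    return num / (nq * nr) if nq and nr else 0.0

# Hypothetical log fold changes; genes are weighted by the magnitude of the
# disease signature's change, so near-zero (uninformative) genes contribute little.
disease = [2.1, -1.8, 0.05, 0.02]
drug    = [-2.0, 1.5, 0.9, -1.1]
w       = [abs(x) for x in disease]

score = weighted_cosine(disease, drug, w)  # negative: drug reverses the signature
```

    In a repositioning setting, a strongly negative score flags a drug whose induced signature opposes the disease signature.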

  17. Multi-scale hippocampal parcellation improves atlas-based segmentation accuracy

    Plassard, Andrew J.; McHugo, Maureen; Heckers, Stephan; Landman, Bennett A.

    2017-02-01

    Known for its distinct role in memory, the hippocampus is one of the most studied regions of the brain. Recent advances in magnetic resonance imaging have allowed for high-contrast, reproducible imaging of the hippocampus. Typically, a trained rater takes 45 minutes to manually trace the hippocampus and delineate the anterior from the posterior segment at millimeter resolution. As a result, there has been significant demand for automated and robust segmentation of the hippocampus. In this work, we use a population of 195 atlases based on T1-weighted MR images with the left and right hippocampus delineated into the head and body. We initialize the multi-atlas segmentation to a region directly around each lateralized hippocampus to both speed up and improve the accuracy of registration. This initialization allows for the incorporation of nearly 200 atlases, a computation which would otherwise typically involve hundreds of hours per target image. The proposed segmentation results in a Dice similarity coefficient over 0.9 for the full hippocampus. This result outperforms a multi-atlas segmentation using the BrainCOLOR atlases (Dice 0.85) and FreeSurfer (Dice 0.75). Furthermore, the head and body delineation resulted in a Dice coefficient over 0.87 for both structures. The head and body volume measurements also show high reproducibility on the Kirby 21 reproducibility population (R2 greater than 0.95), paving the way toward a robust tool for measurement of the hippocampus and other temporal lobe structures.
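
    The Dice similarity coefficient used above to compare segmentations is 2|A∩B|/(|A|+|B|); a minimal sketch over toy voxel sets (the coordinates are made up):

```python
def dice_coefficient(seg_a, seg_b):
    """Dice similarity between two binary segmentations given as sets of voxel indices."""
    if not seg_a and not seg_b:
        return 1.0  # two empty segmentations agree trivially
    overlap = len(seg_a & seg_b)
    return 2.0 * overlap / (len(seg_a) + len(seg_b))

# Hypothetical automated vs. manual hippocampus labels (voxel coordinates)
auto_seg   = {(1, 2, 3), (1, 2, 4), (1, 3, 3), (2, 2, 3)}
manual_seg = {(1, 2, 3), (1, 2, 4), (1, 3, 3), (2, 3, 3)}
score = dice_coefficient(auto_seg, manual_seg)  # 2*3 / (4+4) = 0.75
```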

  18. Incorporation of unique molecular identifiers in TruSeq adapters improves the accuracy of quantitative sequencing.

    Hong, Jungeui; Gresham, David

    2017-11-01

    Quantitative analysis of next-generation sequencing (NGS) data requires discriminating duplicate reads generated by PCR from identical molecules that are of unique origin. Typically, PCR duplicates are identified as sequence reads that align to the same genomic coordinates using reference-based alignment. However, identical molecules can be independently generated during library preparation. Misidentification of these molecules as PCR duplicates can introduce unforeseen biases during analyses. Here, we developed a cost-effective sequencing adapter design by modifying Illumina TruSeq adapters to incorporate a unique molecular identifier (UMI) while maintaining the capacity to undertake multiplexed, single-index sequencing. Incorporation of UMIs into TruSeq adapters (TrUMIseq adapters) enables identification of bona fide PCR duplicates as identically mapped reads with identical UMIs. Using TrUMIseq adapters, we show that accurate removal of PCR duplicates results in improved accuracy of both allele frequency (AF) estimation in heterogeneous populations using DNA sequencing and gene expression quantification using RNA-Seq.
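
    The deduplication rule described above, that only reads with identical mapping coordinates *and* identical UMIs are PCR duplicates, can be sketched as follows (field names and reads are illustrative):

```python
def dedup_reads(reads):
    """Keep one read per (chrom, pos, strand, UMI). Reads at the same coordinates
    with different UMIs are distinct molecules, not PCR duplicates."""
    seen, kept = set(), []
    for read in reads:
        key = (read["chrom"], read["pos"], read["strand"], read["umi"])
        if key not in seen:
            seen.add(key)
            kept.append(read)
    return kept

reads = [
    {"chrom": "chr1", "pos": 100, "strand": "+", "umi": "ACGT"},
    {"chrom": "chr1", "pos": 100, "strand": "+", "umi": "ACGT"},  # PCR duplicate
    {"chrom": "chr1", "pos": 100, "strand": "+", "umi": "TTAG"},  # distinct molecule
]
unique = dedup_reads(reads)  # 2 reads survive
```

    Coordinate-only deduplication would have collapsed all three reads into one, biasing downstream allele-frequency and expression estimates.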

  19. Accuracy improvement of SPACE code using the optimization for CHF subroutine

    Yang, Chang Keun; Kim, Yo Han; Park, Jong Eun; Ha, Sang Jun

    2010-01-01

    Safety-analysis codes for nuclear power plants typically include a subroutine that calculates the critical heat flux (CHF) for an arbitrary set of conditions (temperature, pressure, flow rate, power, etc.). Among the parameters that drive a plant safety analysis, CHF is one of the most important. However, the subroutines used in most codes, such as the Biasi method, deviate noticeably from experimental data, and most can predict reliably only within their stated range of applicability (pressure, mass flow, void fraction, etc.). Even when the most accurate CHF subroutine is used in a high-quality nuclear safety analysis code, there is no assurance that values predicted outside that range are acceptable. To overcome this limitation, various approaches to estimating the CHF were examined during the development of the SPACE code, and the Six Sigma technique was adopted for the examination, as described in this study. The objective of this study is to improve the CHF prediction accuracy of a nuclear power plant safety analysis code using a CHF database together with the Six Sigma technique. The study concluded that the Six Sigma technique is useful for quantifying the deviation of predicted values from experimental data, and that the CHF prediction method implemented in the SPACE code compares favorably with other methods
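
    The abstract does not spell out how the deviation from experimental data was quantified; a common convention in CHF validation, shown here only as a hedged sketch with made-up numbers, is the predicted-to-measured (P/M) ratio:

```python
import statistics

def pm_statistics(predicted, measured):
    """Predicted-to-measured (P/M) ratio statistics: a mean near 1 and a small
    standard deviation indicate that a CHF correlation tracks the data well."""
    ratios = [p / m for p, m in zip(predicted, measured)]
    return statistics.mean(ratios), statistics.stdev(ratios)

# Hypothetical CHF values in kW/m^2 (illustrative numbers only)
measured  = [1200.0, 1500.0, 1800.0, 2100.0]
predicted = [1150.0, 1540.0, 1750.0, 2180.0]
mean_pm, sd_pm = pm_statistics(predicted, measured)
```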

  20. Improved mass-measurement accuracy using a PNB Load Cell Scale

    Suda, S.; Pontius, P.; Schoonover, R.

    1981-08-01

    The PNB Load Cell Scale is a Preloaded, Narrow-Band calibration mass comparator. It consists of (1) a frame and servo-mechanism that maintains a preload tension on the load cell until the load, an unknown mass, is sensed, and (2) a null-balance digital instrument that suppresses the cell response associated with the preload, thereby improving the precision and accuracy of the measurements. Ideally, the objects used to set the preload should be replica mass standards that closely approximate the density and mass of the unknowns. The advantages of the PNB scale are an expanded output signal over the range of interest, which increases both sensitivity and resolution and minimizes the transient effects associated with the loading of load cells. An area of immediate and practical application of this technique to nuclear material safeguards is the weighing of UF6 cylinders, where in-house mass standards are currently available and where the mass values are typically assigned on the basis of comparison weighings. Several prototype versions of the PNB scale have been assembled at the US National Bureau of Standards. A description of the instrumentation, principles of measurement, and applications is presented in this paper

  1. Improving accuracy of portion-size estimations through a stimulus equivalence paradigm.

    Hausman, Nicole L; Borrero, John C; Fisher, Alyssa; Kahng, SungWoo

    2014-01-01

    The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are warranted. Specifically, interventions that teach individuals to estimate portion sizes correctly without the use of aids may be critical to the success of nutrition education programs. The current study evaluated the use of a stimulus equivalence paradigm to teach 9 undergraduate students to estimate portion size accurately. Results suggested that the stimulus equivalence paradigm was effective in teaching participants to make accurate portion size estimations without aids, and improved accuracy was observed in maintenance sessions that were conducted 1 week after training. Furthermore, 5 of 7 participants estimated the target portion size of novel foods during extension sessions. These data extend existing research on teaching accurate portion-size estimations and may be applicable to populations who seek treatment (e.g., overweight or obese children and adults) to teach healthier eating habits. © Society for the Experimental Analysis of Behavior.

  2. Improving the accuracy of vehicle emissions profiles for urban transportation greenhouse gas and air pollution inventories.

    Reyna, Janet L; Chester, Mikhail V; Ahn, Soyoung; Fraser, Andrew M

    2015-01-06

    Metropolitan greenhouse gas and air emissions inventories can better account for the variability in vehicle movement, fleet composition, and infrastructure that exists within and between regions, to develop more accurate information for environmental goals. With emerging access to high-quality data, new methods are needed to inform transportation emissions assessment practitioners of the relevant vehicle and infrastructure characteristics that should be prioritized in modeling to improve the accuracy of inventories. The sensitivity of light- and heavy-duty vehicle greenhouse gas (GHG) and conventional air pollutant (CAP) emissions to speed, weight, age, and roadway gradient is examined with second-by-second velocity profiles on freeway and arterial roads under free-flow and congestion scenarios. By creating upper and lower bounds for each factor, the potential variability that could exist in transportation emissions assessments is estimated. When comparing the effects of changes in these characteristics across U.S. cities against average characteristics of the U.S. fleet and infrastructure, significant variability in emissions is found. GHGs from light-duty vehicles could vary by −2% to 11% and CAPs by −47% to 228% compared to the baseline. For heavy-duty vehicles, the variability is −21% to 55% and −32% to 174%, respectively. The results show that cities should more aggressively pursue the integration of emerging big data into regional transportation emissions modeling; the integration of these data is likely to affect GHG and CAP inventories and how aggressively policies must be implemented to meet reductions. A web tool is developed to aid cities in reducing uncertainty in their emissions inventories.

  3. Improving the accuracy of brain tumor surgery via Raman-based technology.

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W; Sunney Xie, X; Orringer, Daniel A

    2016-03-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors.

  4. Improvements and experience in the analysis of reprocessing samples

    Koch, L.; Cricchio, A.; Meester, R. de; Romkowski, M.; Wilhelmi, M.; Arenz, H.J.; Stijl, E. van der; Baeckmann, A. von

    1976-01-01

    Improvements were obtained in the analysis of input samples for reprocessing. To cope with the decomposition of reprocessing input solutions owing to their high radioactivity, an aluminium capsule technique was developed: a known amount of the dissolver solution is weighed into an aluminium can and dried, and the capsule is sealed. In this form, the sample can be stored over a long period and redissolved later for analysis. The isotope correlation technique offers an attractive alternative for measuring the plutonium isotopic content of the dissolver solution; moreover, it allows consistency checks of analytical results. For this purpose, a data bank of correlated isotopic data is in use. To improve the efficiency of analytical work, four automatic instruments have been developed. The conditioning of samples for the U-Pu isotopic measurement was achieved by an automatic ion exchanger. A mass spectrometer fitted with a high-vacuum lock allows the automatic measurement of U-Pu samples; a process computer controls the heating, focusing and scanning during the measurement and evaluates the data. To ease data handling, alpha-spectrometry and a balance have also been automated. (author)

  5. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skilled development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  6. An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method

    Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.

    2015-01-01

    Sample size and computational uncertainty were varied in order to investigate the sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and map uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. The mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of the n-sample set was adopted as the convergence criterion for the method. An uncertainty of 75 pcm on the reactor k-eff was estimated by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found for the example under investigation that it is preferable to double the sample size rather than to double the number of particles followed in the Monte Carlo process of the MCNPX code. (author)
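
    The core of a sampling-based propagation like the one above is simple: draw the uncertain input from its distribution, run the model per draw, and summarize the output spread. The linear k-eff response and its coefficients below are a hypothetical stand-in for the actual transport calculation:

```python
import random
import statistics

def propagate_uncertainty(model, mean, sigma, n_samples, seed=1):
    """Sampling-based uncertainty propagation: draw inputs from N(mean, sigma),
    run the model for each draw, and report the spread of the outputs."""
    rng = random.Random(seed)
    outputs = [model(rng.gauss(mean, sigma)) for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical stand-in for the transport code: k-eff as a linear response to
# burnable poison radius (cm); both coefficients are illustrative only.
k_eff = lambda radius: 1.0 + 0.05 * (radius - 0.25)

mean_k, sigma_k = propagate_uncertainty(k_eff, mean=0.25, sigma=0.001, n_samples=93)
```

    With 93 samples, as in the study, the estimated spread itself carries sampling noise, which is why the study also tracked a convergence criterion across replicates.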

  7. Improved automation of dissolved organic carbon sampling for organic-rich surface waters.

    Grayson, Richard P; Holden, Joseph

    2016-02-01

    In-situ UV-Vis spectrophotometers offer the potential for improved estimates of dissolved organic carbon (DOC) fluxes in organic-rich systems such as peatlands because they are able to sample and log DOC proxies automatically through time at low cost. In turn, this could enable improved total carbon budget estimates for peatlands. The ability of such instruments to accurately measure DOC depends on a number of factors, not least of which is how absorbance measurements relate to DOC under the prevailing environmental conditions. Here we test the ability of a S::can Spectro::lyser™ to measure DOC in peatland streams with routinely high DOC concentrations. Through analysis of the spectral response data collected by the instrument, we have been able to accurately measure DOC up to 66 mg L⁻¹, which is more than double the original upper calibration limit for this particular instrument. A linear regression modelling approach resulted in an accuracy >95%. The greatest accuracy was achieved when absorbance values for several different wavelengths were used in the model at the same time; however, an accuracy >90% was achieved using absorbance values for a single wavelength to predict DOC concentration. Our calculations indicated that, for organic-rich systems, in-situ measurement with a scanning spectrophotometer can improve fluvial DOC flux estimates by 6 to 8% compared with traditional sampling methods. Thus, our techniques pave the way for improved long-term carbon budget calculations for organic-rich systems such as peatlands.
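
    The single-wavelength variant of the calibration above is an ordinary least-squares fit of DOC against absorbance; the paired absorbance/DOC values below are fabricated for illustration, not data from the study:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b (single absorbance wavelength)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical paired lab samples: absorbance at one wavelength vs DOC (mg/L)
absorbance = [0.10, 0.25, 0.40, 0.55, 0.70]
doc_mg_l   = [ 9.8, 24.6, 40.3, 55.1, 69.9]

a, b = fit_linear(absorbance, doc_mg_l)
predicted = a * 0.40 + b   # predict DOC for a new in-situ absorbance reading
```

    The multi-wavelength version the study found most accurate is the same idea with several absorbance predictors fitted jointly.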

  8. Improved mixing and sampling systems for vitrification melter feeds

    Ebadian, M.A.

    1998-01-01

    This report summarizes the methods used and results obtained during the study of waste slurry mixing and sampling systems in fiscal year 1997 (FY97) at the Hemispheric Center for Environmental Technology (HCET) at Florida International University (FIU). The objective of this work is to determine optimal mixing configurations and operating conditions, as well as improved sampling technology, for Defense Waste Processing Facility (DWPF) waste melter feeds at US Department of Energy (DOE) sites. Most of the research on this project was performed experimentally, using a tank mixing configuration with different rotating impellers. The slurry simulants for the experiments were prepared in-house based on the properties of the typical waste slurries at the DOE sites. A sampling system was designed to withdraw slurry from the mixing tank. To obtain insight into the waste mixing process, the slurry flow in the mixing tank was also simulated numerically by applying computational fluid dynamics (CFD) methods. The major parameters investigated in both the experimental and numerical studies included the power consumption of the mixer, the mixing time to reach slurry uniformity, slurry type, solids concentration, impeller type, impeller size, impeller rotating speed, sampling tube size, and sampling velocity. Application of the results to the DWPF melter feed preparation process will enhance and modify the technical base for designing slurry transportation equipment and pipeline systems. These results will also serve as an important reference for improving waste slurry mixing performance and melter operating conditions. These factors will contribute to an increase in the capability of the vitrification process and the quality of the waste glass

  9. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
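
    The abstract's central point, that lab-specific offsets bias single-laboratory estimates in a way larger samples cannot fix, can be illustrated with a small simulation; the effect size, offset and noise magnitudes below are arbitrary, not the study's parameters:

```python
import random

def mean_abs_error(n_labs, n_per_lab, reps=300, true_effect=1.0,
                   lab_sd=0.5, noise_sd=1.0, seed=7):
    """Average absolute error of the pooled effect-size estimate when each lab's
    true effect is shifted by its own random offset (between-lab heterogeneity)."""
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        lab_means = []
        for _ in range(n_labs):
            offset = rng.gauss(0, lab_sd)  # this lab's systematic shift
            obs = [true_effect + offset + rng.gauss(0, noise_sd)
                   for _ in range(n_per_lab)]
            lab_means.append(sum(obs) / n_per_lab)
        errs.append(abs(sum(lab_means) / n_labs - true_effect))
    return sum(errs) / reps

single_lab = mean_abs_error(n_labs=1, n_per_lab=40)
multi_lab  = mean_abs_error(n_labs=4, n_per_lab=10)  # same total sample size
```

    Splitting the same 40 animals across four labs averages the lab offsets out, which is the mechanism behind the improved coverage probability reported above.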

  10. Hybrid Indoor-Based WLAN-WSN Localization Scheme for Improving Accuracy Based on Artificial Neural Network

    Zahid Farid

    2016-01-01

    In indoor environments, WiFi received-signal-strength (RSS) based localization is sensitive to various indoor fading effects and noise during transmission, which are the main causes of localization errors that affect its accuracy. Given those fading effects, positioning systems based on a single technology are ineffective at performing accurate localization. For this reason, the trend is toward hybrid positioning systems (combining two or more wireless technologies) in indoor/outdoor localization scenarios to obtain better position accuracy. This paper presents a hybrid technique for indoor localization that adopts fingerprinting approaches in both WiFi and Wireless Sensor Networks (WSNs). This model exploits machine learning, in particular Artificial Neural Network (ANN) techniques, for position calculation. The experimental results show that the proposed hybrid system improved the accuracy, reducing the average distance error to 1.05 m by using the ANN. Applying a Genetic Algorithm (GA) based optimization technique did not yield any further improvement in accuracy. Compared to the GA-optimized model, the non-optimized ANN performed better in terms of accuracy, precision, stability, and computational time. These results show that the proposed hybrid technique is promising for achieving better accuracy in real-world positioning applications.
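
    The paper trains an ANN on RSS fingerprints; as a minimal stand-in for the fingerprinting idea itself, a k-nearest-neighbor match against a radio map is shown below. The anchor positions and RSS vectors are invented for illustration:

```python
import math

def knn_locate(fingerprints, observed_rss, k=2):
    """Minimal fingerprinting localization: average the positions of the k
    reference points whose stored RSS vectors best match the observation."""
    nearest = sorted(fingerprints,
                     key=lambda fp: math.dist(fp["rss"], observed_rss))[:k]
    x = sum(fp["pos"][0] for fp in nearest) / k
    y = sum(fp["pos"][1] for fp in nearest) / k
    return x, y

# Hypothetical radio map: RSS (dBm) from three WiFi/WSN anchors at known positions
radio_map = [
    {"pos": (0.0, 0.0), "rss": [-40, -70, -80]},
    {"pos": (0.0, 4.0), "rss": [-70, -42, -75]},
    {"pos": (4.0, 0.0), "rss": [-72, -78, -41]},
    {"pos": (4.0, 4.0), "rss": [-68, -60, -58]},
]
estimate = knn_locate(radio_map, observed_rss=[-45, -68, -78], k=2)
```

    An ANN replaces the nearest-neighbor lookup with a learned mapping from RSS vectors to coordinates, which is what yields the 1.05 m average error reported above.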

  11. Improving accuracy for identifying related PubMed queries by an integrated approach.

    Lu, Zhiyong; Wilbur, W John

    2009-10-01

    PubMed is the most widely used tool for searching biomedical literature online. As with many other online search tools, a user often types a series of multiple related queries before retrieving satisfactory results to fulfill a single information need. Meanwhile, it is also a common phenomenon to see a user type queries on unrelated topics in a single session. In order to study PubMed users' search strategies, it is necessary to be able to automatically separate unrelated queries and group together related queries. Here, we report a novel approach combining both lexical and contextual analyses for segmenting PubMed query sessions and identifying related queries and compare its performance with the previous approach based solely on concept mapping. We experimented with our integrated approach on sample data consisting of 1539 pairs of consecutive user queries in 351 user sessions. The prediction results of 1396 pairs agreed with the gold-standard annotations, achieving an overall accuracy of 90.7%. This demonstrates that our approach is significantly better than the previously published method. By applying this approach to a one day query log of PubMed, we found that a significant proportion of information needs involved more than one PubMed query, and that most of the consecutive queries for the same information need are lexically related. Finally, the proposed PubMed distance is shown to be an accurate and meaningful measure for determining the contextual similarity between biological terms. The integrated approach can play a critical role in handling real-world PubMed query log data as is demonstrated in our experiments.
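
    The lexical side of the session-segmentation approach can be sketched as a term-overlap (Jaccard) test between consecutive queries; the threshold and example queries are assumptions for illustration, not the paper's tuned values:

```python
def lexically_related(query_a, query_b, threshold=0.2):
    """Flag consecutive queries as related when their term overlap (Jaccard
    similarity) exceeds a threshold."""
    a, b = set(query_a.lower().split()), set(query_b.lower().split())
    union = a | b
    jaccard = len(a & b) / len(union) if union else 0.0
    return jaccard >= threshold

same_need = lexically_related("brca1 breast cancer", "brca1 mutation breast cancer")
new_need  = lexically_related("brca1 breast cancer", "zebrafish fin regeneration")
```

    The integrated approach in the paper combines such lexical evidence with contextual (concept-level) similarity to decide where one information need ends and the next begins.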

  12. Improving the Accuracy of NMR Structures of Large Proteins Using Pseudocontact Shifts as Long-Range Restraints

    Gaponenko, Vadim [National Cancer Institute, Structural Biophysics Laboratory (United States); Sarma, Siddhartha P. [Indian Institute of Science, Molecular Biophysics Unit (India); Altieri, Amanda S. [National Cancer Institute, Structural Biophysics Laboratory (United States); Horita, David A. [Wake Forest University School of Medicine, Department of Biochemistry (United States); Li, Jess; Byrd, R. Andrew [National Cancer Institute, Structural Biophysics Laboratory (United States)], E-mail: rabyrd@ncifcrf.gov

    2004-03-15

    We demonstrate improved accuracy in protein structure determination for large (≥30 kDa), deuterated proteins (e.g. STAT4NT) via the combination of pseudocontact shifts for amide and methyl protons with the available NOEs in methyl-protonated proteins. The improved accuracy is cross-validated by Q-factors determined from residual dipolar couplings measured as a result of magnetic susceptibility alignment. The paramagnet is introduced via binding to thiol-reactive EDTA, and multiple sites can be serially engineered to obtain data from alternative orientations of the paramagnetic anisotropic susceptibility tensor. The technique is advantageous for systems where the target protein has strong interactions with known alignment media.

  13. Improving the Stability and Accuracy of Power Hardware-in-the-Loop Simulation Using Virtual Impedance Method

    Xiaoming Zha

    2016-11-01

    Power hardware-in-the-loop (PHIL) systems are advanced, real-time platforms for combined software and hardware testing. Two paramount issues in PHIL simulations are closed-loop stability and simulation accuracy. This paper presents a virtual impedance (VI) method for PHIL simulations that improves the simulation's stability and accuracy. Through the establishment of an impedance model for a PHIL simulation circuit, composed of a voltage-source converter and a simple network, the stability and accuracy of the PHIL system are analyzed. The proposed VI method is then implemented in a digital real-time simulator and used to correct the combined impedance in the impedance model, achieving higher stability and accuracy of the results. The validity of the VI method is verified through the PHIL simulation of two typical PHIL examples.

  14. Increasing fMRI sampling rate improves Granger causality estimates.

    Fa-Hsuan Lin

    Estimation of causal interactions between brain areas is necessary for elucidating large-scale functional brain networks underlying behavior and cognition. Granger causality analysis of time series data can quantitatively estimate directional information flow between brain regions. Here, we show that such estimates are significantly improved when the temporal sampling rate of functional magnetic resonance imaging (fMRI) is increased 20-fold. Specifically, healthy volunteers performed a simple visuomotor task during blood oxygenation level dependent (BOLD) contrast based whole-head inverse imaging (InI). Granger causality analysis based on raw InI BOLD data sampled at 100-ms resolution detected the expected causal relations, whereas when the data were downsampled to the temporal resolution of 2 s typically used in echo-planar fMRI, the causality could not be detected. An additional control analysis, in which we SINC-interpolated additional data points into the downsampled time series at 0.1-s intervals, confirmed that the improvements achieved with the real InI data were not explainable by the increased time-series length alone. We therefore conclude that the high temporal resolution of InI improves Granger causality connectivity analysis of the human brain.
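
    A bivariate lag-1 Granger test, sketched below on synthetic series rather than fMRI data, compares how well y is predicted with and without the lagged history of x; the coupling coefficients are made up:

```python
import random

def ols_sse(X, y):
    """Sum of squared residuals from least squares via normal equations
    (Gaussian elimination with partial pivoting; fine for tiny systems)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * beta[c] for c in range(k))) ** 2 for i in range(n))

def granger_gain(x, y):
    """Reduction in prediction error for y when lagged x is added (lag 1):
    values near 1 suggest x Granger-causes y; near 0, no directional gain."""
    yt  = y[1:]
    X_r = [[1.0, y[i]] for i in range(len(y) - 1)]        # restricted: y's own lag
    X_u = [[1.0, y[i], x[i]] for i in range(len(y) - 1)]  # unrestricted: + x's lag
    sse_r, sse_u = ols_sse(X_r, yt), ols_sse(X_u, yt)
    return 1.0 - sse_u / sse_r if sse_r else 0.0

# Synthetic pair where y is driven by the previous value of x
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [0.0] + [0.9 * x[i - 1] + 0.1 * rng.gauss(0, 1) for i in range(1, 200)]
gain = granger_gain(x, y)
```

    Downsampling blurs exactly this lagged dependence, which is the mechanism by which the 2-s resolution destroys the causality estimate in the study above.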

  15. Temporal aggregation of migration counts can improve accuracy and precision of trends

    Tara L. Crewe

    2016-12-01

    Temporal replicate counts are often aggregated to improve model fit by reducing zero-inflation and count variability and, in the case of migration counts collected hourly throughout a migration, to allow one to ignore non-independence. However, aggregation can represent a loss of potentially useful information on the hourly or seasonal distribution of counts, which might impact our ability to estimate reliable trends. We simulated 20-year hourly raptor migration count datasets with a known rate of change to test the effect of aggregating hourly counts to daily or annual totals on our ability to recover the known trend. We simulated data for three types of species, to test whether results varied with species abundance or migration strategy: a commonly detected species, e.g., Northern Harrier, Circus cyaneus; a rarely detected species, e.g., Peregrine Falcon, Falco peregrinus; and a species typically counted in large aggregations with overdispersed counts, e.g., Broad-winged Hawk, Buteo platypterus. We compared the accuracy and precision of estimated trends across species and count types (hourly/daily/annual) using hierarchical models that assumed a Poisson, negative binomial (NB), or zero-inflated negative binomial (ZINB) count distribution. We found little benefit of modeling zero-inflation or of modeling the hourly distribution of migration counts. For the rare species, trends analyzed using daily totals and an NB or ZINB data distribution resulted in a higher probability of detecting an accurate and precise trend. In contrast, trends of the common and overdispersed species benefited from aggregation to annual totals, and for the overdispersed species in particular, trends estimated using annual totals were more precise and resulted in lower probabilities of estimating a trend (1) in the wrong direction, or (2) with credible intervals that excluded the true trend, as compared with hourly and daily counts.
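
    The aggregation step itself, hourly records rolled up to daily or annual totals before trend fitting, is straightforward; the record layout and counts below are invented for illustration:

```python
from collections import defaultdict

def aggregate_counts(hourly, level):
    """Aggregate (year, day, hour, count) records to daily or annual totals."""
    totals = defaultdict(int)
    for year, day, hour, count in hourly:
        key = (year, day) if level == "daily" else year
        totals[key] += count
    return dict(totals)

hourly = [
    (2000, 1, 9, 3), (2000, 1, 10, 0), (2000, 1, 11, 7),
    (2000, 2, 9, 0), (2000, 2, 10, 2),
    (2001, 1, 9, 5),
]
daily  = aggregate_counts(hourly, "daily")    # e.g. day (2000, 1) totals 10
annual = aggregate_counts(hourly, "annual")   # e.g. year 2000 totals 12
```

    The study's comparison then fits the same hierarchical trend model to each aggregation level and asks which recovers the known trend most reliably.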

  16. Visual control improves the accuracy of hand positioning in Huntington’s disease

    Emilia J. Sitek

    2017-08-01

    Background: The study aimed to demonstrate the dependence on visual feedback during hand- and finger-positioning tasks among Huntington’s disease patients, in comparison with patients with Parkinson’s disease and cervical dystonia. Material and methods: Eighty-nine patients participated in the study (23 with Huntington’s disease, 25 with Parkinson’s disease with dyskinesias, 21 with Parkinson’s disease without dyskinesias, and 20 with cervical dystonia), all scoring ≥20 points on the Mini-Mental State Examination in order to ensure comprehension of the task instructions. The neurological examination comprised the motor section of the Unified Huntington’s Disease Rating Scale for Huntington’s disease, the Unified Parkinson’s Disease Rating Scale Parts II–IV for Parkinson’s disease, and the Toronto Western Spasmodic Torticollis Rating Scale for cervical dystonia. In order to compare hand-position accuracy under visually controlled and blindfolded conditions, each patient imitated each of 10 examiner’s hand postures twice, once under visual control and once with no visual feedback provided. Results: Huntington’s disease patients imitated the examiner’s hand positions less accurately under the blindfolded condition than Parkinson’s disease without dyskinesias and cervical dystonia participants. Under the visually controlled condition there were no significant inter-group differences. Conclusions: Huntington’s disease patients exhibit a higher dependence on visual feedback while performing motor tasks than Parkinson’s disease and cervical dystonia patients. The improvement of movement precision in Huntington’s disease with the use of visual cues could potentially be useful in patients’ rehabilitation.

  17. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA

    Mareuil, Fabien [Institut Pasteur, Cellule d' Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)

    2015-08-15

    In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for the distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting, (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.

  18. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    Kyuman Lee

    2016-08-01

    Full Text Available The airborne relay-based positioning system (ARPS, which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations.
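The user-positioning step described in this record is, at its core, a nonlinear least-squares fix from range measurements to airborne relays and a ground reference station. The Gauss-Newton sketch below uses an invented anchor geometry and noise-free ranges; it is not the ARPS virtual-measurement algorithm itself:

```python
import numpy as np

# Hypothetical anchor positions: three airborne relays plus one ground reference.
anchors = np.array([[0.0, 0.0, 10.0],
                    [8.0, 1.0, 11.0],
                    [1.0, 9.0, 10.5],
                    [5.0, 5.0, 0.0]])      # ground reference station
true_user = np.array([4.0, 3.0, 0.0])
ranges = np.linalg.norm(anchors - true_user, axis=1)  # noise-free for clarity

# Gauss-Newton iteration on the range residuals.
x = np.array([2.0, 2.0, 1.0])              # initial position guess
for _ in range(50):
    diff = x - anchors
    est = np.linalg.norm(diff, axis=1)     # predicted ranges at current guess
    J = diff / est[:, None]                # Jacobian of each range w.r.t. position
    dx, *_ = np.linalg.lstsq(J, ranges - est, rcond=None)
    x = x + dx
print(x)
```

Mixing one ground-station measurement with the airborne ranges, as above, is what improves the vertical geometry; with relays on one side only, the solution along the vertical axis is poorly constrained.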

  19. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both raw sequencing data and the reference genome that can yield better quality sequence reads, SNP-calls, variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined utilizing a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) based on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/. Contact: fabian.menges@nyu.edu.

  20. IMPROVING THE POSITIONING ACCURACY OF TRAIN ON THE APPROACH SECTION TO THE RAILWAY CROSSING

    V. I. Havryliuk

    2016-02-01

    Full Text Available Purpose. In the paper it is necessary to analyze the possibility of improving the positioning accuracy of a train on the approach section to a crossing, for traffic safety control at railway crossings. Methodology. The research was performed using a developed mathematical model describing the dependence of the input impedance of the coded and audio-frequency track circuits on the train coordinate, at various values of ballast isolation resistance and for all usable frequencies. Findings. The paper presents the developed mathematical model, describing the dependence of the input impedance of the coded and audio-frequency track circuits on the train coordinate at various values of ballast isolation resistance and for all frequencies used in track circuits. The relative error in determining the train coordinate from the input impedance, caused by variation of the ballast isolation resistance, was investigated for the coded track circuits. This relative error can reach 40-50 %, which does not allow the method to be used directly for coded track circuits. For short audio-frequency track circuits at the frequencies of continuous cab signaling (25, 50 Hz, the relative error does not exceed acceptable values, which allows the examined method to be used for determining the train location on the approach section to a railway crossing. Originality. The developed mathematical model allowed determination of the error in the train coordinate obtained from the input impedance of the track circuit, for coded and audio-frequency track circuits at various signal current frequencies and different ballast isolation resistances. Practical value. The authors propose a method for train location determination on the approach section to a crossing equipped with audio-frequency track circuits, which combines discrete and continuous monitoring of the train location.

  1. Accuracy improvement of dataflow analysis for cyclic stream processing applications scheduled by static priority preemptive schedulers

    Kurtin, Philip Sebastian; Hausmans, J.P.H.M.; Geuns, S.J.; Bekooij, Marco Jan Gerrit

    2014-01-01

    Stream processing applications executed on embedded multiprocessor systems regularly contain cyclic data dependencies due to the presence of feedback loops and bounded FIFO buffers. Dataflow modeling is suitable for the temporal analysis of such applications. However, the accuracy can be

  2. Improved phylogenomic taxon sampling noticeably affects nonbilaterian relationships.

    Pick, K S; Philippe, H; Schreiber, F; Erpenbeck, D; Jackson, D J; Wrede, P; Wiens, M; Alié, A; Morgenstern, B; Manuel, M; Wörheide, G

    2010-09-01

    Despite expanding data sets and advances in phylogenomic methods, deep-level metazoan relationships remain highly controversial. Recent phylogenomic analyses depart from classical concepts in recovering ctenophores as the earliest branching metazoan taxon and propose a sister-group relationship between sponges and cnidarians (e.g., Dunn CW, Hejnol A, Matus DQ, et al. (18 co-authors). 2008. Broad phylogenomic sampling improves resolution of the animal tree of life. Nature 452:745-749). Here, we argue that these results are artifacts stemming from insufficient taxon sampling and long-branch attraction (LBA). By increasing taxon sampling from previously unsampled nonbilaterians and using an identical gene set to that reported by Dunn et al., we recover monophyletic Porifera as the sister group to all other Metazoa. This suggests that the basal position of the fast-evolving Ctenophora proposed by Dunn et al. was due to LBA and that broad taxon sampling is of fundamental importance to metazoan phylogenomic analyses. Additionally, saturation in the Dunn et al. character set is comparatively high, possibly contributing to the poor support for some nonbilaterian nodes.

  3. Improvement of the mechanical properties of reinforced aluminum foam samples

    Formisano, A.; Barone, A.; Carrino, L.; De Fazio, D.; Langella, A.; Viscusi, A.; Durante, M.

    2018-05-01

    Closed-cell aluminum foam has attracted increasing attention due to its very interesting properties, thanks to which it is expected to be used as both a structural and a functional material. A research challenge is the improvement of the mechanical properties of foam-based structures through a reinforcement approach that does not compromise their lightness. Consequently, the aim of this research is the fabrication of enhanced aluminum foam samples without significantly increasing their original weight. In this regard, cylindrical samples with a core of closed-cell aluminum foam and a skin of fabrics and grids of different materials were fabricated in a one-step process and mechanically characterized, in order to investigate their behaviour and to compare their mechanical properties with those of the plain foam.

  4. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI) and the other a method we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by succeeding the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations. 
For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that

  5. High-throughput microsatellite genotyping in ecology: improved accuracy, efficiency, standardization and success with low-quantity and degraded DNA.

    De Barba, M; Miquel, C; Lobréaux, S; Quenette, P Y; Swenson, J E; Taberlet, P

    2017-05-01

    Microsatellite markers have played a major role in ecological, evolutionary and conservation research during the past 20 years. However, technical constraints related to the use of capillary electrophoresis and a recent technological revolution that has impacted other marker types have called into question the continued use of microsatellites for certain applications. We present a study for improving microsatellite genotyping in ecology using high-throughput sequencing (HTS). This approach entails selection of short markers suitable for HTS, sequencing PCR-amplified microsatellites on an Illumina platform and bioinformatic treatment of the sequence data to obtain multilocus genotypes. It takes advantage of the fact that HTS gives direct access to microsatellite sequences, allowing unambiguous allele identification and enabling automation of the genotyping process through bioinformatics. In addition, the massive parallel sequencing abilities expand the information content of single experimental runs far beyond capillary electrophoresis. We illustrated the method by genotyping brown bear samples amplified with a multiplex PCR of 13 new microsatellite markers and a sex marker. HTS of microsatellites provided accurate individual identification and parentage assignment and resulted in a significant improvement of genotyping success (84%) of faecal degraded DNA and cost reduction compared to capillary electrophoresis. The HTS approach holds vast potential for improving success, accuracy, efficiency and standardization of microsatellite genotyping in ecological and conservation applications, especially those that rely on profiling of low-quantity/quality DNA and on the construction of genetic databases. We discuss and give perspectives for the implementation of the method in the light of the challenges encountered in wildlife studies. © 2016 John Wiley & Sons Ltd.

  6. Sample preparation combined with electroanalysis to improve simultaneous determination of antibiotics in animal derived food samples.

    da Silva, Wesley Pereira; de Oliveira, Luiz Henrique; Santos, André Luiz Dos; Ferreira, Valdir Souza; Trindade, Magno Aparecido Gonçalves

    2018-06-01

    A procedure based on liquid-liquid extraction (LLE) and phase separation using magnetically stirred salt-induced high-temperature liquid-liquid extraction (PS-MSSI-HT-LLE) was developed to extract and pre-concentrate ciprofloxacin (CIPRO) and enrofloxacin (ENRO) from animal food samples before electroanalysis. Firstly, simple LLE was used to extract the fluoroquinolones (FQs) from animal food samples, in which dilution was performed to reduce interference effects to below a tolerable threshold. Then, adapted PS-MSSI-HT-LLE protocols allowed re-extraction and further pre-concentration of target analytes in the diluted acid samples for simultaneous electrochemical quantification at low concentration levels. To improve the peak separation in simultaneous detection, a baseline-corrected second-order derivative approach was processed. These approaches allowed quantification of target FQs from animal food samples spiked at levels of 0.80 to 2.00 µmol L⁻¹ in chicken meat, with recovery values always higher than 80.5%, as well as in milk samples spiked at 4.00 µmol L⁻¹, with recovery values close to 70.0%. Copyright © 2018 Elsevier Ltd. All rights reserved.
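The second-order derivative trick used in this record to improve peak separation can be illustrated on synthetic overlapping peaks: differentiating twice turns each broad maximum into a sharp negative lobe, so close peaks become resolvable. Peak positions, widths, and the finite-difference scheme below are assumptions for illustration only:

```python
import numpy as np

# Two overlapping Gaussian voltammetric peaks (positions/widths are invented).
e = np.linspace(0.0, 1.0, 1001)                       # potential axis, V
peak = lambda c, w: np.exp(-((e - c) / w) ** 2)
current = peak(0.45, 0.08) + 0.8 * peak(0.60, 0.08)   # overlapped signals

# Second-order derivative: each peak maximum becomes a sharp negative lobe.
d2 = np.gradient(np.gradient(current, e), e)

# Locate the two most negative lobes as the resolved peak positions.
minima = [i for i in range(1, len(e) - 1) if d2[i] < d2[i - 1] and d2[i] < d2[i + 1]]
centers = sorted(e[i] for i in sorted(minima, key=lambda i: d2[i])[:2])
print(centers)  # close to the underlying peak potentials 0.45 V and 0.60 V
```

In practice a smoothing-differentiation filter (e.g. Savitzky-Golay) would replace the raw finite differences, since differentiation amplifies measurement noise.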

  7. Improvement in the accuracy of flux measurement of radio sources by exploiting an arithmetic pattern in photon bunching noise

    Lieu, Richard

    2018-01-01

    A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level, with the help of high precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.

  8. Improvement in the Accuracy of Flux Measurement of Radio Sources by Exploiting an Arithmetic Pattern in Photon Bunching Noise

    Lieu, Richard [Department of Physics, University of Alabama, Huntsville, AL 35899 (United States)

    2017-07-20

    A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.

  9. Improving the Classification Accuracy for Near-Infrared Spectroscopy of Chinese Salvia miltiorrhiza Using Local Variable Selection

    Lianqing Zhu

    2018-01-01

    Full Text Available In order to improve the classification accuracy of Chinese Salvia miltiorrhiza using near-infrared spectroscopy, a novel local variable selection strategy is proposed. Combining the strengths of the local algorithm and interval partial least squares, the spectra data have firstly been divided into several pairs of classes in the sample direction and equidistant subintervals in the variable direction. Then, a local classification model has been built, and the most proper spectral region has been selected based on a new evaluation criterion considering both classification error rate and best predictive ability under the leave-one-out cross validation scheme for each pair of classes. Finally, each observation can be assigned to a class according to the statistical analysis of classification results of the local classification model built on the selected variables. The performance of the proposed method was demonstrated on near-infrared spectra of cultivated or wild Salvia miltiorrhiza, collected from 8 geographical origins in 5 provinces of China. For comparison, soft independent modelling of class analogy and partial least squares discriminant analysis methods are, respectively, employed as the classification model. Experimental results showed that the classification performance of the model with local variable selection was clearly better than that without variable selection.
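The interval-selection idea in this record (divide the spectra into equidistant subintervals in the variable direction, keep the subinterval with the best leave-one-out classification performance) can be sketched as follows. The data, the nearest-class-mean classifier, and the interval width are invented stand-ins for the paper's interval partial least squares machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectra": two classes differ only in variables 40-59 (one informative band).
n, p = 40, 100
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)
X[y == 1, 40:60] += 1.5

def loo_error(Xs, y):
    """Leave-one-out error of a nearest-class-mean classifier on one subinterval."""
    errs = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        m0 = Xs[mask & (y == 0)].mean(axis=0)
        m1 = Xs[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(Xs[i] - m1) < np.linalg.norm(Xs[i] - m0))
        errs += pred != y[i]
    return errs / len(y)

# Equidistant subintervals in the variable direction; keep the best-scoring one.
intervals = [(s, s + 20) for s in range(0, p, 20)]
scores = [loo_error(X[:, a:b], y) for a, b in intervals]
best = intervals[int(np.argmin(scores))]
print(best)  # → (40, 60), the informative band
```

The selection criterion here is LOO error alone; the paper's criterion additionally weighs predictive ability, and the search is repeated per pair of classes.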

  10. Improved sampling and analysis of images in corneal confocal microscopy.

    Schaldemose, E L; Fontain, F I; Karlsson, P; Nyengaard, J R

    2017-10-01

    Corneal confocal microscopy (CCM) is a noninvasive clinical method to analyse and quantify corneal nerve fibres in vivo. Although the CCM technique is in constant progress, there are methodological limitations in terms of sampling of images and objectivity of the nerve quantification. The aim of this study was to present a randomized sampling method for the CCM images and to develop an adjusted area-dependent image analysis. Furthermore, a manual nerve fibre analysis method was compared to a fully automated method. Twenty-three idiopathic small-fibre neuropathy patients were investigated using CCM. Corneal nerve fibre length density (CNFL) and corneal nerve fibre branch density (CNBD) were determined in both a manual and automatic manner. Differences in CNFL and CNBD between (1) the randomized and the most common sampling method, (2) the adjusted and the unadjusted area and (3) the manual and automated quantification method were investigated. The CNFL values were significantly lower when using the randomized sampling method compared to the most common method (p = 0.01). There was no statistically significant difference in the CNBD values between the randomized and the most common sampling method (p = 0.85). CNFL and CNBD values were increased when using the adjusted area compared to the standard area. Additionally, the study found a significant increase in the CNFL and CNBD values when using the manual method compared to the automatic method (p ≤ 0.001). The study demonstrated a significant difference in the CNFL values between the randomized and common sampling method, indicating the importance of clear guidelines for the image sampling. The increase in CNFL and CNBD values when using the adjusted cornea area is not surprising. The observed increases in both CNFL and CNBD values when using the manual method of nerve quantification compared to the automatic method are consistent with earlier findings. This study underlines the importance of improving the analysis of the

  11. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  12. Improved grand canonical sampling of vapour-liquid transitions.

    Wilding, Nigel B

    2016-10-19

    Simulation within the grand canonical ensemble is the method of choice for accurate studies of first-order vapour-liquid phase transitions in model fluids. Such simulations typically employ sampling that is biased with respect to the overall number density in order to overcome the free energy barrier associated with mixed phase states. However, at low temperature and for large system size, this approach suffers a drastic slowing down in sampling efficiency. The culprits are geometrically induced transitions (stemming from the periodic boundary conditions) which involve changes in droplet shape from sphere to cylinder and cylinder to slab. Since the overall number density does not discriminate sufficiently between these shapes, it fails as an order parameter for biasing through the transitions. Here we report two approaches to ameliorating these difficulties. The first introduces a droplet shape based order parameter that generates a transition path from vapour to slab states for which spherical and cylindrical droplets are suppressed. The second simply biases with respect to the number density in a tetragonal subvolume of the system. Compared to the standard approach, both methods offer improved sampling, allowing estimates of coexistence parameters and vapour-liquid surface tension for larger system sizes and lower temperatures.
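The general principle behind the biased sampling discussed in this record, subtracting a bias from the free energy so a Metropolis walker can cross an otherwise prohibitive barrier, can be illustrated on a toy one-dimensional order parameter. This is not a grand canonical simulation; the double-well "free energy", step size, and run length are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy free-energy barrier over an order parameter rho (two phases near 0.1 and 0.9).
def free_energy(rho):
    return 4000.0 * (rho - 0.1) ** 2 * (rho - 0.9) ** 2  # ~100 kT barrier at rho = 0.5

# Biased Metropolis sampling: acceptance uses F(rho) - bias(rho), so a bias equal
# to the barrier flattens the landscape and lets the walker move between phases.
def sample(nsteps, bias):
    rho, visited = 0.1, []
    for _ in range(nsteps):
        trial = rho + rng.normal(0, 0.05)
        if 0.0 <= trial <= 1.0:
            dA = (free_energy(trial) - bias(trial)) - (free_energy(rho) - bias(rho))
            if dA <= 0 or rng.random() < np.exp(-dA):
                rho = trial
        visited.append(rho)
    return np.array(visited)

unbiased = sample(20000, lambda r: 0.0)   # stays trapped in one phase
biased = sample(20000, free_energy)       # ideal bias: exactly cancels the barrier
print(np.ptp(unbiased), np.ptp(biased))
```

The paper's point is about the choice of biasing coordinate, not the biasing mechanism: when the order parameter (here, rho; in the paper, overall density) fails to distinguish the intermediate states, even a well-tuned bias cannot flatten the hidden barriers.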

  13. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A linear mixture simulated model experiment was generated to simulate real world data with known end member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized so that the sum of the class proportions equals 1.0, using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error was computed for each pixel value for three generated images with SPOT data. Each pixel can either be correctly classified or misclassified. The results showed great improvement in classification accuracy; for example, in image 1, the proportion of pixels misclassified due to atmospheric noise was 41 %. Subsequent to the degradation of the atmospheric effect, the misclassified pixels were reduced to 4 %. We can conclude that classification accuracy can be improved by degradation of atmospheric noise.
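The simulation recipe in this record (random class cover proportions normalized to sum to 1.0, linear mixing of known end member spectra, and added noise) can be sketched directly. The end member values and noise level below are invented, and Python stands in for the MATLAB program:

```python
import numpy as np

rng = np.random.default_rng(7)

# Known end member spectra (3 classes x 4 bands; values are invented).
E = np.array([[0.9, 0.8, 0.2, 0.1],
              [0.2, 0.3, 0.7, 0.8],
              [0.1, 0.9, 0.9, 0.1]])

# Class cover proportions: random, then normalized so each pixel sums to 1.0.
raw = rng.random((1000, 3))
ccp = raw / raw.sum(axis=1, keepdims=True)

# Simulated pixels = linear mixture plus additive noise (a stand-in for the
# atmospheric/sensor error term).
noise_level = 0.02
pixels = ccp @ E + rng.normal(0.0, noise_level, size=(1000, 4))

# Recover proportions by unconstrained least squares and check the error.
est, *_ = np.linalg.lstsq(E.T, pixels.T, rcond=None)
rmse = np.sqrt(np.mean((est.T - ccp) ** 2))
print(rmse)  # small relative to the proportions when the noise level is low
```

Raising `noise_level` degrades the recovered proportions, which is the mechanism by which atmospheric noise drives the misclassification rates reported above.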

  14. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimation by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method of estimating travel time using multiple detection events has been conducted. In this study, different aspects of the BT detection zone, including its size and impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied, namely Enter-Enter, Leave-Leave, Peak…

  15. Using Language Sample Analysis in Clinical Practice: Measures of Grammatical Accuracy for Identifying Language Impairment in Preschool and School-Aged Children.

    Eisenberg, Sarita; Guo, Ling-Yu

    2016-05-01

    This article reviews the existing literature on the diagnostic accuracy of two grammatical accuracy measures for differentiating children with and without language impairment (LI) at preschool and early school age based on language samples. The first measure, the finite verb morphology composite (FVMC), is a narrow grammatical measure that computes children's overall accuracy of four verb tense morphemes. The second measure, percent grammatical utterances (PGU), is a broader grammatical measure that computes children's accuracy in producing grammatical utterances. The extant studies show that FVMC demonstrates acceptable (i.e., 80 to 89% accurate) to good (i.e., 90% accurate or higher) diagnostic accuracy for children between 4;0 (years;months) and 6;11 in conversational or narrative samples. In contrast, PGU yields acceptable to good diagnostic accuracy for children between 3;0 and 8;11 regardless of sample types. Given the diagnostic accuracy shown in the literature, we suggest that FVMC and PGU can be used as one piece of evidence for identifying children with LI in assessment when appropriate. However, FVMC or PGU should not be used as therapy goals directly. Instead, when children are low in FVMC or PGU, we suggest that follow-up analyses should be conducted to determine the verb tense morphemes or grammatical structures that children have difficulty with.
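As a minimal illustration of the broader of the two measures, percent grammatical utterances (PGU) is simply the share of utterances in a language sample coded as grammatical; the utterance codes below are invented:

```python
# Percent grammatical utterances (PGU): a direct sketch of the broad measure
# described above, on invented utterance annotations (True = grammatical).
def pgu(flags):
    """Percent of utterances in a language sample coded as grammatical."""
    return 100.0 * sum(flags) / len(flags)

sample_flags = [True, True, False, True, True, False, True, True, True, True]
print(pgu(sample_flags))  # → 80.0
```

The clinically hard part is not this arithmetic but the coding decision per utterance, which is why the article stresses follow-up analyses of the specific structures a child has difficulty with.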

  16. Improved accuracy in the estimation of blood velocity vectors using matched filtering

    Jensen, Jørgen Arendt; Gori, P.

    2000-01-01

    the flow and the ultrasound beam (30, 45, 60, and 90 degrees). The parabolic flow has a peak velocity of 0.5 m/s and the pulse repetition frequency is 3.5 kHz. Simulating twenty emissions and calculating the cross-correlation using four pulse-echo lines for each estimate, the parabolic flow profile is found with a standard deviation of 0.014 m/s at 45 degrees (corresponding to an accuracy of 2.8%) and 0.022 m/s (corresponding to an accuracy of 4.4%) at 90 degrees, which is transverse to the ultrasound beam.
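The cross-correlation principle underlying the velocity estimates in this record can be sketched on synthetic data: the echo signal shifts between emissions by an amount proportional to the axial velocity, and the lag of the correlation peak is converted back to a velocity. The sampling rate and white-noise scatterer model below are assumptions; the matched-filter refinement of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameters: RF lines sampled at 100 MHz, PRF 3.5 kHz, c = 1540 m/s.
fs, f_prf, c = 100e6, 3.5e3, 1540.0
v_true = 0.5                                  # axial scatterer velocity, m/s
shift_s = 2.0 * v_true / c / f_prf            # round-trip time shift per emission
shift_n = int(round(shift_s * fs))            # shift in samples

# Two pulse-echo lines from the same scatterers, one emission apart.
echo = rng.normal(size=4096)
line1 = echo
line2 = np.roll(echo, shift_n)

# Cross-correlate and take the lag of the peak as the displacement estimate.
lags = np.arange(-50, 51)
xc = [np.dot(line1, np.roll(line2, -k)) for k in lags]
lag_est = lags[int(np.argmax(xc))]
v_est = lag_est / fs * c * f_prf / 2.0
print(v_est)  # close to the 0.5 m/s peak velocity used above
```

The quantization of the lag to whole samples is one source of the standard deviations quoted in the abstract; interpolating around the correlation peak reduces it.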

  17. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Considering the influence of round grating dividing error and of rolling-wheel eccentricity and surface shape errors, the paper provides an amendment method based on the rolling wheel to obtain a composite error model that includes all of the above influence factors, and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments were carried out for verification; the results indicate that the composite error amendment method can improve the diameter measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurements requiring accuracy better than 5 μm/m.

  18. Information transmission via movement behaviour improves decision accuracy in human groups

    Clément, R.J.G.; Wolf, Max; Snijders, Lysanne; Krause, Jens; Kurvers, R.H.J.M.

    2015-01-01

    A major advantage of group living is increased decision accuracy. In animal groups information is often transmitted via movement. For example, an individual quickly moving away from its group may indicate approaching predators. However, individuals also make mistakes which can initiate

  20. Bureau of Indian Affairs Schools: New Facilities Management Information System Promising, but Improved Data Accuracy Needed.

    General Accounting Office, Washington, DC.

    A General Accounting Office (GAO) study evaluated the Bureau of Indian Affairs' (BIA) new facilities management information system (FMIS). Specifically, the study examined whether the new FMIS addresses the old system's weaknesses and meets BIA's management needs, whether BIA has finished validating the accuracy of data transferred from the old…

  1. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Xiaoyan Yang

    2018-04-01

    Full Text Available The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land-use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and reduce the impact of land-use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
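The simple (land use-specific) linear regression calibration described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the land-use class, sample elevations, and function names are hypothetical.

```python
# Per-land-use simple linear regression calibration of DEM elevations:
# for each land-use class, fit reference = a + b * gdem against
# field-based point-elevation measurements.

def fit_ols(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def calibrate_by_land_use(samples):
    """samples: list of (land_use, gdem_elev, reference_elev) tuples.
    Returns {land_use: (a, b)} calibration coefficients."""
    groups = {}
    for lu, g, r in samples:
        groups.setdefault(lu, ([], []))
        groups[lu][0].append(g)
        groups[lu][1].append(r)
    return {lu: fit_ols(g, r) for lu, (g, r) in groups.items()}

# Hypothetical point-elevation check data for one land-use class (m).
samples = [("wood", 10.0, 12.0), ("wood", 20.0, 21.5), ("wood", 30.0, 31.0)]
coeffs = calibrate_by_land_use(samples)
a, b = coeffs["wood"]
corrected = a + b * 20.0  # calibrated elevation for a GDEM value of 20 m
```

Applying the fitted per-class coefficients to every GDEM cell of that land-use type yields the calibrated DEM.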

  2. Exploiting Deep Matching and SAR Data for the Geo-Localization Accuracy Improvement of Optical Satellite Images

    Nina Merkle

    2017-06-01

    Full Text Available Improving the geo-localization of optical satellite images is an important pre-processing step for many remote sensing tasks like monitoring by image time series or scene analysis after sudden events. These tasks require geo-referenced and precisely co-registered multi-sensor data. Images captured by the high resolution synthetic aperture radar (SAR) satellite TerraSAR-X exhibit an absolute geo-location accuracy within a few decimeters. These images therefore represent a reliable source for improving the geo-location accuracy of optical images, which is on the order of tens of meters. In this paper, a deep learning-based approach for the geo-localization accuracy improvement of optical satellite images through SAR reference data is investigated. Image registration between SAR and optical images requires few, but accurate and reliable matching points. These are derived from a Siamese neural network. The network is trained using TerraSAR-X and PRISM image pairs covering greater urban areas spread over Europe, in order to learn the two-dimensional spatial shifts between optical and SAR image patches. Results confirm that accurate and reliable matching points can be generated with higher matching accuracy and precision with respect to state-of-the-art approaches.

  3. Improving the Accuracy of Planet Occurrence Rates from Kepler Using Approximate Bayesian Computation

    Hsu, Danley C.; Ford, Eric B.; Ragozzine, Darin; Morehead, Robert C.

    2018-05-01

    We present a new framework to characterize the occurrence rates of planet candidates identified by Kepler based on hierarchical Bayesian modeling, approximate Bayesian computing (ABC), and sequential importance sampling. For this study, we adopt a simple 2D grid in planet radius and orbital period as our model and apply our algorithm to estimate occurrence rates for Q1–Q16 planet candidates orbiting solar-type stars. We arrive at significantly increased planet occurrence rates for small planet candidates at long orbital periods (P > 80 day) compared to the rates estimated by the more common inverse detection efficiency method (IDEM). Our improved methodology estimates that the occurrence rate density of small planet candidates in the habitable zone of solar-type stars is 1.6 (+1.2, −0.5) per factor of 2 in planet radius and orbital period. Additionally, we observe a local minimum in the occurrence rate for strong planet candidates marginalized over orbital period between 1.5 and 2 R⊕ that is consistent with previous studies. For future improvements, the forward modeling approach of ABC is ideally suited to incorporating multiple populations, such as planets, astrophysical false positives, and pipeline false alarms, to provide accurate planet occurrence rates and uncertainties. Furthermore, ABC provides a practical statistical framework for answering complex questions (e.g., frequency of different planetary architectures) and providing sound uncertainties, even in the face of complex selection effects, observational biases, and follow-up strategies. In summary, ABC offers a powerful tool for accurately characterizing a wide variety of astrophysical populations.
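As a minimal illustration of the ABC idea only (not the paper's pipeline, which uses hierarchical modeling and sequential importance sampling), a rejection-ABC sketch with a toy forward model and made-up numbers: draw a candidate occurrence rate from the prior, simulate a survey through a detection-efficiency model, and keep the draw if the simulated detection count is close to the observed one.

```python
import random

def forward_model(f, n_stars, det_eff, rng):
    """Toy survey: each star hosts a detected candidate with prob f*det_eff."""
    return sum(1 for _ in range(n_stars) if rng.random() < f * det_eff)

def abc_posterior(observed, n_stars, det_eff, n_draws=5000, tol=1, seed=7):
    """Rejection ABC: accept prior draws whose simulated count matches."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        f = rng.random()  # uniform prior on the occurrence rate
        if abs(forward_model(f, n_stars, det_eff, rng) - observed) <= tol:
            accepted.append(f)
    return accepted

# Toy numbers: 6 detections among 200 stars with 30% detection efficiency.
post = abc_posterior(observed=6, n_stars=200, det_eff=0.3)
mean_f = sum(post) / len(post)  # posterior mean occurrence rate (~0.1)
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is what makes the approach robust to complex selection effects.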

  4. An efficient optimization method to improve the measuring accuracy of oxygen saturation by using triangular wave optical signal

    Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling

    2017-09-01

    Oxygen saturation is one of the important parameters used to evaluate human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement, employing an optical frequency division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method significantly reduces the measured RMSE to 0.0965; the accuracy of oxygen saturation measurement is thus notably improved. The method can also simplify the circuit and reduce the number of components required. Furthermore, it is a useful reference for improving the signal-to-noise ratio of other physiological signals.
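The accuracy comparison above is stated in terms of RMSE; a minimal reference implementation with toy SpO2 readings (not the study's data):

```python
import math

def rmse(estimates, references):
    """Root mean square error between estimated and reference values."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, references))
                     / len(estimates))

est = [0.97, 0.95, 0.98, 0.96]   # measured SpO2 (fractional, hypothetical)
ref = [0.96, 0.96, 0.97, 0.97]   # reference oximeter values (hypothetical)
err = rmse(est, ref)             # lower RMSE means a more accurate method
```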

  5. Introducing radiology report checklists among residents: adherence rates when suggesting versus requiring their use and early experience in improving accuracy.

    Powell, Daniel K; Lin, Eaton; Silberzweig, James E; Kagetsu, Nolan J

    2014-03-01

    To retrospectively compare resident adherence to checklist-style structured reporting for maxillofacial computed tomography (CT) from the emergency department (required at one program vs. suggested at another). To compare radiology resident reporting accuracy before and after introduction of the structured report and assess its ability to decrease the rate of undetected pathology. We introduced a reporting checklist for maxillofacial CT into our dictation software without specific training, requiring it at one program and suggesting it at another. We quantified usage among residents and compared reporting accuracy before and after, counting and categorizing faculty addenda. There was no significant change in resident accuracy in the first few months, with residents acting as their own controls (directly comparing performance with and without the checklist). Adherence to the checklist at program A (where it originated and was required) was 85% of reports compared to 9% of reports at program B (where it was suggested). When using program B as a secondary control, there was no significant difference in resident accuracy with or without using the checklist (comparing different residents using the checklist to those not using the checklist). Our results suggest that there is no automatic value of checklists for improving radiology resident reporting accuracy. They also suggest the importance of focused training, checklist flexibility, and a period of adjustment to a new reporting style. Mandatory checklists were readily adopted by residents but not when simply suggested. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  6. Diagnostic Accuracy: The Wellspring of EBVM Success, and How We Can Improve It

    David Mills

    2017-09-01

    Full Text Available Therapy and prognosis are entailed by the diagnosis: the holistic success of the EBVM approach therefore firmly and critically rests on diagnostic accuracy. Unfortunately, medical professionals do not appear to be very accurate with diagnoses. In human medicine, there is 30-50% discordance reported between doctors' ante- (presumptive) and post-mortem (definitive) diagnoses, with no significant change in the last 100 years (Goldberg et al. 2002). This is attenuated by attaching a degree of certainty ('very certain' shows 16% discordance, 'probable' 33% and 'uncertain' 50%), and some body systems are more difficult (e.g. respiratory) than others (Shojania et al. 2002; Sington and Cottrell 2002). Veterinary surgeons do not perform much better, although it is a chronically under-researched area. The single study that exists, from a respected referral institution, shows discordance between ante- and post-mortem diagnoses ranging from 15% (oncology) to 45% (ECC), with internal medicine (44%), neurology (35%), surgery (33%) and cardiology (21%) lying in between (Kent et al. 2004). Incorrect diagnoses are therefore common; the potential for subsequent incorrect or harmful therapy and/or prognosis is great; the quality of interventional evidence is immaterial if the wrong disease is being treated. How can we do better? Human EBM shows that technology, big data and further evidence do not guarantee improvement; these are unlikely realisations for EBVM in the near future in any case. The answer may lie in the fields of psychology and social science. Studies indicate that diagnostic success may rest largely with the individual: expert clinicians consistently perform better. But how? Experts are marked out by the use of 'illness scripts': mental knowledge networks against which the presenting patient (history, signs, clinical data) is checked and hypotheses entertained or refuted until (with the addition of more…

  7. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of Bayesian modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
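The Bayesian modification described above amounts to weighting each photo-match likelihood by a site-specific sighting prior and renormalizing. A minimal sketch with hypothetical individuals and numbers (not the study's simulation):

```python
def posterior_ids(match_likelihood, site_prior):
    """Combine photo-match likelihoods with site-specific sighting priors.
    Both arguments map individual -> value; returns a normalized posterior."""
    unnorm = {ind: match_likelihood[ind] * site_prior.get(ind, 0.0)
              for ind in match_likelihood}
    z = sum(unnorm.values())
    return {ind: p / z for ind, p in unnorm.items()}

# Two look-alike individuals; "A" has been seen at this site far more often.
likelihood = {"A": 0.5, "B": 0.5}   # ambiguous photo match
prior = {"A": 0.8, "B": 0.2}        # prior sighting frequencies at the site
post = posterior_ids(likelihood, prior)
```

With an ambiguous match, the posterior simply mirrors the sighting prior, which is why the technique helps only when the priors genuinely discriminate between candidates.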

  8. Fission product model for BWR analysis with improved accuracy in high burnup

    Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira

    1998-01-01

    A new fission product (FP) chain model has been studied for use in BWR lattice calculations. In establishing the model, two requirements, i.e. accuracy in predicting burnup reactivity and ease of practical application, were considered simultaneously. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo nuclides whose absorption cross sections are independent of burnup history and fuel composition. For verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions have been carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model can offer a high degree of accuracy for FP representation in BWR lattice calculations. (author)

  9. Improved Accuracy of Density Functional Theory Calculations for CO2 Reduction and Metal-Air Batteries

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Density functional theory (DFT) calculations have greatly contributed to the atomic level understanding of electrochemical reactions. However, in some cases, the accuracy can be prohibitively low for a detailed understanding of, e.g., reaction mechanisms. Two cases are examined here, i.e. the elec… 0.47 eV and 0.17 eV using metals as reference. The presented approach for error identification is expected to be applicable to a very broad range of systems. References: [1] A. A. Peterson, F. Abild-Pedersen, F. Studt, J. Rossmeisl, and J. K. Nørskov, Energy Environ. Sci., 3, 1311 (2010) [2] F. Studt, F…

  10. Application of Mensuration Technology to Improve the Accuracy of Field Artillery Firing Unit Location

    2013-12-13

    U.S. Army Field Artillery Operations … Geodesy … Experts in this field of study have a full working knowledge of geodesy and the theory that allows mensuration to surpass the level of accuracy achieved … desired. (2) Fire that is intended to achieve the desired result on target." … Geodesy: "that branch of applied mathematics which determines by observation…

  11. Measuring Personality in Context: Improving Predictive Accuracy in Selection Decision Making

    Hoffner, Rebecca Ann

    2009-01-01

    This study examines the accuracy of a context-sensitive (i.e., goal dimensions) measure of personality compared to a traditional measure of personality (NEO-PI-R) and generalized self-efficacy (GSE) to predict variance in task performance. The goal dimensions measure takes a unique perspective in the conceptualization of personality. While traditional measures differentiate within person and collapse across context (e.g., Big Five), the goal dimensions measure employs a hierarchical structure...

  12. Acute imaging does not improve ASTRAL score's accuracy despite having a prognostic value.

    Ntaios, G.; Papavasileiou, V.; Faouzi, M.; Vanacker, P.; Wintermark, M.; Michel, P.

    2014-01-01

    BACKGROUND: The ASTRAL score was recently shown to reliably predict three-month functional outcome in patients with acute ischemic stroke. AIM: The study aims to investigate whether information from multimodal imaging increases ASTRAL score's accuracy. METHODS: All patients registered in the ASTRAL registry until March 2011 were included. In multivariate logistic-regression analyses, we added covariates derived from parenchymal, vascular, and perfusion imaging to the 6-parameter model o...

  13. Accuracy and impact of spatial aids based upon satellite enumeration to improve indoor residual spraying spatial coverage.

    Bridges, Daniel J; Pollard, Derek; Winters, Anna M; Winters, Benjamin; Sikaala, Chadwick; Renn, Silvia; Larsen, David A

    2018-02-23

    Indoor residual spraying (IRS) is a key tool in the fight to control, eliminate and ultimately eradicate malaria. IRS protection is based on a communal effect such that an individual's protection primarily relies on the community-level coverage of IRS with limited protection being provided by household-level coverage. To ensure a communal effect is achieved through IRS, achieving high and uniform community-level coverage should be the ultimate priority of an IRS campaign. Ensuring high community-level coverage of IRS in malaria-endemic areas is challenging given the lack of information available about both the location and number of households needing IRS in any given area. A process termed 'mSpray' has been developed and implemented and involves use of satellite imagery for enumeration for planning IRS and a mobile application to guide IRS implementation. This study assessed (1) the accuracy of the satellite enumeration and (2) how various degrees of spatial aid provided through the mSpray process affected community-level IRS coverage during the 2015 spray campaign in Zambia. A 2-stage sampling process was applied to assess accuracy of satellite enumeration to determine number and location of sprayable structures. Results indicated an overall sensitivity of 94% for satellite enumeration compared to finding structures on the ground. After adjusting for structure size, roof, and wall type, households in Nchelenge District where all types of satellite-based spatial aids (paper-based maps plus use of the mobile mSpray application) were used were more likely to have received IRS than Kasama district where maps used were not based on satellite enumeration. The probability of a household being sprayed in Nchelenge district where tablet-based maps were used, did not differ statistically from that of a household in Samfya District, where detailed paper-based spatial aids based on satellite enumeration were provided. IRS coverage from the 2015 spray season benefited from

  14. Improving the accuracy of myocardial perfusion scintigraphy results by machine learning method

    Groselj, C.; Kukar, M.

    2002-01-01

    Full text: Machine learning (ML), a rapidly growing subfield of artificial intelligence, has in the last decade proven to be a useful tool in many fields of decision making, including some fields of medicine; its decision accuracy often exceeds that of humans. Aim: to assess the applicability of ML to interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. Data from 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. Comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were then digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. Because ML can handle any number of attributes simultaneously, all available disease-related data (history, habitus, risk factors, stress results) were added, and the sensitivity, specificity and accuracy of scintigraphy were computed in this way. The results of the two decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy. (author)
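A from-scratch sketch of the naive Bayesian classifier idea, with toy binary features and invented labels (not the authors' software or data):

```python
import math
from collections import defaultdict

def train_nb(rows, labels, alpha=1.0):
    """rows: list of binary feature tuples; labels: class labels.
    Returns class priors and Laplace-smoothed per-class feature probs."""
    counts = defaultdict(int)
    feat = defaultdict(lambda: defaultdict(int))
    for x, y in zip(rows, labels):
        counts[y] += 1
        for i, v in enumerate(x):
            feat[y][i] += v
    n = len(labels)
    priors = {c: counts[c] / n for c in counts}
    probs = {c: {i: (feat[c][i] + alpha) / (counts[c] + 2 * alpha)
                 for i in range(len(rows[0]))} for c in counts}
    return priors, probs

def predict_nb(x, priors, probs):
    """Pick the class maximizing log prior + sum of log feature likelihoods,
    assuming conditional independence of features (the 'naive' step)."""
    best, best_lp = None, -math.inf
    for c in priors:
        lp = math.log(priors[c])
        for i, v in enumerate(x):
            p = probs[c][i]
            lp += math.log(p if v else 1.0 - p)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy data: feature 0 strongly indicates CAD, feature 1 is uninformative.
rows = [(1, 0), (1, 1), (0, 0), (0, 1)]
labels = ["cad", "cad", "normal", "normal"]
priors, probs = train_nb(rows, labels)
pred = predict_nb((1, 0), priors, probs)
```

The independence assumption is what lets the classifier "simultaneously manipulate" an arbitrary number of clinical attributes without modeling their joint distribution.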

  15. Travel-time source-specific station correction improves location accuracy

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance to the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events in the context of CTBT verification are particularly critical in order to trigger a possible On Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot be larger than 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network and using the standard IASPEI91 travel times can be effectively removed by applying source-specific station corrections.
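A schematic version of the station-correction idea: for each station, average the travel-time residuals (observed minus model-predicted, e.g. from IASPEI91) over a set of well-located reference events, then subtract that station term from new observations before relocating. The station names and residual values below are hypothetical.

```python
from statistics import mean

def station_corrections(residuals):
    """residuals: {station: [observed - predicted travel time (s), ...]}.
    Returns the mean residual per station as its correction term."""
    return {sta: mean(r) for sta, r in residuals.items()}

def corrected_travel_time(observed, station, corr):
    """Remove the station's systematic travel-time bias."""
    return observed - corr.get(station, 0.0)

# Hypothetical residuals from well-located events at a dense network.
res = {"STA1": [0.42, 0.38, 0.40], "STA2": [-0.21, -0.19]}
corr = station_corrections(res)
t = corrected_travel_time(35.40, "STA1", corr)  # bias-corrected arrival time
```

In practice the corrections are source-region specific, i.e. keyed on (station, source region) rather than station alone.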

  16. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  19. An index with improved diagnostic accuracy for the diagnosis of Crohn's disease derived from the Lennard-Jones criteria.

    Reinisch, S; Schweiger, K; Pablik, E; Collet-Fenetrier, B; Peyrin-Biroulet, L; Alfaro, I; Panés, J; Moayyedi, P; Reinisch, W

    2016-09-01

    The Lennard-Jones criteria are considered the gold standard for diagnosing Crohn's disease (CD) and include the items granuloma, macroscopic discontinuity, transmural inflammation, fibrosis, lymphoid aggregates and discontinuous inflammation on histology. The criteria have never been subjected to a formal validation process. To develop a validated and improved diagnostic index based on the items of Lennard-Jones criteria. Included were 328 adult patients with long-standing CD (median disease duration 10 years) from three centres and classified as 'established', 'probable' or 'non-CD' by Lennard-Jones criteria at time of diagnosis. Controls were patients with ulcerative colitis (n = 170). The performance of each of the six diagnostic items of Lennard-Jones criteria was modelled by logistic regression and a new index based on stepwise backward selection and cut-offs was developed. The diagnostic value of the new index was analysed by comparing sensitivity, specificity and accuracy vs. Lennard-Jones criteria. By Lennard-Jones criteria 49% (n = 162) of CD patients would have been diagnosed as 'non-CD' at time of diagnosis (sensitivity/specificity/accuracy, 'established' CD: 0.34/0.99/0.67; 'probable' CD: 0.51/0.95/0.73). A new index was derived from granuloma, fibrosis, transmural inflammation and macroscopic discontinuity, but excluded lymphoid aggregates and discontinuous inflammation on histology. Our index provided improved diagnostic accuracy for 'established' and 'probable' CD (sensitivity/specificity/accuracy, 'established' CD: 0.45/1/0.72; 'probable' CD: 0.8/0.85/0.82), including the subgroup isolated colonic CD ('probable' CD, new index: 0.73/0.85/0.79; Lennard-Jones criteria: 0.43/0.95/0.69). We developed an index based on items of Lennard-Jones criteria providing improved diagnostic accuracy for the differential diagnosis between CD and UC. © 2016 John Wiley & Sons Ltd.
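The sensitivity/specificity/accuracy triples reported above are computed from a 2x2 table of index result vs. final diagnosis; a minimal helper with illustrative counts (not the study's table):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from a 2x2 table:
    tp/fn = diseased patients flagged/missed by the index,
    tn/fp = controls correctly/incorrectly flagged."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 328 CD patients and 170 UC controls.
sens, spec, acc = diagnostic_metrics(tp=148, fp=0, tn=170, fn=180)
```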

  20. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections.

    Linsenmeyer, Katherine; Strymish, Judith; Gupta, Kalpana

    2015-12-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
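The odds ratio quoted above comes from a 2x2 table of concordance vs. accuracy; a sketch with illustrative counts chosen only to give a similar value (not the study's underlying data):

```python
def odds_ratio(a, b, c, d):
    """2x2 odds ratio: a = concordant & accurate, b = concordant & inaccurate,
    c = discordant & accurate, d = discordant & inaccurate."""
    return (a * d) / (b * c)

# Illustrative: 38/50 accurate when concordant, 14/45 when discordant.
or_ = odds_ratio(38, 12, 14, 31)  # roughly 7, similar to the reported 6.9
```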

  1. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) must today face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, then a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded TCO2 fields needed to constrain geochemical models.
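A toy version of the redundancy test (not the cruise data): drop every other sample from a profile, linearly interpolate it back from its neighbours, and check whether the reconstruction error stays within the stated measurement accuracy of +/- 4 micromol/kg.

```python
def linear_interp(x0, y0, x1, y1, x):
    """Linear interpolation between (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def max_reconstruction_error(depths, tco2):
    """Reconstruct each odd-index sample from its neighbours and return
    the worst-case absolute error (same units as tco2)."""
    errs = []
    for i in range(1, len(depths) - 1, 2):
        est = linear_interp(depths[i - 1], tco2[i - 1],
                            depths[i + 1], tco2[i + 1], depths[i])
        errs.append(abs(est - tco2[i]))
    return max(errs)

# Hypothetical smooth deep-ocean TCO2 profile (micromol/kg vs. depth in m).
depths = [1000, 1500, 2000, 2500, 3000]
tco2 = [2280.0, 2290.5, 2300.0, 2308.0, 2315.0]
err = max_reconstruction_error(depths, tco2)  # oversampled if err <= 4
```

If the worst-case interpolation error is well inside the experimental accuracy, the dropped samples were redundant at that precision.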

  2. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at the reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and free-energy perturbation methods, which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at the selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to reproduce the results of the thermodynamic integration and free-energy interpolation methods, while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing the full potential of mean force.
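    The positioning step above rests on the free-energy perturbation identity dA = -kT ln < exp(-(E_fine - E_coarse)/kT) >, averaged over configurations sampled on the coarse potential. A toy 1-D sketch (not the paper's QM/MM implementation; the harmonic potentials are chosen so the exact answer is known) illustrates the idea:

```python
# Free-energy perturbation from a coarse reference potential to a fine
# target potential, sampled only on the coarse potential. Toy 1-D
# harmonic case so the analytic shift is available for comparison.
import math
import random

kT = 1.0
E_coarse = lambda x: 0.5 * x * x            # reference (coarse) potential
E_fine = lambda x: 0.5 * x * x + 0.2 * x    # "fine" target potential
# Completing the square: E_fine = 0.5*(x + 0.2)**2 - 0.02, i.e. same
# curvature, offset -0.02, so the exact free-energy shift is -0.02 kT.
dA_exact = -0.02

random.seed(42)
# Boltzmann sampling of the coarse harmonic potential is exact here:
# x ~ N(0, kT) when the spring constant is 1.
samples = [random.gauss(0.0, math.sqrt(kT)) for _ in range(200_000)]
mean_boltz = sum(math.exp(-(E_fine(x) - E_coarse(x)) / kT)
                 for x in samples) / len(samples)
dA_fep = -kT * math.log(mean_boltz)
print(f"FEP estimate: {dA_fep:.4f} kT  (exact: {dA_exact} kT)")
```

In the protocol above, the same estimator (averaged via the multistep linear response approximation) positions the locally computed fine-physics segments relative to the coarse surface.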

  3. When less is more: 'slicing' sequencing data improves read decoding accuracy and de novo assembly quality.

    Lonardi, Stefano; Mirebrahim, Hamid; Wanamaker, Steve; Alpert, Matthew; Ciardo, Gianfranco; Duma, Denisa; Close, Timothy J

    2015-09-15

    Since the invention of DNA sequencing in the 1970s, computational biologists have had to deal with the problem of de novo genome assembly with limited (or insufficient) depth of sequencing. In this work, we investigate the opposite problem, that is, the challenge of dealing with excessive depth of sequencing. We explore the effect of ultra-deep sequencing data in two domains: (i) the problem of decoding reads to bacterial artificial chromosome (BAC) clones (in the context of the combinatorial pooling design we have recently proposed), and (ii) the problem of de novo assembly of BAC clones. Using real ultra-deep sequencing data, we show that when the depth of sequencing increases over a certain threshold, sequencing errors make these two problems harder and harder (instead of easier, as one would expect with error-free data), and as a consequence the quality of the solution degrades as more data are added. For the first problem, we propose an effective solution based on 'divide and conquer': we 'slice' a large dataset into smaller samples of optimal size, decode each slice independently, and then merge the results. Experimental results on over 15 000 barley BACs and over 4000 cowpea BACs demonstrate a significant improvement in the quality of the decoding and the final assembly. For the second problem, we show for the first time that modern de novo assemblers cannot take advantage of ultra-deep sequencing data. Python scripts to process slices and resolve decoding conflicts are available from http://goo.gl/YXgdHT; the software Hashfilter can be downloaded from http://goo.gl/MIyZHs. Contact: stelo@cs.ucr.edu or timothy.close@ucr.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
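    The 'divide and conquer' scheme can be sketched generically (this is an illustration of the slice/process/merge pattern, not the authors' decoding pipeline; k-mer counting stands in for the per-slice work):

```python
# Slice a read set into fixed-size samples, process each slice
# independently, then merge the partial results. Counter addition
# resolves overlaps between slices.
from collections import Counter

def slice_dataset(reads, slice_size):
    """Yield consecutive slices of at most `slice_size` reads."""
    for i in range(0, len(reads), slice_size):
        yield reads[i:i + slice_size]

def process_slice(reads, k=3):
    """Stand-in per-slice work: count k-mers in one slice."""
    counts = Counter()
    for read in reads:
        for j in range(len(read) - k + 1):
            counts[read[j:j + k]] += 1
    return counts

def merge(partials):
    """Merge per-slice results into one global result."""
    total = Counter()
    for p in partials:
        total += p
    return total

reads = ["ACGTACGT", "CGTACGTA", "TTTACGTT", "ACGTTTTA"]
merged = merge(process_slice(s) for s in slice_dataset(reads, 2))
assert merged == process_slice(reads)  # slicing + merging loses nothing
print(merged.most_common(3))
```

The interesting finding above is that for error-prone ultra-deep data, working on slices of optimal size and merging actually improves the result, rather than merely matching it.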

  4. A phoswich detector design for improved spatial sampling in PET

    Thiessen, Jonathan D.; Koschan, Merry A.; Melcher, Charles L.; Meng, Fang; Schellenberg, Graham; Goertzen, Andrew L.

    2018-02-01

    Block detector designs, utilizing a pixelated scintillator array coupled to a photosensor array in a light-sharing design, are commonly used for positron emission tomography (PET) imaging applications. In practice, the spatial sampling of these designs is limited by the crystal pitch, which must be large enough for individual crystals to be resolved in the detector flood image. Replacing the conventional 2D scintillator array with an array of phoswich elements, each consisting of an optically coupled side-by-side scintillator pair, may improve spatial sampling in one direction of the array without requiring smaller crystal elements to be resolved. To test the feasibility of this design, a 4 × 4 phoswich array was constructed, with each phoswich element consisting of two optically coupled, 3.17 × 1.58 × 10 mm³ LSO crystals co-doped with cerium and calcium. The amount of calcium doping was varied to create a 'fast' LSO crystal with a decay time of 32.9 ns and a 'slow' LSO crystal with a decay time of 41.2 ns. Using a Hamamatsu R8900U-00-C12 position-sensitive photomultiplier tube (PS-PMT) and a CAEN V1720 250 MS/s waveform digitizer, we were able to show effective discrimination of the fast and slow LSO crystals in the phoswich array. Although a side-by-side phoswich array is feasible, reflections at the crystal boundary due to a mismatch between the refractive index of the optical adhesive (n = 1.5) and LSO (n = 1.82) caused it to behave optically as an 8 × 4 array rather than a 4 × 4 array. Direct coupling of each phoswich element to individual photodetector elements may be necessary with the current phoswich array design. Alternatively, in order to implement this phoswich design with a conventional light-sharing PET block detector, a high refractive index optical adhesive is necessary to closely match the refractive index of LSO.
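    Discrimination of the fast and slow crystals hinges on pulse shape: a slower decay leaves relatively more charge in the pulse tail. A hedged toy example (ideal single-exponential pulses and an arbitrary 40 ns gate; real digitized waveforms are noisier and the paper does not specify this particular ratio method):

```python
# Pulse-shape discrimination by tail-to-total charge ratio for the
# fast (32.9 ns) / slow (41.2 ns) LSO pair. For an ideal exp(-t/tau)
# pulse the fraction of charge after a gate time is exp(-gate/tau).
import math

def tail_to_total(tau_ns, gate_ns=40.0):
    """Fraction of an exp(-t/tau) pulse integrated after `gate_ns`."""
    return math.exp(-gate_ns / tau_ns)

tau_fast, tau_slow = 32.9, 41.2  # ns, decay times from the detector study
r_fast = tail_to_total(tau_fast)
r_slow = tail_to_total(tau_slow)
threshold = 0.5 * (r_fast + r_slow)  # midpoint cut between the two ratios

def classify(tau_ns):
    return "slow" if tail_to_total(tau_ns) > threshold else "fast"

print(f"fast ratio {r_fast:.3f}, slow ratio {r_slow:.3f}, "
      f"threshold {threshold:.3f}")
```

Even with only an 8 ns difference in decay time, the tail fractions are well separated, which is why a waveform digitizer suffices to tell the two crystals apart.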

  5. On the improvement of blood sample collection at clinical laboratories.

    Grasas, Alex; Ramalhinho, Helena; Pessoa, Luciana S; Resende, Mauricio G C; Caballé, Imma; Barba, Nuria

    2014-01-09

    Blood samples are usually collected daily from different collection points, such as hospitals and health centers, and transported to a core laboratory for testing. This paper presents a project to improve the collection routes of two of the largest clinical laboratories in Spain. These routes must be designed in a cost-efficient manner while satisfying two important constraints: (i) two-hour time windows between collection and delivery, and (ii) vehicle capacity. A heuristic method based on a genetic algorithm has been designed to solve the blood sample collection problem. The user enters the following information for each collection point: postal address, average collection time, and average demand (in thermal containers). The algorithm, implemented in C, runs in a few seconds and obtains optimal (or near-optimal) collection routes that specify the collection sequence for each vehicle. Different scenarios using various types of vehicles have been considered. Unless new collection points are added or problem parameters change substantially, routes need to be designed only once. The two laboratories in this study previously planned routes manually for 43 and 74 collection points, respectively. These routes were covered by an external carrier company. With the implementation of this algorithm, the number of routes could be reduced from ten to seven in one laboratory and from twelve to nine in the other, which represents significant annual savings in transportation costs. The algorithm presented can be easily implemented in other laboratories that face this type of problem, and it is particularly interesting and useful as the number of collection points increases. The method designs blood collection routes with reduced costs that meet the time and capacity constraints of the problem.
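    A genetic algorithm for this kind of routing problem can be sketched compactly (the instance, parameters, and capacity-splitting rule below are illustrative, not the authors' C implementation, and the two-hour time-window constraint is omitted for brevity): evolve the visiting order of collection points, cutting each order into routes whenever vehicle capacity would be exceeded.

```python
# Minimal GA sketch for blood-sample collection routing: permutation
# individuals, order crossover, swap mutation, capacity-based route
# splitting, and elitism.
import math
import random

points = {  # point -> (x, y, demand in thermal containers)
    "A": (0, 5, 2), "B": (3, 1, 3), "C": (6, 4, 1),
    "D": (2, 8, 2), "E": (7, 7, 3), "F": (5, 9, 1),
}
LAB = (0, 0)        # core laboratory (each route starts and ends here)
CAPACITY = 5        # containers per vehicle

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def routes_of(order):
    """Cut a visiting order into capacity-feasible routes."""
    routes, cur, load = [], [], 0
    for name in order:
        d = points[name][2]
        if load + d > CAPACITY:
            routes.append(cur)
            cur, load = [], 0
        cur.append(name)
        load += d
    routes.append(cur)
    return routes

def cost(order):
    """Total distance of all routes, lab-to-lab."""
    total = 0.0
    for r in routes_of(order):
        stops = [LAB] + [points[n][:2] for n in r] + [LAB]
        total += sum(dist(a, b) for a, b in zip(stops, stops[1:]))
    return total

def crossover(a, b):
    """Order crossover: keep a slice of `a`, fill the rest from `b`."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [x for x in b if x not in middle]
    return rest[:i] + middle + rest[i:]

random.seed(1)
pop = [random.sample(list(points), len(points)) for _ in range(40)]
for _ in range(150):
    pop.sort(key=cost)
    elite = pop[:10]                        # elitism keeps the best orders
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(30)]
    for c in children:                      # swap mutation
        if random.random() < 0.3:
            i, j = random.sample(range(len(c)), 2)
            c[i], c[j] = c[j], c[i]
    pop = elite + children

best = min(pop, key=cost)
print(routes_of(best), round(cost(best), 2))
```

A production version would add time-window feasibility checks and per-point service times to the cost function, as the paper's formulation requires.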

  6. Physician involvement enhances coding accuracy to ensure national standards: an initiative to improve awareness among new junior trainees.

    Nallasivan, S; Gillott, T; Kamath, S; Blow, L; Goddard, V

    2011-06-01

    Record Keeping Standards is a development led by the Royal College of Physicians of London (RCP) Health Informatics Unit and funded by the National Health Service (NHS) Connecting for Health. A supplementary report produced by the RCP makes a number of recommendations based on a study held at an acute hospital trust. We audited the medical notes and coding to assess accuracy and documentation by the junior doctors, and to correlate our findings with the RCP audit. Northern Lincolnshire & Goole Hospitals NHS Foundation Trust has 114,000 'finished consultant episodes' per year. A total of 100 consecutive medical (50) and rheumatology (50) discharges from Diana Princess of Wales Hospital from August-October 2009 were reviewed. The results showed an improvement in coding accuracy (10% errors), comparable to the RCP audit but with 5% documentation errors. Physician involvement needs to be enhanced to improve effectiveness and to ensure clinical safety.

  7. Does experience in hysteroscopy improve accuracy and inter-observer agreement in the management of abnormal uterine bleeding?

    Bourdel, Nicolas; Modaffari, Paola; Tognazza, Enrica; Pertile, Riccardo; Chauvet, Pauline; Botchorishivili, Revaz; Savary, Dennis; Pouly, Jean Luc; Rabischong, Benoit; Canis, Michel

    2016-12-01

    Hysteroscopic reliability may be influenced by the operator's experience and by the lack of morphological diagnostic criteria for malignant endometrial pathologies. The aim of this study was to evaluate the diagnostic accuracy and the inter-observer agreement (IOA) in the management of abnormal uterine bleeding (AUB) among gynecologists of different experience levels. Each gynecologist, without any other clinical information, was asked to evaluate the anonymous video recordings of 51 consecutive patients who underwent hysteroscopy and endometrial resection for AUB. Expert (>500 hysteroscopies), senior (20-499 procedures) and junior (≤19 procedures) gynecologists were asked to judge endometrial macroscopic appearance (benign, suspicious or frankly malignant). They also had to propose a histological diagnosis (atrophic or proliferative endometrium; simple, glandulocystic or atypical endometrial hyperplasia; or endometrial carcinoma). Observers were free to indicate when the quality of a recording was insufficient for adequate assessment. IOA (k coefficient), sensitivity, specificity, predictive values and likelihood ratios were calculated. Five expert, five senior and six junior gynecologists were involved in the study. For endometrial cancer and atypical endometrial hyperplasia, sensitivity and specificity were 55.5% and 84.5%, respectively, for juniors; 66.6% and 81.2% for seniors; and 86.6% and 87.3% for experts. Concerning endometrial macroscopic appearance, IOA was poor for juniors (k = 0.10) and fair for seniors and experts (k = 0.23 and 0.22, respectively). IOA was poor for juniors and experts (k = 0.18 and 0.20, respectively) and fair for seniors (k = 0.30) in predicting the histological diagnosis. Sensitivity improves with the observer's experience, but the inter-observer agreement and reproducibility of hysteroscopy for endometrial malignancies are not satisfactory at any level of expertise. Therefore, an accurate and

  8. Analysis of Correlation in MEMS Gyroscope Array and its Influence on Accuracy Improvement for the Combined Angular Rate Signal

    Liang Xue

    2018-01-01

    Obtaining a correlation factor is a prerequisite for fusing the multiple outputs of a microelectromechanical system (MEMS) gyroscope array and evaluating the accuracy improvement. In this paper, a mathematical statistics method is established to analyze and obtain the practical correlation factor of a MEMS gyroscope array, which solves the problem of determining the Kalman filter (KF) covariance matrix Q and fusing the multiple gyroscope signals. The working principle and mathematical model of the sensor array fusion are briefly described, and an optimal estimate of the input rate signal is then achieved by using a steady-state KF gain in an off-line estimation approach. Both theoretical analysis and simulation show that a negative correlation factor has a favorable influence on accuracy improvement. Additionally, a four-gyro array system composed of four discrete individual gyroscopes was developed to test the correlation factor and its influence on KF accuracy improvement. The results showed that correlation factors take both positive and negative values; in particular, the correlation factor differs between different units in the array. The test results also indicated that the Angular Random Walk (ARW) of 1.57°/h^0.5 and bias drift of 224.2°/h for a single gyroscope were reduced to 0.33°/h^0.5 and 47.8°/h with some negative correlation factors present in the gyroscope array, giving a noise reduction factor of about 4.7, which is higher than that of an uncorrelated four-gyro array. The overall accuracy of the combined angular rate signal can be further improved if the negative correlation factors in the gyroscope array become more strongly negative.
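    The benefit of negative correlation can be seen from a standard identity (a back-of-envelope sketch, not the paper's KF analysis; the rho values below are illustrative): for N identical gyros with noise variance s² and uniform pairwise correlation rho, the averaged signal has variance s²(1 + (N-1)rho)/N.

```python
# Std-dev noise reduction of an n-gyro average versus a single gyro,
# for uniform pairwise correlation rho. rho must exceed -1/(n-1) for
# the covariance matrix to remain positive definite.
import math

def reduction_factor(n, rho):
    """Noise reduction factor sqrt(n / (1 + (n - 1) * rho))."""
    return math.sqrt(n / (1 + (n - 1) * rho))

n = 4
for rho in (0.0, -0.1, -0.2, -0.3):
    print(f"rho = {rho:+.1f}: reduction factor {reduction_factor(n, rho):.2f}")
```

Uncorrelated gyros (rho = 0) give the familiar sqrt(4) = 2; increasingly negative rho pushes the factor well beyond that, consistent with the reported factor of about 4.7 exceeding the uncorrelated value.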

  9. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near-infrared spectroscopy (fNIRS).

  10. Theoretical study on new bias factor methods to effectively use critical experiments for improvement of prediction accuracy of neutronic characteristics

    Kugo, Teruhiko; Mori, Takamasa; Takeda, Toshikazu

    2007-01-01

    Extended bias factor methods are proposed with two new concepts, the LC method and the PE method, in order to effectively use critical experiments and to enhance the applicability of the bias factor method for improving the prediction accuracy of the neutronic characteristics of a target core. Both methods utilize a number of critical experimental results and produce a semifictitious experimental value from them. The LC and PE methods define the semifictitious experimental value by a linear combination of experimental values and by the product of exponentiated experimental values, respectively, and the corresponding semifictitious calculation values by those of the calculation values. A bias factor is defined in both methods as the ratio of the semifictitious experimental value to the semifictitious calculation value. We formulate how to determine the weights for the LC method and the exponents for the PE method in order to minimize the variance of the design prediction value obtained by multiplying the design calculation value by the bias factor. From a theoretical comparison of these new methods with the conventional method, which utilizes a single experimental result, and the generalized bias factor method, which was previously proposed to utilize a number of experimental results, it is concluded that the PE method is the most useful method for improving the prediction accuracy. The main advantages of the PE method are summarized as follows. The prediction accuracy is necessarily improved compared with the design calculation value even when experimental results include large experimental errors. This is a special feature that the other methods do not have. The prediction accuracy is most effectively improved by utilizing all the experimental results. From these facts, it can be said that the PE method effectively utilizes all the experimental results and may make a full-scale mockup experiment unnecessary through the use of existing and future benchmark experiments.
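    The PE construction can be illustrated in a few lines (the values and exponents below are arbitrary illustrative numbers, not the variance-minimizing exponents derived in the paper): the bias factor is a product of exponentiated experiment-to-calculation ratios, applied multiplicatively to the design calculation.

```python
# PE-style bias factor: product of exponentiated experiment/calculation
# ratios, then a bias-corrected design prediction.
experiments = [1.02, 0.97, 1.01]   # measured benchmark values (illustrative)
calculations = [1.00, 1.00, 1.00]  # corresponding calculated values
weights = [0.5, 0.3, 0.2]          # exponents, chosen here to sum to 1

bias = 1.0
for e, c, w in zip(experiments, calculations, weights):
    bias *= (e / c) ** w           # product of exponentiated ratios

design_calc = 1.250                # design calculation for the target core
prediction = design_calc * bias    # bias-corrected design prediction
print(f"bias factor {bias:.4f}, corrected prediction {prediction:.4f}")
```

The paper's contribution is precisely how to choose the exponents so that the variance of the resulting prediction is minimized.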

  11. Improving mass measurement accuracy in mass spectrometry based proteomics by combining open source tools for chromatographic alignment and internal calibration.

    Palmblad, Magnus; van der Burgt, Yuri E M; Dalebout, Hans; Derks, Rico J E; Schoenmaker, Bart; Deelder, André M

    2009-05-02

    Accurate mass determination enhances peptide identification in mass spectrometry-based proteomics. Here we describe the combination of two previously published open source software tools to improve mass measurement accuracy in Fourier transform ion cyclotron resonance mass spectrometry (FTICRMS). The first program, msalign, aligns one MS/MS dataset with one FTICRMS dataset. The second, recal2, uses peptides identified from the MS/MS data for automated internal calibration of the FTICR spectra, resulting in sub-ppm mass measurement errors.
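    The internal-calibration idea can be sketched generically (a hedged illustration of the principle, not recal2's actual algorithm or calibration function; the calibrant values are made up): treat confidently identified peptides as internal calibrants, model the systematic mass error as a function of m/z, and subtract it from every measured value.

```python
# Internal recalibration sketch: least-squares fit of the mass error as
# a linear function of m/z over identified calibrant peptides, then
# removal of the modeled error.
# calibrants: (measured m/z, theoretical m/z) -- illustrative values
calibrants = [(500.2510, 500.2500), (800.4032, 800.4000),
              (1200.6090, 1200.6030), (1500.7612, 1500.7537)]

# Least-squares fit of error(m/z) = a * mz + b on the calibrants.
xs = [m for m, _ in calibrants]
ys = [m - t for m, t in calibrants]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def recalibrate(mz):
    """Remove the modeled systematic error from a measured m/z."""
    return mz - (a * mz + b)

for measured, theo in calibrants:
    ppm_before = (measured - theo) / theo * 1e6
    ppm_after = (recalibrate(measured) - theo) / theo * 1e6
    print(f"{theo:9.4f}: {ppm_before:6.2f} ppm -> {ppm_after:6.2f} ppm")
```

With a systematic, roughly m/z-dependent error like this, the residuals after recalibration drop from several ppm to sub-ppm, which is the effect reported above.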

  12. Sampling strategies to improve passive optical remote sensing of river bathymetry

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
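    The progressive-truncation idea can be illustrated with synthetic data (this is a schematic of the inference, not the OBRA code: the image-derived quantity is modeled as tracking depth linearly until it saturates at the true maximum detectable depth):

```python
# Estimate d_max by progressively excluding observations deeper than a
# cutoff from the depth-vs-X regression and tracking R^2: once the
# cutoff passes the true detectable limit, saturated observations stop
# following the relation and R^2 falls off.
import numpy as np

rng = np.random.default_rng(0)
d_max = 3.0                                   # true detectable limit (m)
depth = rng.uniform(0.2, 5.0, 400)            # field depth observations
# X increases linearly with depth until d_max, then saturates.
x = np.minimum(depth, d_max) * 0.8 + rng.normal(0.0, 0.05, depth.size)

def r_squared(xs, ys):
    slope, intercept = np.polyfit(xs, ys, 1)
    resid = ys - (slope * xs + intercept)
    return 1.0 - resid.var() / ys.var()

cutoffs = np.arange(1.0, 5.01, 0.25)
scores = [r_squared(x[depth <= c], depth[depth <= c]) for c in cutoffs]
best = cutoffs[int(np.argmax(scores))]
print(f"estimated d_max ~ {best:.2f} m (true {d_max} m)")
```

As in the deeper, more turbid river above, the cutoff that maximizes the fit quality gives a plausible image-derived estimate of the maximum detectable depth.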

  13. A Study on Accuracy Improvement of Dual Micro Patterns Using Magnetic Abrasive Deburring

    Jin, Dong-Hyun; Kwak, Jae-Seob [Pukyong Nat’l Univ., Busan (Korea, Republic of)]

    2016-11-15

    In recent years, the demand for micro patterns on product surfaces has been increasing, and high precision is required in their fabrication. Hence, in this study, dual micro patterns were fabricated on a cylindrical workpiece, and deburring was performed by the magnetic abrasive deburring (MAD) process. A prediction model was developed, and the MAD process was optimized using the response surface method. When the predicted values were compared with the experimental results, the average prediction error was found to be approximately 7%. Experimental verification demonstrated the fabrication of a high-accuracy dual micro pattern and the reliability of the prediction model.

  14. Concepts for improving the accuracy of gas balance measurement at ASDEX Upgrade

    Härtl, T., E-mail: thomas.haertl@ipp.mpg.de; Rohde, V.; Mertens, V.

    2013-10-15

    The ITER fusion reactor, which is under construction, will use a deuterium–tritium gas mixture for operation. A fraction of this fusion fuel remains inside the machine due to various mechanisms. Evaluating this retention in present fusion experiments is of crucial importance for estimating the expected tritium inventory in ITER, which shall be limited due to safety considerations. At ASDEX Upgrade (AUG), sufficiently time-resolved measurements should take place to extrapolate from current 10 s discharges to the ITER discharges of at least 400 s. To achieve this, a new measurement system has been designed that enables an accuracy of better than one percent.

  15. An integrated sampling and analysis approach for improved biodiversity monitoring

    DeWan, Amielle A.; Zipkin, Elise F.

    2010-01-01

    Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species of conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.

  16. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study.

    Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R

    2017-10-01

    To assess the accuracy of patient-specific instruments (PSIs) versus the standard manual technique and the precision of computer-assisted planning and PSI-guided osteotomies in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t-test and Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean of 9.6 mm. With the manual technique, a proportion of deviations exceeded 5 mm and 27% (n = 8) exceeded 10 mm; in the PSI cases, the corresponding figures were 10% (n = 3) and 0% (n = 0). For angular deviation from pre-operative plans, we observed a mean improvement of 7.06°. Cite this article: A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583. DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.

  17. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks.

    Hawkins, S C; Osborne, A; Schofield, S J; Pournaras, D J; Chester, J F

    2012-01-01

    Isolated video recording has not been demonstrated to improve self-assessment accuracy. This study examines whether the inclusion of a defined standard benchmark performance, in association with video feedback of a student's own performance, improves the accuracy of student self-assessment of clinical skills. Final year medical students were video recorded performing a standardised suturing task in a simulated environment. After the exercise, the students self-assessed their performance using global rating scales (GRSs). An identical self-assessment process was repeated following video review of their performance. Students were then shown a video-recorded 'benchmark performance', which was specifically developed for the study and demonstrated the competency levels required to score full marks (30 points). A further self-assessment task was then completed. Students' scores were correlated against expert assessor scores. A total of 31 final year medical students participated. Student self-assessment scores before video feedback demonstrated moderate positive correlation with expert assessor scores (r = 0.48). After the benchmark performance demonstration, self-assessment scores demonstrated a very strong positive correlation with expert scores (r = 0.83). The inclusion of a benchmark performance demonstration in combination with video feedback may significantly improve the accuracy of students' self-assessments.

  18. Improvement of registration accuracy of a handheld augmented reality system for urban landscape simulation

    Tomohiro Fukuda

    2014-12-01

    The need for visual landscape assessment in large-scale projects, to evaluate the effects of a particular project on the surrounding landscape, has grown in recent years. Augmented reality (AR) has been considered for use as a landscape simulation system in which a landscape assessment object created from 3D models is superimposed on the present surroundings. With such a system, the time and cost needed for 3DCG modeling of the present surroundings, a major issue in virtual reality, are drastically reduced. This research presents the development of a 3D map-oriented handheld AR system that achieves geometric consistency by using a 3D map to obtain position data instead of GPS, which has low positional accuracy, particularly in urban areas. The new system also features a gyroscope sensor to obtain posture data and a video camera to capture live video of the present surroundings. All these components are mounted in a smartphone and can be used for urban landscape assessment. Registration accuracy is evaluated for urban landscape simulation from short to long range, the latter involving a distance of approximately 2000 m. The developed AR system enables users to simulate a landscape from multiple and long-distance viewpoints simultaneously and to walk around the viewpoint fields using only a smartphone. The measured registration error remains within the tolerance level for landscape assessment. In conclusion, the proposed method is evaluated as feasible and effective.

  19. Improving prediction accuracy of cooling load using EMD, PSR and RBFNN

    Shen, Limin; Wen, Yuanmei; Li, Xiaohong

    2017-08-01

    To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition)-PSR (phase space reconstruction) based RBFNN (radial basis function neural network) method. First, we analyzed the chaotic nature of real cooling load demand and used EMD to transform the non-stationary cooling load historical data into several stationary intrinsic mode functions (IMFs). Second, we compared the RBFNN prediction accuracies of the individual IMFs and proposed an IMF combining scheme: the lower-frequency components are combined (IMF4-IMF6) while the higher-frequency components (IMF1, IMF2, IMF3) and the residual are kept unchanged. Third, we reconstructed the phase space for each combined component separately, processed the highest-frequency component (IMF1) by the differential method, and performed RBFNN prediction in the reconstructed phase spaces. Real cooling load data from a centralized ice storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms the traditional methods.
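    The PSR step above turns a scalar load series into delay vectors that serve as network inputs. A minimal sketch (the embedding dimension and delay below are illustrative; in practice they are chosen by criteria such as false nearest neighbours and mutual information):

```python
# Phase-space reconstruction by delay embedding: each input vector is
# (s_t, s_{t-tau}, ..., s_{t-(m-1)tau}) and the prediction target is
# the next value of the series.
def delay_embed(series, m, tau):
    """Return (inputs, targets): each input is an m-point delay vector,
    the target is the next value of the series."""
    inputs, targets = [], []
    start = (m - 1) * tau
    for t in range(start, len(series) - 1):
        inputs.append([series[t - i * tau] for i in range(m)])
        targets.append(series[t + 1])
    return inputs, targets

series = [0.1, 0.5, 0.2, 0.8, 0.3, 0.9, 0.4, 0.7]
X, y = delay_embed(series, m=3, tau=2)
print(X[0], "->", y[0])
```

These (input, target) pairs are what an RBFNN (or any regressor) is trained on in the reconstructed phase space.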

  20. Screen for Disordered Eating: Improving the accuracy of eating disorder screening in primary care.

    Maguen, Shira; Hebenstreit, Claire; Li, Yongmei; Dinh, Julie V; Donalson, Rosemary; Dalton, Sarah; Rubin, Emma; Masheb, Robin

    To develop a primary care eating disorder screen with greater accuracy and greater potential for generalizability than existing screens. A cross-sectional survey assessed the discriminative accuracy of a new screen, the Screen for Disordered Eating (SDE), compared to the Eating Disorders Screen for Primary Care (EDS-PC) and the SCOFF screener, using prevalence rates of Binge Eating Disorder (BED), Bulimia Nervosa (BN), Anorexia Nervosa (AN), and Any Eating Disorder (AED), as measured by the Eating Disorder Examination Questionnaire (EDE-Q). The SDE correctly classified 87.2% (CI: 74.3%-95.2%) of BED cases, all cases of BN and AN, and 90.5% (CI: 80.4%-96.4%) of AED cases. Sensitivity estimates were higher than those of the SCOFF, which correctly identified 69.6% (CI: 54.2%-82.3%) of BED, 77.8% (CI: 40.0%-97.2%) of BN, 37.5% (CI: 8.52%-75.5%) of AN, and 66.1% (CI: 53%-77.7%) of AED. While the EDS-PC had slightly higher sensitivity than the SDE, the SDE had better specificity. The SDE outperformed the SCOFF in classifying true cases, the EDS-PC in classifying true non-cases, and the EDS-PC in distinguishing cases from non-cases. The SDE is the first screen, inclusive of BED, valid for detecting eating disorders in primary care. Findings have broad implications for addressing eating disorder screening in primary care settings. Published by Elsevier Inc.
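    Sensitivity and specificity figures like those above come directly from 2x2 counts of screen result against the reference diagnosis; the counts below are made up purely to illustrate the definitions, not taken from the study.

```python
# Sensitivity and specificity from screen-vs-reference counts.
def sensitivity(tp, fn):
    """Proportion of true cases the screen classifies correctly."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of true non-cases the screen classifies correctly."""
    return tn / (tn + fp)

tp, fn, tn, fp = 41, 6, 180, 20   # hypothetical screen-vs-reference counts
print(f"sensitivity {sensitivity(tp, fn):.1%}, "
      f"specificity {specificity(tn, fp):.1%}")
```

The trade-off noted above (EDS-PC slightly more sensitive, SDE more specific) is exactly a shift between these two quantities.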

  1. Improving plasma shaping accuracy through consolidation of control model maintenance, diagnostic calibration, and hardware change control

    Baggest, D.S.; Rothweil, D.A.; Pang, S.

    1995-12-01

    With the advent of more sophisticated techniques for control of tokamak plasmas comes the requirement for increasingly accurate models of plasma processes and tokamak systems. Development of accurate models for DIII-D power systems, vessel, and poloidal coils is already complete, while work continues on development of general plasma response modeling techniques. Increased accuracy in estimates of the parameters to be controlled is also required. It is important to ensure that errors in supporting systems such as diagnostic and command circuits do not limit the accuracy of plasma parameter estimates or inhibit the ability to derive accurate plasma/tokamak system models. To address this issue, we have developed more formal power system change control and power system/magnetic diagnostics calibration procedures. This paper discusses our approach to consolidating the tasks in these closely related areas. This includes, for example, defining criteria for when diagnostics should be re-calibrated along with the required calibration tolerances, and implementing methods for tracking power system hardware modifications and the resultant changes to control models

  2. PCA3 and PCA3-Based Nomograms Improve Diagnostic Accuracy in Patients Undergoing First Prostate Biopsy

    Virginie Vlaeminck-Guillem

    2013-08-01

    While now recognized as an aid to predict repeat prostate biopsy outcome, the urinary PCA3 (prostate cancer gene 3) test has also recently been advocated to predict initial biopsy results. The objective is to evaluate the performance of the PCA3 test in predicting the results of initial prostate biopsies and to determine whether its incorporation into specific nomograms reinforces its diagnostic value. A prospective study included 601 consecutive patients referred for initial prostate biopsy. The PCA3 test was performed before ≥12-core initial prostate biopsy, along with standard risk factor assessment. The diagnostic performance of the PCA3 test was evaluated. The three available nomograms (Hansen’s and Chun’s nomograms, as well as the updated Prostate Cancer Prevention Trial risk calculator, PCPT) were applied to the cohort, and their predictive accuracies were assessed in terms of biopsy outcome: the presence of any prostate cancer (PCa) and high-grade prostate cancer (HGPCa). The PCA3 score provided significant predictive accuracy. While the PCPT risk calculator appeared less accurate, both Chun’s and Hansen’s nomograms provided good calibration and high net benefit on decision curve analyses. When applying nomogram-derived PCa probability thresholds ≤30%, ≤6% of HGPCa would have been missed, while avoiding up to 48% of unnecessary biopsies. The urinary PCA3 test and PCA3-incorporating nomograms can be considered reliable tools to aid in the initial biopsy decision.

  3. Accuracy of recommended sampling and assay methods for the determination of plasma-free and urinary fractionated metanephrines in the diagnosis of pheochromocytoma and paraganglioma: a systematic review.

    Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme

    2017-06-01

    To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Accuracy (STARD) group criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94%/93% and 91%/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling (95% vs. 89%) and for supine sampling compared with 24-h urine collections (95% vs. 90%); specificity was similar for supine sampling, seated sampling, and urine. Test accuracy increased linearly from 90% to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94% to 89% for seated sampling, and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provide the highest accuracy at all prevalence rates.
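The prevalence dependence reported above follows from the identity accuracy = sensitivity × prevalence + specificity × (1 − prevalence): overall accuracy is linear in prevalence and constant only when sensitivity equals specificity. A minimal sketch (the numeric values are illustrative, chosen to mirror the trends reported, not taken verbatim from the review):

```python
def overall_accuracy(sensitivity, specificity, prevalence):
    """Overall accuracy as a prevalence-weighted mix of sensitivity (applies
    to diseased patients) and specificity (applies to disease-free patients)."""
    return sensitivity * prevalence + specificity * (1 - prevalence)

# When sensitivity == specificity the line is flat, mirroring the constant
# 95% accuracy reported for supine sampling at all prevalence rates.
flat = [overall_accuracy(0.95, 0.95, p) for p in (0.0, 0.5, 1.0)]

# With unequal values, accuracy drifts linearly with prevalence, as for
# seated sampling (94% at prevalence 0 down to 89% at prevalence 1).
sloped = [overall_accuracy(0.89, 0.94, p) for p in (0.0, 0.5, 1.0)]
```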

  4. Improvement of Dimensional Accuracy of 3-D Printed Parts using an Additive/Subtractive Based Hybrid Prototyping Approach

    Amanullah Tomal, A. N. M.; Saleh, Tanveer; Raisuddin Khan, Md.

    2017-11-01

    At present, two important processes, namely CNC machining and rapid prototyping (RP), are being used to create prototypes and functional products. Combining both additive and subtractive processes into a single platform would be advantageous. However, two important aspects need to be taken into consideration for this process hybridization. The first is the integration of two different control systems for the two processes, and the second is maximizing workpiece alignment accuracy during the changeover step. Recently we have developed a new hybrid system which incorporates Fused Deposition Modelling (FDM) as the RP process and CNC grinding as the subtractive manufacturing process in a single setup. Several objects were produced with different layer thicknesses, for example 0.1 mm, 0.15 mm, and 0.2 mm. It was observed that the pure FDM method is unable to attain the desired dimensional accuracy, which can be improved by a considerable margin (about 66% to 80%) if a finishing operation by grinding is carried out. It was also observed that layer thickness plays a role in the dimensional accuracy, and the best accuracy is achieved with the minimum layer thickness (0.1 mm).

  5. Improving the image of student-recruited samples : a commentary

    Demerouti, E.; Rispens, S.

    2014-01-01

    This commentary argues that the quality and usefulness of student-recruited data can be evaluated by examining the external validity and generalization issues related to this sampling method. Therefore, we discuss how the sampling methods of student- and non-student-recruited samples can enhance or

  6. Improving the quantitative accuracy of optical-emission computed tomography by incorporating an attenuation correction: application to HIF1 imaging

    Kim, E.; Bowsher, J.; Thomas, A. S.; Sakhalkar, H.; Dewhirst, M.; Oldham, M.

    2008-10-01

    Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are new techniques for imaging the 3D structure and function (including gene expression) of whole unsectioned tissue samples. This work presents a method of improving the quantitative accuracy of optical-ECT by correcting for the 'self'-attenuation of photons emitted within the sample. The correction is analogous to a method commonly applied in single-photon-emission computed tomography reconstruction. The performance of the correction method was investigated by application to a transparent cylindrical gelatin phantom, containing a known distribution of attenuation (a central ink-doped gelatine core) and a known distribution of fluorescing fibres. Attenuation corrected and uncorrected optical-ECT images were reconstructed on the phantom to enable an evaluation of the effectiveness of the correction. Significant attenuation artefacts were observed in the uncorrected images where the central fibre appeared ~24% less intense due to greater attenuation from the surrounding ink-doped gelatin. This artefact was almost completely removed in the attenuation-corrected image, where the central fibre was within ~4% of the others. The successful phantom test enabled application of attenuation correction to optical-ECT images of an unsectioned human breast xenograft tumour grown subcutaneously on the hind leg of a nude mouse. This tumour cell line had been genetically labelled (pre-implantation) with fluorescent reporter genes such that all viable tumour cells expressed constitutive red fluorescent protein and hypoxia-inducible factor 1 transcription-produced green fluorescent protein. In addition to the fluorescent reporter labelling of gene expression, the tumour microvasculature was labelled by a light-absorbing vasculature contrast agent delivered in vivo by tail-vein injection. 
Optical-CT transmission images yielded high-resolution 3D images of the absorbing contrast agent, and
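The 'self'-attenuation correction described above is conceptually a first-order Beer-Lambert compensation, analogous to corrections used in SPECT reconstruction: each measured emission value is boosted by the inverse of the attenuation factor accumulated along its optical path. A minimal sketch under a uniform-attenuation assumption (the coefficient and path lengths below are illustrative placeholders, not values from the paper):

```python
import math

def attenuation_corrected(measured, mu, path_length):
    """First-order correction: divide the measured emission intensity by the
    Beer-Lambert attenuation factor exp(-mu * path_length) accumulated along
    the photon's path out of the sample."""
    return measured / math.exp(-mu * path_length)

# A central fibre seen through more ink-doped gelatin than a peripheral one
# receives a stronger correction, reducing the centre-vs-edge artefact.
central = attenuation_corrected(76.0, mu=0.05, path_length=5.0)
peripheral = attenuation_corrected(95.0, mu=0.05, path_length=1.0)
```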

  7. Improving the quantitative accuracy of optical-emission computed tomography by incorporating an attenuation correction: application to HIF1 imaging

    Kim, E; Bowsher, J; Thomas, A S; Sakhalkar, H; Dewhirst, M; Oldham, M

    2008-01-01

    Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are new techniques for imaging the 3D structure and function (including gene expression) of whole unsectioned tissue samples. This work presents a method of improving the quantitative accuracy of optical-ECT by correcting for the 'self'-attenuation of photons emitted within the sample. The correction is analogous to a method commonly applied in single-photon-emission computed tomography reconstruction. The performance of the correction method was investigated by application to a transparent cylindrical gelatin phantom, containing a known distribution of attenuation (a central ink-doped gelatine core) and a known distribution of fluorescing fibres. Attenuation corrected and uncorrected optical-ECT images were reconstructed on the phantom to enable an evaluation of the effectiveness of the correction. Significant attenuation artefacts were observed in the uncorrected images where the central fibre appeared ∼24% less intense due to greater attenuation from the surrounding ink-doped gelatin. This artefact was almost completely removed in the attenuation-corrected image, where the central fibre was within ∼4% of the others. The successful phantom test enabled application of attenuation correction to optical-ECT images of an unsectioned human breast xenograft tumour grown subcutaneously on the hind leg of a nude mouse. This tumour cell line had been genetically labelled (pre-implantation) with fluorescent reporter genes such that all viable tumour cells expressed constitutive red fluorescent protein and hypoxia-inducible factor 1 transcription-produced green fluorescent protein. In addition to the fluorescent reporter labelling of gene expression, the tumour microvasculature was labelled by a light-absorbing vasculature contrast agent delivered in vivo by tail-vein injection. Optical-CT transmission images yielded high-resolution 3D images of the absorbing contrast agent

  8. Correlation Matrix Renormalization Theory: Improving Accuracy with Two-Electron Density-Matrix Sum Rules.

    Liu, C; Liu, J; Yao, Y X; Wu, P; Wang, C Z; Ho, K M

    2016-10-11

    We recently proposed the correlation matrix renormalization (CMR) theory to treat the electronic correlation effects [Phys. Rev. B 2014, 89, 045131 and Sci. Rep. 2015, 5, 13478] in ground state total energy calculations of molecular systems using the Gutzwiller variational wave function (GWF). By adopting a number of approximations, the computational effort of the CMR can be reduced to a level similar to Hartree-Fock calculations. This paper reports our recent progress in minimizing the error originating from some of these approximations. We introduce a novel sum-rule correction to obtain a more accurate description of the intersite electron correlation effects in total energy calculations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  9. Improvement in the accuracy of polymer gel dosimeters using scintillating fibers

    Tremblay, Nicolas M; Hubert-Tremblay, Vincent; Bujold, Rachel; Beaulieu, Luc; Lepage, Martin

    2010-01-01

    We propose a novel method for the absolute calibration of polyacrylamide gel (PAG) dosimeters with one or more reference scintillating fiber dosimeters inserted inside the gel. Four calibrated scintillating fibers were inserted into a cylindrical glass container filled with a PAG dosimeter and irradiated with a wedge-filtered 6 MV photon beam. Small glass vials containing the same gel as the cylindrical container were used to obtain a first calibration curve. This calibration curve was then adjusted to the dose measured with one of the scintillating fibers in a low-gradient part of the field, using different approaches. Among these, it was found that a translation of the gel calibration curve yielded the highest accuracy with PAG dosimeters.
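The 'translation' adjustment amounts to shifting the vial-derived calibration curve by a constant offset so that it reproduces the dose read by a scintillating fiber at a low-gradient reference point. A hypothetical sketch (the linear curve and numbers are invented for illustration):

```python
def translate_calibration(curve, gel_reading, fiber_dose):
    """Shift a calibration curve (a function mapping gel signal -> dose) by a
    constant offset so it matches the fiber-measured dose at the reference
    gel reading."""
    offset = fiber_dose - curve(gel_reading)
    return lambda signal: curve(signal) + offset

# Hypothetical linear vial calibration: dose = 2.0 * signal.
vial_curve = lambda s: 2.0 * s

# Anchor the curve to a fiber dose of 3.4 Gy measured where the gel reads 1.5.
adjusted = translate_calibration(vial_curve, gel_reading=1.5, fiber_dose=3.4)
```

After the shift, `adjusted(1.5)` reproduces the fiber dose, and every other reading is translated by the same constant.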

  10. Using expected sequence features to improve basecalling accuracy of amplicon pyrosequencing data

    Rask, Thomas Salhøj; Petersen, Bent; Chen, Donald S.

    2016-01-01

    … The new basecalling method described here, named Multipass, implements a probabilistic framework for working with the raw flowgrams obtained by pyrosequencing. For each sequence variant, Multipass calculates the likelihood and nucleotide sequence of the several most likely sequences given the flowgram data. … This probabilistic approach enables integration of basecalling into a larger model where other parameters can be incorporated, such as the likelihood of observing a full-length open reading frame at the targeted region. We apply the method to 454 amplicon pyrosequencing data obtained from a malaria virulence gene family, where Multipass generates 20% more error-free sequences than current state-of-the-art methods, and provides sequence characteristics that allow generation of a set of high-confidence error-free sequences. This novel method can be used to increase the accuracy of existing and future amplicon…

  11. Visualization of the diaphragm muscle with ultrasound improves diagnostic accuracy of phrenic nerve conduction studies.

    Johnson, Nicholas E; Utz, Michael; Patrick, Erica; Rheinwald, Nicole; Downs, Marlene; Dilek, Nuran; Dogra, Vikram; Logigian, Eric L

    2014-05-01

    Evaluation of phrenic neuropathy (PN) with phrenic nerve conduction studies (PNCS) is associated with false negatives. Visualization of diaphragmatic muscle twitch with diaphragm ultrasound (DUS) when performing PNCS may help to solve this problem. We performed bilateral, simultaneous DUS-PNCS in 10 healthy adults and 12 patients with PN. The amplitude of the diaphragm compound muscle action potential (CMAP) (on PNCS) and twitch (on DUS) was calculated. Control subjects had phrenic CMAP (on PNCS). In the 12 patients with PN, 12 phrenic neuropathies were detected. Three of these patients had either significant side-to-side asymmetry or absolute reduction in diaphragm movement that was not detected with PNCS. There were no cases in which the PNCS showed an abnormality but the DUS did not. The addition of DUS to PNCS enhances diagnostic accuracy in PN. Copyright © 2013 Wiley Periodicals, Inc.

  12. Improvements in and relating to the incubation of samples

    Bagshawe, K.D.

    1978-01-01

    Apparatus is described for incubating a plurality of biological samples and particularly as part of an analysis, e.g. radioimmunoassay or enzyme assay, of the samples. The apparatus is comprised of an incubation station with a plurality of containers to which samples together with diluent and reagents are supplied. The containers are arranged in rows in two side-by-side columns and are circulated sequentially. Sample removal means is provided either at a fixed location or at a movable point relative to the incubator. Circulation of the containers and the length of sample incubation time is controlled by a computer. The incubation station may include a plurality of sections with the columns in communication so that rows of samples can be moved from the column of one section to the column of an adjacent section, to provide alternative paths for circulation of the samples. (author)

  13. The Application of Digital Pathology to Improve Accuracy in Glomerular Enumeration in Renal Biopsies.

    Avi Z Rosenberg

    Full Text Available In renal biopsy reporting, quantitative measurements, such as glomerular number and the percentage of globally sclerotic glomeruli, are central to diagnostic accuracy and prognosis. The aim of this study is to determine the number of glomeruli and the percent globally sclerotic in renal biopsies by means of registration of serial tissue sections and manual enumeration, compared to the numbers in pathology reports from routine light microscopic assessment. We reviewed 277 biopsies from the Nephrotic Syndrome Study Network (NEPTUNE) digital pathology repository, enumerating 9,379 glomeruli by means of whole slide imaging (WSI). Glomerular number and the percentage of globally sclerotic glomeruli are values routinely recorded in the official renal biopsy pathology report from the 25 participating centers. Two general trends in reporting were noted: total number per biopsy or average number per level/section. Both of these approaches were assessed for their accuracy in comparison to the analogous numbers of annotated glomeruli on WSI. The number of glomeruli annotated was consistently higher than the number reported (p<0.001); this difference was proportional to the number of glomeruli. In contrast, percent globally sclerotic was similar when calculated on total glomeruli, but greater in FSGS when calculated on the average number of glomeruli (p<0.01). The difference in percent globally sclerotic between annotated glomeruli and those recorded in pathology reports was significant when global sclerosis was greater than 40%. Although glass slides were not available for direct comparison to whole slide image annotation, this study indicates that routine manual light microscopy assessment of the number of glomeruli is inaccurate, and the magnitude of this error is proportional to the total number of glomeruli.
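The two reporting conventions diverge in a simple arithmetic way: the percent globally sclerotic computed against the biopsy's total glomerular count can differ sharply from the same count of sclerotic glomeruli divided by a per-level average. A toy illustration (all numbers hypothetical):

```python
def percent_sclerotic_total(sclerotic, total_glomeruli):
    """Percent global sclerosis against the total count for the biopsy."""
    return 100.0 * sclerotic / total_glomeruli

def percent_sclerotic_per_level(sclerotic, glomeruli_per_level):
    """Percent against the *average* count per level/section; summing
    sclerotic glomeruli across levels while dividing by a single-level
    average inflates the percentage."""
    avg = sum(glomeruli_per_level) / len(glomeruli_per_level)
    return 100.0 * sclerotic / avg

counts = [10, 8, 12]   # glomeruli seen on three serial levels
sclerotic = 6          # globally sclerotic glomeruli found in the biopsy

by_total = percent_sclerotic_total(sclerotic, sum(counts))   # 20.0
by_level = percent_sclerotic_per_level(sclerotic, counts)    # 60.0
```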

  14. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  15. Inclusion of Population-specific Reference Panel from India to the 1000 Genomes Phase 3 Panel Improves Imputation Accuracy.

    Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R

    2017-07-27

    Imputation is a computational method based on the principle of haplotype sharing allowing enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared the imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using the genome-wide data from 1880 individuals, we demonstrate that WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves the imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive task. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.

  16. A simple and efficient methodology to improve geometric accuracy in gamma knife radiation surgery: implementation in multiple brain metastases.

    Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis

    2014-12-01

    To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, "average" image data are compounded from the 2 MRI acquisitions and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact on dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small targets. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets. Copyright © 2014 Elsevier Inc. All rights reserved.
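The averaging step works because a sequence-dependent distortion displaces a structure along the read direction by +d with one gradient polarity and by −d with the reversed polarity, so the mean of the two acquisitions recovers the undistorted position. A numeric sketch (positions and shift are illustrative):

```python
def average_acquisitions(pos_forward, pos_reversed):
    """Average structure positions from the two read-gradient polarities;
    the equal-and-opposite sequence-dependent shifts cancel."""
    return [(a + b) / 2.0 for a, b in zip(pos_forward, pos_reversed)]

true_positions = [10.0, 25.0, 40.0]   # undistorted positions along the read axis (mm)
shift = 1.2                           # displacement from background field inhomogeneity

forward = [x + shift for x in true_positions]    # normal read-gradient polarity
reversed_ = [x - shift for x in true_positions]  # reversed read-gradient polarity

recovered = average_acquisitions(forward, reversed_)
```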

  17. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying nonparametric or robust methods (with Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
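The consequence the study measures — an inaccurate reference interval when a parametric (Gaussian) method is applied to a non-Gaussian sample — can be reproduced with a small stdlib-only simulation. This is a sketch of the phenomenon, not the study's protocol; the lognormal parameters are arbitrary:

```python
import math
import random
import statistics

random.seed(1)

# Draw a 'small' sample (n = 30) from a lognormal parent population.
sample = [random.lognormvariate(0.0, 0.5) for _ in range(30)]

# Parametric RI assumes normality: mean +/- 1.96 SD.
m, sd = statistics.mean(sample), statistics.stdev(sample)
parametric_ri = (m - 1.96 * sd, m + 1.96 * sd)

# True central 95% interval of the lognormal(0, 0.5) parent population.
true_ri = (math.exp(-1.96 * 0.5), math.exp(1.96 * 0.5))

# On skewed data the symmetric parametric limits drift away from the true,
# asymmetric ones; the lower limit can even go negative, an impossible
# value for a strictly positive lognormal analyte.
print(parametric_ri, true_ri)
```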

  18. Feasibility and accuracy evaluation of three human papillomavirus assays for FTA card-based sampling: a pilot study in cervical cancer screening

    Wang, Shao-Ming; Hu, Shang-Ying; Chen, Wen; Chen, Feng; Zhao, Fang-Hui; He, Wei; Ma, Xin-Ming; Zhang, Yu-Qing; Wang, Jian; Sivasubramaniam, Priya; Qiao, You-Lin

    2015-01-01

    Background Liquid-state specimen carriers are inadequate for sample transportation in large-scale screening projects in low-resource settings, which necessitates the exploration of novel non-hazardous solid-state alternatives. Studies investigating the feasibility and accuracy of a solid-state human papillomavirus (HPV) sampling medium in combination with different down-stream HPV DNA assays for cervical cancer screening are needed. Methods We collected two cervical specimens from 396 women, ...

  19. Improved Accuracy of Myocardial Perfusion SPECT for the Detection of Coronary Artery Disease by Utilizing a Support Vector Machines Algorithm

    Arsanjani, Reza; Xu, Yuan; Dey, Damini; Fish, Matthews; Dorbala, Sharmila; Hayes, Sean; Berman, Daniel; Germano, Guido; Slomka, Piotr

    2012-01-01

    We aimed to improve the diagnostic accuracy of automatic myocardial perfusion SPECT (MPS) interpretation analysis for prediction of coronary artery disease (CAD) by integrating several quantitative perfusion and functional variables for non-corrected (NC) data by support vector machines (SVM), a computer method for machine learning. Methods 957 rest/stress 99mtechnetium-gated MPS NC studies from 623 consecutive patients with correlating invasive coronary angiography and 334 with low likelihood of CAD (LLK < 5%) were assessed. Patients with stenosis ≥ 50% in the left main or ≥ 70% in all other vessels were considered abnormal. Total perfusion deficit (TPD) was computed automatically. In addition, ischemic changes (ISCH) and ejection fraction changes (EFC) between stress and rest were derived by quantitative software. The SVM was trained using a group of 125 pts (25 LLK, 25 0-, 25 1-, 25 2- and 25 3-vessel CAD) using the above quantitative variables and second-order polynomial fitting. The remaining patients (N = 832) were categorized based on probability estimates, with CAD defined as a probability estimate ≥ 0.50. The diagnostic accuracy of SVM was also compared to visual segmental scoring by two experienced readers. Results Sensitivity of SVM (84%) was significantly better than ISCH (75%, p < 0.05) and EFC (31%, p < 0.05). Specificity of SVM (88%) was significantly better than that of TPD (78%, p < 0.05) and EFC (77%, p < 0.05). Diagnostic accuracy of SVM (86%) was significantly better than TPD (81%), ISCH (81%), or EFC (46%) (p < 0.05 for all). The Receiver-operator-characteristic area-under-the-curve (ROC-AUC) for SVM (0.92) was significantly better than TPD (0.90), ISCH (0.87), and EFC (0.60) (p < 0.001 for all). Diagnostic accuracy of SVM was comparable to the overall accuracy of both visual readers (85% vs. 84%, p < 0.05). ROC-AUC for SVM (0.92) was significantly better than that of both visual readers (0.87 and 0.88, p < 0.03). Conclusion Computational
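The ROC-AUC values compared above need no explicit curve integration: the AUC equals the probability that a randomly chosen diseased case scores higher than a randomly chosen non-diseased one (the Mann-Whitney statistic, with ties counted as one half). A stdlib sketch with toy scores (not data from the study):

```python
def roc_auc(pos_scores, neg_scores):
    """ROC-AUC as the Mann-Whitney probability P(score_pos > score_neg),
    counting ties as 0.5, over all positive/negative pairs."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Toy perfusion-deficit scores for diseased vs. non-diseased patients.
auc = roc_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])  # 8 of 9 pairs correctly ordered
```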

  20. Acute imaging does not improve ASTRAL score's accuracy despite having a prognostic value.

    Ntaios, George; Papavasileiou, Vasileios; Faouzi, Mohamed; Vanacker, Peter; Wintermark, Max; Michel, Patrik

    2014-10-01

    The ASTRAL score was recently shown to reliably predict three-month functional outcome in patients with acute ischemic stroke. The study aims to investigate whether information from multimodal imaging increases the ASTRAL score's accuracy. All patients registered in the ASTRAL registry until March 2011 were included. In multivariate logistic-regression analyses, we added covariates derived from parenchymal, vascular, and perfusion imaging to the 6-parameter model of the ASTRAL score. If a specific imaging covariate remained an independent predictor of three-month modified Rankin score >2, the area-under-the-curve (AUC) of this new model was calculated and compared with the ASTRAL score's AUC. We also performed similar logistic regression analyses in arbitrarily chosen patient subgroups. When added to the ASTRAL score, the following covariates on admission computed tomography/magnetic resonance imaging-based multimodal imaging were not significant predictors of outcome: any stroke-related acute lesion, any nonstroke-related lesions, chronic/subacute stroke, leukoaraiosis, significant arterial pathology in ischemic territory on computed tomography angiography/magnetic resonance angiography/Doppler, significant intracranial arterial pathology in ischemic territory, and focal hypoperfusion on perfusion-computed tomography. The Alberta Stroke Program Early CT score on plain imaging and any significant extracranial arterial pathology on computed tomography angiography/magnetic resonance angiography/Doppler were independent predictors of outcome (odds ratio: 0·93, 95% CI: 0·87-0·99 and odds ratio: 1·49, 95% CI: 1·08-2·05, respectively) but did not increase the ASTRAL score's AUC (0·849 vs. 0·850, and 0·8563 vs. 0·8564, respectively). In exploratory analyses in subgroups of different prognosis, age or stroke severity, no covariate was found to increase the ASTRAL score's AUC, either. The addition of information derived from multimodal imaging does not increase the ASTRAL score's accuracy.

  1. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values, and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
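The agreement statistics used here (sensitivity, positive predictive value, and kappa between administrative coding and chart review) all derive from a 2x2 table of coded vs. reference presence of a co-morbidity. A stdlib sketch with a hypothetical table (the counts are invented, not study data):

```python
def coding_agreement(tp, fp, fn, tn):
    """Sensitivity, PPV, and Cohen's kappa from a 2x2 table comparing
    administrative coding against chart review (the reference standard).
    tp = coded & present, fp = coded & absent,
    fn = not coded & present, tn = not coded & absent."""
    total = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    observed = (tp + tn) / total
    # Chance agreement from the marginal totals of the two raters.
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, ppv, kappa

# Hypothetical counts for one co-morbidity in 1000 charts.
sens, ppv, kappa = coding_agreement(tp=40, fp=10, fn=60, tn=890)
```

Note how a coded co-morbidity can be highly predictive when present (PPV 0.8 here) while sensitivity stays low (0.4), the pattern the study reports.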

  2. Intellijoint HIP®: a 3D mini-optical navigation tool for improving intraoperative accuracy during total hip arthroplasty

    Paprosky WG

    2016-11-01

    Full Text Available Wayne G Paprosky,1,2 Jeffrey M Muir3 1Department of Orthopedics, Section of Adult Joint Reconstruction, Department of Orthopedics, Rush University Medical Center, Rush–Presbyterian–St Luke’s Medical Center, Chicago, 2Central DuPage Hospital, Winfield, IL, USA; 3Intellijoint Surgical, Inc, Waterloo, ON, Canada Abstract: Total hip arthroplasty is an increasingly common procedure used to address degenerative changes in the hip joint due to osteoarthritis. Although generally associated with good results, among the challenges associated with hip arthroplasty are accurate measurement of biomechanical parameters such as leg length, offset, and cup position, discrepancies in which can lead to significant long-term consequences such as pain, instability, neurological deficits, dislocation, and revision surgery, as well as patient dissatisfaction and, increasingly, litigation. Current methods of managing these parameters are limited, with manual methods such as outriggers or calipers being used to monitor leg length; however, these are susceptible to small intraoperative changes in patient position and are therefore inaccurate. Computer-assisted navigation, while offering improved accuracy, is expensive and cumbersome, in addition to adding significantly to procedural time. To address the technological gap in hip arthroplasty, a new intraoperative navigation tool (Intellijoint HIP®) has been developed. This innovative, 3D mini-optical navigation tool provides real-time, intraoperative data on leg length, offset, and cup position and allows for improved accuracy and precision in component selection and alignment. Benchtop and simulated clinical use testing have demonstrated excellent accuracy, with the navigation tool able to measure leg length and offset to within <1 mm and cup position to within <1° in both anteversion and inclination. This study describes the indications, procedural technique, and early accuracy results of the Intellijoint HIP

  3. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    Wick, Gary A.; Emery, William J.; Castro, Sandra L.; Lindstrom, Eric (Technical Monitor)

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work was performed in two different major areas. The first centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. The second involved a modeling and data analysis effort whereby modeled near-surface temperature profiles were integrated into the retrieval of bulk SST estimates from existing satellite data. Under the first work area, two different seagoing infrared radiometers were designed and fabricated and the first of these was deployed on research ships during two major experiments. Analyses of these data contributed significantly to the Ph.D. thesis of one graduate student and these results are currently being converted into a journal publication. The results of the second portion of work demonstrated that, with presently available models and heat flux estimates, accuracy improvements in SST retrievals associated with better physical treatment of the near-surface layer were partially balanced by uncertainties in the models and extra required input data. While no significant accuracy improvement was observed in this experiment, the results are very encouraging for future applications where improved models and coincident environmental data will be available. These results are included in a manuscript undergoing final review with the Journal of Atmospheric and Oceanic Technology.

  4. Colorimetric Measurements of Amylase Activity: Improved Accuracy and Efficiency with a Smartphone

    Dangkulwanich, Manchuta; Kongnithigarn, Kaness; Aurnoppakhun, Nattapat

    2018-01-01

    Routinely used in quantitative determination of various analytes, UV-vis spectroscopy is commonly taught in undergraduate chemistry laboratory courses. Because the technique measures the absorbance of light through the samples, losses from reflection and scattering by large molecules interfere with the measurement. To emphasize the importance of…

  5. Improving the precision and accuracy of Monte Carlo simulation in positron emission tomography

    Picard, Y.; Thompson, C.J.; Marrett, S.

    1992-01-01

    This paper reports that most of the gamma-rays generated following positron annihilation in subjects undergoing PET studies never reach the detectors of a positron imaging device. Similarly, simulations of PET systems waste considerable time generating events which will never be detected. Simulation efficiency (in terms of detected photon pairs per CPU second) can be improved by generating only rays whose angular distribution gives them a reasonable chance of ultimate detection. For many events in which the original rays are directed generally towards the detectors, the precision of the final simulation can be improved by recycling them without reseeding the random number generator, giving them a second or greater chance of being detected. For simulation programs which cascade the simulation process into source, collimator, and detection phases, recycling can improve the precision of the simulation without requiring larger files of events from the source distribution simulation phase.
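
    The event-recycling idea above can be illustrated with a toy simulation. This is a sketch only: the acceptance angle, survival probability, and counts below are invented for illustration and are not taken from the paper.

```python
import random

def simulate(n_events, recycle=1, seed=1):
    """Toy PET-style simulation: a photon is 'detected' if its emission
    angle falls within the detector acceptance AND it survives a random
    attenuation test.  recycle>1 reuses each promising ray several times
    with fresh downstream randomness (without reseeding the generator),
    mimicking the event-recycling idea described in the abstract."""
    rng = random.Random(seed)
    detected = 0
    tries = 0
    for _ in range(n_events):
        angle = rng.uniform(-90.0, 90.0)   # emission angle, degrees
        if abs(angle) > 15.0:              # outside detector acceptance:
            continue                       # wasted ray, never re-simulated
        for _ in range(recycle):           # recycle the promising ray
            tries += 1
            if rng.random() < 0.6:         # survives attenuation/collimator
                detected += 1
    return detected, tries
```

    With recycling, each accepted source ray contributes several detection attempts, so the detected count (and hence the precision of the estimate) grows without generating more source events.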

  6. Brief report: accuracy and response time for the recognition of facial emotions in a large sample of children with autism spectrum disorders.

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander

    2014-09-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expressions, they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.

  7. Diagnostic accuracy of serological diagnosis of hepatitis C and B using dried blood spot samples (DBS): two systematic reviews and meta-analyses.

    Lange, Berit; Cohn, Jennifer; Roberts, Teri; Camp, Johannes; Chauffour, Jeanne; Gummadi, Nina; Ishizaki, Azumi; Nagarathnam, Anupriya; Tuaillon, Edouard; van de Perre, Philippe; Pichler, Christine; Easterbrook, Philippa; Denkinger, Claudia M

    2017-11-01

    Dried blood spots (DBS) are a convenient tool to enable diagnostic testing for viral diseases due to transport, handling and logistical advantages over conventional venous blood sampling. A better understanding of the performance of serological testing for hepatitis C (HCV) and hepatitis B virus (HBV) from DBS is important to enable more widespread use of this sampling approach in resource limited settings, and to inform the 2017 World Health Organization (WHO) guidance on testing for HBV/HCV. We conducted two systematic reviews and meta-analyses on the diagnostic accuracy of HCV antibody (HCV-Ab) and HBV surface antigen (HBsAg) from DBS samples compared to venous blood samples. MEDLINE, EMBASE, Global Health and Cochrane library were searched for studies that assessed diagnostic accuracy with DBS and agreement between DBS and venous sampling. Heterogeneity of results was assessed and where possible a pooled analysis of sensitivity and specificity was performed using a bivariate analysis with maximum likelihood estimate and 95% confidence intervals (95%CI). We conducted a narrative review on the impact of varying storage conditions or limits of detection in subsets of samples. The QUADAS-2 tool was used to assess risk of bias. For the diagnostic accuracy of HBsAg from DBS compared to venous blood, 19 studies were included in a quantitative meta-analysis, and 23 in a narrative review. Pooled sensitivity and specificity were 98% (95%CI:95%-99%) and 100% (95%CI:99-100%), respectively. For the diagnostic accuracy of HCV-Ab from DBS, 19 studies were included in a pooled quantitative meta-analysis, and 23 studies were included in a narrative review. Pooled estimates of sensitivity and specificity were 98% (CI95%:95-99) and 99% (CI95%:98-100), respectively. Overall quality of studies and heterogeneity were rated as moderate in both systematic reviews. HCV-Ab and HBsAg testing using DBS compared to venous blood sampling was associated with excellent diagnostic accuracy

  8. Drag coefficient accuracy improvement by means of particle image velocimetry for a transonic NACA0012 airfoil

    Ragni, D; Van Oudheusden, B W; Scarano, F

    2011-01-01

    A method to improve the reliability of the drag coefficient computation by means of particle image velocimetry measurements is demonstrated using experimental data acquired on a NACA0012 airfoil tested in the transonic regime, combining a variable pulse separation with a new high-order Poisson spectral pressure reconstruction algorithm. (technical design note)

  9. Improving Accuracy and Relevance of Race/Ethnicity Data: Results of a Statewide Collaboration in Hawaii.

    Pellegrin, Karen L; Miyamura, Jill B; Ma, Carolyn; Taniguchi, Ronald

    2016-01-01

    Current race/ethnicity categories established by the U.S. Office of Management and Budget are neither reliable nor valid for understanding health disparities or for tracking improvements in this area. In Hawaii, statewide hospitals have collaborated to collect race/ethnicity data using a standardized method consistent with recommended practices that overcome the problems with the federal categories. The purpose of this observational study was to determine the impact of this collaboration on key measures of race/ethnicity documentation. After this collaborative effort, the number of standardized categories available across hospitals increased from 6 to 34, and the percent of inpatients with documented race/ethnicity increased from 88 to 96%. This improved standardized methodology is now the foundation for tracking population health indicators statewide and focusing quality improvement efforts. The approach used in Hawaii can serve as a model for other states and regions. Ultimately, the ability to standardize data collection methodology across states and regions will be needed to track improvements nationally.

  10. Gene network inherent in genomic big data improves the accuracy of prognostic prediction for cancer patients.

    Kim, Yun Hak; Jeong, Dae Cheon; Pak, Kyoungjune; Goh, Tae Sik; Lee, Chi-Seung; Han, Myoung-Eun; Kim, Ji-Young; Liangwen, Liu; Kim, Chi Dae; Jang, Jeon Yeob; Cha, Wonjae; Oh, Sae-Ock

    2017-09-29

    Accurate prediction of prognosis is critical for therapeutic decisions regarding cancer patients. Many previously developed prognostic scoring systems have limitations in reflecting recent progress in the field of cancer biology such as microarray, next-generation sequencing, and signaling pathways. To develop a new prognostic scoring system for cancer patients, we used mRNA expression and clinical data in various independent breast cancer cohorts (n=1214) from the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) and Gene Expression Omnibus (GEO). A new prognostic score that reflects gene network inherent in genomic big data was calculated using Network-Regularized high-dimensional Cox-regression (Net-score). We compared its discriminatory power with those of two previously used statistical methods: stepwise variable selection via univariate Cox regression (Uni-score) and Cox regression via Elastic net (Enet-score). The Net scoring system showed better discriminatory power in prediction of disease-specific survival (DSS) than other statistical methods (p=0 in METABRIC training cohort, p=0.000331, 4.58e-06 in two METABRIC validation cohorts) when accuracy was examined by log-rank test. Notably, comparison of C-index and AUC values in receiver operating characteristic analysis at 5 years showed fewer differences between training and validation cohorts with the Net scoring system than other statistical methods, suggesting minimal overfitting. The Net-based scoring system also successfully predicted prognosis in various independent GEO cohorts with high discriminatory power. In conclusion, the Net-based scoring system showed better discriminative power than previous statistical methods in prognostic prediction for breast cancer patients. This new system will mark a new era in prognosis prediction for cancer patients.

  11. Improving virtual screening predictive accuracy of Human kallikrein 5 inhibitors using machine learning models.

    Fang, Xingang; Bagui, Sikha; Bagui, Subhash

    2017-08-01

    The readily available high throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structural-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need the understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK 5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK 5) target suggested a feasible descriptor/model selection strategy on similar targets.
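
    As a minimal illustration of the modeling step only (not the authors' Signature-descriptor pipeline), a logistic-regression classifier over a descriptor matrix can be trained with plain batch gradient descent. The toy descriptors and labels below are hypothetical.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Minimal logistic-regression trainer (batch gradient descent).
    X is a list of descriptor vectors, y a list of 0/1 activity labels.
    Returns learned weights w and bias b."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))     # predicted P(active)
            err = p - yi                       # gradient of log-loss w.r.t. z
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, x):
    """Predicted probability that descriptor vector x is active."""
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))
```

    In practice one would use a library implementation with regularization and cross-validation, as the study does; the sketch only shows the shape of the computation.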

  12. Improving accuracy of ET measurement of LISS nozzle to calandria tube clearance

    Craig, S.T.; Krause, T.W.; Schankula, J.J.

    2006-01-01

    The AECL Fuel Channel Inspection System (AFCIS) has been used in an in-reactor field trial to successfully measure the clearance between Liquid Injection Shutdown System (LISS) nozzles and calandria tubes. Each measurement over the full length of a channel added only 15 minutes to the on-channel inspection time. No changes were required to the inspection heads. The only equipment changes made were the addition of a Remote Field Eddy Current (RFEC) module to the eddy current instrument, and minor wiring changes, at the instrument, to achieve a RFEC configuration. With the experience gained from the field trial, factors potentially limiting accuracy were identified. These, and other factors, were investigated and are discussed herein. The RFEC probe is delivered inside the pressure tube. Magnetic fields from the RFEC probe extend through the conducting walls of the pressure tube and calandria tube to interact with the LISS nozzle. Data acquired during the field trial showed the LISS nozzle signal is distinct and the signal-to-noise ratio is very favourable. Nevertheless, comparison of the RFEC measurements to a visual examination made during the same outage showed the RFEC method underestimating the clearance by 2.5 mm on average. By way of laboratory tests, the following factors were investigated as potential sources of error: resistivity and geometry of LISS nozzle reference/calibration pieces, pressure-tube wall thickness, diameter and resistivity variations, pressure-tube to calandria-tube gap, and radial offsets of the probe within the pressure-tube. The sensitivity to these various noise sources was established. A model, based on fundamental electromagnetic principles, was developed and was used to normalize the effects of LISS nozzle conductivity and geometry. This enabled compensation for various sources of error, and made it possible to produce a correction factor for the field trial data, reducing the average difference from the visual inspection of LISS

  13. Improvement of local position accuracy of robots for off-line programming

    Borm, Jin Hwan; Choi, Jong Cheon

    1992-01-01

    For the implementation of industrial robots in a CIM environment, it is necessary to be able to position their end-effectors to an abstractly defined Cartesian position with desired accuracy. In other words, it is necessary to find accurate actuator command values corresponding to given goal positions which are expressed with respect to a certain coordinate frame. If the teaching-by-doing method is used, very accurate actuator command values are obtained from transducer readings. For the case when the goal positions are mathematically expressed, however, the actuator command values for the goal positions must be calculated using robot kinematics. It is, however, well known that position errors on the order of 10 mm are not unusual, while many industrial robots have a repeatability on the order of 0.1 mm. Here, the position error refers to the difference between the specified goal position and the position where a robot is actually controlled. To reduce the position errors, many researchers proposed calibration methods which are based on robot kinematic identification. However, those methods are quite complex and require an accurate position measuring device. In this paper, a new method, which does not require accurate kinematic identification, is introduced. In this method, the accurate actuator command values are calculated using the nominal kinematic model which is appropriately altered based on the available encoder readings of the several reference frames. To demonstrate the simplicity and the effectiveness of the method, computer simulations as well as experimental studies are performed and their results are discussed. (Author)

  14. Determination of optical band gap of powder-form nanomaterials with improved accuracy

    Ahsan, Ragib; Khan, Md. Ziaur Rahman; Basith, Mohammed Abdul

    2017-10-01

    Accurate determination of a material's optical band gap lies in the precise measurement of its absorption coefficients, either from its absorbance via the Beer-Lambert law or diffuse reflectance spectrum via the Kubelka-Munk function. Absorption coefficients of powder-form nanomaterials calculated from absorbance spectrum do not match those calculated from diffuse reflectance spectrum, implying the inaccuracy of the traditional optical band gap measurement method for such samples. We have modified the Beer-Lambert law and the Kubelka-Munk function with proper approximations for powder-form nanomaterials. Applying the modified method for powder-form nanomaterial samples, both absorbance and diffuse reflectance spectra yield exactly the same absorption coefficients and therefore accurately determine the optical band gap.
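
    The standard building blocks referred to above can be sketched in a few lines. This is the textbook Kubelka-Munk and Tauc-plot procedure, not the authors' modified Beer-Lambert/Kubelka-Munk equations, and the numbers in the usage are purely illustrative.

```python
def kubelka_munk(R):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R): the standard proxy
    for the absorption coefficient obtained from diffuse reflectance R."""
    return (1.0 - R) ** 2 / (2.0 * R)

def tauc_band_gap(energies_eV, alpha, direct=True):
    """Estimate the optical band gap by the Tauc method: fit a line to
    (alpha * h*nu)^n versus photon energy h*nu over the supplied (linear)
    region (n = 2 for a direct gap, 1/2 for an indirect gap) and
    extrapolate it to zero.  Returns the x-intercept in eV."""
    n = 2.0 if direct else 0.5
    ys = [(a * e) ** n for a, e in zip(alpha, energies_eV)]
    # ordinary least-squares line y = m*e + c
    N = len(energies_eV)
    se, sy = sum(energies_eV), sum(ys)
    see = sum(e * e for e in energies_eV)
    sey = sum(e * y for e, y in zip(energies_eV, ys))
    m = (N * sey - se * sy) / (N * see - se * se)
    c = (sy - m * se) / N
    return -c / m  # band-gap estimate (eV)
```

    For synthetic direct-gap data with (alpha·hν)² exactly proportional to (hν − Eg), the extrapolation recovers Eg; on real spectra the fit window must be restricted to the linear Tauc region.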

  15. Improving accuracy of rare variant imputation with a two-step imputation approach

    Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G

    2015-01-01

    Genotype imputation has been the pillar of the success of genome-wide association studies (GWAS) for identifying common variants associated with common diseases. However, most GWAS have been run using only 60 HapMap samples as reference for imputation, meaning less frequent and rare variants … not being comprehensively scrutinized. Next-generation arrays ensuring sufficient coverage together with new reference panels, as the 1000 Genomes panel, are emerging to facilitate imputation of low frequent single-nucleotide polymorphisms (minor allele frequency (MAF) … reference sample genotyped on a dense array and hereafter to the 1000 Genomes reference panel. We show that mean imputation quality, measured by the r(2) using this approach, increases by 28% for variants with a MAF between 1 and 5% as compared with direct imputation to 1000 Genomes reference. Similarly…

  16. Collaboration between radiological technologists (radiographers) and junior doctors during image interpretation improves the accuracy of diagnostic decisions

    Kelly, B.S.; Rainford, L.A.; Gray, J.; McEntee, M.F.

    2012-01-01

    Rationale and Objectives: In Emergency Departments (ED) junior doctors regularly make diagnostic decisions based on radiographic images. This study investigates whether collaboration between junior doctors and radiographers impacts on diagnostic accuracy. Materials and Methods: Research was carried out in the ED of a university teaching hospital and included 10 pairs of participants. Radiographers and junior doctors were shown 42 wrist radiographs and 40 CT Brains and were asked for their level of confidence of the presence or absence of distal radius fractures or fresh intracranial bleeds respectively using ViewDEX software, first working alone and then in pairs. Receiver Operating Characteristic (ROC) analysis was used to assess performance. Results were compared using one-way analysis of variance. Results: The results showed statistically significant improvements in the Area Under the Curve (AUC) of the junior doctors when working with the radiographers for both sets of images (wrist and CT) treated as random readers and cases (p ≤ 0.008 and p ≤ 0.0026 respectively). While the radiographers’ results saw no significant changes, their mean Az values did show an increasing trend when working in collaboration. Conclusion: The improvement in the performance of junior doctors following collaboration strongly suggests the potential to improve the accuracy of patient diagnosis and therefore patient care. Further training for junior doctors in the interpretation of diagnostic images should also be considered. Decision making of junior doctors was positively impacted by introducing the opinion of a radiographer. Collaboration exceeds the sum of the parts; the two professions are better together.
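
    The Az figure of merit (area under the ROC curve) used in observer studies like this one has a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher confidence score than a randomly chosen negative case. A generic sketch, with invented scores:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scored higher, counting ties as half a win."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    A value of 1.0 means perfect discrimination, 0.5 means chance-level performance; ROC software such as ViewDEX-style analyses ultimately reduces rater confidence data to this kind of statistic.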

  17. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and though several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions to mimic samples of three different ages, to be run as internal

  18. An evaluation of the effectiveness of PROMPT therapy in improving speech production accuracy in six children with cerebral palsy.

    Ward, Roslyn; Leitão, Suze; Strauss, Geoff

    2014-08-01

    This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets). An A1BCA2 single subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor-speech-movement-parameters and perceptual accuracy. Analysis of the speech probe data showed all participants recorded a statistically significant change. Between phases A1-B and B-C, 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance level on the motor speech movement patterns targeted during the training of that intervention. The preliminary data presented in this study contribute evidence supporting the use of a treatment approach aligned with dynamic systems theory to improve the motor-speech movement patterns and speech production accuracy in children with cerebral palsy.

  19. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
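
    The compensation step described above amounts to interpolating a calibrated error map and subtracting the predicted error from each commanded position. A minimal sketch using piecewise-linear interpolation (the study found B-spline interpolation of the same map to give the best results; the calibration numbers below are invented):

```python
from bisect import bisect_right

def make_compensator(positions, errors):
    """Build a compensator from a calibrated error map: sorted commanded
    positions and the geometric error measured at each.  The returned
    function predicts the error at any target by piecewise-linear
    interpolation and returns the corrected command (target minus
    predicted error), clamping outside the calibrated range."""
    def compensate(target):
        if target <= positions[0]:
            err = errors[0]
        elif target >= positions[-1]:
            err = errors[-1]
        else:
            i = bisect_right(positions, target) - 1
            t = (target - positions[i]) / (positions[i + 1] - positions[i])
            err = errors[i] + t * (errors[i + 1] - errors[i])
        return target - err
    return compensate
```

    Swapping the linear interpolant for a spline changes only the `err` computation; the command-correction structure stays the same, which is why the different interpolation methods in the paper can be compared within one framework.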

  20. TCS: a new multiple sequence alignment reliability measure to estimate alignment accuracy and improve phylogenetic tree reconstruction.

    Chang, Jia-Ming; Di Tommaso, Paolo; Notredame, Cedric

    2014-06-01

    Multiple sequence alignment (MSA) is a key modeling procedure when analyzing biological sequences. Homology and evolutionary modeling are the most common applications of MSAs. Both are known to be sensitive to the underlying MSA accuracy. In this work, we show how this problem can be partly overcome using the transitive consistency score (TCS), an extended version of the T-Coffee scoring scheme. Using this local evaluation function, we show that one can identify the most reliable portions of an MSA, as judged from BAliBASE and PREFAB structure-based reference alignments. We also show how this measure can be used to improve phylogenetic tree reconstruction using both an established simulated data set and a novel empirical yeast data set. For this purpose, we describe a novel lossless alternative to site filtering that involves overweighting the trustworthy columns. Our approach relies on the T-Coffee framework; it uses libraries of pairwise alignments to evaluate any third party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy. We compared TCS with Heads-or-Tails, GUIDANCE, Gblocks, and trimAl and found it to lead to significantly better estimates of structural accuracy and more accurate phylogenetic trees. The software is available from www.tcoffee.org/Projects/tcs.

  1. Accuracy Improvement of the Method of Multiple Scales for Nonlinear Vibration Analyses of Continuous Systems with Quadratic and Cubic Nonlinearities

    Akira Abe

    2010-01-01

    … and are the driving and natural frequencies, respectively. The application of Galerkin's procedure to the equation of motion yields nonlinear ordinary differential equations with quadratic and cubic nonlinear terms. The steady-state responses are obtained by using the discretization approach of the MMS in which the definition of the detuning parameter, expressing the relationship between the natural frequency and the driving frequency, is changed in an attempt to improve the accuracy of the solutions. The validity of the solutions is discussed by comparing them with solutions of the direct approach of the MMS and the finite difference method.

  2. New possibilities for improving the accuracy of parameter calculations for cascade gamma-ray decay of heavy nuclei

    Sukhovoj, A.M.; Khitrov, V.A.; Grigor'ev, E.P.

    2002-01-01

    The level density and radiative strength functions which accurately reproduce the experimental intensity of two-step cascades after thermal neutron capture and the total radiative widths of the compound states were applied to calculate the total γ-ray spectra from the (n,γ) reaction. In some cases, analysis showed far better agreement with experiment and gave insight into possible ways in which these parameters need to be corrected for further improvement of calculation accuracy for the cascade γ-decay of heavy nuclei. (author)

  3. Radiotherapy dosimetry audit: three decades of improving standards and accuracy in UK clinical practice and trials

    Clark, Catharine H; Aird, Edwin GA; Bolton, Steve; Miles, Elizabeth A; Nisbet, Andrew; Snaith, Julia AD; Thomas, Russell AS; Venables, Karen; Thwaites, David I

    2015-01-01

    Dosimetry audit plays an important role in the development and safety of radiotherapy. National and large scale audits are able to set, maintain and improve standards, as well as having the potential to identify issues which may cause harm to patients. They can support implementation of complex techniques and can facilitate awareness and understanding of any issues which may exist by benchmarking centres with similar equipment. This review examines the development of dosimetry audit in the UK...

  4. Integrating machine learning and physician knowledge to improve the accuracy of breast biopsy.

    Dutra, I; Nassif, H; Page, D; Shavlik, J; Strigel, R M; Wu, Y; Elezaby, M E; Burnside, E

    2011-01-01

    In this work we show that combining physician rules and machine learned rules may improve the performance of a classifier that predicts whether a breast cancer is missed on percutaneous, image-guided breast core needle biopsy (subsequently referred to as "breast core biopsy"). Specifically, we show how advice in the form of logical rules, derived by a sub-specialty, i.e. fellowship trained breast radiologists (subsequently referred to as "our physicians") can guide the search in an inductive logic programming system, and improve the performance of a learned classifier. Our dataset of 890 consecutive benign breast core biopsy results along with corresponding mammographic findings contains 94 cases that were deemed non-definitive by a multidisciplinary panel of physicians, from which 15 were upgraded to malignant disease at surgery. Our goal is to predict upgrade prospectively and avoid surgery in women who do not have breast cancer. Our results, some of which trended toward significance, show evidence that inductive logic programming may produce better results for this task than traditional propositional algorithms with default parameters. Moreover, we show that adding knowledge from our physicians into the learning process may improve the performance of the learned classifier trained only on data.

  5. The effects of sampling on the efficiency and accuracy of k-mer indexes: Theoretical and empirical comparisons using the human genome.

    Almutairy, Meznah; Torng, Eric

    2017-01-01

    One of the most common ways to search a sequence database for sequences that are similar to a query sequence is to use a k-mer index such as BLAST. A big problem with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing the space needed, and also query time, is sampling where only some k-mer occurrences are stored. Most previous work uses hard sampling, in which enough k-mer occurrences are retained so that all similar sequences are guaranteed to be found. In contrast, we study soft sampling, which further reduces the number of stored k-mer occurrences at a cost of decreasing query accuracy. We focus on finding highly similar local alignments (HSLA) over nucleotide sequences, an operation that is fundamental to biological applications such as cDNA sequence mapping. For our comparison, we use the NCBI BLAST tool with the human genome and human ESTs. When identifying HSLAs, we find that soft sampling significantly reduces both index size and query time with relatively small losses in query accuracy. For the human genome and HSLAs of length at least 100 bp, soft sampling reduces index size 4-10 times more than hard sampling and processes queries 2.3-6.8 times faster, while still achieving retention rates of at least 96.6%. When we apply soft sampling to the problem of mapping ESTs against the genome, we map more than 98% of ESTs perfectly while reducing the index size by a factor of 4 and query time by 23.3%. These results demonstrate that soft sampling is a simple but effective strategy for performing efficient searches for HSLAs. We also provide a new model for sampling with BLAST that predicts empirical retention rates with reasonable accuracy by modeling two key problem factors.
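
    The core data structure is easy to sketch. In the toy version below, `step > 1` plays the role of sampling (storing only every step-th k-mer occurrence, shrinking the index at some cost in sensitivity); BLAST's real index and the paper's soft-sampling policy are more sophisticated.

```python
def build_kmer_index(seq, k, step=1):
    """Map each k-mer of seq to the list of positions where it occurs.
    step=1 stores every occurrence; step>1 stores a sampled subset."""
    index = {}
    for i in range(0, len(seq) - k + 1, step):
        index.setdefault(seq[i:i + k], []).append(i)
    return index

def lookup(index, query, k):
    """Return candidate database offsets sharing a k-mer with the query.
    Each hit position is shifted by the k-mer's offset in the query, so
    the returned values are implied alignment start positions."""
    hits = set()
    for j in range(len(query) - k + 1):
        for pos in index.get(query[j:j + k], ()):
            hits.add(pos - j)
    return hits
```

    The space saving is immediate: sampling every second position roughly halves the stored occurrence lists, which is the index-size/accuracy trade-off the study quantifies on the human genome.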

  6. Noniterative algorithm for improving the accuracy of a multicolor-light-emitting-diode-based colorimeter

    Yang, Pao-Keng

    2012-05-01

    We present a noniterative algorithm to reliably reconstruct the spectral reflectance from discrete reflectance values measured by using multicolor light-emitting diodes (LEDs) as probing light sources. The proposed algorithm estimates the spectral reflectance by a linear combination of product functions of the detector's responsivity function and the LEDs' line-shape functions. After introducing a suitable correction, the resulting spectral reflectance was found to be free from the spectral-broadening effect due to the finite bandwidth of the LEDs. We analyzed the data for a real sample and found that spectral reflectance with enhanced resolution gives a more accurate prediction in color measurement.
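    The linear-combination step can be sketched as follows; this is an illustrative reconstruction under assumed Gaussian LED line shapes and a flat detector responsivity, not the paper's exact algorithm or correction:

```python
# Expand the unknown reflectance r(λ) in the basis b_j(λ) = responsivity(λ) *
# lineshape_j(λ). Each LED reading is m_j = Σ_λ b_j(λ) r(λ), so the expansion
# coefficients c solve the Gram system G c = m with G_ij = Σ_λ b_i(λ) b_j(λ).

import numpy as np

wl = np.linspace(400, 700, 301)                  # wavelength grid, nm
centers = np.array([450, 500, 550, 600, 650])    # assumed LED peak wavelengths
lineshapes = np.exp(-((wl[None, :] - centers[:, None]) / 20.0) ** 2)
responsivity = np.ones_like(wl)                  # flat detector (assumption)
basis = responsivity * lineshapes                # b_j(λ), one row per LED

true_r = 0.5 + 0.3 * np.sin(wl / 60.0)           # synthetic sample reflectance
m = basis @ true_r                               # simulated discrete readings

G = basis @ basis.T                              # Gram matrix
c = np.linalg.solve(G, m)                        # noniterative: one solve
r_est = c @ basis                                # reconstructed spectrum
```

    By construction the reconstruction reproduces the measured values exactly (`basis @ r_est == m`); resolution beyond the LED bandwidths is what the paper's additional correction addresses.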

  7. Experimentally validated modification to Cook-Torrance BRDF model for improved accuracy

    Butler, Samuel D.; Ethridge, James A.; Nauyoks, Stephen E.; Marciniak, Michael A.

    2017-09-01

    The BRDF describes optical scatter off realistic surfaces. The microfacet BRDF model assumes geometric optics but is computationally simple compared to wave optics models. In this work, MERL BRDF data are fitted both to the original Cook-Torrance microfacet model and to a modified Cook-Torrance model that uses the polarization factor in place of the mathematically problematic cross-section conversion and geometric attenuation terms. The results provide experimental evidence that this modified Cook-Torrance model leads to improved fits, particularly for large incident and scattered angles. These results are expected to lead to more accurate BRDF modeling for remote sensing.

  8. Performance samples on academic tasks : improving prediction of academic performance

    Tanilon, Jenny

    2011-01-01

    This thesis is about the development and validation of a performance-based test, labeled as Performance Samples on academic tasks in Education and Child Studies (PSEd). PSEd is designed to identify students who are most able to perform the academic tasks involved in an Education and Child Studies

  9. Sampling in Qualitative Research: Improving the Quality of ...

    Sampling consideration in qualitative research is very important, yet in practice this appears not to be given the prominence and the rigour it deserves among Higher Education researchers. Accordingly, the quality of research outcomes in Higher Education has suffered from low utilisation. This has motivated the production ...

  10. Improved sample capsule for determination of oxygen in hemolyzed blood

    Malik, W. M.

    1967-01-01

    Sample capsule for determination of oxygen in hemolyzed blood consists of a measured section of polytetrafluoroethylene tubing equipped at each end with a connector and a stopcock valve. This method eliminates errors from air entrainment or from the use of mercury or syringe lubricant.

  11. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    Karaiskos, Pantelis, E-mail: pkaraisk@med.uoa.gr [Medical Physics Laboratory, Medical School, University of Athens (Greece); Gamma Knife Department, Hygeia Hospital, Athens (Greece); Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos [Medical Physics Laboratory, Medical School, University of Athens (Greece); Roussakis, Arkadios [CT and MRI Department, Hygeia Hospital, Athens (Greece); Torrens, Michael [Gamma Knife Department, Hygeia Hospital, Athens (Greece); Seimenis, Ioannis [Medical Physics Laboratory, Medical School, Democritus University of Thrace, Alexandroupolis (Greece)

    2014-12-01

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
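    The gradient-reversal averaging at the heart of this method can be illustrated numerically; a toy sketch with invented numbers, not the clinical workflow:

```python
# Sequence-dependent distortion shifts an imaged feature by +d with one
# read-gradient polarity and by -d with the reversed polarity. Averaging the
# two localizations cancels that error term, leaving only polarity-independent
# effects (e.g. gradient nonlinearities). All values are illustrative.

true_pos = 42.0   # mm, actual target location along the read direction
d = 0.9           # mm, sequence-dependent shift (sign set by gradient polarity)

pos_forward = true_pos + d                 # routine acquisition
pos_reverse = true_pos - d                 # same sequence, reversed read gradient
pos_average = (pos_forward + pos_reverse) / 2.0
residual = abs(pos_average - true_pos)     # sequence-dependent term cancels
```

    In the actual method the averaging is done on the image data before treatment planning, but the cancellation argument is the same.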

  13. Green light may improve diagnostic accuracy of nailfold capillaroscopy with a simple digital videomicroscope.

    Weekenstroo, Harm H A; Cornelissen, Bart M W; Bernelot Moens, Hein J

    2015-06-01

    Nailfold capillaroscopy is a non-invasive and safe technique for the analysis of microangiopathologies. The imaging quality of widely used simple videomicroscopes is poor. The use of green illumination instead of the commonly used white light may improve contrast. The aim of the study was to compare the effect of green illumination with white illumination with regard to capillary density, the number of microangiopathologies, and sensitivity and specificity for systemic sclerosis. Five rheumatologists evaluated 80 images: 40 acquired with green light and 40 acquired with white light. A larger number of microangiopathologies were found in images acquired with green light than in images acquired with white light. This resulted in slightly higher sensitivity with green light in comparison with white light, without reducing the specificity. These findings suggest that green instead of white illumination may facilitate evaluation of capillaroscopic images obtained with a low-cost digital videomicroscope.

  14. Decentralised control method for DC microgrids with improved current sharing accuracy

    Yang, Jie; Jin, Xinmin; Wu, Xuezhi

    2017-01-01

    A decentralised control method that deals with current sharing issues in dc microgrids (MGs) is proposed in this study. The proposed method is formulated in terms of ‘modified global indicator’ concept, which was originally proposed to improve reactive power sharing in ac MGs. In this work......, the ‘modified global indicator’ concept is extended to coordinate dc MGs, which aims to preserve the main features offered by decentralised control methods such as no need of communication links, central controller or knowledge of the microgrid topology and parameters. This global indicator is inserted between...... a shunt virtual resistance. The operation under multiple dc-buses is also included in order to enhance the applicability of the proposed controller. A detailed mathematical model including the effect of network mismatches is derived for analysis of the stability of the proposed controller. The feasibility...

  15. Singular characteristic tracking algorithm for improved solution accuracy of the discrete ordinates method with isotropic scattering

    Duo, J. I.; Azmy, Y. Y.

    2007-01-01

    A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)

  16. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
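    A common submaximal prediction approach, shown here as a generic sketch and not NASA's protocol: VO2 rises roughly linearly with heart rate, so one fits VO2 against HR at submaximal stages and extrapolates to an age-predicted maximal heart rate (the 220 − age estimate is itself a rough population value and a known source of the individual error discussed above):

```python
# Least-squares line through submaximal (HR, VO2) stage data, extrapolated to
# the age-predicted maximal heart rate. All stage values are illustrative.

def predict_vo2pk(hr, vo2, age):
    n = len(hr)
    mean_hr = sum(hr) / n
    mean_vo2 = sum(vo2) / n
    slope = (sum((h - mean_hr) * (v - mean_vo2) for h, v in zip(hr, vo2))
             / sum((h - mean_hr) ** 2 for h in hr))
    intercept = mean_vo2 - slope * mean_hr
    hr_max = 220 - age           # population estimate, not individual truth
    return slope * hr_max + intercept

# Three submaximal stages: HR (bpm) vs. measured VO2 (L/min).
vo2pk = predict_vo2pk(hr=[110, 130, 150], vo2=[1.5, 2.0, 2.5], age=40)
```

    The extrapolation step is exactly where individual error enters: any mismatch between true and age-predicted HRmax propagates linearly into the VO2pk estimate.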

  17. Improved accuracy of cell surface shaving proteomics in Staphylococcus aureus using a false-positive control

    Solis, Nestor; Larsen, Martin Røssel; Cordwell, Stuart J

    2010-01-01

    Proteolytic treatment of intact bacterial cells is an ideal means for identifying surface-exposed peptide epitopes and has potential for the discovery of novel vaccine targets. Cell stability during such treatment, however, may become compromised and result in the release of intracellular proteins...... that complicate the final analysis. Staphylococcus aureus is a major human pathogen, causing community and hospital-acquired infections, and is a serious healthcare concern due to the increasing prevalence of multiple antibiotic resistances amongst clinical isolates. We employed a cell surface "shaving" technique...... to trypsin and three identified in the control. The use of a subtracted false-positive strategy improved enrichment of surface-exposed peptides in the trypsin data set to approximately 80% (124/155 peptides). Predominant surface proteins were those associated with methicillin resistance-surface protein SACOL...

  18. Improving ECG classification accuracy using an ensemble of neural network modules.

    Mehrdad Javadi

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
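    The core modification is easy to show in code; the following is a minimal sketch in which the base "classifiers" are toy threshold rules standing in for the paper's neural-network modules:

```python
# Modified Stacked Generalization: the meta-level combiner is trained on the
# base classifiers' outputs *concatenated with the raw input pattern*, rather
# than on the base outputs alone.

def base_predictions(classifiers, x):
    """Scores from each base classifier for input vector x."""
    return [clf(x) for clf in classifiers]

def meta_features(classifiers, x, include_input=True):
    """Conventional stacking uses base outputs only; the modified variant
    appends the input pattern so the combiner also sees the input space."""
    feats = base_predictions(classifiers, x)
    if include_input:
        feats = feats + list(x)
    return feats

# Two toy base classifiers on a 2-D input (illustrative stand-ins).
clf_a = lambda x: 1.0 if x[0] > 0 else 0.0
clf_b = lambda x: 1.0 if x[1] > 0 else 0.0

x = [0.5, -1.2]
conventional = meta_features([clf_a, clf_b], x, include_input=False)
modified = meta_features([clf_a, clf_b], x)
```

    The combiner (any trainable model) is then fit on `modified` vectors; everything else in the stacking pipeline is unchanged.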

  19. Improving the Accuracy of a Heliocentric Potential (HCP) Prediction Model for the Aviation Radiation Dose

    Junga Hwang

    2016-12-01

    The space radiation dose over air routes including polar routes should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are given at a one month delay on the FAA official webpage, which makes it difficult to obtain real-time information on the aviation route dose. In order to overcome this critical limitation regarding the time delay for space weather customers, we developed a HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method to derive the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. Additionally, we also derived the HCP prediction algorithm in this paper by using ground neutron counts. With the compensation stemming from the use of ground neutron count data, the newly developed HCP prediction model was improved.

  20. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces an equivalent system of differential equations to a system in a given model. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  1. Improvement of Accuracy in Flow Immunosensor System by Introduction of Poly-2-[3-(methacryloylamino)propylammonio]ethyl 3-aminopropyl Phosphate

    Yusuke Fuchiwaki

    2011-01-01

    In order to improve the accuracy of immunosensor systems, poly-2-[3-(methacryloylamino)propylammonio]ethyl 3-aminopropyl phosphate (poly-3MAm3AP), which includes both phosphorylcholine and amino groups, was synthesized and applied to the preparation of antibody-immobilized beads. Acting as an antibody-immobilizing material, poly-3MAm3AP is expected to significantly lower nonspecific adsorption due to the presence of the phosphorylcholine group and recognize large numbers of analytes due to the increase in antibody-immobilizing sites. The elimination of nonspecific adsorption was compared between the formation of a blocking layer on antibody-immobilized beads and the introduction of a material to combine antibody with beads. Determination with specific and nonspecific antibodies was then investigated for the estimation of signal-to-noise ratio. Signal intensities with superior signal-to-noise ratios were obtained when poly-3MAm3AP was introduced. This may be due to the increase in antibody-immobilizing sites and the extended space for antigen-antibody interaction resulting from the electrostatic repulsion of poly-3MAm3AP. Thus, the application of poly-3MAm3AP coatings to immunoassay beads was able to improve the accuracy of flow immunosensor systems.

  2. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  3. Improving wellbore position accuracy of horizontal wells by using a continuous inclination measurement from a near bit inclination MWD sensor

    Berger, P. E.; Sele, R. [Baker Hughes INTEQ (United States)

    1998-12-31

    Wellbore position calculations are typically performed by measuring azimuth and inclination at 10 to 30 meter intervals and using interpolation techniques to determine the borehole position between survey stations. The input parameters are measured depth (MD), azimuth and inclination, where the latter two are measured with an MWD tool. Output parameters are the geometric coordinates: true vertical depth (TVD), north and east. Improving the accuracy of the inclination measurement reduces the uncertainty of the calculated TVD value, resulting in increased confidence in wellbore position. Significant improvements in quality control can be achieved by using multiple sensors. This paper describes a set of quality control parameters that can be used to verify individual sensor performance and a method for calculating TVD uncertainty in horizontal wells, using a single sensor or a combination of sensors. 6 refs., 5 figs.
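    The station-to-station interpolation such calculations rely on is typically the industry-standard minimum-curvature method; a generic sketch (not the paper's multi-sensor approach), with angles in degrees:

```python
# Minimum-curvature step between two survey stations (MD, inclination,
# azimuth): the well path is assumed to be a circular arc, and the dogleg
# angle beta sets a ratio factor that scales the straight-line average.

import math

def min_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    i1, i2 = math.radians(inc1), math.radians(inc2)
    a1, a2 = math.radians(azi1), math.radians(azi2)
    cos_beta = (math.cos(i2 - i1)
                - math.sin(i1) * math.sin(i2) * (1 - math.cos(a2 - a1)))
    beta = math.acos(max(-1.0, min(1.0, cos_beta)))      # dogleg angle
    rf = 1.0 if beta < 1e-9 else (2.0 / beta) * math.tan(beta / 2.0)
    half = (md2 - md1) / 2.0
    d_tvd = half * (math.cos(i1) + math.cos(i2)) * rf
    d_north = half * (math.sin(i1) * math.cos(a1)
                      + math.sin(i2) * math.cos(a2)) * rf
    d_east = half * (math.sin(i1) * math.sin(a1)
                     + math.sin(i2) * math.sin(a2)) * rf
    return d_tvd, d_north, d_east

# Vertical hole: 30 m of MD at 0 deg inclination is 30 m of TVD.
d_tvd, d_n, d_e = min_curvature_step(0, 0, 0, 30, 0, 0)
```

    Because cos(inclination) multiplies the TVD increment, small inclination errors matter most in near-horizontal sections, which is exactly where the paper argues for continuous near-bit inclination data.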

  4. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of T behavior in the environment.

  5. Multi-saline sample distillation apparatus for hydrogen isotope analyses: design and accuracy. Water-resources investigations

    Hassan, A.A.

    1981-04-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 degrees C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated.

  6. Improving accuracy and reliability of 186-keV measurements for unattended enrichment monitoring

    Ianakiev, Kiril D.; Boyer, Brian D.; Swinhoe, Martyn T.; Moss, Cal E.; Goda, Joetta M.; Favalli, Andrea; Lombardi, Marcie; Paffet, Mark T.; Hill, Thomas R.; MacArthur, Duncan W.; Smith, Morag K.

    2010-01-01

    Improving the quality of safeguards measurements at Gas Centrifuge Enrichment Plants (GCEPs), whilst reducing the inspection effort, is an important objective given the number of existing and new plants that need to be safeguarded. A useful tool in many safeguards approaches is the on-line monitoring of enrichment in process pipes. One aspect of this measurement is a simple, reliable and precise passive measurement of the 186-keV line from 235U. (The other information required is the amount of gas in the pipe. This can be obtained by transmission measurements or pressure measurements). In this paper we describe our research efforts towards such a passive measurement system. The system includes redundant measurements of the 186-keV line from the gas and separately from the wall deposits. The design also includes measures to reduce the effect of the potentially important background. Such an approach would practically eliminate false alarms and can maintain the operation of the system even with a hardware malfunction in one of the channels. The work involves Monte Carlo modeling and the construction of a proof-of-principle prototype. We will carry out experimental tests with UF6 gas in pipes with and without deposits in order to demonstrate the deposit correction.

  7. Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values

    Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru

    Extracting attribute-values related to entities from web texts is an important step in numerous web related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data -- often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which, attribute-values are represented by nodes, and the edge that connects two nodes in the graph represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
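    The graph step can be sketched with a maximum spanning tree computed by Kruskal's algorithm on relatedness weights; the paper additionally constrains the tree to keep exactly one value per attribute type, which this illustrative sketch (with invented entities and weights) omits for brevity:

```python
# Maximum spanning tree over candidate attribute-values: sort edges by
# descending relatedness and greedily add any edge that joins two components
# (union-find with path compression detects cycles).

def max_spanning_tree(nodes, weighted_edges):
    """weighted_edges: (u, v, relatedness). Returns the list of kept edges."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path compression
            n = parent[n]
        return n

    tree = []
    for u, v, w in sorted(weighted_edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

edges = [("Google", "software engineer", 0.9),
         ("Google", "violinist", 0.2),
         ("software engineer", "Mountain View", 0.8),
         ("violinist", "Mountain View", 0.1)]
nodes = {"Google", "software engineer", "violinist", "Mountain View"}
tree = max_spanning_tree(nodes, edges)
```

    Strongly related candidates (here the employer/occupation/location cluster) survive into the tree, while weakly related noise edges are the first to be rejected as cycle-closers.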

  8. Improving the network infeed accuracy of non-dispatchable generators with energy storage devices

    Koeppel, Gaudenz; Korpaas, Magnus

    2008-01-01

    The power output of generators based on renewable energy sources is often difficult to predict due to the non-deterministic behaviour of the energy source. Particularly in the case of wind turbines this leads to unpredicted line loading and requires balancing energy, at relatively high costs, depending on market structures. Consequently, the income from the production from such non-dispatchable generators can be significantly reduced by the penalty costs incurred. This paper investigates the potential of operating an energy storage device in parallel with the non-dispatchable generator in order to compensate the inaccuracies of the forecasted infeed and to avoid infeed deviations. A time series based simulation methodology is discussed, suitable for any type of non-dispatchable generator. The methodology contains a procedure for simulating different forecast errors, applying an exponentially weighted moving average approach. Analysis procedures and system performance indices are introduced for the evaluation of the configuration's performance. The applicability is shown in two case studies, using measurement data from a wind turbine and from a photovoltaic system. Both case studies show that the suggested configuration considerably improves the reliability or dependability of the network infeed, in turn reducing the demand for balancing energy and back-up generation. The relation between forecast error magnitude and required energy capacity is identified and the coherence of the time series analysis is discussed. (author)
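    The compensation idea reads directly as code; a toy sketch with illustrative numbers, using an exponentially weighted moving average as the stand-in forecast rather than the paper's full error-simulation procedure:

```python
# The storage absorbs the difference between actual generation and the
# forecast infeed, subject to its state of charge (soc) and energy capacity,
# so the grid sees something closer to the forecast value.

def ewma_forecast(history, alpha=0.3):
    """Exponentially weighted moving average of past output."""
    f = history[0]
    for x in history[1:]:
        f = alpha * x + (1 - alpha) * f
    return f

def dispatch_with_storage(actual, forecast, soc, capacity):
    """Return (grid_infeed, new_soc): storage trims the forecast error."""
    error = actual - forecast            # surplus (>0) or shortfall (<0)
    if error > 0:                        # charge the surplus, up to capacity
        stored = min(error, capacity - soc)
    else:                                # discharge to cover the shortfall
        stored = max(error, -soc)
    return actual - stored, soc + stored

wind = [10.0, 12.0, 9.0, 11.0, 15.0]     # MW, illustrative interval averages
f = ewma_forecast(wind[:-1])             # forecast for the last interval
infeed, soc = dispatch_with_storage(wind[-1], f, soc=2.0, capacity=5.0)
```

    When the forecast error exceeds the available storage headroom, as here, the residual deviation still reaches the grid, which is the relation between forecast error magnitude and required energy capacity that the paper quantifies.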

  9. Radiotherapy dosimetry audit: three decades of improving standards and accuracy in UK clinical practice and trials.

    Clark, Catharine H; Aird, Edwin G A; Bolton, Steve; Miles, Elizabeth A; Nisbet, Andrew; Snaith, Julia A D; Thomas, Russell A S; Venables, Karen; Thwaites, David I

    2015-01-01

    Dosimetry audit plays an important role in the development and safety of radiotherapy. National and large scale audits are able to set, maintain and improve standards, as well as having the potential to identify issues which may cause harm to patients. They can support implementation of complex techniques and can facilitate awareness and understanding of any issues which may exist by benchmarking centres with similar equipment. This review examines the development of dosimetry audit in the UK over the past 30 years, including the involvement of the UK in international audits. A summary of audit results is given, with an overview of methodologies employed and lessons learnt. Recent and forthcoming more complex audits are considered, with a focus on future needs including the arrival of proton therapy in the UK and other advanced techniques such as four-dimensional radiotherapy delivery and verification, stereotactic radiotherapy and MR linear accelerators. The work of the main quality assurance and auditing bodies is discussed, including how they are working together to streamline audit and to ensure that all radiotherapy centres are involved. Undertaking regular external audit motivates centres to modernize and develop techniques and provides assurance, not only that radiotherapy is planned and delivered accurately but also that the patient dose delivered is as prescribed.

  10. RAPID COMMUNICATION: Improving prediction accuracy of GPS satellite clocks with periodic variation behaviour

    Heo, Youn Jeong; Cho, Jeongho; Heo, Moon Beom

    2010-07-01

    The broadcast ephemeris and IGS ultra-rapid predicted (IGU-P) products are primarily available for use in real-time GPS applications. IGU orbit precision has improved remarkably since late 2007, but the IGU-P clock products have not shown acceptably high prediction performance. One reason is that satellite atomic clocks in space are easily influenced by factors such as temperature and environment, which leads to complicated behaviour like periodic variations that are not sufficiently described by conventional models. A more reliable prediction model is thus proposed in this paper, intended particularly to describe this periodic variation behaviour satisfactorily. The proposed prediction model for satellite clocks adds cyclic terms to capture the periodic effects and adopts delay coordinate embedding, which offers the possibility of accessing linear or nonlinear coupling characteristics of satellite behaviour. The simulation results show that the proposed prediction model outperforms the IGU-P solutions at least on a daily basis.
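    The cyclic-term idea can be sketched with a least-squares fit of bias, drift, and one harmonic; this uses synthetic noiseless data and an assumed half-day period (typical of GPS orbital effects), not the paper's full embedding model:

```python
# Fit clock = a + b*t + c*sin(2πt/T) + d*cos(2πt/T) by least squares, then
# extrapolate forward. The sin/cos pair captures amplitude and phase of the
# periodic variation that a plain polynomial model misses.

import numpy as np

period = 0.5                                   # days (assumed)
t = np.linspace(0, 2, 97)                      # two days of epochs
true_clock = 5.0 + 3.0 * t + 0.4 * np.sin(2 * np.pi * t / period)

A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / period),
                     np.cos(2 * np.pi * t / period)])
coef, *_ = np.linalg.lstsq(A, true_clock, rcond=None)

t_pred = 2.25                                  # six hours ahead
pred = (coef[0] + coef[1] * t_pred
        + coef[2] * np.sin(2 * np.pi * t_pred / period)
        + coef[3] * np.cos(2 * np.pi * t_pred / period))
truth = 5.0 + 3.0 * t_pred + 0.4 * np.sin(2 * np.pi * t_pred / period)
```

    On real clock data the period(s) must be identified first (e.g. from a spectrum of the fit residuals), and noise limits how far such a fit can be extrapolated.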

  11. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Michael F Sloma

    2017-11-01

    Full Text Available Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.

  12. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs.

    Sloma, Michael F; Mathews, David H

    2017-11-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package.
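Downstream of the partition function described above, the idea that high-probability pairs are more trustworthy can be sketched as a toy post-processing step: accept candidate pairs in order of decreasing probability, skipping pairs that reuse a base or fall below a threshold. The probabilities here are invented for illustration, not output of the described algorithm:

```python
def select_pairs(pair_probs, threshold=0.5):
    """Greedily keep high-probability, non-conflicting base pairs."""
    chosen, used = [], set()
    for (i, j), p in sorted(pair_probs.items(), key=lambda kv: -kv[1]):
        if p >= threshold and i not in used and j not in used:
            chosen.append((i, j))
            used.update((i, j))
    return chosen

# Hypothetical pair probabilities (nucleotide index pairs -> probability).
probs = {(1, 20): 0.95, (2, 19): 0.90, (3, 18): 0.40, (2, 18): 0.60, (5, 15): 0.70}
pairs = select_pairs(probs)
```

The conflicting pair (2, 18) loses to the more probable (2, 19), and (3, 18) falls below the threshold, leaving [(1, 20), (2, 19), (5, 15)].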

  13. Improved Haptic Linear Lines for Better Movement Accuracy in Upper Limb Rehabilitation

    Joan De Boeck

    2012-01-01

    Full Text Available Force feedback has proven to be beneficial in the domain of robot-assisted rehabilitation. According to the patients' personal needs, the generated forces may either be used to assist, support, or oppose their movements. In our current research project, we focus on upper limb training for MS (multiple sclerosis) and CVA (cerebrovascular accident) patients, for which a basic building block used to implement many rehabilitation exercises was identified. This building block is a haptic linear path: a second-order continuous path, defined by a list of points in space. Earlier, different approaches to realizing haptic linear paths were investigated. In order to ensure good training quality, it is important that the haptic simulation is continuous up to the second derivative while the patient is forced to follow the path tightly, even when low or no guiding forces are provided. In this paper, we describe our best solution for these haptic linear paths, discuss the weaknesses found in practice, and propose and validate an improvement.
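A minimal sketch (not the authors' implementation) of the two ingredients the abstract describes: a path through a list of points that is continuous up to the second derivative (here a natural cubic spline) and a spring-like guiding force pulling the device toward the path. The point list and stiffness are illustrative assumptions:

```python
import numpy as np

def natural_cubic_spline(xs, ys):
    """Return a callable C2-continuous natural cubic spline through (xs, ys)."""
    n = len(xs) - 1
    h = np.diff(xs)
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0                      # natural ends: zero curvature
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)                  # second derivatives at the knots

    def spline(x):
        i = int(np.clip(np.searchsorted(xs, x) - 1, 0, n - 1))
        d, u = x - xs[i], xs[i + 1] - x
        return (M[i] * u**3 + M[i + 1] * d**3) / (6 * h[i]) \
            + (ys[i] / h[i] - M[i] * h[i] / 6) * u \
            + (ys[i + 1] / h[i] - M[i + 1] * h[i] / 6) * d
    return spline

# Path defined by a list of points; guiding force proportional to deviation.
xs = np.array([0.0, 1.0, 2.5, 4.0])
ys = np.array([0.0, 1.0, 0.5, 1.5])
path = natural_cubic_spline(xs, ys)
k_stiff = 200.0                                  # assumed guiding stiffness, N/m
device_x, device_y = 1.3, 0.2
force = -k_stiff * (device_y - float(path(device_x)))
```

Because the spline's second derivative is continuous across knots, the rendered force varies smoothly as the device moves along the path, which is the continuity requirement the abstract emphasizes.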

  14. Radiotherapy dosimetry audit: three decades of improving standards and accuracy in UK clinical practice and trials

    Aird, Edwin GA; Bolton, Steve; Miles, Elizabeth A; Nisbet, Andrew; Snaith, Julia AD; Thomas, Russell AS; Venables, Karen; Thwaites, David I

    2015-01-01

    Dosimetry audit plays an important role in the development and safety of radiotherapy. National and large scale audits are able to set, maintain and improve standards, as well as having the potential to identify issues which may cause harm to patients. They can support implementation of complex techniques and can facilitate awareness and understanding of any issues which may exist by benchmarking centres with similar equipment. This review examines the development of dosimetry audit in the UK over the past 30 years, including the involvement of the UK in international audits. A summary of audit results is given, with an overview of methodologies employed and lessons learnt. Recent and forthcoming more complex audits are considered, with a focus on future needs including the arrival of proton therapy in the UK and other advanced techniques such as four-dimensional radiotherapy delivery and verification, stereotactic radiotherapy and MR linear accelerators. The work of the main quality assurance and auditing bodies is discussed, including how they are working together to streamline audit and to ensure that all radiotherapy centres are involved. Undertaking regular external audit motivates centres to modernize and develop techniques and provides assurance, not only that radiotherapy is planned and delivered accurately but also that the patient dose delivered is as prescribed. PMID:26329469

  15. Improving the Preoperative Diagnostic Accuracy of Acute Appendicitis. Can Fecal Calprotectin Be Helpful?

    Peter C Ambe

    Full Text Available Is the patient really suffering from acute appendicitis? Right lower quadrant pain is the most common sign of acute appendicitis. However, many other bowel pathologies can mimic acute appendicitis. Due to fear of the consequences of a delayed or missed diagnosis, the indication for emergency appendectomy is liberally made. This has been shown to be associated with high rates of negative appendectomy, with a risk of potentially serious or lethal complications. Thus there is a need for better preoperative screening of patients with suspected appendicitis. This prospective single-center, single-blinded pilot study was conducted in the Department of Surgery at the HELIOS Universitätsklinikum Wuppertal, Germany. Calprotectin was measured in pre-therapeutic stool samples of patients presenting in the emergency department with pain in the right lower quadrant. Fecal calprotectin (FC) values were analyzed using commercially available ELISA kits. Cut-off values for FC were studied using the receiver-operator characteristic (ROC) curve. The area under the curve (AUC) was reported for each ROC curve. The mean FC value was 51.4 ± 118.8 μg/g in patients with acute appendicitis (AA), 320.9 ± 416.6 μg/g in patients with infectious enteritis, and 24.8 ± 27.4 μg/g in the control group. The ROC curve showed close to 80% specificity and sensitivity of FC for AA at a cut-off value of 51 μg/g (AUC = 0.7). The sensitivity of FC at this cut-off value is zero for enteritis, with a specificity of 35%. Fecal calprotectin could be helpful in screening patients with pain in the right lower quadrant for the presence of acute appendicitis or infectious enteritis, with the aim of facilitating clinical decision-making and reducing the rate of negative appendectomy.
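How a cut-off such as 51 μg/g is scored against labelled cases can be illustrated with synthetic data (the FC distributions below are assumptions for illustration, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
fc_cases = rng.normal(80.0, 30.0, size=100)      # positive class (illustrative)
fc_controls = rng.normal(25.0, 10.0, size=100)   # negative class (illustrative)

def sens_spec(cases, controls, cutoff):
    """Score the rule 'FC >= cutoff => positive'."""
    sensitivity = float(np.mean(cases >= cutoff))
    specificity = float(np.mean(controls < cutoff))
    return sensitivity, specificity

sens, spec = sens_spec(fc_cases, fc_controls, cutoff=51.0)
# AUC as the probability that a random case scores above a random control
# (the rank-statistic view of the ROC area).
auc = float(np.mean(fc_cases[:, None] > fc_controls[None, :]))
```

Sweeping the cutoff and plotting sensitivity against 1 − specificity traces the ROC curve the study used to choose its 51 μg/g threshold.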

  16. Improved technical success and radiation safety of adrenal vein sampling using rapid, semi-quantitative point-of-care cortisol measurement.

    Page, Michael M; Taranto, Mario; Ramsay, Duncan; van Schie, Greg; Glendenning, Paul; Gillett, Melissa J; Vasikaran, Samuel D

    2018-01-01

    Objective: Primary aldosteronism is a curable cause of hypertension which can be treated surgically or medically depending on the findings of adrenal vein sampling studies. Adrenal vein sampling studies are technically demanding, with a high failure rate in many centres. The use of intraprocedural cortisol measurement could improve the success rate of adrenal vein sampling but may be impracticable due to cost and effects on procedural duration. Design: Retrospective review of the success rates and complications of adrenal vein sampling procedures before and after commencement of point-of-care cortisol measurement using a novel single-use semi-quantitative measuring device for cortisol, the adrenal vein sampling Accuracy Kit. Routine use of the adrenal vein sampling Accuracy Kit device for intraprocedural measurement of cortisol commenced in 2016. Results: The technical success rate of adrenal vein sampling increased from 63% of 99 procedures to 90% of 48 procedures (P = 0.0007) after implementation of the adrenal vein sampling Accuracy Kit. Failure of right adrenal vein cannulation was the main reason for an unsuccessful study. Radiation dose decreased from 34.2 Gy·cm² (interquartile range, 15.8-85.9) to 15.7 Gy·cm² (6.9-47.3) (P = 0.009). No complications were noted, and implementation costs were minimal. Conclusions: Point-of-care cortisol measurement during adrenal vein sampling improved cannulation success rates and reduced radiation exposure. The use of the adrenal vein sampling Accuracy Kit is now standard practice at our centre.
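The intraprocedural check that rapid cortisol measurement enables can be sketched as a toy selectivity test: an adrenal-to-peripheral cortisol ratio above a chosen threshold indicates the catheter is correctly placed. The threshold of 3 and the cortisol values below are illustrative assumptions, not figures from the paper:

```python
def is_selective(adrenal_cortisol, peripheral_cortisol, threshold=3.0):
    """True if the adrenal vein sample looks selective (cannulation succeeded)."""
    return adrenal_cortisol / peripheral_cortisol >= threshold

# With a rapid semi-quantitative point-of-care reading, a failed check can be
# acted on immediately by repositioning the catheter during the same procedure.
right_ok = is_selective(adrenal_cortisol=4200.0, peripheral_cortisol=400.0)
left_ok = is_selective(adrenal_cortisol=900.0, peripheral_cortisol=400.0)
```

The value of the point-of-care device lies in making this check available within the procedure rather than days later from the laboratory.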

  17. Improved accuracy of component alignment with the implementation of image-free navigation in total knee arthroplasty.

    Rosenberger, Ralf E; Hoser, Christian; Quirbach, Sebastian; Attal, Rene; Hennerbichler, Alfred; Fink, Christian

    2008-03-01

    Accuracy of implant positioning and reconstruction of the mechanical leg axis are major requirements for achieving good long-term results in total knee arthroplasty (TKA). The purpose of the present study was to determine whether image-free computer navigation technology has the potential to improve the accuracy of component alignment, immediately and consistently, in TKA cohorts of experienced surgeons. One hundred patients with primary arthritis of the knee underwent unilateral total knee arthroplasty. A cohort of 50 TKAs implanted with conventional instrumentation was directly followed by a cohort of the very first 50 computer-assisted TKAs. All surgeries were performed by two senior surgeons. All patients received the Zimmer NexGen total knee prosthesis (Zimmer Inc., Warsaw, IN, USA). There was no variability regarding surgeons or surgical technique, except for the use of the navigation system (StealthStation Treon plus, Medtronic Inc., Minneapolis, MN, USA). Accuracy of implant positioning was measured on postoperative long-leg standing radiographs and standard lateral X-rays with regard to the valgus angle and the coronal and sagittal component angles. In addition, preoperative deformities of the mechanical leg axis, tourniquet time, age, and gender were correlated. Statistical analyses were performed using the SPSS 15.0 (SPSS Inc., Chicago, IL, USA) software package. Independent t-tests, with significance set at P < 0.05, were used to compare alignment between the two cohorts. To compare the rate of optimally implanted prostheses between the two groups we used the chi-squared test. The average postoperative radiological frontal mechanical alignment was 1.88° of varus (range 6.1° of valgus to 10.1° of varus; SD 3.68°) in the conventional cohort and 0.28° of varus (range 3.7° of valgus to 6.0° of varus; SD 1.97°) in the navigated cohort. 
Including all criteria for optimal implant alignment, 16 cases (32%) in the conventional cohort and 31 cases (62%) in the navigated cohort were optimally aligned.
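The cohort comparison reported above can be reproduced with a plain Pearson chi-squared test, assuming the truncated count of 31 refers to optimally aligned cases in the navigated cohort of 50; 3.841 is the 5% critical value for one degree of freedom:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: conventional (16 optimal / 34 not), navigated (31 optimal / 19 not).
stat = chi2_2x2(16, 34, 31, 19)
significant = stat > 3.841
```

The statistic comes out near 9, well above the critical value, consistent with the navigated cohort's higher rate of optimal alignment being statistically significant.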

  18. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
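An illustrative stand-in for the point-by-point correction described above: predict each lidar point's elevation error from waveform- and location-derived features with a nonparametric regressor, then subtract the prediction. A simple k-nearest-neighbour mean replaces the paper's stochastic gradient boosting, and the features and data are synthetic assumptions:

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=5):
    """Predict each query's error as the mean error of its k nearest neighbours."""
    preds = []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)
        preds.append(train_y[np.argsort(d)[:k]].mean())
    return np.array(preds)

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(300, 2))   # e.g. waveform width, shore distance
true_err = 0.25 * X[:, 0] + 0.1 * np.sin(4 * X[:, 1])    # vegetation bias (m)
y = true_err + rng.normal(0.0, 0.01, size=300)           # ground-truthed errors

X_new = rng.uniform(0.0, 1.0, size=(50, 2))
pred_err = knn_predict(X, y, X_new)
rmse = float(np.sqrt(np.mean(
    (pred_err - (0.25 * X_new[:, 0] + 0.1 * np.sin(4 * X_new[:, 1]))) ** 2)))
# Corrected DEM elevation = raw lidar elevation minus the predicted error.
```

The key property, shared with the paper's approach, is that the correction varies point by point with the features rather than applying one constant offset per vegetation class.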

  19. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
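A toy interpolated Markov model in the spirit described above: the probability of each base mixes estimates conditioned on contexts of length 0..K, so longer contexts are used where training data exist and shorter ones take over otherwise. The training sequences, K, and mixing weight are illustrative assumptions, not the paper's trained model:

```python
from collections import defaultdict
import math

def train_counts(seqs, K):
    """Count each character's occurrences after every context of length 0..K."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for i in range(len(s)):
            for k in range(min(K, i) + 1):
                counts[s[i - k:i]][s[i]] += 1
    return counts

def imm_log_prob(seq, counts, K, lam=0.6, alphabet="ACGT"):
    """Log-probability of seq under the interpolated model (add-one smoothed)."""
    total = 0.0
    for i in range(len(seq)):
        p, weight = 0.0, 1.0
        for k in range(min(K, i), -1, -1):        # longest context first
            ctx = counts.get(seq[i - k:i], {})
            denom = sum(ctx.values()) + len(alphabet)
            p_k = (ctx.get(seq[i], 0) + 1) / denom
            w = weight * (lam if k > 0 else 1.0)  # order 0 takes the remainder
            p += w * p_k
            weight *= 1.0 - lam
        total += math.log(p)
    return total

crms = ["ATATTTAATATATTAAAT", "TTAATATTTATAATTTAT"]   # toy 'known CRMs'
counts = train_counts(crms, K=2)
score_like = imm_log_prob("ATTATATT", counts, K=2)     # resembles the CRMs
score_unlike = imm_log_prob("GCGCGCGC", counts, K=2)   # does not
```

Candidate windows scoring close to the known CRMs under such a model are the 'motif-blind' predictions; no TF-binding motifs enter the computation.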

  20. Accuracy of intermediate dose of furosemide injection to improve multidetector row CT urography

    Roy, Catherine; Jeantroux, Jeremy; Irani, Farah G.; Sauer, Benoit; Lang, Herve; Saussine, Christian

    2008-01-01

    Objective: Evaluate the usefulness of intermediate dose furosemide to improve visualization of the intrarenal collecting system and ureter using MDCTU. Materials and methods: Two groups of 100 patients without urinary tract disease or major abdominal pathology underwent MDCTU. Group I (various abdominal indications) was performed without any additional preparation and Group II (suspicion of urinary tract disease) 10 min after injection of furosemide (20 mg). MIP images of the excretory phase were post-processed. Maximal short-axis diameter of the pelvis and ureter were measured on axial images for all phases. Visualization of the collecting system wall and the identification of the whole ureter were assessed. Results: Mean pelvic diameter before contrast was (7.4 mm, S.D. ± 2.7; 13.4 mm, S.D. ± 4.1), on cortico-medullary phase (8.4 mm, S.D. ± 4.2; 14.3 mm, S.D. ± 4), on nephrographic phase (8.1 mm, S.D. ± 2.5; 14.8 mm, S.D. ± 4) and on excretory phase (9.7 mm, S.D. ± 3.4; 14.9 mm, S.D. ± 4.5), respectively, for Groups I and II. Intrarenal collecting system wall was clearly identified on both corticomedullary and nephrographic phases in 91% of Group II against 20% of Group I. Opacification of the entire ureter was excellent on excretory phase in 96% of Group II against 13% of Group I. The difference between the mean values for the two groups was statistically significant for all phases (p < 10⁻⁹). Conclusion: Intermediate-dose furosemide (20 mg) before MDCTU is a very simple add-on for accurate depiction of pelvicalyceal details and collecting system wall without artefacts. The procedure is associated with a constant and complete visualisation of the entire ureter.

  1. Accuracy of intermediate dose of furosemide injection to improve multidetector row CT urography

    Roy, Catherine [Department of Radiology B, Universitary Hospital of Strasbourg-Civil Hospital, 1, Place de l' hopital BP 426, 67091 Strasbourg Cedex (France)], E-mail: catherine.roy@chru-strasbourg.fr; Jeantroux, Jeremy; Irani, Farah G.; Sauer, Benoit [Department of Radiology B, Universitary Hospital of Strasbourg-Civil Hospital, 1, Place de l' hopital BP 426, 67091 Strasbourg Cedex (France); Lang, Herve; Saussine, Christian [Department of Urology, Universitary Hospital of Strasbourg-Civil Hospital, 1, Place de l' hopital BP 426, 67091 Strasbourg Cedex (France)

    2008-05-15

    Objective: Evaluate the usefulness of intermediate dose furosemide to improve visualization of the intrarenal collecting system and ureter using MDCTU. Materials and methods: Two groups of 100 patients without urinary tract disease or major abdominal pathology underwent MDCTU. Group I (various abdominal indications) was performed without any additional preparation and Group II (suspicion of urinary tract disease) 10 min after injection of furosemide (20 mg). MIP images of the excretory phase were post-processed. Maximal short-axis diameter of the pelvis and ureter were measured on axial images for all phases. Visualization of the collecting system wall and the identification of the whole ureter were assessed. Results: Mean pelvic diameter before contrast was (7.4 mm, S.D. ± 2.7; 13.4 mm, S.D. ± 4.1), on cortico-medullary phase (8.4 mm, S.D. ± 4.2; 14.3 mm, S.D. ± 4), on nephrographic phase (8.1 mm, S.D. ± 2.5; 14.8 mm, S.D. ± 4) and on excretory phase (9.7 mm, S.D. ± 3.4; 14.9 mm, S.D. ± 4.5), respectively, for Groups I and II. Intrarenal collecting system wall was clearly identified on both corticomedullary and nephrographic phases in 91% of Group II against 20% of Group I. Opacification of the entire ureter was excellent on excretory phase in 96% of Group II against 13% of Group I. The difference between the mean values for the two groups was statistically significant for all phases (p < 10⁻⁹). Conclusion: Intermediate-dose furosemide (20 mg) before MDCTU is a very simple add-on for accurate depiction of pelvicalyceal details and collecting system wall without artefacts. The procedure is associated with a constant and complete visualisation of the entire ureter.

  2. Improvements in PIXE analysis of hourly particulate matter samples

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Lucarelli, F. [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Nava, S. [National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Giannoni, M. [National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy); Carraresi, L. [Department of Physics and Astronomy, University of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); National Institute of Nuclear Physics (INFN), Division of Florence, Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Prati, P. [Department of Physics, University of Genoa and INFN Division of Genoa, Via Dodecaneso 33, 16146 Genoa (Italy); Vecchi, R. [Department of Physics, Università degli Studi di Milano and INFN Division of Milan, Via Celoria 16, 20133 Milan (Italy)

    2015-11-15

    Most air quality studies on particulate matter (PM) are based on 24-h averaged data; however, many PM emissions as well as their atmospheric dilution processes change within a few hours. Samplings of PM with 1-h resolution can be performed by the streaker sampler (PIXE International Corporation), which is designed to separate the fine (aerodynamic diameter less than 2.5 μm) and the coarse (aerodynamic diameter between 2.5 and 10 μm) fractions of PM. These samples are efficiently analyzed by Particle Induced X-ray Emission (PIXE) at the LABEC laboratory of INFN in Florence (Italy), equipped with a 3 MV Tandetron accelerator, thanks to an optimized external-beam set-up, a convenient choice of the beam energy and suitable collecting substrates. A detailed description of the adopted set-up and results from a methodological study on the detection limits for the selection of the optimal beam energy are shown; the outcomes of the research on alternative collecting substrates, which produce a lower background during the measurements, and with lower contaminations, are also discussed.

  3. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework

    Toh Hiroyuki

    2008-04-01

    Full Text Available Abstract Background Structural alignment of RNAs is becoming important, since the discovery of functional non-coding RNAs (ncRNAs). Recent studies, mainly based on various approximations of the Sankoff algorithm, have resulted in considerable improvement in the accuracy of pairwise structural alignment. In contrast, for the cases with more than two sequences, the practical merit of structural alignment remains unclear as compared to traditional sequence-based methods, although the importance of multiple structural alignment is widely recognized. Results We took a different approach from a straightforward extension of the Sankoff algorithm to the multiple alignments from the viewpoints of accuracy and time complexity. As a new option of the MAFFT alignment program, we developed a multiple RNA alignment framework, X-INS-i, which builds a multiple alignment with an iterative method incorporating structural information through two components: (1) pairwise structural alignments by an external pairwise alignment method such as SCARNA or LaRA and (2) a new objective function, Four-way Consistency, derived from the base-pairing probability of every sub-aligned group at every multiple alignment stage. Conclusion The BRAliBASE benchmark showed that X-INS-i outperforms other methods currently available in the sum-of-pairs score (SPS) criterion. As a basis for predicting common secondary structure, the accuracy of the present method is comparable to or rather higher than those of the current leading methods such as RNA Sampler. The X-INS-i framework can be used for building a multiple RNA alignment from any combination of algorithms for pairwise RNA alignment and base-pairing probability. The source code is available at the webpage found in the Availability and requirements section.

  4. Use of the recursive-rule extraction algorithm with continuous attributes to improve diagnostic accuracy in thyroid disease

    Yoichi Hayashi

    Full Text Available Thyroid diseases, which often lead to thyroid dysfunction involving either hypo- or hyperthyroidism, affect hundreds of millions of people worldwide, many of whom remain undiagnosed; however, diagnosis is difficult because symptoms are similar to those seen in a number of other conditions. The objective of this study was to assess the effectiveness of the Recursive-Rule Extraction (Re-RX) algorithm with continuous attributes (Continuous Re-RX) in extracting highly accurate, concise, and interpretable classification rules for the diagnosis of thyroid disease. We used the 7200-sample Thyroid dataset from the University of California Irvine Machine Learning Repository, a large and highly imbalanced dataset that comprises both discrete and continuous attributes. We trained the dataset using Continuous Re-RX, and after obtaining the maximum training and test accuracies, the number of extracted rules, and the average number of antecedents, we compared the results with those of other extraction methods. Our results suggested that Continuous Re-RX not only achieved the highest accuracy for diagnosing thyroid disease compared with the other methods, but also provided simple, concise, and interpretable rules. Based on these results, we believe that the use of Continuous Re-RX in machine learning may assist healthcare professionals in the diagnosis of thyroid disease. Keywords: Thyroid disease diagnosis, Re-RX algorithm, Rule extraction, Decision tree

  5. Improved gamma spectrometry of very low level radioactive samples

    Pineira, T.H.

    1989-01-01

    Today, many laboratories face the need to perform measurements of very low level activities using gamma spectroscopy. The techniques in use are identical to those applicable to higher levels of activity, but better adapted materials must be used and the measurement conditions modified to minimize the background noise around the detector. This paper presents the design of a very low level activity laboratory, addressing the laboratory itself, the measuring chamber and the detector. The laboratory is constructed underground using specially selected construction materials. Its atmosphere is filtered and recycled with frequent changeovers, and the make-up fresh air, supplied at a reduced rate, is sampled high above ground and filtered.

  6. Accuracy improvements of gyro-based measurement-while-drilling surveying instruments by a laser testing method

    Li, Rong; Zhao, Jianhui; Li, Fan

    2009-07-01

    A gyroscope used as a surveying sensor in the oil industry has been proposed as a good technique for measurement-while-drilling (MWD) to provide real-time monitoring of the position and the orientation of the bottom hole assembly (BHA). However, drifts in the measurements provided by the gyroscope might be prohibitive for long-term utilization of the sensor. Usual methods introduced to limit these drifts, such as the zero velocity update procedure (ZUPT), tend to be time-consuming and of limited effect. This study explored an in-drilling dynamic alignment (IDA) method for gyroscope-based MWD. During a directional drilling process, there are some minutes in the rotary drilling mode when the drill bit combined with the drill pipe is rotated about the spin axis at a certain speed. This speed can be measured and used to determine and limit some of the gyroscope drifts that contribute greatly to the deterioration in long-term performance. A novel laser assembly on the wellhead is designed to count the rotation cycles of the drill pipe. With this provided angular velocity of the drill pipe, drifts of the gyroscope measurements are translated into another form that can be easily tested and compensated. This allows better and faster alignment and limits drifts during the navigation process, both of which reduce long-term navigation errors, thus improving the overall accuracy of an INS-based MWD system. This article concretely explores the novel device on the wellhead designed to test the rotation of the drill pipe. It is based on laser testing, which is simple and inexpensive, adding only a laser emitter to the existing drilling equipment. Theoretical simulations and analytical approximations exploring the IDA idea have shown improvement in the accuracy of overall navigation and a reduction in the time required to achieve convergence. Gyroscope accuracy along the spin axis is mainly improved. It is suggested to use the IDA idea in the rotary mode for alignment. Several other
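The core of the IDA idea can be sketched in a few lines: while the drill pipe rotates, the laser cycle counter provides an independent reference rate, and the average difference between the gyro reading and that reference estimates the gyro's constant bias drift, which is then subtracted. All numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
true_rate = 60.0      # deg/s, from counting laser-gated drill-pipe revolutions
bias = 0.8            # deg/s, unknown gyro drift to be estimated
gyro = true_rate + bias + rng.normal(0.0, 0.3, size=600)   # 1 min at 10 Hz

bias_est = float(np.mean(gyro) - true_rate)   # observed-minus-reference average
corrected = gyro - bias_est
```

Averaging over the rotary-mode interval suppresses the sensor noise, so the bias estimate converges without halting drilling, unlike a stationary ZUPT.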

  7. The requirement for proper storage of nuclear and related decommissioning samples to safeguard accuracy of tritium data.

    Kim, Daeji; Croudace, Ian W; Warwick, Phillip E

    2012-04-30

    Large volumes of potentially tritium-contaminated waste materials are generated during nuclear decommissioning that require accurate characterisation prior to final waste sentencing. The practice of initially determining a radionuclide waste fingerprint for materials from an operational area is often used to save time and money but tritium cannot be included because of its tendency to be chemically mobile. This mobility demands a specific measurement for tritium and also poses a challenge in terms of sampling, storage and reliable analysis. This study shows that the extent of any tritium redistribution during storage will depend on its form or speciation and the physical conditions of storage. Any weakly or moderately bound tritium (e.g. adsorbed water, waters of hydration or crystallisation) may be variably lost at temperatures over the range 100-300 °C whereas for more strongly bound tritium (e.g. chemically bound or held in mineral lattices) the liberation temperature can be delayed up to 800 °C. For tritium that is weakly held the emanation behaviour at different temperatures becomes particularly important. The degree of ³H loss and cross-contamination that can arise after sampling and before analysis can be reduced by appropriate storage. Storing samples in vapour tight containers at the point of sampling, the use of triple enclosures, segregating high activity samples and using a freezer all lead to good analytical practice. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Improvement of correlated sampling Monte Carlo methods for reactivity calculations

    Nakagawa, Masayuki; Asaoka, Takumi

    1978-01-01

    Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both unperturbed and perturbed systems are followed in a way to have a strong correlation between secondary neutrons in both the systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so as to be able to estimate also the statistical error of the calculated reactivity change. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has revealed itself more useful than the similar flight path method for the analysis of the control rod worth. (auth.)
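The variance payoff of correlated sampling can be shown with a minimal one-group slab model (an illustrative assumption, not the MORSE implementation): estimating the change in transmission probability caused by a small cross-section perturbation, once with a shared random stream and once with independent streams:

```python
import math
import random

random.seed(42)
N = 20000
sigma0, sigma1, T = 1.0, 1.05, 1.0   # unperturbed / perturbed total cross sections

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Correlated: one random number drives both histories ('identical flight path').
diffs_c = []
for _ in range(N):
    path = -math.log(random.random())          # optical path length sampled once
    diffs_c.append((1.0 if path / sigma1 > T else 0.0) -
                   (1.0 if path / sigma0 > T else 0.0))

# Uncorrelated: independent histories for each system.
diffs_u = []
for _ in range(N):
    t1 = 1.0 if -math.log(random.random()) / sigma1 > T else 0.0
    t0 = 1.0 if -math.log(random.random()) / sigma0 > T else 0.0
    diffs_u.append(t1 - t0)

mean_c, var_c = sum(diffs_c) / N, var(diffs_c)
mean_u, var_u = sum(diffs_u) / N, var(diffs_u)
exact = math.exp(-sigma1 * T) - math.exp(-sigma0 * T)   # analytic change
```

Both estimators are unbiased, but the correlated differences are nonzero only for the few histories whose sampled path falls in the narrow band the perturbation affects, so their variance is far smaller, which is exactly why correlated histories make small reactivity changes measurable.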

  9. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares)+AR (AutoRegressive) model in polar motion forecasting: the LS residuals are reasonable within the fitting interval but poor in the extrapolation interval; and the LS fitting residual sequence is non-linear, so it is unsuitable to forecast the residual sequence with an AR model established only on the residuals before the forecast epoch. In this paper, we address these two problems in two steps. First, restrictions are added to the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Secondly, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the modeling object of AR for the residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparisons with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for the 1-10 day polar motion forecast, show that the accuracy of the proposed model is competitive with the best internationally reported results.
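
    The baseline LS+AR idea that the paper modifies can be sketched as follows (illustrative only: the synthetic series, the harmonic model and the AR(1) order are assumptions, and the proposed endpoint constraints and residual-segment selection are omitted).

```python
import numpy as np

# Synthetic polar-motion-like series: linear trend + annual term + AR(1) noise
rng = np.random.default_rng(0)
t = np.arange(400, dtype=float)                  # days
noise = np.zeros(400)
for i in range(1, 400):
    noise[i] = 0.8 * noise[i - 1] + rng.normal(0.0, 0.5)
y = 0.01 * t + 5.0 * np.sin(2.0 * np.pi * t / 365.25) + noise

# Step 1: LS fit of trend + annual harmonic
w = 2.0 * np.pi / 365.25
A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# Step 2: AR(1) model fitted to the LS residuals
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

# Forecast = LS extrapolation + AR residual forecast
t_fut = np.arange(400, 410, dtype=float)
A_fut = np.column_stack([np.ones_like(t_fut), t_fut,
                         np.sin(w * t_fut), np.cos(w * t_fut)])
r = resid[-1]
forecast = []
for row in A_fut:
    r = phi * r                                   # AR(1) residual propagation
    forecast.append(row @ coef + r)
forecast = np.array(forecast)
```

The paper's modification targets exactly the weak points visible here: the LS extrapolation term and the choice of residual segment used to train the AR model.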

  10. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points – A Review

    Xiaoli Ding

    2009-02-01

    Full Text Available Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, one of the important factors in improving the accuracy of InSAR image co-registration, are reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie-point matching and measures of interferogram quality.
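
    The core of tie-point-based co-registration is estimating the local offset between master and slave images around each tie point, typically by maximizing a similarity measure over a search window. A minimal integer-pixel sketch using normalized cross-correlation (the patch size, search range and synthetic data are assumptions; operational InSAR processors refine such offsets to sub-pixel accuracy):

```python
import numpy as np

def ncc_offset(master, slave, search=5):
    """Integer-pixel co-registration offset around a tie point: the
    (row, col) shift of the slave sampling window that maximizes the
    normalized cross-correlation with the master patch."""
    h, w = master.shape
    m = master[search:h - search, search:w - search]
    m0 = (m - m.mean()) / m.std()
    best, best_score = (0, 0), -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            s = slave[search + dr:h - search + dr,
                      search + dc:w - search + dc]
            s0 = (s - s.mean()) / s.std()
            score = float((m0 * s0).mean())
            if score > best_score:
                best, best_score = (dr, dc), score
    return best

# Synthetic scene: the slave patch covers the same area shifted by (3, 2)
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
master = scene[10:42, 10:42]       # 32x32 patch around a tie point
slave = scene[13:45, 12:44]        # same area, offset by 3 rows, 2 columns
offset = ncc_offset(master, slave)  # shift to apply to the slave window
```

Repeating this at many well-distributed tie points, then fitting a warp through the recovered offsets, is what makes the tie-point interval, feature-point extraction and matching window size reviewed above so influential.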

  11. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tome, Wolfgang A. [Department of Human Oncology, University of Wisconsin-Madison, WI, 53792 (United States); Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC 3002 (Australia) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Human Oncology, University of Wisconsin-Madison, WI 53792 (United States); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia) and Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur (Malaysia); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Einstein Institute of Oncophysics, Albert Einstein College of Medicine of Yeshiva University, Bronx, New York 10461 (United States) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia)

    2012-08-15

    Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling' involved the tripling of the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis receiving up to 21 Gy.
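
    The underlying sampling effect can be reproduced with a toy model (purely illustrative, not the TomoTherapy dose engine): treat the dose to a small off-axis target as a discrete sum, over equally spaced gantry angles, of a narrow Gaussian angular profile whose width is set by the target size and off-axis distance. Coarse angular spacing then produces a large, phase-dependent deviation that tripling the sampling removes.

```python
import math

def dose(n_angles, phase_deg, sigma_deg):
    """Discrete rotational 'dose' to a small off-axis target: a sum over
    equally spaced gantry angles of a Gaussian angular profile of width
    sigma_deg, normalized so that the continuum limit equals 1.0."""
    h = 360.0 / n_angles
    total = 0.0
    for k in range(n_angles):
        theta = k * h + phase_deg
        d = (theta + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)
        total += math.exp(-0.5 * (d / sigma_deg) ** 2)
    return total * h / (sigma_deg * math.sqrt(2.0 * math.pi))

# Toy geometry: a 10 mm target at 16.5 cm off-axis subtends roughly
# atan(0.5 / 16.5) ~ 1.7 degrees of gantry arc.
SIGMA = 1.7

def worst_deviation(n_angles):
    # Scan the start phase: undersampling makes the result phase-dependent.
    return max(abs(1.0 - dose(n_angles, p / 10.0, SIGMA))
               for p in range(0, 3600, 5))

coarse = worst_deviation(51)    # ~7.1 degree spacing: large deficit possible
fine = worst_deviation(153)     # tripled sampling: deviation vanishes
```

The qualitative behaviour matches the abstract: the coarse sum can miss (or double-count) the narrow angular profile depending on where the samples fall, while tripling the angular sampling makes the discrete sum track the continuum dose.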

  12. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on convergence, robustness, and efficiency of the following subpixel level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guess for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel level accuracy in cases with small or large translation, deformation, or rotation.
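
    Once false matches are removed, an initial guess can be obtained by fitting a low-order deformation model to the matched feature pairs. A minimal sketch using a least-squares affine fit (the affine model and the synthetic data are assumptions; FB-IG itself uses ORB matching, the FG-GMM registration step and an RKHS deformation function):

```python
import numpy as np

def affine_initial_guess(src, dst):
    """Least-squares affine fit  dst ~ A @ src + t  from matched,
    outlier-free feature points. Returns the 2x2 matrix A and the
    translation t, usable as an initial guess for subpixel refinement."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    G = np.column_stack([src, np.ones(len(src))])   # rows: [x, y, 1]
    params, *_ = np.linalg.lstsq(G, dst, rcond=None)
    A = params[:2].T
    t = params[2]
    return A, t

# Synthetic check: known small stretch + shear + shift
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 100.0, size=(40, 2))
A_true = np.array([[1.02, -0.01], [0.015, 0.98]])
t_true = np.array([3.4, -1.2])
matched = pts @ A_true.T + t_true
A_est, t_est = affine_initial_guess(pts, matched)
```

With noise-free matches the fit recovers the deformation exactly; with real ORB matches, the quality of the outlier rejection determines how close this initial guess lands to the subpixel optimum.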

  13. General formula for on-axis sun-tracking system and its application in improving tracking accuracy of solar collector

    Chong, K.K.; Wong, C.W. [Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Off Jalan Genting Kelang, Setapak, 53300 Kuala Lumpur (Malaysia)

    2009-03-15

    Azimuth-elevation and tilt-roll tracking mechanisms are among the most commonly used sun-tracking methods for aiming the solar collector towards the sun at all times. For many decades, each of these two sun-tracking methods has had its own specific sun-tracking formula, and the two have not been interrelated. In this paper, the most general form of sun-tracking formula that embraces all the possible on-axis tracking methods is presented. The general sun-tracking formula not only provides a general mathematical solution, but, more significantly, can improve the sun-tracking accuracy by tackling the installation error of the solar collector. (author)
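
    For reference, the textbook azimuth-elevation relations are the special case that any general on-axis formula must reduce to for an ideally installed azimuth-elevation tracker (a sketch of those standard relations only, not the paper's general formula; the azimuth sign convention is an assumption):

```python
import math

def sun_azimuth_elevation(lat_deg, decl_deg, hour_angle_deg):
    """Azimuth (degrees clockwise from north) and elevation of the sun
    from site latitude, solar declination and hour angle (negative
    before solar noon). Standard spherical-astronomy relations."""
    lat = math.radians(lat_deg)
    dec = math.radians(decl_deg)
    ha = math.radians(hour_angle_deg)
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    el = math.asin(sin_el)
    cos_az = ((math.sin(dec) - sin_el * math.sin(lat))
              / (math.cos(el) * math.cos(lat)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle_deg > 0:        # afternoon: mirror into the western half
        az = 360.0 - az
    return az, math.degrees(el)

# Solar noon at 40 N with declination +20: elevation 90 - |40 - 20| = 70
# degrees, azimuth due south (180 degrees).
az, el = sun_azimuth_elevation(40.0, 20.0, 0.0)
```

An installation error (tilted azimuth axis, offset zero position) breaks these ideal relations, which is precisely what the paper's general formula is designed to absorb.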

  14. AST: an automated sequence-sampling method for improving the taxonomic diversity of gene phylogenetic trees.

    Zhou, Chan; Mao, Fenglou; Yin, Yanbin; Huang, Jinling; Gogarten, Johann Peter; Xu, Ying

    2014-01-01

    A challenge in phylogenetic inference of gene trees is how to properly sample a large pool of homologous sequences to derive a good representative subset of sequences. Such a need arises in various applications, e.g. when (1) accuracy-oriented phylogenetic reconstruction methods may not be able to deal with a large pool of sequences due to their high demand in computing resources; (2) applications analyzing a collection of gene trees may prefer to use trees with fewer operational taxonomic units (OTUs), for instance for the detection of horizontal gene transfer events by identifying phylogenetic conflicts; and (3) the pool of available sequences is biased towards extensively studied species. In the past, the creation of subsamples often relied on manual selection. Here we present an Automated sequence-Sampling method for improving the Taxonomic diversity of gene phylogenetic trees, AST, to obtain representative sequences that maximize the taxonomic diversity of the sampled sequences. To demonstrate the effectiveness of AST, we have tested it to solve four problems, namely, inference of the evolutionary histories of the small ribosomal subunit protein S5 of E. coli, 16S ribosomal RNAs and glycosyl-transferase gene family 8, and a study of ancient horizontal gene transfers from bacteria to plants. Our results show that the resolution of our computational results is almost as good as that of manual inference by domain experts, hence making the tool generally useful to phylogenetic studies by non-phylogeny specialists. The program is available at http://csbl.bmb.uga.edu/~zhouchan/AST.php.
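
    The diversity-maximizing selection at the heart of such subsampling can be sketched as a greedy maxmin rule (a generic stand-in: AST itself works from taxonomy labels and phylogenetic criteria, not a bare pairwise distance function):

```python
def diverse_subset(items, distance, k):
    """Greedy maxmin selection: repeatedly add the item farthest from
    the current subset, so the chosen representatives are maximally
    spread out under the given distance."""
    chosen = [items[0]]
    while len(chosen) < k:
        best = max((it for it in items if it not in chosen),
                   key=lambda it: min(distance(it, c) for c in chosen))
        chosen.append(best)
    return chosen

# Toy 1-D "sequence divergence": pick 3 representatives out of 6.
# The three tight clusters (0.x, 5.x, 10.0) each contribute one pick.
points = [0.0, 0.1, 0.2, 5.0, 5.1, 10.0]
picked = diverse_subset(points, lambda a, b: abs(a - b), 3)
```

The clustered toy data mirrors requirement (3) above: a pool biased towards a few heavily sampled groups still yields one representative per group rather than several near-duplicates.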

  15. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy.

    Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A

    2013-02-01

    for the weighted S-TPS-RPM. The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  16. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-01-01

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9–14 mm after rigid bone match to 0.9–4.0 mm, compared to a range of 1.1–9.1 mm with S-TPS-RPM of bladder alone and 0.9–9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  17. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A. [Department of Radiotherapy, Academic Medical Center, Meiberdreef 9, 1105 AZ Amsterdam (Netherlands); Bondar, L.; Zolnay, A. G.; Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2013-02-15

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  18. Patient-specific guides do not improve accuracy in total knee arthroplasty: a prospective randomized controlled trial.

    Victor, Jan; Dujardin, Jan; Vandenneucker, Hilde; Arnout, Nele; Bellemans, Johan

    2014-01-01

    Recently, patient-specific guides (PSGs) have been introduced, claiming a significant improvement in accuracy and reproducibility of component positioning in TKA. Despite intensive marketing by the manufacturers, this claim has not yet been confirmed in a controlled prospective trial. We (1) compared three-planar component alignment and overall coronal mechanical alignment between PSG and conventional instrumentation and (2) logged the need for applying changes in the suggested position of the PSG. In this randomized controlled trial, we enrolled 128 patients. In the PSG cohort, surgical navigation was used as an intraoperative control. When the suggested cut deviated more than 3° from target, the use of PSG was abandoned and marked as an outlier. When cranial-caudal position or size was adapted, the PSG was marked as modified. All patients underwent long-leg standing radiography and CT scan. Deviation of more than 3° from the target in any plane was defined as an outlier. The PSG and conventional cohorts showed similar numbers of outliers in overall coronal alignment (25% versus 28%; p = 0.69), femoral coronal alignment (7% versus 14%; p = 0.24), and femoral axial alignment (23% versus 17%; p = 0.50). There were more outliers in tibial coronal (15% versus 3%; p = 0.03) and sagittal (21% versus 3%; p = 0.002) alignment in the PSG group than in the conventional group. PSGs were abandoned in 14 patients (22%) and modified in 18 (28%). PSGs do not improve accuracy in TKA and, in our experience, were somewhat impractical in that the procedure needed to be either modified or abandoned with some frequency.

  19. Reassessment of CT images to improve diagnostic accuracy in patients with suspected acute appendicitis and an equivocal preoperative CT interpretation

    Kim, Hyun Cheol; Yang, Dal Mo; Kim, Sang Won [Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Department of Radiology, Seoul (Korea, Republic of); Park, Seong Jin [Kyung Hee University Hospital, College of Medicine, Kyung Hee University, Department of Radiology, Seoul (Korea, Republic of)

    2012-06-15

    To identify CT features that discriminate individuals with and without acute appendicitis in patients with equivocal CT findings, and to assess whether knowledge of these findings improves diagnostic accuracy. 53 patients that underwent appendectomy with an indeterminate preoperative CT interpretation were selected and allocated to an acute appendicitis group or a non-appendicitis group. The 53 CT examinations were reviewed by two radiologists in consensus to identify CT findings that could aid in the discrimination of those with and without appendicitis. In addition, two additional radiologists were then requested to evaluate independently the 53 CT examinations using a 4-point scale, both before and after being informed of the potentially discriminating criteria. CT findings found to be significantly different in the two groups were: the presence of appendiceal wall enhancement, intraluminal air in the appendix, a coexistent inflammatory lesion, and appendiceal wall thickening (P < 0.05). Areas under the curves of reviewers 1 and 2 significantly increased from 0.516 and 0.706 to 0.677 and 0.841, respectively, when reviewers were told which CT variables were significant (P = 0.0193 and P = 0.0397, respectively). Knowledge of the identified CT findings was found to improve diagnostic accuracy for acute appendicitis in patients with equivocal CT findings. Key points: • Numerous patients with clinically equivocal appendicitis do not have acute appendicitis. • Computed tomography (CT) helps to reduce the negative appendectomy rate. • CT is not always infallible and may also demonstrate indeterminate findings. • However, knowledge of significant CT variables can further reduce the negative appendectomy rate. • An equivocal CT interpretation of appendicitis should be reassessed with this knowledge. (orig.)

  20. Reassessment of CT images to improve diagnostic accuracy in patients with suspected acute appendicitis and an equivocal preoperative CT interpretation

    Kim, Hyun Cheol; Yang, Dal Mo; Kim, Sang Won; Park, Seong Jin

    2012-01-01

    To identify CT features that discriminate individuals with and without acute appendicitis in patients with equivocal CT findings, and to assess whether knowledge of these findings improves diagnostic accuracy. 53 patients that underwent appendectomy with an indeterminate preoperative CT interpretation were selected and allocated to an acute appendicitis group or a non-appendicitis group. The 53 CT examinations were reviewed by two radiologists in consensus to identify CT findings that could aid in the discrimination of those with and without appendicitis. In addition, two additional radiologists were then requested to evaluate independently the 53 CT examinations using a 4-point scale, both before and after being informed of the potentially discriminating criteria. CT findings found to be significantly different in the two groups were: the presence of appendiceal wall enhancement, intraluminal air in the appendix, a coexistent inflammatory lesion, and appendiceal wall thickening (P < 0.05). Areas under the curves of reviewers 1 and 2 significantly increased from 0.516 and 0.706 to 0.677 and 0.841, respectively, when reviewers were told which CT variables were significant (P = 0.0193 and P = 0.0397, respectively). Knowledge of the identified CT findings was found to improve diagnostic accuracy for acute appendicitis in patients with equivocal CT findings. Key points: • Numerous patients with clinically equivocal appendicitis do not have acute appendicitis. • Computed tomography (CT) helps to reduce the negative appendectomy rate. • CT is not always infallible and may also demonstrate indeterminate findings. • However, knowledge of significant CT variables can further reduce the negative appendectomy rate. • An equivocal CT interpretation of appendicitis should be reassessed with this knowledge. (orig.)

  1. Diagnostic accuracy of detection and quantification of HBV-DNA and HCV-RNA using dried blood spot (DBS) samples - a systematic review and meta-analysis.

    Lange, Berit; Roberts, Teri; Cohn, Jennifer; Greenman, Jamie; Camp, Johannes; Ishizaki, Azumi; Messac, Luke; Tuaillon, Edouard; van de Perre, Philippe; Pichler, Christine; Denkinger, Claudia M; Easterbrook, Philippa

    2017-11-01

    The detection and quantification of hepatitis B virus (HBV) DNA and hepatitis C virus (HCV) RNA in whole blood collected on dried blood spots (DBS) may facilitate access to diagnosis and treatment of HBV and HCV infection in resource-poor settings. We evaluated the diagnostic performance of DBS compared to venous blood samples for the detection and quantification of HBV-DNA and HCV-RNA in two systematic reviews and meta-analyses. We searched MEDLINE, Embase, Global Health, Web of Science, LILAC and the Cochrane library for studies that assessed diagnostic accuracy with DBS. Heterogeneity was assessed and, where appropriate, pooled estimates of sensitivity and specificity were generated using bivariate analyses with maximum likelihood estimates and 95% confidence intervals. We also conducted a narrative review on the impact of varying storage conditions or different cut-offs for detection from studies that undertook this in a subset of samples. The QUADAS-2 tool was used to assess risk of bias. In the quantitative synthesis for diagnostic accuracy of HBV-DNA using DBS, 521 citations were identified, and 12 studies met the inclusion criteria. The overall quality of studies was rated as low. The pooled estimates of sensitivity and specificity for HBV-DNA were 95% (95% CI: 83-99) and 99% (95% CI: 53-100), respectively. In the two studies that reported on cut-offs and limit of detection (LoD), one reported a sensitivity of 98% for a cut-off of ≥2000 IU/ml and another reported a LoD of 914 IU/ml using a commercial assay. Varying storage conditions for individual samples did not result in a significant variation of results. In the synthesis for diagnostic accuracy of HCV-RNA using DBS, 15 studies met the inclusion criteria, including six additional studies beyond a previously published review. The pooled sensitivity and specificity were 98% (95% CI: 95-99) and 98% (95% CI: 95-99), respectively.
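
    The per-study quantities that feed such a meta-analysis come from each study's 2x2 table of DBS results against the venous-blood reference. A sketch of the basic computations (the counts are hypothetical, and the review's actual pooling uses bivariate maximum-likelihood models across studies rather than single-study intervals):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion such as a per-study
    sensitivity or specificity."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n)) / denom
    return center - half, center + half

def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity of DBS against the venous-blood
    reference from one study's 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical study: 95/100 reference positives detected on DBS,
# 99/100 reference negatives correctly negative on DBS.
se, sp = sens_spec(tp=95, fp=1, fn=5, tn=99)
lo, hi = wilson_ci(95, 100)   # interval for the sensitivity estimate
```

The wide pooled specificity interval reported above (53-100%) reflects how strongly such per-study intervals can disagree when study quality and assay cut-offs vary.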

  2. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  3. Determination of Selected Polycyclic Aromatic Compounds in Particulate Matter Samples with Low Mass Loading: An Approach to Test Method Accuracy

    Susana García-Alonso

    2017-01-01

    Full Text Available A miniaturized analytical procedure to determine selected polycyclic aromatic compounds (PACs) in low mass loadings (<10 mg) of particulate matter (PM) is evaluated. The proposed method is based on a simple sonication/agitation method using small amounts of solvent for extraction. Using a reduced sample size of particulate matter often limits the quantification of analytes; this creates the need to adapt analytical procedures and to evaluate their performance. The trueness and precision of the proposed method were tested using ambient air samples. Analytical results from the proposed method were compared with those of pressurized liquid and microwave extractions. Selected PACs (polycyclic aromatic hydrocarbons (PAHs) and nitro polycyclic aromatic hydrocarbons (NPAHs)) were determined by liquid chromatography with fluorescence detection (HPLC/FD). Taking results from pressurized liquid extractions as reference values, recovery rates of the sonication/agitation method were over 80% for the most abundant PAHs. Recovery rates of selected NPAHs were lower. Enhanced rates were obtained when methanol was used as a modifier. Intermediate precision was estimated by data comparison from two mathematical approaches: normalized difference data and pooled relative deviations. Intermediate precision was in the range of 10-20%. The effectiveness of the proposed method was evaluated in PM aerosol samples collected with very low mass loadings (<0.2 mg) during characterization studies from turbofan engine exhausts.

  4. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tomé, Wolfgang A.

    2012-01-01

    Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed “Super Sampling” involved the tripling of the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis receiving up to 21 Gy.

  5. The validity and reliability of the StationMaster: a device to improve the accuracy of station assessment in labour.

    Awan, Noveen; Rhoades, Anthony; Weeks, Andrew D

    2009-07-01

    To compare the accuracy of digital assessment and the StationMaster (SM) in the assessment of fetal head station. The SM is a simple modification of the amniotomy hook which works by relocating the point of reference for station assessment from the ischial spines to the posterior fourchette. It is first adjusted to the woman's pelvic size, and then inserted into the vagina until it touches the fetal head. The station is then read off at the posterior fourchette in cm. An in vitro study of test validity and reliability was conducted at Liverpool Women's Hospital, Liverpool, UK. An apparatus was constructed in which a model fetal head could be accurately positioned within a mannequin's pelvis. Twenty midwives and 20 doctors (in current labour ward practice) gave their consent to take part. First, the head was placed in 5 random stations (-2 to +7 cm) and the participant asked to record their digital assessment for each. The participant was then taught to use the SM and the experiment repeated with 5 new stations. The complete experiment was repeated at least 2 weeks later using the same stations but in reverse order. The true values were compared with both the digital and SM assessments using mean differences with 95% limits of agreement. The repeatability of the two methods was assessed in the same way. Overall, the SM was more accurate than digital examination. The mean error (S.D.) ranged from 0.1 (1.2) to 2.6 (1.6) for the StationMaster and 0.3 (1.3) to 4.3 (1.1) for digital examination. Inaccuracies increased as the head descended through the pelvis. When assessed digitally, the true value fell outside one standard deviation for stations of more than +1cm. In contrast, with the SM the true value remained inside one standard deviation for all stations up to +5. In vitro the SM improves the accuracy of intrapartum station assessment.
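
    The agreement statistics used in such method-comparison studies, the mean difference with 95% limits of agreement, are straightforward to compute; a sketch with hypothetical station values, not the study's data:

```python
import statistics

def limits_of_agreement(measured, true):
    """Mean difference (bias) and 95% limits of agreement
    (mean +/- 1.96 SD) between a measurement method and the truth,
    the comparison used for the StationMaster vs digital assessments."""
    diffs = [m - t for m, t in zip(measured, true)]
    mean = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical assessments (cm) against known head stations in the rig
true_station = [-2, -1, 0, 1, 2, 3, 4, 5]
assessed = [-2, -1, 1, 1, 3, 4, 4, 7]
bias, lower, upper = limits_of_agreement(assessed, true_station)
```

Narrower limits of agreement at deep stations are exactly the improvement the abstract reports for the StationMaster over digital examination.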

  6. Improved longitudinal length accuracy of gross tumor volume delineation with diffusion weighted magnetic resonance imaging for esophageal squamous cell carcinoma

    Hou, Dong-Liang; Shi, Gao-Feng; Gao, Xian-Shu; Asaumi, Junichi; Li, Xue-Ying; Liu, Hui; Yao, Chen; Chang, Joe Y

    2013-01-01

    To analyze the longitudinal length accuracy of gross tumor volume (GTV) delineation with diffusion weighted magnetic resonance imaging for esophageal squamous cell carcinoma (SCC). Forty-two patients from December 2011 to June 2012 with esophageal SCC who underwent radical surgery were analyzed. Routine computed tomography (CT) scan, T2-weighted MRI and diffusion weighted magnetic resonance imaging (DWI) were employed before surgery. Diffusion-sensitive gradient b-values were taken at 400, 600, and 800 s/mm². Gross tumor volumes (GTV) were delineated using CT, T2-weighted MRI and DWI on different b-value images. GTV longitudinal length measured using the imaging modalities listed above was compared with pathologic lesion length to determine the most accurate imaging modality. The CMS Xio radiotherapy planning system was used to fuse DWI scans and CT images to investigate the possibility of delineating GTV on fused images. The differences between the GTV length according to CT, T2-weighted MRI and pathology were 3.63 ± 12.06 mm and 3.46 ± 11.41 mm, respectively. When the diffusion-sensitive gradient b-value was 400, 600, and 800 s/mm², the differences between the GTV length using DWI and pathology were 0.73 ± 6.09 mm, −0.54 ± 6.03 mm and −1.58 ± 5.71 mm, respectively. DWI scans and CT images were fused accurately using the radiotherapy planning system. GTV margins were depicted clearly on fused images. DWI displays esophageal SCC lengths most precisely when compared with CT or regular MRI. DWI scans fused with CT images can be used to improve the accuracy of GTV delineation in esophageal SCC.

  7. Feasibility and accuracy evaluation of three human papillomavirus assays for FTA card-based sampling: a pilot study in cervical cancer screening

    Wang, Shao-Ming; Hu, Shang-Ying; Chen, Wen; Chen, Feng; Zhao, Fang-Hui; He, Wei; Ma, Xin-Ming; Zhang, Yu-Qing; Wang, Jian; Sivasubramaniam, Priya; Qiao, You-Lin

    2015-01-01

    Liquid-state specimen carriers are inadequate for sample transportation in large-scale screening projects in low-resource settings, which necessitates the exploration of novel non-hazardous solid-state alternatives. Studies investigating the feasibility and accuracy of a solid-state human papillomavirus (HPV) sampling medium in combination with different downstream HPV DNA assays for cervical cancer screening are needed. We collected two cervical specimens from 396 women, aged 25–65 years, who were enrolled in a cervical cancer screening trial. One sample was stored using DCM preservative solution and the other was applied to a Whatman Indicating FTA Elute® card (FTA card). All specimens were processed using three HPV testing methods, including Hybrid Capture 2 (HC2), careHPV™, and Cobas®4800 tests. All the women underwent a rigorous colposcopic evaluation that included using a microbiopsy protocol. Compared to the liquid-based carrier, the FTA card demonstrated comparable sensitivity for detecting high grade Cervical Intraepithelial Neoplasia (CIN) using HC2 (91.7 %), careHPV™ (83.3 %), and Cobas®4800 (91.7 %) tests. Moreover, the FTA card showed a higher specificity compared to a liquid-based carrier for HC2 (79.5 % vs. 71.6 %, P = 0.015), comparable specificity for careHPV™ (78.1 % vs. 73.0 %, P > 0.05), but lower specificity for the Cobas®4800 test (62.4 % vs. 69.9 %, P = 0.032). Generally, the FTA card-based sampling medium’s accuracy was comparable with that of liquid-based medium for the three HPV testing assays. FTA cards are a promising sample carrier for cervical cancer screening. With further optimization, it can be utilized for HPV testing in areas of varying economic development.
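    The sensitivity and specificity figures above come from cross-tabulating each assay's result against the colposcopy/biopsy reference standard: sensitivity is the fraction of diseased women who test positive, specificity the fraction of disease-free women who test negative. A minimal sketch, with illustrative per-subject data rather than the trial's:

    ```python
    def sensitivity_specificity(test_positive, diseased):
        """Sensitivity and specificity from per-subject booleans.

        test_positive: HPV assay result for each subject
        diseased: reference-standard disease status (e.g. high-grade CIN)
        """
        tp = sum(t and d for t, d in zip(test_positive, diseased))
        fn = sum((not t) and d for t, d in zip(test_positive, diseased))
        tn = sum((not t) and (not d) for t, d in zip(test_positive, diseased))
        fp = sum(t and (not d) for t, d in zip(test_positive, diseased))
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical results for six subjects -- not the study's data
    assay = [True, True, True, False, False, True]
    cin = [True, True, False, False, False, True]
    sens, spec = sensitivity_specificity(assay, cin)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
    ```

    Because both carriers were tested on the same women, the paired specificity comparisons quoted above (e.g. 79.5 % vs. 71.6 % for HC2) would typically use a paired test such as McNemar's rather than an unpaired proportion test.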

  9. Temporary shielding of hot spots in the drainage areas of cutaneous melanoma improves accuracy of lymphoscintigraphic sentinel lymph node diagnostics

    Maza, S.; Valencia, R.; Geworski, L.; Zander, A.; Munz, D.L.; Draeger, E.; Winter, H.; Sterry, W.

    2002-01-01

    Detection of the ''true'' sentinel lymph nodes, permitting correct staging of regional lymph nodes, is essential for management and prognostic assessment in malignant melanoma. In this study, it w