WorldWideScience

Sample records for assay improves accuracy

  1. Implementing a technique to improve the accuracy of shuffler assays of waste drums

    Energy Technology Data Exchange (ETDEWEB)

    Rinard, P.M.

    1996-07-01

    The accuracy of shuffler assays for fissile materials is generally limited by the accuracy of the calibration standards, but when the matrix in a large drum has a sufficiently high hydrogen density (as exists in paper, for example) the accuracy in the active mode can be adversely affected by a nonuniform distribution of the fissile material within the matrix. This paper reports on a technique to determine the distribution nondestructively using delayed neutron signals generated by the shuffler itself. In assays employing this technique, correction factors are applied to the result of the conventional assay according to the distribution. Maximum inaccuracies in assays with a drum of paper, for example, are reduced by a factor of two or three.

  2. Sporadic Creutzfeldt-Jakob disease diagnostic accuracy is improved by a new CSF ELISA 14-3-3γ assay.

    Science.gov (United States)

    Leitão, M J; Baldeiras, I; Almeida, M R; Ribeiro, M H; Santos, A C; Ribeiro, M; Tomás, J; Rocha, S; Santana, I; Oliveira, C R

    2016-05-13

    Protein 14-3-3 is a reliable marker of rapid neuronal damage, specifically increased in the cerebrospinal fluid (CSF) of sporadic Creutzfeldt-Jakob disease (sCJD) patients. Its detection is usually performed by Western blot (WB), which is prone to methodological issues. Our aim was to evaluate the diagnostic performance of a recently developed quantitative enzyme-linked immunosorbent assay (ELISA) for 14-3-3γ, in comparison with WB and other neurodegeneration markers. CSF samples from 145 patients with suspicion of prion disease, later classified as definite sCJD (n=72) or non-prion diseases (Non-CJD; n=73), comprised our population. 14-3-3 protein was determined by WB and ELISA. Total Tau (t-Tau) and phosphorylated Tau (p-Tau) were also evaluated. Apolipoprotein E gene (ApoE) and prion protein gene (PRNP) genotyping was assessed. ELISA 14-3-3γ levels were significantly increased in sCJD compared with Non-CJD patients (p<0.001), showing very good accuracy (AUC=0.982; sensitivity=97%; specificity=94%) and matching WB results in 81% of all cases. It strongly correlated with t-Tau and p-Tau (p<0.0001), showing slightly higher specificity (14-3-3 WB - 63%; Tau - 90%; p-Tau/t-Tau ratio - 88%). Of the WB inconclusive results (n=44), ELISA 14-3-3γ correctly classified 41 patients. Additionally, logistic regression analysis selected ELISA 14-3-3γ as the best single predictive marker for sCJD (overall accuracy=93%). ApoE and PRNP genotypes did not influence ELISA 14-3-3γ levels. Despite its specificity for the 14-3-3γ isoform, the ELISA results not only match the WB evaluation but also help discriminate inconclusive results. Our results therefore reinforce this assay as a single screening test, allowing higher sample throughput and unequivocal results. PMID:26940479

  3. Improving Speaking Accuracy through Awareness

    Science.gov (United States)

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  4. Improving accuracy of holes honing

    Directory of Open Access Journals (Sweden)

    Ivan М. Buykli

    2015-03-01

    Full Text Available Currently, in the precision engineering industry, tolerances for linear dimensions and for the shape of machined surfaces are steadily tightening. These requirements are especially relevant in the machining of holes. The aim of the research is to improve the accuracy and enhance the technological capabilities of the hole-honing process, particularly the honing of blind holes. Based on formal logic, the formation of processing errors is analyzed by considering schemes of non-uniform dimensional wear along the length of the cutting elements. On this basis, the possibilities of compensating this non-uniformity, and accordingly of controlling the processing accuracy, are identified for the honing of both through and blind holes. In addition, a new honing method was developed; it is protected by a patent of Ukraine for invention. The method can be implemented both on existing machine tools, with minor modernization of the processing-cycle control system, and on newly designed ones.

  5. Algorithms for improving accuracy of spray simulation

    Institute of Scientific and Technical Information of China (English)

    ZHANG HuiYa; ZHANG YuSheng; XIAO HeLin; XU Bo

    2007-01-01

    Fuel spray is the pivotal process of direct-injection engine combustion. The accuracy of spray simulation determines the reliability of combustion calculation. However, the traditional techniques of spray simulation in KIVA and commercial CFD codes are very susceptible to grid resolution. As a consequence, predicted engine performance and emissions can depend on the computational mesh. The two main causes of this problem are the droplet collision algorithm and the coupling between gas and liquid phases. In order to improve the accuracy of spray simulation, the original KIVA code is modified using the cross mesh droplet collision (CMC) algorithm and a gas-phase velocity interpolation algorithm. For a constant-volume apparatus and a D.I. diesel engine, the improvements of the modified KIVA code in spray simulation accuracy are checked in terms of spray structure, predicted average drop size and spray tip penetration, respectively. The results show a dramatic decrease in grid dependency. With these changes, the distortion of the spray structure vanishes. The uncertainty in predicted average drop size is reduced from 30 to 5 μm in the constant-volume apparatus calculation, and further to 2 μm in an engine simulation. The predicted spray tip penetrations in the engine simulation also show better consistency between medium and fine meshes.

  6. Improving the accuracy of dynamic mass calculation

    Directory of Open Access Journals (Sweden)

    Oleksandr F. Dashchenko

    2015-06-01

    Full Text Available With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
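
    The record above describes filtering a strain-gauge signal before averaging to recover the static weight of a moving wagon. The sketch below illustrates that idea on synthetic data; the sampling rate, signal model and moving-average window are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch: estimating a wagon's static weight from a noisy
# strain-gauge signal by preliminary filtering and averaging.
# The signal model, sampling rate and filter length are assumptions.
import numpy as np

def estimate_weight(samples, window=200):
    """Moving-average (preliminary) filtering followed by averaging
    of the plateau region of the load signal."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(samples, kernel, mode="valid")
    # Use the central portion of the record, where the wagon is fully
    # on the weighbridge, to avoid edge transients.
    start, stop = len(smoothed) // 4, 3 * len(smoothed) // 4
    return smoothed[start:stop].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0, 4000)                  # 2 s record at 2 kHz
    true_weight = 25_000.0                           # kg, assumed
    dynamic = 800.0 * np.sin(2 * np.pi * 3.0 * t)    # oscillation of the moving wagon
    noise = rng.normal(0.0, 150.0, t.size)           # sensor noise
    signal = true_weight + dynamic + noise
    print(f"estimated weight: {estimate_weight(signal):.1f} kg")
```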

  7. Improvements on the accuracy of beam bugs

    International Nuclear Information System (INIS)

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These, known locally as ''beam bugs'', have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug; the bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.

  9. Improving the Accuracy of Cosmic Magnification Statistics

    CERN Document Server

    Ménard, Brice; Hamana, Takashi; Bartelmann, Matthias; Yoshida, Naoki

    2003-01-01

    The systematic magnification of background sources by the weak gravitational-lensing effects of foreground matter, also called cosmic magnification, is becoming an efficient tool both for measuring cosmological parameters and for exploring the distribution of galaxies relative to the dark matter. We extend here the formalism of magnification statistics by estimating the contribution of second-order terms in the Taylor expansion of the magnification and show that the effect of these terms was previously underestimated. We test our analytical predictions against numerical simulations and demonstrate that including second-order terms allows the accuracy of magnification-related statistics to be substantially improved. We also show, however, that both numerical and analytical estimates can provide only lower bounds to real correlation functions, even in the weak lensing regime. We propose to use count-in-cells estimators rather than correlation functions for measuring cosmic magnification since they can more easi...

  10. IMPROVED ACCURACY AND ROUGHNESS MEASURES FOR ROUGH SETS

    Institute of Scientific and Technical Information of China (English)

    Zhou Yuming; Xu Baowen

    2002-01-01

    Accuracy and roughness, proposed by Pawlak (1982), may draw conclusions inconsistent with our intuition in some cases. This letter analyzes the limitations of these measures and proposes improved accuracy and roughness measures based on information theory.
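
    For context, Pawlak's classical accuracy of approximation is the ratio of the sizes of the lower and upper approximations of a set, and roughness is its complement. The sketch below computes these classical measures on a toy universe; the improved information-theoretic measures proposed in the letter are not specified in this record and are not reproduced here.

```python
# Minimal sketch of Pawlak's classical accuracy and roughness measures for a
# rough set; the equivalence classes and target set below are toy examples.
from itertools import groupby

def approximations(universe, equiv_key, target):
    """Lower/upper approximations of `target` under the indiscernibility
    relation induced by `equiv_key` (an attribute function)."""
    classes = [set(g) for _, g in groupby(sorted(universe, key=equiv_key), key=equiv_key)]
    lower = set().union(*(c for c in classes if c <= target))   # classes fully inside target
    upper = set().union(*(c for c in classes if c & target))    # classes overlapping target
    return lower, upper

def pawlak_accuracy(lower, upper):
    return len(lower) / len(upper) if upper else 1.0

if __name__ == "__main__":
    universe = range(8)
    target = {0, 1, 2, 5}
    # equivalence classes {0,1}, {2,3}, {4,5}, {6,7}
    lower, upper = approximations(universe, lambda x: x // 2, target)
    acc = pawlak_accuracy(lower, upper)
    print("accuracy:", round(acc, 3), "roughness:", round(1.0 - acc, 3))
```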

  11. Improved internal control for molecular diagnosis assays.

    Science.gov (United States)

    Vinayagamoorthy, T; Maryanski, Danielle; Vinayagamoorthy, Dilanthi; Hay, Katie S L; Yo, Jacob; Carter, Mark; Wiegel, Joseph

    2015-01-01

    The two principal determining steps in molecular diagnosis are the amplification and the identification steps. The accuracy of DNA amplification is primarily determined by the annealing sequence of the PCR primer to the analyte DNA. The accuracy of identification is determined either by the annealing region of a labelled probe in real-time PCR analysis, or by the annealing of a sequencing primer in DNA sequencing analysis, to the respective analyte (amplicon). Presently, housekeeping genes (beta globin, GAPDH) are used in molecular diagnosis to verify that the PCR conditions are optimal, and are thus known as amplification controls [1-4]. Although these genes have been useful as amplification controls, they do not meet the true definition of an internal control because their primers and annealing conditions are not identical to those of the analyte being assayed. This may result in a false negative report [5]. The IC-Code platform technology described here provides a true internal control in which the internal control and analyte share identical PCR primer annealing sequences for the amplification step and an identical sequencing primer annealing sequence for the identification step.
    • The analyte and internal control have the same PCR and sequencing annealing sequences.
    • This method ensures little or no false negatives and false positives, owing to its design of using identical annealing conditions for the internal control and analyte, and to the use of DNA sequencing analysis for the identification step of the analyte.
    • This method also allows a defined lower limit of detection to be set by varying the amount of internal control used in the assay.

  12. Improved accuracy in nano beam electron diffraction

    Energy Technology Data Exchange (ETDEWEB)

    Beche, A; Rouviere, J-L [CEA, INAC, SP2M, LEMMA, 17 rue des Martyrs, F-38054 Grenoble Cedex 9 (France); Clement, L, E-mail: armand.beche@cea.f, E-mail: jean-luc.rouviere@cea.f [ST Microelectronics, 850 rue Jean Monnet, F-38920 Crolles (France)

    2010-02-01

    Nano beam electron diffraction (NBD or NBED) is applied on a well-controlled sample in order to evaluate the limit of the technique for measuring strain. Measurements are realised on a 27 nm thick Si0.7Ge0.3 layer embedded in a silicon matrix, with a TITAN microscope working at 300 kV. Using a standard condenser aperture of 50 μm, a probe diameter of 2.7 nm is obtained and a strain accuracy of 6×10⁻⁴ (root mean square, rms) is achieved. NBED patterns are acquired along a [110] direction and the two-dimensional strain in the (110) plane is measured. Finite element simulations are carried out to check the experimental results and reveal that strain relaxation and probe averaging in a 170 nm thick TEM lamella reduce the strain by 15%.

  13. Improving the accuracy of walking piezo motors.

    Science.gov (United States)

    den Heijer, M; Fokkema, V; Saedi, A; Schakel, P; Rost, M J

    2014-05-01

    Many application areas require ultraprecise, stiff, and compact actuator systems with a high positioning resolution in combination with a large range as well as a high holding and pushing force. One promising solution to meet these conflicting requirements is a walking piezo motor that works with two pairs of piezo elements such that the movement is taken over by one pair, once the other pair reaches its maximum travel distance. A resolution in the pm-range can be achieved, if operating the motor within the travel range of one piezo pair. However, applying the typical walking drive signals, we measure jumps in the displacement up to 2.4 μm, when the movement is given over from one piezo pair to the other. We analyze the reason for these large jumps and propose improved drive signals. The implementation of our new drive signals reduces the jumps to less than 42 nm and makes the motor ideally suitable to operate as a coarse approach motor in an ultra-high vacuum scanning tunneling microscope. The rigidity of the motor is reflected in its high pushing force of 6.4 N.

  14. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

    Full Text Available Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a -order Gauss–Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value of up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
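
    As a rough illustration of the Yule–Walker approach mentioned above, the sketch below fits an autoregressive model to a one-dimensional position trace by solving the Yule–Walker equations and predicts the next sample. It is not the authors' exact Gauss–Markov formulation; the synthetic trace and model order are assumptions.

```python
# Illustrative sketch: fit an AR(p) model to a 1-D position trace by solving
# the Yule-Walker equations, then predict the next position from the last
# p samples. The data and model order are made up for the example.
import numpy as np

def yule_walker(x, p):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocovariance estimates r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    return np.linalg.solve(R, r[1:])                                     # AR coefficients

def predict_next(x, phi):
    p = len(phi)
    mean = np.mean(x)
    recent = np.asarray(x[-p:], dtype=float)[::-1] - mean   # x[t-1], x[t-2], ...
    return mean + float(np.dot(phi, recent))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic vehicle x-coordinate: smooth drift plus correlated noise
    t = np.arange(400)
    pos = 0.5 * t + np.cumsum(rng.normal(0, 0.2, t.size))
    increments = np.diff(pos)
    phi = yule_walker(increments, p=3)           # model the position increments
    nxt = pos[-1] + predict_next(increments, phi)
    print("predicted next position:", round(nxt, 2))
```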

  15. Improving Localization Accuracy: Successive Measurements Error Modeling.

    Science.gov (United States)

    Ali, Najah Abu; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a -order Gauss-Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value of up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345

  16. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    Science.gov (United States)

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  17. Improvement of Electrochemical Machining Accuracy by Using Dual Pole Tool

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Electrochemical machining (ECM) is one of the best alternatives for producing complex shapes in advanced materials used in the aircraft and aerospace industries. However, the reduction of stray material removal continues to be a major challenge for industries in addressing accuracy improvement. This study presents a method of improving machining accuracy in ECM by using a dual pole tool with a metallic bush outside the insulated coating of a cathode tool. The bush is connected with the anode and so the el...

  18. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    OpenAIRE

    Lee, Seoung-Hyeon; Lim, Il-Kwan; Lee, Jae-Kwang

    2016-01-01

    Beacons using bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, the beacon performance is poor in terms of the indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy ...

  19. Improved benzodiazepine radioreceptor assay using the MultiScreen (R) Assay System

    NARCIS (Netherlands)

    Janssen, MJ; Ensing, K; de Zeeuw, RA

    1999-01-01

    In this article, an improved benzodiazepine radioreceptor assay is described, which allows a substantial reduction in assay time. The filtration in this method was performed by using the MultiScreen(R) Assay System. The latter consists of a 96-well plate with glass fibre filters sealed at the bottom,

  20. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    Directory of Open Access Journals (Sweden)

    Harada Hideo

    2015-01-01

    Full Text Available Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems transmuting these nuclei. In order to meet this requirement, the project entitled “Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)” was started as one of the “Innovative Nuclear Research and Development Program” projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  1. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  2. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    Science.gov (United States)

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in developing informed priors and further

  3. Method for Improving the Ranging Accuracy of Radio Fuze

    Institute of Scientific and Technical Information of China (English)

    HU Xiu-juan; DENG Jia-hao; SANG Hui-ping

    2006-01-01

    A stepped-frequency radar waveform is proposed to improve the accuracy of radio fuze ranging. An IFFT is adopted to synthesize a one-dimensional high-resolution range profile. Furthermore, a same-range rejection method and a selection-maximum method are used to remove target redundancy, and simulation results are given. The characteristics of the two methods are analyzed and, assuming a Weibull-distributed clutter envelope, the CFAR same-range selection-maximum method is adopted, which achieves accurate profiling and ranging.
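
    The IFFT synthesis step can be illustrated numerically: each frequency step returns a complex phasor whose phase is proportional to target range, and an IFFT across the steps yields a range profile with resolution c/(2NΔf). The carrier, step size and target ranges below are assumed values, not parameters from the paper.

```python
# Illustrative sketch of stepped-frequency range-profile synthesis via IFFT.
# All radar parameters and target ranges are assumptions for the example.
import numpy as np

c = 3e8              # speed of light, m/s
N = 100              # number of frequency steps
df = 3e6             # frequency step, Hz -> range resolution c/(2*N*df) = 0.5 m
f0 = 10e9            # start carrier frequency, Hz (assumed)
targets = [4.0, 11.5]    # assumed target ranges, m

# Received samples: one complex phasor per frequency step, summed over targets.
k = np.arange(N)
freqs = f0 + k * df
echo = sum(np.exp(-1j * 4 * np.pi * freqs * R / c) for R in targets)

# IFFT across the frequency steps yields the high-resolution range profile.
profile = np.abs(np.fft.ifft(echo))
range_bins = k * c / (2 * N * df)
peaks = range_bins[np.argsort(profile)[-2:]]     # two strongest range cells
print("estimated target ranges (m):", np.sort(np.round(peaks, 2)))
```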

  4. Improving short term load forecast accuracy via combining sister forecasts

    OpenAIRE

    Jakub Nowotarski; Bidong Liu; Rafal Weron; Tao Hong

    2015-01-01

    Although combining forecasts is well known to be an effective approach to improving forecast accuracy, the literature and case studies on combining load forecasts are very limited. In this paper, we investigate the performance of combining so-called sister load forecasts with eight methods: three variants of arithmetic averaging, four regression-based methods, and one performance-based method. Through comprehensive analysis of two case studies developed from public data (Global Energy Forecasting Comp...
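
    As a minimal illustration of forecast combination, the sketch below averages three synthetic "sister" forecasts and compares MAPE against the individual forecasts; the regression-based and performance-based schemes studied in the paper are not reproduced, and all numbers are made up.

```python
# Minimal sketch (synthetic numbers) of combining sister load forecasts by
# simple arithmetic averaging and checking the accuracy gain via MAPE.
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(168)                                   # one week, hourly
actual = 1000 + 200 * np.sin(2 * np.pi * hours / 24)     # toy load profile, MW

# Three sister forecasts: same model family, different biased/noisy variants.
sisters = np.stack([
    actual + rng.normal(30, 40, hours.size),
    actual + rng.normal(-20, 55, hours.size),
    actual + rng.normal(0, 70, hours.size),
])

def mape(forecast, actual):
    return float(np.mean(np.abs((forecast - actual) / actual)) * 100)

combined = sisters.mean(axis=0)                          # simple average combination
for i, f in enumerate(sisters, 1):
    print(f"sister {i} MAPE: {mape(f, actual):.2f}%")
print(f"combined MAPE: {mape(combined, actual):.2f}%")
```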

  5. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

    Full Text Available Image fusion involving real-time ultrasound (US) is a technique where previously recorded computed tomography (CT) or magnetic resonance images (MRI) are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment when varying parameters such as patient position, respiratory phase and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that the accuracy is influenced by various adjustable factors that should be kept in mind.

  6. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Seoung-Hyeon Lee

    2016-01-01

    Full Text Available Beacons using bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, beacon performance is poor in terms of indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technology, which uses Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, it was assumed that the beacon signal is nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone's indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m from the average x- and y-coordinates, respectively, based solely on the beacon signal.
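
    A heavily simplified version of such a filter is sketched below: a two-dimensional extended Kalman filter that fuses ranges derived from beacon RSSI through a log-distance path-loss model. The beacon layout, path-loss parameters and noise levels are assumptions for illustration, not the authors' filter design.

```python
# Simplified illustration (not the authors' exact design): a 2-D extended
# Kalman filter fusing RSSI-derived ranges from four BLE beacons.
# Beacon positions, path-loss parameters and noise levels are assumed.
import numpy as np

BEACONS = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])  # m
TX_POWER = -59.0     # RSSI at 1 m (assumed)
PATH_LOSS_N = 2.0    # path-loss exponent (assumed)

def rssi_to_distance(rssi):
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def ekf_update(x, P, ranges, R_var=1.0):
    """One EKF step; state x = [px, py], near-static motion model."""
    P = P + 0.05 * np.eye(2)                    # predict (identity motion model + process noise)
    diffs = x - BEACONS                         # (4, 2)
    pred = np.linalg.norm(diffs, axis=1)        # predicted ranges h(x)
    H = diffs / pred[:, None]                   # Jacobian of h with respect to [px, py]
    S = H @ P @ H.T + R_var * np.eye(len(BEACONS))
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (ranges - pred)
    P = (np.eye(2) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    true_pos = np.array([3.0, 5.0])
    x, P = np.array([4.0, 4.0]), np.eye(2)      # initial guess
    for _ in range(50):
        true_d = np.linalg.norm(true_pos - BEACONS, axis=1)
        rssi = TX_POWER - 10 * PATH_LOSS_N * np.log10(true_d) + rng.normal(0, 2.0, 4)
        x, P = ekf_update(x, P, rssi_to_distance(rssi))
    print("estimated position:", np.round(x, 2))
```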

  7. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    Science.gov (United States)

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492

  8. A study of the conditions and accuracy of the thrombin time assay of plasma fibrinogen

    DEFF Research Database (Denmark)

    Jespersen, J; Sidelmann, Johannes Jakobsen

    1982-01-01

    The conditions, accuracy, precision and possible error of the thrombin time assay of plasma fibrinogen are determined. Comparison with an estimation of clottable protein by absorbance at 280 nm gave a correlation coefficient of 0.96 and the regression line y = 1.00 x + 0.56 (n = 34). Comparison with a radial immunodiffusion method yielded the correlation coefficient 0.97 and the regression line y = 1.18 x = 2.47 (n = 26). The presence of heparin in clinically applied concentrations produced a slight shortening of the clotting times. The resulting error in the estimated concentrations of fibrinogen was too small to affect the clinical usefulness of the determinations. The influence of fibrin(ogen) degradation products was significant only in excessive amounts in samples containing low levels of fibrinogen.

  9. APPROACH TO IMPROVEMENT OF ROBOT TRAJECTORY ACCURACY BY DYNAMIC COMPENSATION

    Institute of Scientific and Technical Information of China (English)

    Wang Gang; Ren Guoli; Yan Xiang'an; Wang Guodong

    2004-01-01

    Some dynamic factors, such as inertial forces and friction, may affect robot trajectory accuracy, but these effects are not taken into account in robot motion control schemes. Dynamic control methods, on the other hand, require the dynamic model of the robot and the implementation of a new type of controller. A method to improve robot trajectory accuracy by dynamic compensation in the robot motion control system is proposed. The dynamic compensation is applied as an additional velocity feedforward, and a multilayer neural network is employed to realize the robot inverse dynamics. The complicated dynamic parameter identification problem becomes a learning process of the neural network connection weights under supervision. A finite Fourier series is used to activate each actuator of the robot joints for obtaining training samples. A robot control system, consisting of an industrial computer and a digital motion controller, is implemented. The system is of open architecture with a velocity feedforward function. The proposed method is not model-based and combines the advantages of closed-loop position control and computed torque control. Experimental results have shown that the method is effective in improving robot trajectory accuracy.

  10. Three-dimensional display improves observer speed and accuracy

    International Nuclear Information System (INIS)

    In an effort to evaluate the potential cost-effectiveness of three-dimensional (3D) display equipment, we compared the speed and accuracy of experienced radiologists in identifying sliced uppercase letters from CT scans with 2D and pseudo-3D displays. CT scans of six capital letters were obtained and printed as a 2D display or as a synthesized pseudo-3D display (Pixar). Six observers performed a timed identification task. Radiologists read the 3D display an average of 16 times faster than the 2D display, and the average error rate of 2/6 (± 0.6/6) for 2D interpretations was eliminated entirely. This degree of improvement in speed and accuracy suggests that the expense of 3D display may be cost-effective in a defined clinical setting.

  11. Accuracy of the Fluorescence-Activated Cell Sorting Assay for the Aquaporin-4 Antibody (AQP4-Ab): Comparison with the Commercial AQP4-Ab Assay Kit

    Science.gov (United States)

    Kim, Yoo-Jin; Cheon, So Young; Kim, Boram; Jung, Kyeong Cheon; Park, Kyung Seok

    2016-01-01

    Background: The aquaporin-4 antibody (AQP4-Ab) is a disease-specific autoantibody in neuromyelitis optica (NMO). We aimed to evaluate the accuracy of the FACS assay in detecting the AQP4-Ab compared with the commercial cell-based assay (C-CBA) kit. Methods: Human embryonic kidney-293 cells were transfected with human aquaporin-4 (M23) cDNA. The optimal cut-off values of the FACS assay were tested using 1123 serum samples from patients with clinically definite NMO, those at high risk for NMO, patients with multiple sclerosis, patients with other idiopathic inflammatory demyelinating diseases, and negative controls. The accuracy of the FACS assay and C-CBA were compared in 225 consecutive samples that were collected between January 2014 and June 2014. Results: With a cut-off value of MFIi of 3.5 and MFIr of 2.0, the receiver operating characteristic curve for the FACS assay showed an area under the curve of 0.876. Among the 225 consecutive sera, the FACS assay and C-CBA had a sensitivity of 77.3% and 69.7%, respectively, in differentiating the sera of definite NMO patients from the sera of controls without IDD or with MS. Both assays had a good specificity of 100%. The overall positivity of the C-CBA among FACS-positive sera was 81.5%; moreover, its positivity was as low as 50% among FACS-positive sera with relatively low MFIis. Conclusions: Both the FACS assay and C-CBA are sensitive and highly specific assays for detecting AQP4-Ab. However, in some sera with relatively low antibody titers, the FACS assay can be the more sensitive option. In real practice, complementary use of the FACS assay and C-CBA will benefit the diagnosis of NMO patients, because the former can be more sensitive with low-titer sera and the latter is easier to use and can therefore be widely applied. PMID:27658059

  12. Improving Forecasting Accuracy in the Case of Intermittent Demand Forecasting

    Directory of Open Access Journals (Sweden)

    Daisuke Takeyasu

    2014-06-01

    Full Text Available In forecasting, many kinds of data are encountered. Stationary time series are relatively easy to forecast, but random data are very difficult. Intermittent data are often seen in industry and are, in general, rather difficult to forecast. In recent years, the need for intermittent demand forecasting has increased because of the constraints of strict supply chain management, so improving forecasting accuracy is an important issue. Much research has been done on this, but there is room for improvement. In this paper, a new cumulative forecasting method is proposed. The data are cumulated, and trend removal by a combination of linear, second-order non-linear and third-order non-linear functions is applied to the cumulative time series to improve forecasting accuracy, using production data of an X-ray image intensifier tube device and a diagnostic X-ray image processing apparatus. The forecasting results are compared with those of the non-cumulative forecasting method. The new method proves useful for forecasting intermittent demand data. The effectiveness of this method should be examined in further cases.
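
    The cumulative idea can be sketched as follows: cumulate the intermittent series, fit a smooth trend to the (much smoother) cumulative curve, extrapolate one step and difference back. This is only a rough illustration with a second-order polynomial trend and synthetic data, not the paper's exact combination of trend functions.

```python
# Rough sketch of cumulative forecasting for intermittent demand
# (not the paper's exact trend-removal scheme; data are synthetic).
import numpy as np

rng = np.random.default_rng(3)
# Synthetic intermittent demand: mostly zeros with occasional positive demand.
demand = rng.poisson(4, 60) * (rng.random(60) < 0.3)

cumulative = np.cumsum(demand)
t = np.arange(len(cumulative))

# Fit a low-order polynomial trend to the (much smoother) cumulative series.
coeffs = np.polyfit(t, cumulative, deg=2)
trend = np.poly1d(coeffs)

# One-step-ahead forecast of the cumulative series, differenced back to demand.
next_cumulative = trend(len(cumulative))
forecast_demand = max(next_cumulative - cumulative[-1], 0.0)
print(f"forecast demand for next period: {forecast_demand:.2f}")
```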

  13. Improving Iris Recognition Accuracy By Score Based Fusion Method

    CERN Document Server

    Gawande, Ujwalla; Kapur, Avichal

    2010-01-01

    Iris recognition technology, used to identify individuals by photographing the iris of their eye, has become popular in security applications because of its ease of use, accuracy, and safety in controlling access to high-security areas. Fusion of multiple algorithms for biometric verification performance improvement has received considerable attention. The proposed method combines zero-crossing 1D wavelet, Euler number, and genetic algorithm based approaches for feature extraction. The outputs of these three algorithms are normalized and their scores are fused to decide whether the user is genuine or an imposter. This new strategy is discussed in this paper, in order to compute a multimodal combined score.

  14. Accuracy of bronchoalveolar lavage enzyme-linked immunospot assay to diagnose smear-negative tuberculosis: a meta-analysis

    OpenAIRE

    Li, Zhenzhen; Qin, Wenzhe; Li, Lei; WU, QIN; Chen, Xuerong

    2015-01-01

    Purpose: While the bronchoalveolar lavage enzyme-linked immunospot assay (BAL-ELISPOT) shows promise for diagnosing smear-negative tuberculosis, its accuracy remains controversial. We meta-analyzed the available evidence to obtain a clearer understanding of the diagnostic accuracy. Methods: Studies of the diagnostic performance of ELISPOT for smear-negative tuberculosis were identified through systematic searches of the PubMed and EMBASE databases. Pooled data on sensitivity, specificity and ...

  15. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    Cadastral maps are parcel-based information specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in a new cadastral database that is no longer based on single and static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected issues that remain essential to address. The main focus of this study is to review the issues that have been generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating a systematic and effective method for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  16. Selecting fillers on emotional appearance improves lineup identification accuracy.

    Science.gov (United States)

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  17. CADASTRAL POSITIONING ACCURACY IMPROVEMENT: A CASE STUDY IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    N. M. Hashim

    2016-09-01

    Full Text Available Cadastral maps are parcel-based information specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in a new cadastral database that is no longer based on single and static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected issues that remain essential to address. The main focus of this study is to review the issues that have been generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating a systematic and effective method for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  18. Accuracy improvement of geometric correction for CHRIS data

    Institute of Scientific and Technical Information of China (English)

    WANG Dian-zhong; PANG Yong; GUO Zhi-feng

    2008-01-01

    This paper deals with a new type of multi-angle remotely sensed data, CHRIS (the Compact High Resolution Imaging Spectrometer), by using rational function models (RFM) and rigorous sensor models (RSM). For ortho-rectifying an image set, a rigorous sensor model (Toutin's model) was employed and a set of reported parameters including across-track angle, along-track angle, IFOV, altitude, period, eccentricity and orbit inclination were input; the orbit calculation was then started and the track information was assigned to the raw data. The images were ortho-rectified with geocoded ASTER images and digital elevation model (DEM) data. Results showed that with 16 ground control points (GCPs), the correction accuracy decreased with view zenith angle, and the RMSE value increased to over one pixel at 36 degrees off-nadir. When the GCPs were chosen approximately as in Toutin's model, an RFM with three coefficients produced the same accuracy trend versus view zenith angle, while the RMSEs for all angles were improved and within about one pixel.

  19. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    Science.gov (United States)

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza–associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and use increased from 65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children 65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017

  20. Euler Deconvolution with Improved Accuracy and Multiple Different Structural Indices

    Institute of Scientific and Technical Information of China (English)

    G R J Cooper

    2008-01-01

    Euler deconvolution is a semi-automatic interpretation method that is frequently used with magnetic and gravity data. For a given source type, which is specified by its structural index (SI), it provides an estimate of the source location. It is demonstrated here that by computing the solution space of individual data points and selecting common source locations, the accuracy of the result can be improved. Furthermore, only a slight modification of the method is necessary to allow solutions for any number of different SIs to be obtained simultaneously. The method is applicable to both evenly and unevenly sampled geophysical data and is demonstrated on gravity and magnetic data. Source code (in Matlab format) is available from www.iamg.org.

  1. Accuracy Improvement on the Measurement of Human-Joint Angles.

    Science.gov (United States)

    Meng, Dai; Shoepe, Todd; Vejarano, Gustavo

    2016-03-01

    A measurement technique that decreases the root mean square error (RMSE) of measurements of human-joint angles using a personal wireless sensor network is reported. Its operation is based on virtual rotations of wireless sensors worn by the user, and it focuses on the arm, whose position is measured over 5 degrees of freedom (DOF). The wireless sensors use inertial magnetic units that measure the alignment of the arm with the earth's gravity and magnetic fields. Due to the biomechanical properties of human tissue (e.g., the skin's elasticity), the sensors' orientation is shifted, and this shift affects the accuracy of measurements. In the proposed technique, the change of orientation is first modeled from linear regressions of data collected from 15 participants at different arm positions. Then, out of eight body indices measured with dual-energy X-ray absorptiometry, the percentage of body fat is found to have the greatest correlation with the rate of change in the sensors' orientation. This finding enables us to estimate the change in the sensors' orientation from the user's body fat percentage. Finally, an algorithm virtually rotates the sensors using quaternion theory with the objective of reducing the error. The proposed technique is validated with experiments on five different participants. In the DOF whose error decreased the most, the RMSE decreased from 2.20° to 0.87°, an improvement of 60%; in the DOF whose error decreased the least, the RMSE decreased from 1.64° to 1.37°, an improvement of 16%. On average, the RMSE improved by 44%. PMID:25622331
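
    The "virtual rotation" step can be illustrated with basic quaternion algebra: a measured sensor vector is rotated by a small corrective quaternion. In the study the corrective rotation is estimated from the body-fat regression; the axis and angle used below are placeholder assumptions.

```python
# Sketch of the virtual-rotation idea using quaternions: a measured sensor
# vector is rotated by a small corrective quaternion. The corrective axis and
# angle here are placeholders, not values from the paper.
import numpy as np

def quat_from_axis_angle(axis, angle_deg):
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))  # [w, x, y, z]

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_vector(v, q):
    """Rotate vector v by unit quaternion q: v' = q * v * q^-1."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

if __name__ == "__main__":
    measured_gravity = np.array([0.05, 0.10, 0.99])           # skewed by soft-tissue shift
    correction = quat_from_axis_angle([1.0, 0.0, 0.0], 5.8)   # placeholder angle (deg)
    print("virtually rotated vector:",
          np.round(rotate_vector(measured_gravity, correction), 3))
```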

  2. Accuracy Improvement for Predicting Parkinson’s Disease Progression

    Science.gov (United States)

    Nilashi, Mehrbakhsh; Ibrahim, Othman; Ahani, Ali

    2016-01-01

    Parkinson’s disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Providing computational tools for Parkinson’s disease based on medical data is highly desirable, as it can help people discover their risk of the disease at an early stage and alleviate its symptoms. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are employed to address the multi-collinearity problems in the experimental datasets and to cluster the data, respectively. We then apply the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for prediction of PD progression. Experimental results on public Parkinson’s datasets show that the proposed method remarkably improves the accuracy of predicting PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice in the early detection of Parkinson’s disease. PMID:27686748
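
    Part of this pipeline can be sketched with scikit-learn: PCA to address multicollinearity followed by SVR for predicting a progression score. The EM clustering and ANFIS stages of the paper are not reproduced here, and the data below are synthetic.

```python
# Partial sketch of the pipeline (PCA for multicollinearity + SVR prediction);
# the EM clustering and ANFIS stages are omitted and the data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n, p = 300, 16
latent = rng.normal(size=(n, 4))
# Collinear features generated from a few latent factors, plus a target score.
X = latent @ rng.normal(size=(4, p)) + 0.1 * rng.normal(size=(n, p))
y = latent @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.2 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=4), SVR(C=10.0))
model.fit(X_train, y_train)
print("test MAE:", round(mean_absolute_error(y_test, model.predict(X_test)), 3))
```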

  3. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU)

    2012-12-01

    Full Text Available In this study, alternative forecasts for the unemployment rate of the USA made by four institutions (the International Monetary Fund (IMF), the Organization for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO) and Blue Chips (BC)) are evaluated regarding accuracy and bias. The most accurate predictions over the forecasting horizon 201-2011 were provided by the IMF, followed by the OECD, CBO and BC. These results were obtained using Theil's U1 statistic and a new method that has not been used before in the literature in this context. Multi-criteria ranking was applied to build a hierarchy of the institutions regarding accuracy, taking five important accuracy measures into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combined forecasts of the institutions' predictions are a suitable strategy to improve the accuracy of the IMF and OECD forecasts when all combination schemes are used, but the INV scheme is the best. Filtering and smoothing the original predictions with the Hodrick-Prescott filter and the Holt-Winters technique, respectively, is a good strategy for improving only the BC expectations. The proposed strategies to improve accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to increasing the quality of the decisional process.
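
    For reference, Theil's U1 statistic divides the RMSE of a forecast by the sum of the root mean square levels of the actual and forecast series, and inverse-MSE weighting is one simple combination scheme. The sketch below applies both to toy numbers; the figures are not the study's data, and the study's INV scheme may differ in detail.

```python
# Hedged numerical sketch: Theil's U1 statistic and a simple inverse-MSE
# ("INV"-style) weighted combination of two forecasts. Toy numbers only.
import numpy as np

def theil_u1(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    rmse = np.sqrt(np.mean((actual - forecast) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2)))

actual = np.array([9.6, 8.9, 8.1, 7.4, 6.2])    # toy unemployment rates, %
f_a = np.array([9.4, 9.0, 8.4, 7.1, 6.5])       # institution A forecasts (toy)
f_b = np.array([9.9, 8.5, 7.8, 7.8, 5.8])       # institution B forecasts (toy)

# Inverse-MSE weights: the more accurate forecaster gets the larger weight.
mse_a = np.mean((actual - f_a) ** 2)
mse_b = np.mean((actual - f_b) ** 2)
w_a = (1 / mse_a) / (1 / mse_a + 1 / mse_b)
combined = w_a * f_a + (1 - w_a) * f_b

for name, f in [("A", f_a), ("B", f_b), ("combined", combined)]:
    print(f"U1({name}) = {theil_u1(actual, f):.4f}")
```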

  4. An improved europium release assay for complement-mediated cytolysis.

    Science.gov (United States)

    Cui, J; Bystryn, J C

    1992-02-14

    An improved assay for complement-mediated cytolysis is described. The target cells are labeled with europium complexed to diethylenetriaminopentaacetate (Eu-DTPA). Cytolysis caused by antibody plus complement leads to the release of the Eu-DTPA complex, which is then formed into a highly fluorescent chelate by the addition of 2-naphthoyltrifluoroacetone (2-NTA). The amount of europium chelate formed, a measure of cell death, is then quantified with a time-resolved fluorometer. The results of the assay are reproducible. Complement-mediated cytolysis measured by europium release was five times more sensitive than when measured by conventional 51Cr release and three times more sensitive than when measured by trypan blue exclusion. Because europium does not decay, target cells can be labelled in batches and stored frozen until use, which speeds up and simplifies the assay. Thus, the europium release assay is a simple, quantitative method for measuring complement-mediated cytolysis that is more sensitive and more rapid than conventional assays. PMID:1541836

  5. Accuracy of pitch matching significantly improved by live voice model.

    Science.gov (United States)

    Granot, Roni Y; Israel-Kolatt, Rona; Gilboa, Avi; Kolatt, Tsafrir

    2013-05-01

    Singing is, undoubtedly, the most fundamental expression of our musical capacity, yet an estimated 10-15% of Western population sings "out-of-tune (OOT)." Previous research in children and adults suggests, albeit inconsistently, that imitating a human voice can improve pitch matching. In the present study, we focus on the potentially beneficial effects of the human voice and especially the live human voice. Eighteen participants varying in their singing abilities were required to imitate in singing a set of nine ascending and descending intervals presented to them in five different randomized blocked conditions: live piano, recorded piano, live voice using optimal voice production, recorded voice using optimal voice production, and recorded voice using artificial forced voice production. Pitch and interval matching in singing were much more accurate when participants repeated sung intervals as compared with intervals played to them on the piano. The advantage of the vocal over the piano stimuli was robust and emerged clearly regardless of whether piano tones were played live and in full view or were presented via recording. Live vocal stimuli elicited higher accuracy than recorded vocal stimuli, especially when the recorded vocal stimuli were produced in a forced vocal production. Remarkably, even those who would be considered OOT singers on the basis of their performance when repeating piano tones were able to pitch match live vocal sounds, with deviations well within the range of what is considered accurate singing (M=46.0, standard deviation=39.2 cents). In fact, those participants who were most OOT gained the most from the live voice model. Results are discussed in light of the dual auditory-motor encoding of pitch analogous to that found in speech. PMID:23528675

  6. A Novel Navigation Robustness and Accuracy Improvement System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To address NASA's need for L1 C/A-based navigation with better anti-spoofing ability and higher accuracy, Broadata Communications, Inc. (BCI) proposes to develop a...

  7. An RFID implementation in the automotive industry - improving inventory accuracy

    OpenAIRE

    Hellström, Daniel; Wiberg, Mathias

    2010-01-01

    This paper explores and describes the impact of radio frequency identification (RFID) technology on inventory accuracy within a production and assembly plant, and proposes a model for assessing the impact of the technology on inventory accuracy. The empirical investigation, based on case study research, focuses on a RFID implementation at a supplier of bumper and spoiler systems to the automotive industry. The results indicate that RFID ensures that inventory inaccurac...

  8. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    Science.gov (United States)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite-difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. Also, the relative advantages of the various methods are described when the speed of computation is an important consideration.
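    To illustrate why a fourth-order scheme can offer dramatic gains over a second-order one, the sketch below compares central-difference approximations of a second derivative on a refined 1-D grid; it is a generic example, not the paper's field solver:

      import numpy as np

      def d2_second_order(f, x, h):
          # 3-point central difference, truncation error O(h^2)
          return (f(x + h) - 2.0*f(x) + f(x - h)) / h**2

      def d2_fourth_order(f, x, h):
          # 5-point central difference, truncation error O(h^4)
          return (-f(x + 2*h) + 16*f(x + h) - 30*f(x) + 16*f(x - h) - f(x - 2*h)) / (12.0*h**2)

      f, x, exact = np.sin, 1.0, -np.sin(1.0)
      for h in (0.1, 0.05, 0.025):
          e2 = abs(d2_second_order(f, x, h) - exact)
          e4 = abs(d2_fourth_order(f, x, h) - exact)
          print(f"h={h:<6} 2nd-order error={e2:.2e}  4th-order error={e4:.2e}")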

  9. Multimodal Biometric Systems - Study to Improve Accuracy and Performance

    CERN Document Server

    Sasidhar, K; Ramakrishna, Kolikipogu; KailasaRao, K

    2010-01-01

    Biometrics is the science and technology of measuring and analyzing biological data of the human body, extracting a feature set from the acquired data, and comparing this set against the template set in the database. Experimental studies show that unimodal biometric systems have many disadvantages regarding performance and accuracy. Multimodal biometric systems perform better than unimodal biometric systems, although they are more complex. We examine the accuracy and performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-The-Shelf (COTS) products. Here we discuss fingerprint and face biometric systems and the decision and fusion techniques used in these systems. We also discuss their advantages over unimodal biometric systems.
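    As a hedged illustration of the fusion techniques mentioned above (not the COTS products' actual algorithms), a common score-level approach is min-max normalization of each matcher's scores followed by a weighted sum; the weights and scores below are illustrative assumptions:

      import numpy as np

      def min_max_normalize(scores):
          """Map raw matcher scores onto [0, 1]."""
          s = np.asarray(scores, dtype=float)
          return (s - s.min()) / (s.max() - s.min())

      def weighted_sum_fusion(finger_scores, face_scores, w_finger=0.6, w_face=0.4):
          """Fuse fingerprint and face scores; weights are illustrative, not from the paper."""
          return w_finger * min_max_normalize(finger_scores) + w_face * min_max_normalize(face_scores)

      fused = weighted_sum_fusion([35, 60, 82, 20], [0.41, 0.77, 0.90, 0.30])
      print(fused, fused >= 0.5)   # accept/reject against an illustrative threshold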

  10. Improving Accuracy of Sleep Self-Reports through Correspondence Training

    Science.gov (United States)

    St. Peter, Claire C.; Montgomery-Downs, Hawley E.; Massullo, Joel P.

    2012-01-01

    Sleep insufficiency is a major public health concern, yet the accuracy of self-reported sleep measures is often poor. Self-report may be useful when direct measurement of nonverbal behavior is impossible, infeasible, or undesirable, as it may be with sleep measurement. We used feedback and positive reinforcement within a small-n multiple-baseline…

  11. On combining reference data to improve imputation accuracy.

    Directory of Open Access Journals (Sweden)

    Jun Chen

    Full Text Available Genotype imputation is an important tool in human genetics studies, which uses reference sets with known genotypes and prior knowledge on linkage disequilibrium and recombination rates to infer untyped alleles for human genetic variations at a low cost. The reference sets used by current imputation approaches are based on HapMap data, and/or based on recently available next-generation sequencing (NGS) data such as data generated by the 1000 Genomes Project. However, with different coverage and call rates for different NGS data sets, how to integrate NGS data sets of different accuracy as well as previously available reference data as references in imputation is not an easy task and has not been systematically investigated. In this study, we performed a comprehensive assessment of three strategies for using NGS data and previously available reference data in genotype imputation for both simulated data and empirical data, in order to obtain guidelines for optimal reference set construction. Briefly, we considered three strategies: strategy 1 uses one NGS data set as a reference; strategy 2 imputes samples by using multiple individual data sets of different accuracy as independent references and then combines the imputed samples, keeping those based on the higher-accuracy reference when overlapping occurs; and strategy 3 combines multiple available data sets into a single reference after imputing each other. We used three software packages (MACH, IMPUTE2 and BEAGLE) to assess the performance of these three strategies. Our results show that strategy 2 and strategy 3 have higher imputation accuracy than strategy 1. In particular, strategy 2 is the best strategy across all the conditions that we have investigated, producing the best imputation accuracy for rare variants. Our study is helpful in guiding the application of imputation methods in next-generation association analyses.

  12. TRAVEL-TIME SOURCE SPECIFIC STATION CORRECTION IMPROVES LOCATION ACCURACY

    OpenAIRE

    Alessandra, Giuntini; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma2, Roma, Italia; Valerio, Materni; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma2, Roma, Italia; Stefano, Chiappini; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma2, Roma, Italia; Roberto, Carluccio; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma2, Roma, Italia; Rodolfo, Console; Massimo, Chiappini; Istituto Nazionale di Geofisica e Vulcanologia, Sezione Roma2, Roma, Italia

    2013-01-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance to the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the s...

  13. Machine learning improves the accuracy of myocardial perfusion scintigraphy results

    International Nuclear Information System (INIS)

    Objective: Machine learning (ML), an artificial intelligence method, has in the last decade proved to be a useful tool in many fields of decision making, including some fields of medicine. According to reports, its decision accuracy usually exceeds that of humans. Aim: To assess the applicability of ML in interpreting stress myocardial perfusion scintigraphy results in the coronary artery disease diagnostic process. Patients and methods: The data of 327 patients who underwent planar stress myocardial perfusion scintigraphy were reevaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure was repeated with the ML program 'Naive Bayesian classifier'. As ML is able to handle any number of variables simultaneously, all available disease-related data (regarding history, habitus, risk factors, and stress results) were added. The sensitivity, specificity and accuracy of scintigraphy were expressed in this way as well. The results of both decision procedures were compared. Conclusion: Using the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy. In this way ML could be an important tool for myocardial perfusion scintigraphy decision making
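    The sensitivity, specificity and accuracy referred to above follow from a standard 2x2 comparison against the coronary angiography reference; a minimal sketch with illustrative counts (not the 327-patient data set):

      def diagnostic_metrics(tp, fp, tn, fn):
          """Sensitivity, specificity and overall accuracy from a 2x2 confusion table."""
          sensitivity = tp / (tp + fn)                 # diseased patients called positive
          specificity = tn / (tn + fp)                 # disease-free patients called negative
          accuracy = (tp + tn) / (tp + fp + tn + fn)   # all correct calls
          return sensitivity, specificity, accuracy

      sens, spec, acc = diagnostic_metrics(tp=150, fp=30, tn=120, fn=27)
      print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")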

  14. The Trade-Off between Accuracy and Accessibility of Syphilis Screening Assays

    OpenAIRE

    Smit, Pieter W.; David Mabey; John Changalucha; Julius Mngara; Benjamin Clark; Aura Andreasen; Jim Todd; Mark Urassa; Basia Zaba; Peeling, Rosanna W.

    2013-01-01

    The availability of rapid and sensitive methods to diagnose syphilis facilitates screening of pregnant women, which is one of the most cost-effective health interventions available. We have evaluated two screening methods in Tanzania: an enzyme immunoassay (EIA), and a point-of-care test (POCT). We evaluated the performance of each test against the Treponema pallidum particle agglutination assay (TPPA) as the reference method, and the accessibility of testing in a rural district of Tanzania. ...

  15. The trade-off between accuracy and accessibility of syphilis screening assays.

    Directory of Open Access Journals (Sweden)

    Pieter W Smit

    Full Text Available The availability of rapid and sensitive methods to diagnose syphilis facilitates screening of pregnant women, which is one of the most cost-effective health interventions available. We have evaluated two screening methods in Tanzania: an enzyme immunoassay (EIA), and a point-of-care test (POCT). We evaluated the performance of each test against the Treponema pallidum particle agglutination assay (TPPA) as the reference method, and the accessibility of testing in a rural district of Tanzania. The POCT was performed in the clinic on whole blood, while the other assays were performed on plasma in the laboratory. Samples were also tested by the rapid plasma reagin (RPR) test. With TPPA as reference assay, the sensitivity and specificity of EIA were 95.3% and 97.8%, and of the POCT were 59.6% and 99.4% respectively. The sensitivity of the POCT and EIA for active syphilis cases (TPPA positive and RPR titer ≥ 1/8) were 82% and 100% respectively. Only 15% of antenatal clinic attenders in this district visited a health facility with a laboratory capable of performing the EIA. Although it is less sensitive than EIA, its greater accessibility, and the fact that treatment can be given on the same day, means that the use of POCT would result in a higher proportion of women with syphilis receiving treatment than with the EIA in this district of Tanzania.

  16. An improved in vitro micronucleus assay to biological dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Ocampo, Ivette Z.; Okazaki, Kayo; Vieira, Daniel P., E-mail: dpvieira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2013-07-01

    Biological dosimetry is widely used to estimate the absorbed dose in people occupationally or accidentally exposed to radiation for a better medical treatment, minimizing the harmful effects. Many techniques and methods have been proposed to detect and quantify radioinduced lesions in genetic material, among them the micronucleus (MN) assay. In the present study, we propose an improved in vitro micronucleus technique that is rapid, sensitive and requires minor cell manipulation. Assays were carried out with human tumor cells (MCF-7) seeded (3x10^4 cells) on slides placed into Petri dishes. Adherent cells were maintained with RPMI medium, supplemented with fetal calf serum, 1% antibiotics and cytochalasin B (2 μg/mL), and incubated at 37 °C in the presence of 5% CO2 for 72 h. Cells were pre-treated for 24 h with aminoguanidine, a nitric oxide synthase inhibitor. Nitric oxide is an intracellular free radical involved in DNA double-strand break repair mechanisms. After incubation, adherent cells on slides were briefly fixed with paraformaldehyde and stained with acridine orange (100 μg/mL) for analysis by fluorescence microscopy. Dye fluorescence permitted accurate discrimination between nuclei and micronuclei (bright green) and cytoplasm (red), and made possible a faster counting of binucleated cells. Aminoguanidine (2 mM) induced a significant increase (p<0.05) in the frequency of binucleated cells with micronuclei and in the number of micronuclei per binucleated cell. The data showed that the proposed modifications permit understanding of an early aspect of NO inhibition and suggest an improved protocol for MN assays. (author)

  17. Diagnostic accuracy of a loop-mediated isothermal PCR assay for detection of Orientia tsutsugamushi during acute Scrub Typhus infection.

    Directory of Open Access Journals (Sweden)

    Daniel H Paris

    2011-09-01

    Full Text Available BACKGROUND: There is an urgent need to develop rapid and accurate point-of-care (POC) technologies for acute scrub typhus diagnosis in low-resource, primary health care settings to guide clinical therapy. METHODOLOGY/PRINCIPAL FINDINGS: In this study we present the clinical evaluation of a loop-mediated isothermal PCR assay (LAMP) in the context of a prospective fever study, including 161 patients from scrub typhus-endemic Chiang Rai, northern Thailand. A robust reference comparator set comprising the following 'scrub typhus infection criteria' (STIC) was used: (a) a positive cell culture isolate and/or (b) an admission IgM titer ≥1:12,800 using the 'gold standard' indirect immunofluorescence assay (IFA) and/or (c) a 4-fold rising IFA IgM titer and/or (d) a positive result in at least two out of three PCR assays. Compared to the STIC criteria, all PCR assays (including LAMP) demonstrated high specificity ranging from 96-99%, with sensitivities varying from 40% to 56%, similar to the antibody-based rapid test, which had a sensitivity of 47% and a specificity of 95%. CONCLUSIONS/SIGNIFICANCE: The diagnostic accuracy of the LAMP assay was similar to real-time and nested conventional PCR assays, but superior to the antibody-based rapid test in the early disease course. The combination of DNA- and antibody-based detection methods increased sensitivity with minimal reduction of specificity, and expanded the timeframe of adequate diagnostic coverage throughout the acute phase of scrub typhus.

  18. Improving accuracy of in situ gamma-ray spectrometry

    OpenAIRE

    Boson, Jonas

    2008-01-01

    Gamma-ray spectrometry measurements performed on site, or “in situ”, is a widely used and powerful method that can be employed both to identify and quantify ground deposited radionuclides. The purpose of this thesis is to improve the calibration of high purity germanium (HPGe) detectors for in situ measurements, and calculate the combined uncertainty and potential systematic effects. An improved semi-empirical calibration method is presented, based on a novel expression for the intrinsic dete...

  19. AN EVALUATION OF USA UNEMPLOYMENT RATE FORECASTS IN TERMS OF ACCURACY AND BIAS. EMPIRICAL METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    BRATU (SIMIONESCU) MIHAELA

    2013-02-01

    Full Text Available The most accurate forecasts for the USA unemployment rate over the horizon 2001-2012, according to Theil's U1 coefficient and to multi-criteria ranking methods, were provided by the International Monetary Fund (IMF), followed by other institutions: the Organisation for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO) and Blue Chips (BC). The multi-criteria ranking methods were applied to resolve the divergence in assessing accuracy, observed when computing five chosen measures of accuracy: Theil's U1 and U2 statistics, mean error, mean squared error, and root mean squared error. Some strategies for improving the accuracy of the predictions provided by the four institutions, which are biased in all cases except BC, were proposed. However, these methods did not generate unbiased forecasts. The predictions made by the IMF and OECD for 2001-2012 can be improved by constructing combined forecasts, with the INV approach and the scheme proposed by the author providing the most accurate expectations. The BC forecasts can be improved by smoothing the predictions using the Holt-Winters method and the Hodrick-Prescott filter.
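    The accuracy measures named above can be computed directly from the actual and forecast series; a sketch under the usual definitions, with Theil's U1 in its common normalized-RMSE form (the series below are illustrative, not the institutions' forecasts):

      import numpy as np

      def forecast_accuracy(actual, predicted):
          a, p = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
          err = a - p
          me = err.mean()                      # mean error (its sign signals bias)
          mse = (err ** 2).mean()              # mean squared error
          rmse = np.sqrt(mse)                  # root mean squared error
          u1 = rmse / (np.sqrt((a ** 2).mean()) + np.sqrt((p ** 2).mean()))   # Theil's U1
          return {"ME": me, "MSE": mse, "RMSE": rmse, "U1": u1}

      print(forecast_accuracy([5.8, 9.3, 9.6, 8.9], [5.5, 8.7, 9.8, 9.1]))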

  20. A Cascaded Fingerprint Quality Assessment Scheme for Improved System Accuracy

    Directory of Open Access Journals (Sweden)

    Zia Saquib

    2011-03-01

    Full Text Available Poor-quality images mostly result in spurious or missing features, which further degrade the overall performance of fingerprint recognition systems. This paper proposes a reconfigurable scheme of quality checks at two different levels: (i) at the raw image level and (ii) at the feature level. At the first level, ellipse properties are calculated through analysis of statistical attributes of the captured raw image. At the second level, the singularity points (core and delta) are identified and extracted (if any). This information is used, as quality measures, in a cascaded manner to block or pass the image. This model is tested on both publicly available (Cross Match Verifier 300 sensor) and proprietary (Lumidigm Venus V100 OEM Module sensor) fingerprint databases scanned at 500 dpi. The experimental results show that this cascaded arrangement of quality barricades correctly blocks poor-quality images and hence elevates the overall system accuracy: with quality checks, both FNMR and FMR dropped significantly, to 9.52% and 0.26% respectively for the Cross Match dataset and 2.17% and 2.16% respectively for the Lumidigm dataset.

  1. Singing Video Games May Help Improve Pitch-Matching Accuracy

    Science.gov (United States)

    Paney, Andrew S.

    2015-01-01

    The purpose of this study was to investigate the effect of singing video games on the pitch-matching skills of undergraduate students. Popular games like "Rock Band" and "Karaoke Revolutions" rate players' singing based on the correctness of the frequency of their sung response. Players are motivated to improve their…

  2. Image processing for improved eye-tracking accuracy

    Science.gov (United States)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.

  3. Does naming accuracy improve through self-monitoring of errors?

    Science.gov (United States)

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia (PWA). Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings. PMID:26863091

  4. Improving Accuracy in Arrhenius Models of Cell Death: Adding a Temperature-Dependent Time Delay.

    Science.gov (United States)

    Pearce, John A

    2015-12-01

    The Arrhenius formulation for single-step irreversible unimolecular reactions has been used for many decades to describe the thermal damage and cell death processes. Arrhenius predictions are acceptably accurate for structural proteins, for some cell death assays, and for cell death at higher temperatures in most cell lines, above about 55 °C. However, in many cases--and particularly at hyperthermic temperatures, between about 43 and 55 °C--the particular intrinsic cell death or damage process under study exhibits a significant "shoulder" region that constant-rate Arrhenius models are unable to represent with acceptable accuracy. The primary limitation is that Arrhenius calculations always overestimate the cell death fraction, which leads to severely overoptimistic predictions of heating effectiveness in tumor treatment. Several more sophisticated mathematical model approaches have been suggested and show much-improved performance. But simpler models that have adequate accuracy would provide useful and practical alternatives to intricate biochemical analyses. Typical transient intrinsic cell death processes at hyperthermic temperatures consist of a slowly developing shoulder region followed by an essentially constant-rate region. The shoulder regions have been demonstrated to arise chiefly from complex functional protein signaling cascades that generate delays in the onset of the constant-rate region, but may involve heat shock protein activity as well. This paper shows that acceptably accurate and much-improved predictions in the simpler Arrhenius models can be obtained by adding a temperature-dependent time delay. Kinetic coefficients and the appropriate time delay are obtained from the constant-rate regions of the measured survival curves. The resulting predictions are seen to provide acceptably accurate results while not overestimating cell death. The method can be relatively easily incorporated into numerical models. Additionally, evidence is presented
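    A minimal numerical sketch of the idea (not the paper's implementation; the coefficients and delay model below are placeholders): the usual Arrhenius damage integral is accumulated only after a temperature-dependent delay has elapsed, which reproduces the shoulder before the constant-rate region.

      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      def surviving_fraction(times_s, temps_K, A, Ea, delay_fn):
          """Arrhenius survival estimate with a temperature-dependent time delay.
          times_s, temps_K : uniformly sampled temperature history
          A, Ea            : frequency factor (1/s) and activation energy (J/mol)
          delay_fn         : maps temperature (K) to the shoulder delay (s)
          """
          rate = A * np.exp(-Ea / (R * temps_K))
          active = times_s >= delay_fn(temps_K)      # damage accrues only after the delay
          dt = times_s[1] - times_s[0]               # uniform sampling assumed
          omega = np.sum(rate[active]) * dt          # damage integral Omega(t)
          return np.exp(-omega)                      # surviving fraction S = exp(-Omega)

      # Placeholder coefficients and a constant 44 degC exposure, for illustration only.
      t = np.linspace(0.0, 1800.0, 1801)
      T = np.full_like(t, 273.15 + 44.0)
      print(surviving_fraction(t, T, A=1e75, Ea=5.0e5, delay_fn=lambda Tk: 600.0))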

  5. Improving the Accuracy of Estimation of Climate Extremes

    Science.gov (United States)

    Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.

    2010-12-01

    Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.

  6. SPHGal: Smoothed Particle Hydrodynamics with improved accuracy for Galaxy simulations

    CERN Document Server

    Hu, Chia-Yu; Walch, Stefanie; Moster, Benjamin P; Oser, Ludwig

    2014-01-01

    We present the smoothed-particle hydrodynamics implementation SPHGal which incorporates several recent developments into the GADGET code. This includes a pressure-entropy formulation of SPH with a Wendland kernel, a higher order estimate of velocity gradients, a modified artificial viscosity switch with a strong limiter, and artificial conduction of thermal energy. We conduct a series of idealized hydrodynamic tests and show that while the pressure-entropy formulation is ideal for resolving fluid mixing at contact discontinuities, it performs conspicuously worse when strong shocks are involved due to the large entropy discontinuities. Including artificial conduction at shocks greatly improves the results. The Kelvin-Helmholtz instability can be resolved properly and dense clouds in the blob test dissolve qualitatively in agreement with other improved SPH implementations. We further perform simulations of an isolated Milky Way like disk galaxy and find a feedback-induced instability developing if too much arti...

  7. Thermal dynamics on the lattice with exponentially improved accuracy

    CERN Document Server

    Pawlowski, Jan

    2016-01-01

    We present a novel simulation prescription for thermal quantum fields on a lattice that operates directly in imaginary frequency space. By distinguishing initial conditions from quantum dynamics it provides access to correlation functions also outside of the conventional Matsubara frequencies $\omega_n = 2\pi n T$. In particular it resolves their frequency dependence between $\omega = 0$ and $\omega_1 = 2\pi T$, where the thermal physics $\omega \sim T$ of e.g. transport phenomena is dominantly encoded. Real-time spectral functions are related to these correlators via an integral transform with rational kernel, so their unfolding is exponentially improved compared to Euclidean simulations. We demonstrate this improvement within a $0+1$-dimensional scalar field theory and show that spectral features inaccessible in standard Euclidean simulations are quantitatively captured.

  8. Improved DORIS accuracy for precise orbit determination and geodesy

    Science.gov (United States)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have improved significantly. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  9. Diagnostic accuracy of Microscopic Observation Drug Susceptibility (MODS) assay for pediatric tuberculosis in Hanoi, Vietnam.

    Directory of Open Access Journals (Sweden)

    Sinh Thi Tran

    Full Text Available INTRODUCTION: Microscopic Observation Drug Susceptibility (MODS) has been shown to be an effective and rapid technique for early diagnosis of tuberculosis (TB). Thus far only a limited number of studies evaluating MODS have been performed in children and in extra-pulmonary tuberculosis. This study aims to assess the relative accuracy and time to positive culture of MODS for TB diagnosis in children admitted to a general pediatric hospital in Vietnam. METHODS/PRINCIPAL FINDINGS: Specimens from children with suspected TB were tested by smear, MODS and Lowenstein-Jensen agar (LJ). 1129 samples from 705 children were analyzed, including sputum (n=59), gastric aspirate (n=775), CSF (n=148), pleural fluid (n=33), BAL (n=41), tracheal fluid (n=45) and other (n=28). 113 TB cases were defined based on the "clinical diagnosis" (confirmed and probable groups) as the reference standard, in which 26% (n=30) were diagnosed as extra-pulmonary TB. Analysis by patient shows that the overall sensitivity and specificity of smear, LJ and MODS against "clinical diagnosis" were 8.8% and 100%, 38.9% and 100%, and 46% and 99.5% respectively, with MODS significantly more sensitive than LJ culture (P=0.02). When analyzed by sample type, the sensitivity of MODS was significantly higher than LJ for gastric aspirates (P=0.004). The time to detection was also significantly shorter for MODS than LJ (7 days versus 32 days, P<0.001). CONCLUSION: MODS is a sensitive and rapid culture technique for detecting TB in children. As MODS culture can be performed at a BSL2 facility and is inexpensive, it can therefore be recommended as a routine test for children with symptoms suggestive of TB in resource-limited settings.

  10. Improving the accuracy of maternal mortality and pregnancy related death.

    Science.gov (United States)

    Schaible, Burk

    2014-01-01

    Comparing abortion-related death and pregnancy-related death remains difficult due to the limitations within the Abortion Mortality Surveillance System and the International Statistical Classification of Diseases and Related Health Problems (ICD). These methods lack a systematic and comprehensive method of collecting complete records regarding abortion outcomes in each state and fail to properly identify longitudinal cause of death related to induced abortion. This article seeks to analyze the current method of comparing abortion-related death with pregnancy-related death and provide solutions to improve data collection regarding these subjects.

  11. VERIFICATION AND IMPROVING PLANIMETRIC ACCURACY OF AIRBORNE LASER SCANNING DATA WITH USING PHOTOGRAMMETRIC DATA

    OpenAIRE

    Bakuła, K.; Dominik, W.; Ostrowski, W.

    2014-01-01

    In this study, the planimetric accuracy of LIDAR data was verified using the intensity of the laser beam reflection and point cloud modelling results. The presented research was the basis for improving the accuracy of products derived from LIDAR data processing, which is particularly important in issues related to surveying measurements. In the experiment, the true-ortho generated from large-format aerial images with known exterior orientation was used to check the planimetric accuracy ...

  12. Improving the accuracy of canal seepage detection through geospatial techniques

    Science.gov (United States)

    Arshad, Muhammad

    With climate change, many western states in the United States are experiencing drought conditions. Numerous irrigation districts are losing significant amounts of water from their canal systems due to leakage. Every year, on average, 2 million acres of prime cropland in the US are lost to soil erosion, waterlogging and salinity. Lining of canals could save an enormous amount of water for irrigating crops, but at present, due to soaring costs of construction and environmental mitigation, adopting such a program on a large scale would be prohibitively expensive. Conventional techniques of seepage detection are expensive, time consuming and labor intensive, besides being not very accurate. Technological advancements in remote sensing have made it possible to investigate irrigation canals for seepage site identification. In this research, band-9 in the [NIR] region and band-45 in the [TIR] region of airborne MASTER data have been utilized to highlight anomalies along an irrigation canal at Phoenix, Arizona. High resolution (1 to 4 meter pixels) satellite imagery provided by private companies for scientific research and made available by Google to the public on Google Earth is then successfully used to separate those anomalies into water activity sites, natural vegetation, and man-made structures, thereby greatly improving the seepage detection ability of airborne remote sensing. This innovative technique is much faster and more cost effective compared to conventional techniques and past airborne remote sensing techniques for verification of anomalies along irrigation canals. This technique also solves one of the long standing problems of discriminating false impressions of seepage sites due to dense natural vegetation, terrain relief and low depressions of natural drainages from true water related activity sites.

  13. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  14. A priori estimation of accuracy and of the number of wells to be employed in limiting dilution assays

    Directory of Open Access Journals (Sweden)

    J.G. Chaui-Berlinck

    2000-08-01

    Full Text Available The use of limiting dilution assay (LDA) for assessing the frequency of responders in a cell population is a method extensively used by immunologists. A series of studies addressing the statistical method of choice in an LDA have been published. However, none of these studies has addressed the point of how many wells should be employed in a given assay. The objective of this study was to demonstrate how a researcher can predict the number of wells that should be employed in order to obtain results with a given accuracy, and, therefore, to help in choosing a better experimental design to fulfill one's expectations. We present the rationale underlying the expected relative error computation based on simple binomial distributions. A series of simulated in machina experiments were performed to test the validity of the a priori computation of expected errors, thus confirming the predictions. The step-by-step procedure of the relative error estimation is given. We also discuss the constraints under which an LDA must be performed.
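    A hedged sketch of the binomial reasoning summarized above (the function names and numbers are illustrative; the paper's step-by-step procedure should be followed for the exact computation): under the single-hit Poisson model the probability that a well is negative is p0 = exp(-f·c), and the binomial variability of the observed negative-well fraction propagates into the relative error of the estimated frequency f.

      import math

      def expected_relative_error(freq, cells_per_well, n_wells):
          """Approximate relative error of the LDA frequency estimate.
          Assumes the single-hit Poisson model and delta-method propagation of the
          binomial error in the negative-well fraction (f = -ln(p0) / cells_per_well)."""
          p0 = math.exp(-freq * cells_per_well)            # probability a well is negative
          se_p0 = math.sqrt(p0 * (1.0 - p0) / n_wells)     # binomial SE of the observed fraction
          se_freq = se_p0 / (p0 * cells_per_well)          # delta method for f = -ln(p0)/c
          return se_freq / freq

      def wells_needed(freq, cells_per_well, target_rel_error):
          """Smallest number of wells giving at most the target relative error."""
          n = 2
          while expected_relative_error(freq, cells_per_well, n) > target_rel_error:
              n += 1
          return n

      # Example: ~1 responder per 10,000 cells, 5,000 cells per well, 20% relative error.
      print(wells_needed(freq=1e-4, cells_per_well=5000, target_rel_error=0.20))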

  15. Collaborative study for the validation of an improved HPLC assay for recombinant IFN-alfa-2.

    Science.gov (United States)

    Jönsson, K H; Daas, A; Buchheit, K H; Terao, E

    2016-01-01

    The current European Pharmacopoeia (Ph. Eur.) texts for Interferon (IFN)-alfa-2 include a nonspecific photometric protein assay using albumin as calibrator and a highly variable cell-based assay for the potency determination of the protective effects. A request was expressed by the Official Medicines Control Laboratories (OMCLs) for improved methods for the batch control of recombinant interferon alfa-2 bulk and market surveillance testing of finished products, including those formulated with Human Serum Albumin (HSA). A HPLC method was developed at the Medical Products Agency (MPA, Sweden) for the testing of IFN-alfa-2 products. An initial collaborative study run under the Biological Standardisation Programme (BSP; study code BSP039) revealed the need for minor changes to improve linearity of the calibration curves, assay reproducibility and robustness. The goal of the collaborative study, coded BSP071, was to transfer and further validate this improved HPLC method. Ten laboratories participated in the study. Four marketed IFN-alfa-2 preparations (one containing HSA) together with the Ph. Eur. Chemical Reference Substance (CRS) for IFN-alfa-2a and IFN-alfa-2b, and in-house reference standards from two manufacturers were used for the quantitative assay. The modified method was successfully transferred to all laboratories despite local variation in equipment. The resolution between the main and the oxidised forms of IFN-alfa-2 was improved compared to the results from the BSP039 study. The improved method even allowed partial resolution of an extra peak after the principal peak. Symmetry of the main IFN peak was acceptable for all samples in all laboratories. Calibration curves established with the Ph. Eur. IFN-alfa-2a and IFN-alfa-2b CRSs showed excellent linearity with intercepts close to the origin and coefficients of determination greater than 0.9995. Assay repeatability, intermediate precision and reproducibility varied with the tested sample within acceptable

  16. Static beacons based indoor positioning method for improving room-level accuracy

    OpenAIRE

    Miekk-oja, Ville

    2015-01-01

    Demand for indoor positioning applications has been growing lately. Indoor positioning is used, for example, in hospitals for patient tracking and in airports for finding the correct gates. Requirements in indoor positioning have become stricter, with demands for higher accuracy. This thesis presents a method for improving the room-level accuracy of a positioning system by using static beacons. As static beacons, Bluetooth low energy modules will be used to test how much they can improve...

  17. A Method to Improve Mineral Identification Accuracy Based on Hyperspectral Data

    International Nuclear Information System (INIS)

    To improve the mineral identification accuracy of the rapid quantitative identification model, the noise was filtered in segments based on the wavelengths of the altered-mineral absorption peaks, and a regional spectral library fitted to the study area was established. The filtered spectra were analyzed by the method with the regional spectral library. Compared with the original mineral identification result, the average efficiency rate was improved by 5.1% and the average accuracy rate was improved by 17.7%. The results were further optimized by the method based on the position of the altered-mineral absorption peak, which should allow more accurate mineral identification in the future

  18. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Ota, Keishin, E-mail: ota@microphase.co.jp [Microphase Co., Ltd., Onigakubo 1147-9, Tsukuba, Ibaragi 300-2651 (Japan); Matsuda, Tsuyoshi [Japan Science and Technology Agency, Kawaguchi-shi, Saitama 332-0012 (Japan); Tonomura, Akira [Advanced Science Institute, RIKEN, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Okinawa Institute of Science and Technology, Graduate University, Kunigami, Okinawa 904-0495 (Japan); Central Research Laboratory, Hitachi, Ltd., Hatoyama, Saitama 350-0395 (Japan)

    2012-07-15

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of the mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5. -- Highlights: • A modified phase-shifting electron holography was proposed. • The time variations of mean intensity and contrast of holograms were corrected. • These corrections lead to a great improvement of the resultant phase accuracy. • A phase accuracy of about 1/4000 rad was achieved from experimental results.

  19. Improvement of Accuracy in Damage Localization Using Frequency Slice Wavelet Transform

    OpenAIRE

    Xinglong Liu; Zhongwei Jiang; Zhonghong Yan

    2012-01-01

    Damage localization is a primary objective of damage identification. This paper presents damage localization in beam structure using impact-induced Lamb wave and Frequency Slice Wavelet Transform (FSWT). FSWT is a new time-frequency analysis method and has the adaptive resolution feature. The time-frequency resolution is a vital factor affecting the accuracy of damage localization. In FSWT there is a unique parameter controlling the time-frequency resolution. To improve the accuracy of damage...

  20. Improved assay for measuring heparin binding to bull sperm

    International Nuclear Information System (INIS)

    The binding of heparin to sperm has been used to study capacitation and to rank the relative fertility of bulls. Previous binding assays were laborious, used 10^7 sperm per assay point, and required large amounts of radiolabeled heparin. A modified heparin-binding assay is described that used only 5 x 10^4 cells per incubation well and required reduced amounts of [3H]heparin. The assay was performed in 96-well Millititer plates, enabling easy incubation and filtering. Dissociation constants and concentrations of binding sites did not differ if analyzed by Scatchard plots, Woolf plots, or by log-logit transformed weighted nonlinear least squares regression, except in the case of outliers. In such cases, Scatchard analysis was more sensitive to outliers. Nonspecific binding was insignificant using nonlinear logistic fit regression and a proportion graph. The effects of multiple freeze-thawing of sperm in either a commercial egg yolk extender, 40 mM Tris buffer with 8% glycerol, or 40 mM Tris buffer without glycerol were tested. Freeze-thawing in extender did not affect the dissociation constant or the concentration of binding sites. However, freeze-thawing three times in 40 mM Tris reduced the concentration of binding sites and lowered the dissociation constant (raised the affinity). The inclusion of glycerol in the 40 mM Tris did not significantly affect the estimated dissociation constant or the concentration of binding sites as compared to 40 mM Tris without glycerol
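    As a rough illustration of the binding analysis described above (not the authors' code; the concentrations and counts are made up), the one-site model B = Bmax·F/(Kd + F) can be fit directly by nonlinear least squares, which avoids the outlier sensitivity of the Scatchard transformation noted in the abstract:

      import numpy as np
      from scipy.optimize import curve_fit

      def one_site(free, bmax, kd):
          """Specific binding for a single class of binding sites."""
          return bmax * free / (kd + free)

      # Illustrative free [3H]heparin concentrations (nM) and bound counts.
      free = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
      bound = np.array([95.0, 170.0, 280.0, 400.0, 510.0, 580.0, 620.0])

      (bmax, kd), _ = curve_fit(one_site, free, bound, p0=(700.0, 5.0))
      print(f"Bmax ~ {bmax:.0f} (arbitrary units), Kd ~ {kd:.1f} nM")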

  1. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers.

    Science.gov (United States)

    Thompson, Clarissa A; Opfer, John E

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy.

  2. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Full Text Available Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy.

  3. Training readers to improve their accuracy in grading Crohn's disease activity on MRI

    International Nuclear Information System (INIS)

    To prospectively evaluate if training with direct feedback improves grading accuracy of inexperienced readers for Crohn's disease activity on magnetic resonance imaging (MRI). Thirty-one inexperienced readers assessed 25 cases as a baseline set. Subsequently, all readers received training and assessed 100 cases with direct feedback per case, randomly assigned to four sets of 25 cases. The cases in set 4 were identical to the baseline set. Grading accuracy, understaging, overstaging, mean reading times and confidence scores (scale 0-10) were compared between baseline and set 4, and between the four consecutive sets with feedback. Proportions of grading accuracy, understaging and overstaging per set were compared using logistic regression analyses. Mean reading times and confidence scores were compared by t-tests. Grading accuracy increased from 66 % (95 % CI, 56-74 %) at baseline to 75 % (95 % CI, 66-81 %) in set 4 (P = 0.003). Understaging decreased from 15 % (95 % CI, 9-23 %) to 7 % (95 % CI, 3-14 %) (P < 0.001). Overstaging did not change significantly (20 % vs 19 %). Mean reading time decreased from 6 min 37 s to 4 min 35 s (P < 0.001). Mean confidence increased from 6.90 to 7.65 (P < 0.001). During training, overall grading accuracy, understaging, mean reading times and confidence scores improved gradually. Inexperienced readers need training with at least 100 cases to achieve the literature reported grading accuracy of 75 %. (orig.)

  4. Qualification of standard membrane-feeding assay with Plasmodium falciparum malaria and potential improvements for future assays.

    Directory of Open Access Journals (Sweden)

    Kazutoyo Miura

    Full Text Available Vaccines that interrupt malaria transmission are of increasing interest and a robust functional assay to measure this activity would promote their development by providing a biologically relevant means of evaluating potential vaccine candidates. Therefore, we aimed to qualify the standard membrane-feeding assay (SMFA). The assay measures the transmission-blocking activity of antibodies by feeding cultured P. falciparum gametocytes to Anopheles mosquitoes in the presence of the test antibodies and measuring subsequent mosquito infection. The International Conference on Harmonisation (ICH) Harmonised Tripartite Guideline Q2(R1) details characteristics considered in assay validation. Of these characteristics, we decided to qualify the SMFA for Precision, Linearity, Range and Specificity. The transmission-blocking 4B7 monoclonal antibody was tested over 6 feeding experiments at several concentrations to determine four suitable concentrations that were tested in triplicate in the qualification experiments (3 additional feeds) to evaluate Precision, Linearity and Range. For Specificity, 4B7 was tested in the presence of normal mouse IgG. We determined intra- and inter-assay variability of % inhibition of mean oocyst intensity at each concentration of 4B7 (lower concentrations showed higher variability). We also showed that % inhibition was dependent on 4B7 concentration and that the activity is specific to 4B7. Since obtaining empirical data is time-consuming, we generated a model using data from all 9 feeds and simulated the effects of different parameters on final readouts to improve the assay procedure and analytical methods for future studies. For example, we estimated the effect of the number of mosquitoes dissected on the variability of % inhibition, and simulated the relationship between % inhibition in oocyst intensity and % inhibition of prevalence of infected mosquitoes at different mean oocyst counts in the control. SMFA is one of the few biological assays used in
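    The two SMFA readouts mentioned above reduce to simple ratios of mosquito infection in the test feed versus the control feed; a minimal sketch with made-up oocyst counts (not the 4B7 data):

      import numpy as np

      def smfa_inhibition(control_oocysts, test_oocysts):
          """Percent inhibition of oocyst intensity and of infection prevalence."""
          control = np.asarray(control_oocysts, dtype=float)
          test = np.asarray(test_oocysts, dtype=float)
          intensity_inhibition = 100.0 * (1.0 - test.mean() / control.mean())
          prevalence_inhibition = 100.0 * (1.0 - (test > 0).mean() / (control > 0).mean())
          return intensity_inhibition, prevalence_inhibition

      # Made-up oocyst counts per dissected mosquito (control feed vs. antibody feed).
      control = [12, 7, 0, 22, 15, 9, 3, 18, 11, 6]
      test = [2, 0, 0, 5, 1, 0, 3, 0, 1, 0]
      print(smfa_inhibition(control, test))   # intensity and prevalence inhibition differ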

  5. Improvement of Accuracy in Damage Localization Using Frequency Slice Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Xinglong Liu

    2012-01-01

    Full Text Available Damage localization is a primary objective of damage identification. This paper presents damage localization in a beam structure using impact-induced Lamb waves and the Frequency Slice Wavelet Transform (FSWT). FSWT is a new time-frequency analysis method and has an adaptive resolution feature. The time-frequency resolution is a vital factor affecting the accuracy of damage localization. In FSWT there is a unique parameter controlling the time-frequency resolution. To improve the accuracy of damage localization, a generalized criterion is proposed to determine the parameter value for achieving a suitable time-frequency resolution. For damage localization, the group velocity dispersion curve (GVDC) of A0 Lamb waves in the beam is first accurately estimated using FSWT, and then the arrival times of the reflection wave from the crack for some individual frequency components are determined. An average operation on the calculated propagation distances is then performed to further improve the accuracy of damage localization.
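    A hedged sketch of the localization step described above (illustrative numbers, not the authors' implementation): the reflection travels from the sensor to the crack and back, so for each frequency slice the crack distance follows from the group velocity and the arrival-time difference, and the per-frequency estimates are then averaged.

      import numpy as np

      def crack_distance(group_velocity_m_s, t_incident_s, t_reflection_s):
          """Sensor-to-crack distance for one frequency component.
          The reflected wave covers the sensor-crack path twice, hence the factor 1/2."""
          return 0.5 * group_velocity_m_s * (t_reflection_s - t_incident_s)

      # Per-frequency inputs: group velocities from the measured GVDC, arrival times from
      # the time-frequency ridges. Values below are illustrative only.
      v_g = np.array([1020.0, 1100.0, 1180.0])        # m/s at three frequency slices
      t_in = np.array([0.40e-3, 0.38e-3, 0.36e-3])    # direct-wave arrival times (s)
      t_ref = np.array([1.22e-3, 1.14e-3, 1.06e-3])   # crack-reflection arrival times (s)

      estimates = crack_distance(v_g, t_in, t_ref)
      print(estimates, estimates.mean())              # averaging sharpens the location estimate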

  6. New polymorphic tetranucleotide microsatellites improve scoring accuracy in the bottlenose dolphin Tursiops aduncus

    NARCIS (Netherlands)

    Nater, Alexander; Kopps, Anna M.; Kruetzen, Michael

    2009-01-01

    We isolated and characterized 19 novel tetranucleotide microsatellite markers in the Indo-Pacific bottlenose dolphin (Tursiops aduncus) in order to improve genotyping accuracy in applications like large-scale population-wide paternity and relatedness assessments. One hundred T. aduncus from Shark Ba

  7. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    Science.gov (United States)

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a number line.…

  8. Improving the accuracy and reliability of remote system-calibration-free eye-gaze tracking.

    Science.gov (United States)

    Hennessey, Craig A; Lawrence, Peter D

    2009-07-01

    Remote eye-gaze tracking provides a means for nonintrusive tracking of the point-of-gaze (POG) of a user. For application as a user interface for the disabled, a remote system that is noncontact, reliable, and permits head motion is very desirable. The system-calibration-free pupil-corneal reflection (P-CR) vector technique for POG estimation is a popular method due to its simplicity, however, accuracy has been shown to be degraded with head displacement. Model-based POG-estimation methods were developed, which improve system accuracy during head displacement, however, these methods require complex system calibration in addition to user calibration. In this paper, the use of multiple corneal reflections and point-pattern matching allows for a scaling correction of the P-CR vector for head displacements as well as an improvement in system robustness to corneal reflection distortion, leading to improved POG-estimation accuracy. To demonstrate the improvement in performance, the enhanced multiple corneal reflection P-CR method is compared to the monocular and binocular accuracy of the traditional single corneal reflection P-CR method, and a model-based method of POG estimation for various head displacements. PMID:19272975

  9. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    Science.gov (United States)

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  10. Operational amplifier speed and accuracy improvement analog circuit design with structural methodology

    CERN Document Server

    Ivanov, Vadim V

    2004-01-01

    Operational Amplifier Speed and Accuracy Improvement proposes a new methodology for the design of analog integrated circuits. The usefulness of this methodology is demonstrated through the design of an operational amplifier. This methodology consists of the following iterative steps: description of the circuit functionality at a high level of abstraction using signal flow graphs; equivalent transformations and modifications of the graph to the form where all important parameters are controlled by dedicated feedback loops; and implementation of the structure using a library of elementary cells. Operational Amplifier Speed and Accuracy Improvement shows how to choose structures and design circuits which improve an operational amplifier's important parameters such as speed to power ratio, open loop gain, common-mode voltage rejection ratio, and power supply rejection ratio. The same approach is used to design clamps and limiting circuits which improve the performance of the amplifier outside of its linear operat...

  11. Evaluation of the diagnostic accuracy of a new dengue IgA capture assay (Platelia Dengue IgA Capture, Bio-Rad) for dengue infection detection.

    Directory of Open Access Journals (Sweden)

    Sophie De Decker

    2015-03-01

    Full Text Available Considering the short lifetime of IgA antibodies in serum and the key advantages of antibody detection ELISAs in terms of sensitivity and specificity, Bio-Rad has just developed a new ELISA test based on the detection of specific anti-dengue IgA. This study has been carried out to assess the performance of this Platelia Dengue IgA Capture assay for dengue infection detection. A total of 184 well-characterized samples provided by the French Guiana NRC sera collection (Laboratory of Virology, Institut Pasteur in French Guiana) were selected among samples collected between 2002 and 2013 from patients exhibiting a dengue-like syndrome. A first group included 134 sera from confirmed dengue-infected patients, and a second included 50 sera from non-dengue infected patients, all collected between day 3 and day 15 after the onset of fever. Dengue infection diagnoses were all confirmed using reference assays by direct virological identification using RT-PCR or virus culture on acute sera samples or on paired acute-phase sera samples of selected convalescent sera. This study revealed: (i) a good overall sensitivity and specificity of the IgA index test, i.e., 93% and 88% respectively, indicating its good correlation to acute dengue diagnosis; and (ii) a good concordance with the Panbio IgM capture ELISA. Because of the shorter persistence of dengue virus-specific IgA than IgM, these results underlined the relevance of this new test, which could significantly improve dengue diagnosis accuracy, especially in countries where dengue virus is (hyper-)endemic. It would allow for additional refinement of dengue diagnostic strategy.

  12. Evaluation of the diagnostic accuracy of a new dengue IgA capture assay (Platelia Dengue IgA Capture, Bio-Rad) for dengue infection detection.

    Science.gov (United States)

    De Decker, Sophie; Vray, Muriel; Sistek, Viridiana; Labeau, Bhety; Enfissi, Antoine; Rousset, Dominique; Matheus, Séverine

    2015-03-01

    Considering the short lifetime of IgA antibodies in serum and the key advantages of antibody detection ELISAs in terms of sensitivity and specificity, Bio-Rad has just developed a new ELISA test based on the detection of specific anti-dengue IgA. This study has been carried out to assess the performance of this Platelia Dengue IgA Capture assay for dengue infection detection. A total of 184 well-characterized samples provided by the French Guiana NRC sera collection (Laboratory of Virology, Institut Pasteur in French Guiana) were selected among samples collected between 2002 and 2013 from patients exhibiting a dengue-like syndrome. A first group included 134 sera from confirmed dengue-infected patients, and a second included 50 sera from non-dengue infected patients, all collected between day 3 and day 15 after the onset of fever. Dengue infection diagnoses were all confirmed using reference assays by direct virological identification using RT-PCR or virus culture on acute sera samples or on paired acute-phase sera samples of selected convalescent sera. This study revealed: i) a good overall sensitivity and specificity of the IgA index test, i.e., 93% and 88% respectively, indicating its good correlation to acute dengue diagnosis; and ii) a good concordance with the Panbio IgM capture ELISA. Because of the shorter persistence of dengue virus-specific IgA than IgM, these results underlined the relevance of this new test, which could significantly improve dengue diagnosis accuracy, especially in countries where dengue virus is (hyper-) endemic. It would allow for additional refinement of dengue diagnostic strategy.

  13. How to Improve Reading Accuracy by Strategic Leading-in and Guidance

    Institute of Scientific and Technical Information of China (English)

    邝艳平

    2014-01-01

    The present study presents a detailed report of a project implemented to solve the problem that the reading comprehension accuracy of most of my students is low. It is hypothesized that learners' reading comprehension accuracy can be improved by strategic leading-in and guidance. This hypothesis is verified by three weeks of classroom teaching of strategic leading-in and guidance in the pre-reading stage. Among the methods of scientific investigation used are the analytic method, cause analysis, questionnaire and brainstorming activation.

  14. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    Science.gov (United States)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm’s burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm’s accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
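
    The following minimal Python sketch illustrates the general idea described above, not the authors' exact multistage pipeline: per-class Z-score filtering of the training spectra followed by a 10-fold cross-validation comparison with and without the filtered outliers. The array shapes, the random-forest classifier and the z_max threshold are assumptions made only for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def remove_outliers(X, y, z_max=3.0):
          # Keep samples whose features all lie within z_max SDs of their class mean.
          keep = np.ones(len(y), dtype=bool)
          for c in np.unique(y):
              idx = np.where(y == c)[0]
              z = np.abs((X[idx] - X[idx].mean(axis=0)) / (X[idx].std(axis=0) + 1e-12))
              keep[idx] = (z < z_max).all(axis=1)
          return X[keep], y[keep]

      # X: n_samples x n_wavelengths reflectance values, y: tissue-class labels
      rng = np.random.default_rng(0)
      X = rng.normal(size=(600, 8))                   # placeholder spectra
      y = rng.integers(0, 4, size=600)                # placeholder labels
      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      acc_raw = cross_val_score(clf, X, y, cv=10).mean()
      Xc, yc = remove_outliers(X, y)
      acc_clean = cross_val_score(clf, Xc, yc, cv=10).mean()
      print(f"10-fold accuracy: raw={acc_raw:.2f}, filtered={acc_clean:.2f}")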

  16. A technique for improving the relative accuracy of JET ECE temperature profiles

    International Nuclear Information System (INIS)

    A method for substantially improving the relative accuracy of the electron temperature profiles measured on JET using ECE [electron cyclotron emission] is presented. The improvement is obtained by reducing the relative systematic error in the spectral calibration of the Michelson interferometer which is used to measure the ECE spectrum. The errors in the calibration are evaluated by analysing the development of the measured emission spectrum from a plasma with a fixed-shape temperature profile as the toroidal field is varied. The effectiveness of the technique is investigated both theoretically and by analysis of real data. Examples are presented which illustrate the substantial improvements which can be obtained. (author)

  17. Improved microbial screening assay for the detection of quinolone residues in poultry and eggs

    NARCIS (Netherlands)

    Pikkemaat, M.G.; Mulder, P.P.J.; Elferink, J.W.A.; Cocq, A.; Nielen, M.W.F.; Egmond, van H.J.

    2007-01-01

    An improved microbiological screening assay is reported for the detection of quinolone residues in poultry muscle and eggs. The method was validated using fortified tissue samples and is the first microbial assay to effectively detect enrofloxacin, difloxacin, danofloxacin, as well as flumequine and

  18. Improve the ZY-3 Height Accuracy Using Icesat/glas Laser Altimeter Data

    Science.gov (United States)

    Li, Guoyuan; Tang, Xinming; Gao, Xiaoming; Zhang, Chongyang; Li, Tao

    2016-06-01

    ZY-3, launched on 9 January 2012, is the first civilian high-resolution stereo mapping satellite of China. The aim of the ZY-3 satellite is to obtain high-resolution stereo images and support 1:50,000-scale national surveying and mapping. Although ZY-3 has very high accuracy for direct geo-location without GCPs (Ground Control Points), the use of some GCPs is still indispensable for high-precision stereo mapping. GLAS (Geoscience Laser Altimeter System) is carried on ICESat (Ice, Cloud and land Elevation Satellite), the first laser altimetry satellite for earth observation. Since its launch in 2003, GLAS has played an important role in monitoring polar ice sheets and measuring land topography and vegetation canopy heights. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by suitable criteria. In this paper, ICESat/GLAS laser altimeter data are used as height reference data to improve the ZY-3 height accuracy. A selection method is proposed to obtain high-precision GLAS elevation data. Two strategies to improve the ZY-3 height accuracy are introduced. One is conventional bundle adjustment based on the RFM and a bias-compensation model, in which the GLAS footprint data are treated as height control. The second is to correct the DSM (Digital Surface Model) directly by a simple block adjustment, where the DSM is derived from ZY-3 stereo imagery after free adjustment and dense image matching. The experimental result demonstrates that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 m after adding GLAS elevation data. Moreover, the accuracy and efficiency of the two strategies are compared with a view to application.
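
    The second strategy above (correcting the freely adjusted DSM by a simple block adjustment against selected GLAS footprints) can be pictured with the simplified Python sketch below, which fits only a planar vertical correction from the GLAS-minus-DSM differences. The variable names and the planar model are assumptions for illustration, not the paper's implementation.

      import numpy as np

      def fit_vertical_correction(x, y, dsm_h, glas_h):
          # Planar correction a + b*x + c*y fitted to (GLAS - DSM) height differences.
          A = np.column_stack([np.ones_like(x), x, y])
          coef, *_ = np.linalg.lstsq(A, glas_h - dsm_h, rcond=None)
          return coef                                  # (a, b, c)

      def apply_correction(dsm, x_grid, y_grid, coef):
          a, b, c = coef
          return dsm + a + b * x_grid + c * y_grid

      # x, y: footprint coordinates; dsm_h: DSM heights sampled at the footprints;
      # glas_h: filtered GLAS elevations -- all arrays supplied by the caller.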

  19. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    Directory of Open Access Journals (Sweden)

    J.-K. Lee

    2015-04-01

    Full Text Available There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into reflectivity measurement bias and QPE model bias, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, this study utilized a bias correction algorithm for the reflectivity. The concept of this algorithm is that the reflectivity of the target single-pol radars is corrected based on a reference dual-pol radar already corrected for hardware and software bias. The study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall bias. The Z bias and rainfall-bias correction methods were applied to the RAR system. The accuracy of the RAR system improved after correcting the Z bias. Regarding rainfall types, although the accuracy for Changma front and local torrential cases improved slightly even without the Z bias correction, the accuracy for typhoon cases in particular became worse than the existing results. As a result of the rainfall-bias correction, the accuracy of the RAR system with Z bias_LGC applied was clearly superior to that with the MFBC method, because different rainfall biases are applied to each grid rainfall amount in the LGC method. By rainfall type, the results of Z bias_LGC showed that the rainfall estimates for all types were more accurate than with the Z bias correction alone, and the outcomes in the typhoon cases in particular were vastly superior to the others.
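
    As a point of reference for the MFBC step, the sketch below shows the textbook form of a mean field bias correction: a single multiplicative factor computed from collocated gauge and radar accumulations and applied to the whole radar field. It is an illustration only; the RAR system's operational implementation and the LGC interpolation are not reproduced, and the numbers are invented.

      import numpy as np

      def mean_field_bias(gauge_mm, radar_at_gauges_mm, eps=0.1):
          # Ratio of summed gauge rainfall to summed radar rainfall at gauge sites.
          return gauge_mm.sum() / max(radar_at_gauges_mm.sum(), eps)

      gauge = np.array([12.0, 8.5, 20.1, 5.2])        # AWS gauge totals (mm), invented
      radar = np.array([9.8, 7.0, 15.5, 4.1])         # radar estimates at the gauges (mm)
      beta = mean_field_bias(gauge, radar)            # single multiplicative factor
      radar_field = np.array([[3.2, 4.1], [0.0, 7.7]])
      corrected_field = beta * radar_field            # applied to every grid cell
      print(f"mean field bias factor = {beta:.2f}")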

  20. An improved haemolytic plaque assay for the detection of cells secreting antibody to bacterial antigens

    DEFF Research Database (Denmark)

    Barington, T; Heilmann, C

    1992-01-01

    Recent advances in the development of conjugate polysaccharide vaccines for human use have stimulated interest in the use of assays detecting antibody-secreting cells (AbSC) with specificity for bacterial antigens. Here we present improved haemolytic plaque-forming cell (PFC) assays detecting AbSC with specificity for tetanus and diphtheria toxoid as well as for Haemophilus influenzae type b and pneumococcal capsular polysaccharides. These assays were found to be less time consuming, more economical and yielded 1.9-3.4-fold higher plaque numbers than traditional Jerne-type PFC assays. In the case of anti...

  1. BMVC test, an improved fluorescence assay for detection of malignant pleural effusions

    International Nuclear Information System (INIS)

    The diagnosis of malignant pleural effusions is an important issue in the management of malignancy patients. Generally, cytologic examination is a routine diagnostic technique. However, morphological interpretation of cytology is sometimes inconclusive. Here, an ancillary method named the BMVC test is developed for rapid detection of malignant pleural effusion to improve the diagnostic accuracy at low cost. A simple assay kit is designed to collect living cells from clinical pleural effusion and a fluorescence probe, 3,6-Bis(1-methyl-4-vinylpyridinium) carbazole diiodide (BMVC), is used to illuminate malignant cells. The fluorescence intensity is quantitatively analyzed with the ImageJ program. This method yields digital numbers for the test results without any grey zone or the ambiguities of current cytology tests due to intra-observer and inter-observer variability. Compared with results from double-blind cytologic examination, this simple test gives a good discrimination between malignant and benign specimens, with a sensitivity of 89.4% (42/47) and a specificity of 93.3% (56/60) for diagnosis of malignant pleural effusion. The BMVC test provides accurate results in a short time, and the digital output could help cytologic examination become more objective and clear-cut. This is a convenient ancillary tool for the detection of malignant pleural effusions.

  2. Improving the accuracy of gene expression profile classification with Lorenz curves and Gini ratios.

    Science.gov (United States)

    Tran, Quoc-Nam

    2011-01-01

    Microarrays are a new technology with great potential to provide accurate medical diagnostics, help to find the right treatment for many diseases such as cancers, and provide a detailed genome-wide molecular portrait of cellular states. In this chapter, we show how Lorenz Curves and Gini Ratios can be modified to improve the accuracy of gene expression profile classification. Experimental results on the task of classifying lung adenocarcinomas from gene expression data, using different classification algorithms together with additional accuracy-improving techniques and strategies such as principal component analysis, correlation-based feature subset selection, and the consistency subset evaluation technique, show that our method finds more optimal genes than SAM. PMID:21431549
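
    One plausible way to read the Lorenz-curve/Gini idea is sketched below: each gene is scored by the difference between the Gini coefficients of its expression values in the two classes, and the top-ranked genes are kept for classification. This is an illustrative interpretation only, not the authors' exact modification.

      import numpy as np

      def gini(values):
          # Gini coefficient of non-negative values (a summary of the Lorenz curve).
          v = np.sort(np.asarray(values, dtype=float))
          n = len(v)
          cum = np.cumsum(v)
          return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

      def rank_genes(X, y, top_k=50):
          # X: samples x genes (non-negative expression), y: binary class labels.
          scores = [abs(gini(X[y == 0, j]) - gini(X[y == 1, j])) for j in range(X.shape[1])]
          return np.argsort(scores)[::-1][:top_k]      # indices of top-scoring genes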

  3. Improving the accuracy of heart disease diagnosis with an augmented back propagation algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism, and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance the diagnostic accuracy. A heart disease database consisting of 352 samples is applied to the training and testing of the system. The performance of the system is assessed by the cross-validation method. It is found that as the basic BP algorithm is improved step by step, the convergence speed and the classification accuracy of the network are enhanced, and the system has great application prospects in supporting heart disease diagnosis.
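
    A minimal sketch of two of the listed improvements (momentum term and adaptive learning rate) on a single-hidden-layer backpropagation network is given below; the network size, learning-rate schedule and data are assumptions for illustration, not the authors' configuration.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def train(X, y, hidden=10, epochs=500, lr=0.1, momentum=0.9):
          # X: samples x features, y: column vector of 0/1 targets (samples x 1).
          rng = np.random.default_rng(0)
          W1 = rng.normal(0.0, 0.1, (X.shape[1], hidden))
          W2 = rng.normal(0.0, 0.1, (hidden, 1))
          dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)
          prev_err = np.inf
          for _ in range(epochs):
              h = sigmoid(X @ W1)
              out = sigmoid(h @ W2)
              err = out - y
              mse = float(np.mean(err ** 2))
              lr = lr * 1.05 if mse < prev_err else lr * 0.7   # adaptive learning rate
              prev_err = mse
              delta2 = err * out * (1 - out)
              g2 = h.T @ delta2 / len(y)
              g1 = X.T @ ((delta2 @ W2.T) * h * (1 - h)) / len(y)
              dW2 = momentum * dW2 - lr * g2                   # momentum terms
              dW1 = momentum * dW1 - lr * g1
              W2 += dW2; W1 += dW1
          return W1, W2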

  4. Using Natural Language Processing to Improve Accuracy of Automated Notifiable Disease Reporting

    OpenAIRE

    Friedlin, Jeff; Grannis, Shaun; Overhage, J Marc

    2008-01-01

    We examined whether using a natural language processing (NLP) system results in improved accuracy and completeness of automated electronic laboratory reporting (ELR) of notifiable conditions. We used data from a community-wide health information exchange that has automated ELR functionality. We focused on methicillin-resistant Staphylococcus Aureus (MRSA), a reportable infection found in unstructured, free-text culture result reports. We used the Regenstrief EXtraction tool (REX) for this wor...

  5. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    OpenAIRE

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of ...

  6. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    OpenAIRE

    Qingzhou Mao; Liang Zhang; Qingquan Li; Qingwu Hu; Jianwei Yu; Shaojun Feng; Washington Ochieng; Hanlu Gong

    2015-01-01

    In environments that are hostile to Global Navigation Satellites Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, ...

  7. Improving dimensional accuracy by error modelling and its compensation for 3-axis Vertical Machining Centre

    OpenAIRE

    Dobariya, H. M.; Patel, Y D; Jani, D. A.

    2015-01-01

    In today’s era, machining centres are very important units of manufacturing systems. Due to the structural characteristics, inaccuracy of the tool tip position is inherent. This could be a result of geometric error, thermal error, fixture dependant error and cutting force induced error. The geometric error contributes 70% of the total errors related to a machine tool. Present work focuses on improving dimensional accuracy of a 3-axis vertical machining centre (VMC). Accurate e...

  8. A method for improving the accuracy of automatic indexing of Chinese-English mixed documents

    Institute of Scientific and Technical Information of China (English)

    Yan; ZHAO; Hui; SHI

    2012-01-01

    Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents. Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and cybernetics theory, we proposed an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of our proposed method. Findings: This method distinguishes Chinese and English documents in grammatical structures and word formation rules. Through the implementation of this method in the three phases of automatic indexing for Chinese-English mixed documents, the results were encouraging. The precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%. Research limitations: The indexing method is relatively complicated and the whole indexing process requires substantial human intervention. Due to pattern matching based on a brute-force (BF) approach, the indexing efficiency has been reduced to some extent. Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit not only the indexing of life science documents but also the indexing of documents in other subject areas. Originality/value: So far, few studies have been published about methods for increasing the accuracy of multilingual automatic indexing. This study will provide insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.

  9. Improving the accuracy of image-based forest fire recognition and spatial positioning

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Forest fires are frequent natural disasters. It is necessary to explore advanced means to monitor, recognize and locate forest fires so as to establish a scientific system for the early detection, real-time positioning and quick fighting of forest fires. This paper mainly expounds methods and algorithms for improving accuracy and removing uncertainty in image-based forest fire recognition and spatial positioning. Firstly, we discuss a method of forest fire recognition in visible-light imagery. There are four aspects to improve accuracy and remove uncertainty in fire recognition: (1) eliminating factors of interference such as road and sky with high brightness, red leaves, other colored objects and objects that are lit up at night; (2) excluding imaging for specific periods and azimuth angles for which interference phenomena repeatedly occur; (3) improving the thresholding method for determining the flame border in image processing by adjusting the threshold to the season, weather and region; and (4) integrating the visible-light image method with infrared image technology. Secondly, we examine infrared-image-based methods and approaches of improving the accuracy of forest fire recognition by combining the spectrum threshold with an object feature value such as the normalized difference vegetation index and excluding factors of disturbance such as interference signals, extreme weather and high-temperature animals. Thirdly, a method of visible analysis to enhance the accuracy of forest fire positioning is examined and realized; the method includes decreasing the visual angle, selecting central points, selecting the largest spots, and judging the selection of fire spots according to the central distance. Case studies are examined and the results are found to be satisfactory.

  10. Classification of features selected through Optimum Index Factor (OIF) for improving classification accuracy

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kaushal

    2011-01-01

    The present investigation was performed to determine whether the features selected through the Optimum Index Factor (OIF) could provide improved classification accuracy of the various categories on the satellite images of the individual years as well as stacked images of two different years, as compared to all the features considered together. Further, in order to determine whether the classification accuracy of the different categories increases with a corresponding increase in the OIF values of the features extracted from both the individual years' and stacked images, we performed linear regression between the producer's accuracy (PA) of the various categories and the OIF values of the different combinations of the features. The investigations demonstrated that there is a significant improvement in the PA of two impervious categories, viz. moderate built-up and low density built-up, determined from the classification of the bands and principal components associated with the highest OIF value as compared to all the bands and principal components for both the individual years' and stacked images respectively. Regression analyses exhibited positive trends between the regression coefficients and OIF values for the various categories determined for the individual years' and stacked images respectively, signifying a direct relationship between the increase in information content and the corresponding increase in the OIF values. The research proved that features extracted through OIF from both the individual years' and stacked images are capable of providing significantly improved PA as compared to all the features pooled together.
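
    For reference, the Optimum Index Factor of a three-band combination is conventionally computed as the sum of the band standard deviations divided by the sum of the absolute pairwise correlation coefficients; higher OIF indicates more variance with less redundancy. A short Python sketch of this ranking is given below (the band data are assumed to be supplied by the caller).

      import numpy as np
      from itertools import combinations

      def oif_ranking(bands):
          # bands: sequence of 2-D arrays (spectral bands or principal components).
          flat = [np.asarray(b, dtype=float).ravel() for b in bands]
          ranked = []
          for i, j, k in combinations(range(len(flat)), 3):
              std_sum = flat[i].std() + flat[j].std() + flat[k].std()
              corr_sum = (abs(np.corrcoef(flat[i], flat[j])[0, 1])
                          + abs(np.corrcoef(flat[i], flat[k])[0, 1])
                          + abs(np.corrcoef(flat[j], flat[k])[0, 1]))
              ranked.append(((i, j, k), std_sum / corr_sum))
          return sorted(ranked, key=lambda item: item[1], reverse=True)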

  11. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. This method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The results from experiments indicate that this method would represent a significant improvement in terms of the accuracy of the MLS in environments that are essentially hostile to GNSS and is also effective regarding the calibration of misalignment.
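
    The collocation step can be pictured with the simplified sketch below: the slowly varying POS error is predicted at arbitrary trajectory epochs from the discrepancies observed at LiDAR control points through a Gaussian covariance function. The covariance parameters and noise level are illustrative assumptions, not values from the paper.

      import numpy as np

      def gauss_cov(t1, t2, sigma=0.05, corr_len=30.0):
          # Gaussian covariance between epochs t1 and t2 (illustrative parameters).
          d = np.subtract.outer(t1, t2)
          return sigma ** 2 * np.exp(-(d / corr_len) ** 2)

      def lsc_predict(t_ctrl, err_ctrl, t_new, noise=0.01):
          # Signal estimate s_hat = C_sl (C_ll + C_nn)^-1 l at the new epochs.
          C_ll = gauss_cov(t_ctrl, t_ctrl) + noise ** 2 * np.eye(len(t_ctrl))
          C_sl = gauss_cov(t_new, t_ctrl)
          return C_sl @ np.linalg.solve(C_ll, err_ctrl)

      # t_ctrl: epochs of the control-point observations, err_ctrl: discrepancies
      # observed there, t_new: epochs along the trajectory to be corrected.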

  12. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

    Full Text Available Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  13. Improved spatial accuracy of functional maps in the rat olfactory bulb using supervised machine learning approach.

    Science.gov (United States)

    Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro

    2016-08-15

    Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. PMID:27236085

  14. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
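
    Method (2) above (k-means clustering of all spectra with a separate PLS2 model per cluster) can be sketched as follows; the cluster count, component count and scikit-learn implementation are assumptions for illustration, not the authors' code.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.cross_decomposition import PLSRegression

      def fit_clustered_pls(X_train, Y_train, n_clusters=5, n_components=10):
          # k-means on the training spectra, then one PLS2 model per cluster.
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_train)
          models = {}
          for c in range(n_clusters):
              mask = km.labels_ == c
              models[c] = PLSRegression(n_components=n_components).fit(X_train[mask], Y_train[mask])
          return km, models

      def predict_clustered_pls(km, models, X_test):
          # Each test spectrum is predicted by the model of its nearest cluster.
          labels = km.predict(X_test)
          return np.vstack([models[c].predict(x[None, :]) for c, x in zip(labels, X_test)])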

  15. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Directory of Open Access Journals (Sweden)

    Nathalie Stroeymeyt

    Full Text Available BACKGROUND: Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. METHODOLOGY/PRINCIPAL FINDINGS: Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. CONCLUSIONS/SIGNIFICANCE: These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade

  16. International proficiency study of a consensus L1 PCR assay for the detection and typing of human papillomavirus DNA: evaluation of accuracy and intralaboratory and interlaboratory agreement.

    Science.gov (United States)

    Kornegay, Janet R; Roger, Michel; Davies, Philip O; Shepard, Amanda P; Guerrero, Nayana A; Lloveras, Belen; Evans, Darren; Coutlée, François

    2003-03-01

    The PGMY L1 consensus primer pair combined with the line blot assay allows the detection of 27 genital human papillomavirus (HPV) genotypes. We conducted an intralaboratory and interlaboratory agreement study to assess the accuracy and reproducibility of PCR for HPV DNA detection and typing using the PGMY primers and typing amplicons with the line blot (PGMY-LB) assay. A test panel of 109 samples consisting of 29 HPV-negative (10 buffer controls and 19 genital samples) and 80 HPV-positive samples (60 genital samples and 20 controls with small or large amounts of HPV DNA plasmids) were tested blindly in triplicate by three laboratories. Intralaboratory agreement ranged from 86 to 98% for HPV DNA detection. PGMY-LB assay results for samples with a low copy number of HPV DNA were less reproducible. The rate of intralaboratory agreement excluding negative results for HPV typing ranged from 78 to 96%. Interlaboratory reliability for HPV DNA positivity and HPV typing was very good, with levels of agreement of >95% and kappa values of >0.87. Again, low-copy-number samples were more prone to generating discrepant results. The accuracy varied from 91 to 100% for HPV DNA positivity and from 90 to 100% for HPV typing. HPV testing can thus be accomplished reliably with PCR by using a standardized written protocol and quality-controlled reagents. The use of validated HPV DNA detection and typing assays demonstrating excellent interlaboratory agreement will allow investigators to better compare results between epidemiological studies. PMID:12624033

  17. Interactive dedicated training curriculum improves accuracy in the interpretation of MR imaging of prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Akin, Oguz; Zhang, Jingbo; Hricak, Hedvig [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Riedl, Christopher C. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Medical University of Vienna, Department of Radiology, Vienna (Austria); Ishill, Nicole M.; Moskowitz, Chaya S. [Memorial Sloan-Kettering Cancer Center, Epidemiology and Biostatistics, New York, NY (United States)

    2010-04-15

    To assess the effect of interactive dedicated training on radiology fellows' accuracy in assessing prostate cancer on MRI. Eleven radiology fellows, blinded to clinical and pathological data, independently interpreted preoperative prostate MRI studies, scoring the likelihood of tumour in the peripheral and transition zones and extracapsular extension. Each fellow interpreted 15 studies before dedicated training (to supply baseline interpretation accuracy) and 200 studies (10/week) after attending didactic lectures. Expert radiologists led weekly interactive tutorials comparing fellows' interpretations to pathological tumour maps. To assess interpretation accuracy, receiver operating characteristic (ROC) analysis was conducted, using pathological findings as the reference standard. In identifying peripheral zone tumour, fellows' average area under the ROC curve (AUC) increased from 0.52 to 0.66 (after didactic lectures; p < 0.0001) and remained at 0.66 (end of training; p < 0.0001); in the transition zone, their average AUC increased from 0.49 to 0.64 (after didactic lectures; p = 0.01) and to 0.68 (end of training; p = 0.001). In detecting extracapsular extension, their average AUC increased from 0.50 to 0.67 (after didactic lectures; p = 0.003) and to 0.81 (end of training; p < 0.0001). Interactive dedicated training significantly improved accuracy in tumour localization and especially in detecting extracapsular extension on prostate MRI. (orig.)

  18. A computational approach for prediction of donor splice sites with improved accuracy.

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D

    2016-09-01

    Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input to an artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally well and better than RF when tested on the HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF was analyzed using an independent test set of 50 genes, and the prediction accuracy of ANN was found to be higher than that of SVM and RF. All the predictors achieved higher accuracy than existing methods such as NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP on the independent test set. We have also developed an online prediction server (PreDOSS), available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
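
    The encoding-plus-classifier comparison described above can be sketched as follows: fixed-length sequence windows around candidate donor sites are one-hot encoded and passed to ANN, SVM and RF classifiers under cross-validation. The window handling and classifier settings are assumptions for illustration, not the published PreDOSS implementation.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

      def one_hot(window):
          # Encode a fixed-length DNA window around a candidate GT donor site.
          v = np.zeros((len(window), 4))
          for i, base in enumerate(window.upper()):
              v[i, BASES[base]] = 1.0
          return v.ravel()

      def compare_classifiers(windows, labels):
          X = np.array([one_hot(w) for w in windows])
          y = np.array(labels)                         # 1 = true site, 0 = false site
          for name, clf in [("ANN", MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000)),
                            ("SVM", SVC(kernel="rbf")),
                            ("RF", RandomForestClassifier(n_estimators=300))]:
              print(name, cross_val_score(clf, X, y, cv=5).mean())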

  19. Adaption of the Cytokinesis-Block Micronucleus Cytome Assay for Improved Triage Biodosimetry.

    Science.gov (United States)

    Beinke, C; Port, M; Riecke, A; Ruf, C G; Abend, M

    2016-05-01

    The purpose of this work was to adapt a more advanced form of the cytokinesis-block micronucleus (CBMN) cytome assay for triage biodosimetry in the event of a mass casualty radiation incident. We modified scoring procedures for the CBMN cytome assay to optimize field deployability, dose range, accuracy, speed, economy, simplicity and stability. Peripheral blood of 20 donors was irradiated in vitro (0-6 Gy X ray, maximum photon energy 240 keV) and processed for CBMN. Initially, we assessed two manual scoring strategies for accuracy: 1. Conventional scoring, comprised of micronucleus (MN) frequency per 1,000 binucleated (BN) cells (MN/1,000 BN cells); and 2. Evaluation of 1,000, 2,000 and 3,000 cells in total and different cellular subsets based on MN formation and proliferation (e.g., BN cells with and without MN, mononucleated cells). We used linear and logistic regression models to identify the cellular subsets related closest to dose with the best discrimination ability among different doses/dose categories. We validated the most promising subsets and their combinations with 16 blind samples covering a dose range of 0-8.3 Gy. Linear dose-response relationships comparable to the conventional CBMN assay (r(2) = 0.86) were found for BN cells with MN (r(2) = 0.84) and BN cells without MN (r(2) = 0.84). Models of combined cell counts (CCC) of BN cells with and without MN (BN(+MN) and BN(-MN)) with mononucleated cells (Mono) improved this relationship (r(2) = 0.92). Conventional CBMN discriminated dose categories up to 3 Gy with a concordance between 0.96-1.0 upon scoring 1,000 total cells. In 1,000 BN cells, concordances were observed for conventional CBMN up to 4 Gy as well as BN(+MN) or BN(-MN) (about 0.85). At doses of 4-6 Gy, the concordance of conventional CBMN, BN(+MN) and BN(-MN) declined (about 0.55). We found about 20% higher concordance and more precise dose estimates of irradiated and blinded samples for CCC (Mono + BN(+MN)) after scoring 1,000 total cells

  20. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    Directory of Open Access Journals (Sweden)

    J.-K. Lee

    2015-11-01

    Full Text Available There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, which addresses the reflectivity biases arising when measuring rainfall, this study utilized a bias correction algorithm. The concept of this algorithm is that the reflectivity of the target single-pol radars is corrected based on a reference dual-pol radar already corrected for hardware and software bias. The study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system. The accuracy of the RAR system was improved after correcting the Z bias. Regarding rainfall types, although the accuracy for the Changma front and local torrential cases improved slightly even without the Z bias correction, the accuracy for the typhoon cases in particular became worse than the existing results. As a result of the rainfall estimation bias correction, Z bias_LGC was clearly superior to the MFBC method because different rainfall biases are applied to each grid rainfall amount in the LGC method. By rainfall type, the results of Z bias_LGC showed that the rainfall estimates for all types were more accurate than with the Z bias correction alone, and the outcomes in the typhoon cases in particular were vastly superior to the others.

  1. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  2. Error analysis to improve the speech recognition accuracy on Telugu language

    Indian Academy of Sciences (India)

    N Usha Rani; P N Girija

    2012-12-01

    Speech is one of the most important communication channels among people, and speech recognition occupies a prominent place in communication between humans and machines. Several factors affect the accuracy of a speech recognition system, and despite much effort to increase it, current systems still produce erroneous output. Telugu is one of the most widely spoken south Indian languages. In the proposed Telugu speech recognition system, the errors obtained from the decoder are analysed to improve the performance of the system. The static pronunciation dictionary plays a key role in speech recognition accuracy, so modifications are made to the dictionary used in the decoder of the speech recognition system. This modification reduces the number of confusion pairs, which improves the performance of the speech recognition system. Language model scores also vary with this modification. The hit rate increases considerably and the false alarms change as the pronunciation dictionary is modified. Variations are observed in different error measures such as F-measure, error rate and Word Error Rate (WER) when the proposed method is applied.
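
    The Word Error Rate reported above is the standard edit-distance measure, WER = (substitutions + deletions + insertions) / number of reference words; a compact reference implementation is shown below (the example sentences are invented).

      def wer(reference, hypothesis):
          # WER = (substitutions + deletions + insertions) / number of reference words.
          r, h = reference.split(), hypothesis.split()
          d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
          for i in range(len(r) + 1):
              d[i][0] = i
          for j in range(len(h) + 1):
              d[0][j] = j
          for i in range(1, len(r) + 1):
              for j in range(1, len(h) + 1):
                  cost = 0 if r[i - 1] == h[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1,           # deletion
                                d[i][j - 1] + 1,           # insertion
                                d[i - 1][j - 1] + cost)    # substitution or match
          return d[len(r)][len(h)] / max(len(r), 1)

      print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion -> ~0.17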

  3. A Hybrid Method to Improve Forecasting Accuracy in the Case of Sanitary Materials Data

    Directory of Open Access Journals (Sweden)

    Daisuke Takeyasu

    2014-06-01

    Full Text Available Sales forecasting is a starting point of supply chain management, and its accuracy influences business management significantly. In industry, how to improve the accuracy of forecasts such as sales and shipping volumes is an important issue. In this paper, a hybrid method is introduced and several methods are compared. Noting that the equation of the exponential smoothing method (ESM) is equivalent to the ARMA(1,1) model equation, a method of estimating the smoothing constant of the exponential smoothing method that satisfies minimum variance of the forecasting error was proposed earlier by Takeyasu et al.: the ARMA model parameters are estimated first and the smoothing constant is then derived from them. In this paper, we combine a trend-removal step with this method in order to improve forecasting accuracy. Trend removal using combinations of linear, 2nd-order and 3rd-order non-linear functions is applied to a manufacturer's data on sanitary materials. The new method proves useful for time series that have various trend characteristics and a rather strong seasonal trend. The effectiveness of this method should be examined in further cases.
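
    A hedged sketch of the overall procedure is shown below: a polynomial trend is removed, a smoothing constant is chosen to minimise the variance of the one-step forecast errors (the paper derives it from the ARMA(1,1) equivalence rather than a grid search), and the trend is added back to the forecast. The function names and the grid search are assumptions for illustration.

      import numpy as np

      def ses_errors(x, alpha):
          # One-step simple exponential smoothing forecasts and their errors.
          forecast = x[0]
          errors = []
          for obs in x[1:]:
              errors.append(obs - forecast)
              forecast = forecast + alpha * (obs - forecast)
          return np.array(errors), forecast

      def fit_and_forecast(series, trend_order=2):
          series = np.asarray(series, dtype=float)
          t = np.arange(len(series))
          coef = np.polyfit(t, series, trend_order)          # trend removal
          resid = series - np.polyval(coef, t)
          alphas = np.linspace(0.01, 0.99, 99)
          alpha = min(alphas, key=lambda a: ses_errors(resid, a)[0].var())
          _, level = ses_errors(resid, alpha)
          next_value = level + np.polyval(coef, len(series))  # add the trend back
          return next_value, alpha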

  4. Extended cross-section adjustment method to improve the prediction accuracy of core parameters

    International Nuclear Information System (INIS)

    An extended cross-section adjustment method has been developed to improve the prediction accuracy of target core parameters. The present method is based on a cross-section adjustment method which minimizes the uncertainties of target core parameters under the condition that integral experimental data are given. The present method enhances the prediction accuracy beyond the conventional cross-section adjustment method by taking the target core parameters into account, as the extended bias factor method does. In addition, it is proved that the present method is equivalent to the extended bias factor method when only one target core parameter is taken into account. The present method is implemented in an existing cross-section adjustment solver. Numerical calculations verify the derived formulation and demonstrate the applicability of an adjusted cross-section set which is specialized for the target core parameters. (author)

  5. Quality systems for radiotherapy: Impact by a central authority for improved accuracy, safety and accident prevention

    International Nuclear Information System (INIS)

    High accuracy in radiotherapy is required for the good outcome of the treatments, which in turn implies the need to develop comprehensive Quality Systems for the operation of the clinic. The legal requirements as well as the recommendation by professional societies support this modern approach for improved accuracy, safety and accident prevention. The actions of a national radiation protection authority can play an important role in this development. In this paper, the actions of the authority in Finland (STUK) for the control of the implementation of the new requirements are reviewed. It is concluded that the role of the authorities should not be limited to simple control actions, but comprehensive practical support for the development of the Quality Systems should be provided. (author)

  6. A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Sompop Moonchai

    2015-01-01

    Full Text Available This paper presents a modified grey model GMC(1,n) for use in systems that involve one dependent system behavior and n-1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the system of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified by two cases: the study of forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrated that the modified GMC(1,n) model was able to achieve higher fitting and prediction accuracy compared with the conventional GMC(1,n) and D-GMC(1,n) models.
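
    For orientation, the sketch below implements the single-variable GM(1,1) grey model that GMC(1,n) generalises; the full GMC(1,n) formulation with its n-1 relative-factor series, modified background value and convolution integral is not reproduced here, and the sample data are invented.

      import numpy as np

      def gm11_forecast(x0, steps=1):
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                                # accumulated series (1-AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing / grey input coefficients
          k = np.arange(len(x0) + steps)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO
          return x0_hat[-steps:]

      print(gm11_forecast([250.0, 262.0, 278.0, 296.0, 311.0], steps=2))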

  7. Lens distortion elimination for improving measurement accuracy of fringe projection profilometry

    Science.gov (United States)

    Li, Kai; Bu, Jingjie; Zhang, Dongsheng

    2016-10-01

    Fringe projection profilometry (FPP) is a powerful method for three-dimensional (3D) shape measurement. However, the measurement accuracy of the existing FPP is often hindered by the distortion of the lens used in FPP. In this paper, a simple and efficient method is presented to overcome this problem. First, the FPP system is calibrated as a stereovision system. Then, the camera lens distortion is eliminated by correcting the captured images. For the projector lens distortion, distorted fringe patterns are generated according to the lens distortion model. With these distorted fringe patterns, the projector can project undistorted fringe patterns, which means that the projector lens distortion is eliminated. Experimental results show that the proposed method can successfully eliminate the lens distortions of FPP and therefore improves its measurement accuracy.
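
    On the camera side, distortion removal typically follows the usual radial/tangential (Brown) model; the sketch below inverts that model iteratively for normalized image coordinates, and the projector side would analogously pre-distort the fringe patterns so that the projected fringes come out straight. The coefficients are calibration outputs assumed to be supplied by the caller, not values from the paper.

      import numpy as np

      def undistort_normalized(xd, yd, k1, k2, p1, p2, iters=5):
          # Iteratively invert the radial/tangential (Brown) distortion model for
          # normalized image coordinates (xd, yd are the distorted coordinates).
          xd = np.asarray(xd, dtype=float); yd = np.asarray(yd, dtype=float)
          x, y = xd.copy(), yd.copy()
          for _ in range(iters):
              r2 = x ** 2 + y ** 2
              radial = 1 + k1 * r2 + k2 * r2 ** 2
              dx = 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
              dy = p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
              x = (xd - dx) / radial
              y = (yd - dy) / radial
          return x, y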

  8. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    CERN Document Server

    Leimkuhler, Benedict

    2016-01-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé–Hoover–Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings; using Lees–Edwards boundary conditions to...

  9. [Improving the Care Accuracy of Percutaneously Inserted Central Catheters Using Objective Structured Clinical Examination].

    Science.gov (United States)

    Yang, Pei-Hsin; Hsu, Hsin-Chieh; Chiang, Chia-Chin; Tseng, Yun-Shan

    2016-06-01

    Approximately 9,800 adverse events related to medical tubing are reported in Taiwan every year. Most neonates in critical condition and premature infants acquire fluid, nutrition, and infusion solution using percutaneously inserted central catheters (PICCs). Objective structured clinical examination (OSCE) is an objective evaluative tool that may be used to measure the clinical competence of healthcare professionals. Very little is known about the effects of OSCE in Taiwan in terms of improving the accuracy of use of PICCs in nursing care and of reducing unexpected medical tubing removals. The present project aimed to explore the effects of an OSCE course on these two issues in the realms of standard operating procedures, care protocols, and training equipment at a neonatal intermediate unit in Taiwan. The duration of the present study ran from 2/20/2013 to 10/30/2013. The results showed that nurses' knowledge of PICCs improved from 87% to 91.5%; nurses' skill-care accuracy related to PICCs improved from 59.1% to 97.3%; and incidents of unexpected tube removals declined from 63.6% to 16.7%. This project demonstrated that OSCE courses improve the quality of PICC nursing care. Additionally, the instant feedback mechanism within the OSCE course benefited both teachers and students. PMID:27250965

  10. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Full Text Available Abstract Background: Microarrays enable high-throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results: We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion: Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also provide improvements in the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.

  11. The contribution of educational class in improving accuracy of cardiovascular risk prediction across European regions

    DEFF Research Database (Denmark)

    Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E;

    2014-01-01

    OBJECTIVE: To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. METHODS: In a pooled analysis of 68 455 40-64-year-old men and women, free from coronary heart disease at baseline, from 47... RESULTS: ...to 2.1 (Eastern Europe and Russia). After adjustment for the SCORE risk, the association remained statistically significant overall, in the UK and in Eastern Europe and Russia. Education significantly improved discrimination in all European regions and classification in the Nordic countries (clinical NRI=5.3%) and in Eastern Europe and Russia (NRI=24.7%). In women, after SCORE risk adjustment, the association was not statistically significant, but the reduced number of deaths plays a major role, and the addition of education led to improvements in discrimination and classification in the Nordic countries only...

  12. Intra- and inter-laboratory reproducibility and accuracy of the LuSens assay: A reporter gene-cell line to detect keratinocyte activation by skin sensitizers.

    Science.gov (United States)

    Ramirez, Tzutzuy; Stein, Nadine; Aumann, Alexandra; Remus, Tina; Edwards, Amber; Norman, Kimberly G; Ryan, Cindy; Bader, Jackie E; Fehr, Markus; Burleson, Florence; Foertsch, Leslie; Wang, Xiaohong; Gerberick, Frank; Beilstein, Paul; Hoffmann, Sebastian; Mehling, Annette; van Ravenzwaay, Bennard; Landsiedel, Robert

    2016-04-01

    Several non-animal methods are now available to address the key events leading to skin sensitization as defined by the adverse outcome pathway. The KeratinoSens assay addresses the cellular event of keratinocyte activation and is a method accepted under OECD TG 442D. In this study, the results of an inter-laboratory evaluation of the "me-too" LuSens assay, a bioassay that uses a human keratinocyte cell line harboring a reporter gene construct composed of the rat antioxidant response element (ARE) of the NADPH:quinone oxidoreductase 1 gene and the luciferase gene, are described. Earlier in-house validation with 74 substances showed an accuracy of 82% in comparison to human data. When used in a battery of non-animal methods, even higher predictivity is achieved. To meet European validation criteria, a multicenter study was conducted in 5 laboratories. The study was divided into two phases, to assess 1) transferability of the method, and 2) reproducibility and accuracy. Phase I was performed by testing 8 non-coded test substances; the results showed a good transferability to naïve laboratories even without on-site training. Phase II was performed with 20 coded test substances (performance standards recommended by OECD, 2015). In this phase, the intra- and inter-laboratory reproducibility as well as accuracy of the method was evaluated. The data demonstrate a remarkable reproducibility of 100% and an accuracy of over 80% in identifying skin sensitizers, indicating a good concordance with in vivo data. These results demonstrate good transferability, reliability and accuracy of the method thereby achieving the standards necessary for use in a regulatory setting to detect skin sensitizers. PMID:26796489

  13. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    International Nuclear Information System (INIS)

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POEs) formula, we extended the POE-based calibration method from serial robot to redundant serial–parallel hybrid robot. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for serial–parallel hybrid robot. Because of the high nonlinear characteristics of the error model and too many error parameters need to be identified, the traditional iterative linear least-square algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level of the given external measurement device

  14. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POEs) formula, we extended the POE-based calibration method from serial robot to redundant serial–parallel hybrid robot. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for serial–parallel hybrid robot. Because of the high nonlinear characteristics of the error model and too many error parameters need to be identified, the traditional iterative linear least-square algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level of the given external measurement device.

  15. Does PACS improve diagnostic accuracy in chest radiograph interpretations in clinical practice?

    International Nuclear Information System (INIS)

    Objectives: To assess the impact of a Picture Archiving and Communication System (PACS) on the diagnostic accuracy of the interpretation of chest radiology examinations in a “real life” radiology setting. Materials and methods: During a period before PACS was introduced to radiologists, when images were still interpreted on film and reported on paper, images and reports were also digitally stored in an image database. The same database was used after the PACS introduction. This provided a unique opportunity to conduct a blinded retrospective study, comparing sensitivity (the main outcome parameter) in the pre and post-PACS periods. We selected 56 digitally stored chest radiograph examinations that were originally read and reported on film, and 66 examinations that were read and reported on screen 2 years after the PACS introduction. Each examination was assigned a random number, and both reports and images were scored independently for pathological findings. The blinded retrospective score for the original reports were then compared with the score for the images (the gold standard). Results: Sensitivity was improved after the PACS introduction. When both certain and uncertain findings were included, this improvement was statistically significant. There were no other statistically significant changes. Conclusion: The result is consistent with prospective studies concluding that diagnostic accuracy is at least not reduced after PACS introduction. The sensitivity may even be improved.

  16. 3D multicolor super-resolution imaging offers improved accuracy in neuron tracing.

    Directory of Open Access Journals (Sweden)

    Melike Lakadamyali

    Full Text Available The connectivity among neurons holds the key to understanding brain function. Mapping neural connectivity in brain circuits requires imaging techniques with high spatial resolution to facilitate neuron tracing and high molecular specificity to mark different cellular and molecular populations. Here, we tested a three-dimensional (3D, multicolor super-resolution imaging method, stochastic optical reconstruction microscopy (STORM, for tracing neural connectivity using cultured hippocampal neurons obtained from wild-type neonatal rat embryos as a model system. Using a membrane specific labeling approach that improves labeling density compared to cytoplasmic labeling, we imaged neural processes at 44 nm 2D and 116 nm 3D resolution as determined by considering both the localization precision of the fluorescent probes and the Nyquist criterion based on label density. Comparison with confocal images showed that, with the currently achieved resolution, we could distinguish and trace substantially more neuronal processes in the super-resolution images. The accuracy of tracing was further improved by using multicolor super-resolution imaging. The resolution obtained here was largely limited by the label density and not by the localization precision of the fluorescent probes. Therefore, higher image resolution, and thus higher tracing accuracy, can in principle be achieved by further improving the label density.

  17. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

    Full Text Available Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. Comparing the RSSI (Received Signal Strength Indicator from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms, fingerprinting-based systems can locate unknown points with a few meters resolution. However, training the system with already located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to be of high memory and CPU consuming in such cases and so does the required time for obtaining the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations for the training fingerprints which reduces the amount of required points while improving the accuracy of the algorithms when locating points at room level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with better results. We do a comparison with other systems in the literature and draw conclusions about the improvements obtained in our proposal. Moreover, some techniques such as filtering nonstable access points for improving accuracy are introduced, studied, and validated.
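
    A minimal room-level fingerprinting classifier of the kind compared in such studies can be sketched as a k-nearest-neighbours vote over stored RSSI vectors. The access points, RSSI values and room labels below are invented, and the paper's specific decision algorithms and its filtering of unstable access points are not reproduced.

    ```python
    # k-NN room classification from Wi-Fi RSSI fingerprints (illustrative data).
    import numpy as np
    from collections import Counter

    train_rssi = np.array([                 # rows: fingerprints, columns: RSSI per AP (dBm)
        [-40, -70, -80], [-42, -68, -79],   # room A
        [-75, -45, -60], [-73, -47, -62],   # room B
        [-80, -65, -41], [-78, -66, -43],   # room C
    ])
    train_room = np.array(["A", "A", "B", "B", "C", "C"])

    def predict_room(rssi, k=3):
        dists = np.linalg.norm(train_rssi - np.asarray(rssi), axis=1)
        nearest = train_room[np.argsort(dists)[:k]]
        return Counter(nearest).most_common(1)[0][0]

    print(predict_room([-41, -69, -81]))   # expected: "A"
    ```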

  18. Accounting for filter bandwidth improves the quantitative accuracy of bioluminescence tomography

    Science.gov (United States)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie L.; Cobbold, Mark; Dehghani, Hamid

    2015-09-01

    Bioluminescence imaging is a noninvasive technique whereby surface weighted images of luminescent probes within animals are used to characterize cell count and function. Traditionally, data are collected over the entire emission spectrum of the source using no filters and are used to evaluate cell count/function over the entire spectrum. Alternatively, multispectral data over several wavelengths can be incorporated to perform tomographic reconstruction of source location and intensity. However, bandpass filters used for multispectral data acquisition have a specific bandwidth, which is ignored in the reconstruction. In this work, ignoring the bandwidth is shown to introduce a dependence of the recovered source intensity on the bandwidth of the filters. A method of accounting for the bandwidth of filters used during multispectral data acquisition is presented and its efficacy in increasing the quantitative accuracy of bioluminescence tomography is demonstrated through simulation and experiment. It is demonstrated that while using filters with a large bandwidth can dramatically decrease the data acquisition time, if not accounted for, errors of up to 200% in quantitative accuracy are introduced in two-dimensional planar imaging, even after normalization. For tomographic imaging, the use of this method to account for filter bandwidth dramatically improves the quantitative accuracy.
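
    The bandwidth dependence can be illustrated numerically by comparing a delta-function treatment of a bandpass filter (a point sample at the centre wavelength scaled by the bandwidth) with an explicit integration of the source spectrum over the filter's transmission band. The spectrum and top-hat filter below are toy shapes, not measured bioluminescence data.

    ```python
    import numpy as np

    wavelengths = np.linspace(500.0, 700.0, 2001)                      # nm
    spectrum = np.exp(-0.5 * ((wavelengths - 600.0) / 30.0) ** 2)      # toy emission spectrum

    center, bandwidth = 620.0, 20.0                                    # nm
    transmission = (np.abs(wavelengths - center) <= bandwidth / 2).astype(float)  # top-hat filter

    # Ignoring bandwidth: treat the filter as a delta at its centre wavelength.
    signal_delta = np.interp(center, wavelengths, spectrum) * bandwidth

    # Accounting for bandwidth: integrate spectrum x transmission across the band.
    signal_band = np.trapz(spectrum * transmission, wavelengths)

    print(signal_delta, signal_band)   # the two models predict different detected signals
    ```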

  19. Improving the accuracy of GRACE Earth's gravitational field using the combination of different inclinations

    Institute of Scientific and Technical Information of China (English)

    Wei Zheng; Chenggang Shao; Jun Luo; Houze Xu

    2008-01-01

    In this paper, the GRACE Earth's gravitational field complete up to degree and order 120 is recovered based on the combination of different inclinations using the energy conservation principle. The results show that, because different satellite inclinations are sensitive to geopotential coefficients of different degrees l and orders m, the GRACE design exploiting an 89° inclination can effectively improve the accuracy of the geopotential zonal harmonic coefficients. However, it is less sensitive to the geopotential tesseral harmonic coefficients. Accordingly, a second group of GRACE satellites exploiting a lower inclination is required to determine the geopotential tesseral harmonic coefficients with high accuracy and to cover the shortcomings of a single GRACE group at 89° inclination. Two groups of GRACE, exploiting 89° and 82°-84° inclinations respectively, are the optimal combination for recovering the Earth's gravitational field complete up to degree and order 120. At degree 120, the joint accuracy of cumulative geoid height based on two GRACE groups exploiting 89° and 83° inclinations is on average two times higher than the accuracy of a single group exploiting an 89° inclination.

  20. Integration of INS, GPS, Magnetometer and Barometer for Improving Accuracy Navigation of the Vehicle

    Directory of Open Access Journals (Sweden)

    Vlada Sokol Sokolovic

    2013-09-01

    Full Text Available This paper describes an integrated navigation system based on a low-cost inertial sensor, a global positioning system (GPS) receiver, a magnetometer and a barometer, designed to improve the accuracy of the complete attitude and navigation solution. The main advantage of the integration is the availability of reliable navigation parameters during intervals when GPS data are absent. The magnetometer and the barometer are applied for attitude calibration and vertical channel stabilization, respectively. Acceptable accuracy of the inertial navigation system (INS) is achieved by proper damping of the INS errors. The integration is implemented with an extended Kalman filter (EKF) whose control signal is designed for the noise characteristics of low-accuracy sensors. The performance of the integrated navigation system is analysed experimentally, and the results show that it provides continuous and reliable navigation solutions. Defence Science Journal, 2013, 63(5), pp. 451-455, DOI: http://dx.doi.org/10.14429/dsj.63.4534

  1. Improving the accuracy of multiple integral evaluation by applying Romberg's method

    Science.gov (United States)

    Zhidkov, E. P.; Lobanov, Yu. Yu.; Rushai, V. D.

    2009-02-01

    Romberg’s method, which is used to improve the accuracy of one-dimensional integral evaluation, is extended to multiple integrals if they are evaluated using the product of composite quadrature formulas. Under certain conditions, the coefficients of the Romberg formula are independent of the integral’s multiplicity, which makes it possible to use a simple evaluation algorithm developed for one-dimensional integrals. As examples, integrals of multiplicity two to six are evaluated by Romberg’s method and the results are compared with other methods.
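
    As a concrete illustration of the statement that the Romberg coefficients carry over unchanged from the one-dimensional case, the sketch below applies Richardson extrapolation to a product composite trapezoidal rule for a double integral. The integrand and integration region are arbitrary test choices, not taken from the paper.

    ```python
    import numpy as np

    def trapezoid_2d(f, a, b, c, d, n):
        """Product composite trapezoidal rule with n subintervals per dimension."""
        x = np.linspace(a, b, n + 1)
        y = np.linspace(c, d, n + 1)
        X, Y = np.meshgrid(x, y, indexing="ij")
        return np.trapz(np.trapz(f(X, Y), y, axis=1), x)

    def romberg_2d(f, a, b, c, d, levels=5):
        R = np.zeros((levels, levels))
        for k in range(levels):
            R[k, 0] = trapezoid_2d(f, a, b, c, d, 2 ** k)
            for j in range(1, k + 1):
                # Same Richardson weights as in the one-dimensional Romberg scheme.
                R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4 ** j - 1)
        return R[levels - 1, levels - 1]

    f = lambda x, y: np.exp(x) * np.cos(y)    # exact integral over [0,1]x[0,1]: (e-1)*sin(1)
    print(romberg_2d(f, 0.0, 1.0, 0.0, 1.0), (np.e - 1) * np.sin(1.0))
    ```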

  2. Method of Improving the Navigation Accuracy of SINS by Continuous Rotation

    Institute of Scientific and Technical Information of China (English)

    YANG Yong; MIAO Ling-juan; SHEN Jun

    2005-01-01

    A method of improving the navigation accuracy of strapdown inertial navigation system (SINS) is studied. The particular technique discussed involves the continuous rotation of gyros and accelerometers cluster about the vertical axis of the vehicle. Then the errors of these sensors will have periodic variation corresponding to components along the body frame. Under this condition, the modulated sensor errors produce reduced system errors. Theoretical analysis based on a new coordinate system defined as sensing frame and test results are presented, and they indicate the method attenuates the navigation errors brought by the gyros' random constant drift and the accelerometer's bias and their white noise compared to the conventional method.
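
    The attenuation mechanism can be illustrated with a toy calculation: a constant sensor bias fixed in the rotating body frame, projected into the navigation frame, averages out in the horizontal channels over one full revolution about the vertical axis. The bias values below are arbitrary, and the sketch is only a schematic of the modulation effect, not the paper's error model.

    ```python
    import numpy as np

    bias_body = np.array([0.01, -0.02, 0.005])      # constant bias components, body frame
    angles = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)

    def body_to_nav(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])          # rotation about the vertical axis

    bias_nav = np.array([body_to_nav(t) @ bias_body for t in angles])
    print(bias_nav.mean(axis=0))   # horizontal components average to ~0; vertical is unchanged
    ```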

  3. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  4. Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    OpenAIRE

    Kerr, Deborah A; Wright, Janine L.; Dhaliwal, Satvinder S.; Boushey, Carol J

    2015-01-01

    The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24), were Chinese-American youths aged between 11 and 15 years and lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; ...

  5. Improvement in the specificity of assays for detection of antibody to hepatitis B core antigen.

    OpenAIRE

    Weare, J A; Robertson, E F; Madsen, G; Hu, R; Decker, R H

    1991-01-01

    Reducing agents dramatically alter the specificity of competitive assays for antibody to hepatitis B core antigen (anti-HBc). A specificity improvement was demonstrated with a new assay which utilizes microparticle membrane capture and chemiluminescence detection as well as a current radioimmunoassay procedure (Corab: Abbott Laboratories, Abbott Park, Ill.). The effect was most noticeable with elevated negative and weakly reactive samples. In both systems, reductants increased separation of a...

  6. SU-E-J-133: Autosegmentation of Linac CBCT: Improved Accuracy Via Penalized Likelihood Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y [Elekta, Inc, Maryland Heights, MO (United States)

    2015-06-15

    Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction and auto-segmentation accuracy of the resulting CBCTs as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs(pCT) deformable-registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. However, the modest extent of these results and the pattern of differences across CBCT cases suggest that
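
    The Dice similarity coefficient used above as the contour-accuracy metric is straightforward to compute from two binary masks; a minimal sketch with small synthetic masks follows.

    ```python
    import numpy as np

    def dice(mask_a, mask_b):
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
    b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
    print(dice(a, b))   # 0.64 for these two 5x5 squares offset by one voxel
    ```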

  7. SU-E-J-133: Autosegmentation of Linac CBCT: Improved Accuracy Via Penalized Likelihood Reconstruction

    International Nuclear Information System (INIS)

    Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction and auto-segmentation accuracy of the resulting CBCTs as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs(pCT) deformable-registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. However, the modest extent of these results and the pattern of differences across CBCT cases suggest that

  8. Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging

    Science.gov (United States)

    Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.

    2016-03-01

    Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.

  9. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
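
    The core replacement step, pairing MRI intensities with HU values from a nearby artifact-free slice and using that relationship to predict HU inside the corrupted region, can be sketched as below. A simple polynomial fit stands in for the paper's more comprehensive analysis, and all arrays are synthetic placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    mri_clean = rng.random((256, 256))                        # registered MRI, artifact-free slice
    hu_clean = -1000 + 2000 * mri_clean + 20 * rng.standard_normal((256, 256))   # matching CT slice

    # Fit a simple MRI-intensity -> HU mapping from the artifact-free slice.
    coeffs = np.polyfit(mri_clean.ravel(), hu_clean.ravel(), deg=3)

    # Replace HU only inside the region flagged as corrupted on the artifactual slice.
    mri_artifact = rng.random((256, 256))                     # MRI of the artifact-affected slice
    hu_artifact = -1000 + 2000 * mri_artifact                 # CT slice degraded by metal artifacts
    mask = np.zeros((256, 256), dtype=bool)
    mask[100:140, 100:140] = True
    hu_artifact[mask] += 800 * rng.standard_normal(mask.sum())   # simulated streaks

    hu_corrected = hu_artifact.copy()
    hu_corrected[mask] = np.polyval(coeffs, mri_artifact[mask])
    ```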

  10. Case studies on forecasting for innovative technologies: frequent revisions improve accuracy.

    Science.gov (United States)

    Lerner, Jeffrey C; Robertson, Diane C; Goldstein, Sara M

    2015-02-01

    Health technology forecasting is designed to provide reliable predictions about costs, utilization, diffusion, and other market realities before the technologies enter routine clinical use. In this article we address three questions central to forecasting's usefulness: Are early forecasts sufficiently accurate to help providers acquire the most promising technology and payers to set effective coverage policies? What variables contribute to inaccurate forecasts? How can forecasters manage the variables to improve accuracy? We analyzed forecasts published between 2007 and 2010 by the ECRI Institute on four technologies: single-room proton beam radiation therapy for various cancers; digital breast tomosynthesis imaging technology for breast cancer screening; transcatheter aortic valve replacement for serious heart valve disease; and minimally invasive robot-assisted surgery for various cancers. We then examined revised ECRI forecasts published in 2013 (digital breast tomosynthesis) and 2014 (the other three topics) to identify inaccuracies in the earlier forecasts and explore why they occurred. We found that five of twenty early predictions were inaccurate when compared with the updated forecasts. The inaccuracies pertained to two technologies that had more time-sensitive variables to consider. The case studies suggest that frequent revision of forecasts could improve accuracy, especially for complex technologies whose eventual use is governed by multiple interactive factors. PMID:25646112

  11. Evaluating an educational intervention to improve the accuracy of death certification among trainees from various specialties

    Directory of Open Access Journals (Sweden)

    Villar Jesús

    2007-11-01

    Full Text Available Abstract Background The inaccuracy of death certification can lead to the misallocation of resources in health care programs and research. We evaluated the rate of errors in the completion of death certificates among medical residents from various specialties, before and after an educational intervention designed to improve accuracy in the certification of the cause of death. Methods A 90-min seminar was delivered to seven mixed groups of medical trainees (n = 166) from several health care institutions in Spain. Physicians were asked to read and anonymously complete the same case scenario of death certification before and after the seminar. We compared the rates of errors and the impact of the educational intervention before and after the seminar. Results A total of 332 death certificates (166 completed before and 166 completed after the intervention) were audited. Death certificates were completed with errors by 71.1% of the physicians before the educational intervention. Following the seminar, the proportion of death certificates with errors decreased to 9% (p Conclusion Major errors in the completion of the correct cause of death on death certificates are common among medical residents. A simple educational intervention can dramatically improve the accuracy in the completion of death certificates by physicians.

  12. Improving Intensity-Based Lung CT Registration Accuracy Utilizing Vascular Information

    Directory of Open Access Journals (Sweden)

    Kunlin Cao

    2012-01-01

    Full Text Available Accurate pulmonary image registration is a challenging problem when the lungs undergo large deformations. In this work, we present a nonrigid volumetric registration algorithm to track lung motion between a pair of intrasubject CT images acquired at different inflation levels and introduce a new vesselness similarity cost that improves intensity-only registration. Volumetric CT datasets from six human subjects were used in this study. The performance of four intensity-only registration algorithms was compared with and without adding the vesselness similarity cost function. Matching accuracy was evaluated using landmarks, vessel tree, and fissure planes. The Jacobian determinant of the transformation was used to reveal the deformation pattern of local parenchymal tissue. The average matching error for intensity-only registration methods was on the order of 1 mm at landmarks and 1.5 mm on fissure planes. After adding the vesselness preserving cost function, the landmark and fissure positioning errors decreased approximately by 25% and 30%, respectively. The vesselness cost function effectively helped improve the registration accuracy in regions near the thoracic cage and near the diaphragm for all the intensity-only registration algorithms tested and also helped produce more consistent and more reliable patterns of regional tissue deformation.
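
    One way to realize a vesselness similarity term of this kind is to add a sum-of-squared-differences penalty between vesselness maps of the fixed and warped moving images, as sketched below. The use of scikit-image's Frangi filter and the fixed weight are illustrative choices, not the authors' implementation, and the random arrays stand in for CT slices.

    ```python
    import numpy as np
    from skimage.filters import frangi

    def combined_cost(fixed, warped_moving, weight=0.5):
        """Intensity SSD plus vesselness SSD between fixed and warped moving images."""
        ssd_intensity = np.mean((fixed - warped_moving) ** 2)
        vessel_fixed = frangi(fixed, black_ridges=False)       # bright vessels on dark lung
        vessel_moving = frangi(warped_moving, black_ridges=False)
        ssd_vessel = np.mean((vessel_fixed - vessel_moving) ** 2)
        return ssd_intensity + weight * ssd_vessel

    fixed = np.random.rand(64, 64)
    moving = np.random.rand(64, 64)
    print(combined_cost(fixed, moving))
    ```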

  13. Sharp Chandra View of ROSAT All-Sky Survey Bright Sources: I. Improvement of Positional Accuracy

    CERN Document Server

    Gao, Shuang; Liu, Jifeng

    2016-01-01

    The ROSAT All-Sky Survey (RASS) represents one of the most complete and sensitive soft X-ray all-sky surveys to date. However, the deficient positional accuracy of the RASS Bright Source Catalog (BSC) and subsequent lack of firm optical identifications affect the multi-wavelength studies of X-ray sources. The widely used positional errors σ_pos based on the Tycho Stars Catalog (Tycho-1) have previously been applied for identifying objects in the optical band. The considerably sharper Chandra view covers a fraction of RASS sources, whose σ_pos could be improved by utilizing the sub-arcsec positional accuracy of Chandra observations. We cross-match X-ray objects between the BSC and Chandra sources extracted from the Advanced CCD Imaging Spectrometer (ACIS) archival observations. A combined counterparts list (BSCxACIS) with Chandra spatial positions weighted by the X-ray flux of multi-counterparts is employed to evaluate and improve the former identifications of BSC with the other...

  14. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    Science.gov (United States)

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-06-18

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective to portability and cost saving.
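
    The projection step described above starts from a satellite line-of-sight unit vector built from the elevation and azimuth angles reported by the receiver and rotated from the local East-North-Up frame to Earth-centered, Earth-fixed coordinates. A minimal sketch of that construction follows; the angles and user coordinates are placeholder values, and the correction projection itself is not shown.

    ```python
    import numpy as np

    def los_ecef(elev_deg, azim_deg, lat_deg, lon_deg):
        el, az = np.radians(elev_deg), np.radians(azim_deg)
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)

        # Line of sight in the local East-North-Up frame (azimuth measured from North).
        enu = np.array([np.cos(el) * np.sin(az),
                        np.cos(el) * np.cos(az),
                        np.sin(el)])

        # Columns of R are the E, N, U axes expressed in ECEF coordinates.
        R = np.array([[-np.sin(lon), -np.sin(lat) * np.cos(lon), np.cos(lat) * np.cos(lon)],
                      [ np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat) * np.sin(lon)],
                      [ 0.0,          np.cos(lat),               np.sin(lat)]])
        return R @ enu

    print(los_ecef(45.0, 120.0, 37.5, 127.0))   # unit vector toward the satellite in ECEF
    ```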

  15. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations.

    Science.gov (United States)

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-05-26

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices.
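
    The selection criterion described above, a least-squares fit minimizing the difference between true and estimated CMC-joint angles, can be sketched as a search over candidate placements. The per-location sensor model below is a synthetic stand-in (a simple crosstalk mixing matrix plus noise) for the scanned joint-contour data, so the sketch only illustrates the selection logic, not the paper's measurement model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_angles = rng.uniform(0.0, 60.0, size=(50, 2))        # flexion, abduction (degrees)

    def simulated_readings(angles, crosstalk):
        """Toy sensor model: each sensor mixes the two joint angles plus noise."""
        mix = np.array([[1.0, crosstalk], [crosstalk, 1.0]])
        return angles @ mix.T + rng.normal(0.0, 1.0, angles.shape)

    candidate_crosstalk = {"location A": 0.6, "location B": 0.3, "location C": 0.05}

    errors = {}
    for name, crosstalk in candidate_crosstalk.items():
        readings = simulated_readings(true_angles, crosstalk)
        coeffs, *_ = np.linalg.lstsq(readings, true_angles, rcond=None)   # least-squares calibration
        errors[name] = np.sqrt(np.mean((readings @ coeffs - true_angles) ** 2))

    print(errors, "-> selected:", min(errors, key=errors.get))
    ```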

  16. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    Science.gov (United States)

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective to portability and cost saving.

  17. Development of a method of ICP algorithm accuracy improvement during shaped profiles and surfaces control

    Directory of Open Access Journals (Sweden)

    V.A. Pechenin

    2014-10-01

    Full Text Available In this paper we propose a method for improving the operating accuracy of the iterative closest point algorithm used to solve metrology problems when determining a location deviation. Compressor blade profiles of a gas turbine engine (GTE) were used as the object for applying the deviation determination method. In the developed method, the best-alignment problem is formulated as a multiobjective problem including criteria of minimum squared distances, normal vector differences and camber depth differences at corresponding points of the aligned profiles. Variants of solving the task using an integral criterion combining the above were considered. Optimization problems were solved using a quasi-Newton method of sequential quadratic programming. The proposed improvement of the registration algorithm, based on geometric features, showed greater accuracy in comparison with the discussed methods that optimize a distance between fitting points, especially if a small number of measurement points on the profiles was used.
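
    For reference, the baseline point-to-point ICP being extended here alternates nearest-neighbour matching with a closed-form rigid fit. The sketch below shows that baseline in two dimensions with random point sets standing in for blade profiles; it does not include the multiobjective criteria proposed in the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=20):
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iterations):
            _, idx = tree.query(src)                  # closest target point for each source point
            matched = target[idx]
            mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
            if np.linalg.det(Vt.T @ U.T) < 0:         # guard against reflections
                Vt[-1] *= -1
            R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t
        return src

    target = np.random.rand(200, 2)
    angle = np.radians(5.0)
    Rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
    source = target @ Rot.T + np.array([0.05, -0.02])
    aligned = icp(source, target)
    print(np.mean(np.linalg.norm(aligned - target, axis=1)))   # mean residual after alignment
    ```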

  18. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    Science.gov (United States)

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  19. DEVELOPING A PRICE-SENSITIVE RECOMMENDER SYSTEM TO IMPROVE ACCURACY AND BUSINESS PERFORMANCE OF ECOMMERCE APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Panniello Umberto

    2015-06-01

    Full Text Available Much work has been done on recommender systems (RS and much evidence was collected from applications about their effectiveness on business. As a consequence, the use of RS has quickly shifted from information retrieval to automatic marketing tools. The main aim of marketing tools is to positively affect customers’ purchasing decisions and we know through marketing literature that purchasing decisions are strongly influenced by price. However, few works have explored the issue of including price in a recommendation engine. In this paper, we want to describe the main issues of designing this type of price-sensitive recommendation engine. We want also to demonstrate what the effect is of this design on recommendations’ accuracy and on business performance. We demonstrate that including price in an RS improves the accuracy of recommendations, but it has to be properly modeled in order to also improve business performance. We have experimented with a Price-Sensitive RS in a laboratory setting and compared it to a traditional one by varying several settings.

  20. OCR Accuracy Improvement on Document Images Through a Novel Pre-Processing Approach

    Directory of Open Access Journals (Sweden)

    A. El Harraj

    2015-08-01

    Full Text Available Digital camera and mobile document image acquisition are new trends arising in the world of Optical Character Recognition and text detection. In some cases, such a process introduces many distortions and produces poorly scanned text or text-photo images and natural images, leading to unreliable OCR digitization. In this paper, we present a novel nonparametric and unsupervised method to compensate for undesirable document image distortions, aiming to optimally improve OCR accuracy. Our approach relies on a very efficient stack of document image enhancing techniques to recover deformation of the entire document image. First, we propose a local brightness and contrast adjustment method to effectively handle lighting variations and the irregular distribution of image illumination. Second, we use an optimized greyscale conversion algorithm to transform the document image to greyscale. Third, we sharpen the useful information in the resulting greyscale image using an unsharp masking method. Finally, an optimal global binarization approach is used to prepare the final document image for OCR recognition. The proposed approach can significantly improve text detection rate and optical character recognition accuracy. To demonstrate the efficiency of our approach, exhaustive experiments on a standard dataset are presented.
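
    The four-stage chain can be approximated with off-the-shelf OpenCV operations, as in the sketch below. CLAHE, a fixed unsharp-mask weighting and Otsu thresholding are generic stand-ins for the authors' optimized algorithms, and "page.jpg" is a placeholder file name.

    ```python
    import cv2

    img = cv2.imread("page.jpg")                      # placeholder input document image

    # 1) Local brightness/contrast adjustment (CLAHE on the lightness channel).
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # 2) Greyscale conversion.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # 3) Unsharp masking to sharpen character strokes.
    blur = cv2.GaussianBlur(gray, (0, 0), 3)
    sharp = cv2.addWeighted(gray, 1.5, blur, -0.5, 0)

    # 4) Global binarization (Otsu) before passing the page to the OCR engine.
    _, binary = cv2.threshold(sharp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("page_preprocessed.png", binary)
    ```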

  1. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    Directory of Open Access Journals (Sweden)

    Donghwan Yoon

    2016-06-01

    Full Text Available The position accuracy of Global Navigation Satellite System (GNSS modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective to portability and cost saving.

  2. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    Directory of Open Access Journals (Sweden)

    Oleksandr Makeyev

    2016-06-01

    Full Text Available Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error, resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected.

  3. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations

    Science.gov (United States)

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-01-01

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364

  4. Combining data fusion with multiresolution analysis for improving the classification accuracy of uterine EMG signals

    Science.gov (United States)

    Moslem, Bassam; Diab, Mohamad; Khalil, Mohamad; Marque, Catherine

    2012-12-01

    Multisensor data fusion is a powerful solution for solving difficult pattern recognition problems such as the classification of bioelectrical signals. It is the process of combining information from different sensors to provide a more stable and more robust classification decisions. We combine here data fusion with multiresolution analysis based on the wavelet packet transform (WPT) in order to classify real uterine electromyogram (EMG) signals recorded by 16 electrodes. Herein, the data fusion is done at the decision level by using a weighted majority voting (WMV) rule. On the other hand, the WPT is used to achieve significant enhancement in the classification performance of each channel by improving the discrimination power of the selected feature. We show that the proposed approach tested on our recorded data can improve the recognition accuracy in labor prediction and has a competitive and promising performance.
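
    Decision-level fusion with a weighted majority vote, as applied above across the 16 channels, reduces to summing per-class weights of the individual channel decisions. The sketch below uses invented decisions and weights purely to show the mechanics.

    ```python
    import numpy as np

    channel_decisions = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1])  # 16 channels
    channel_weights = np.array([0.9, 0.8, 0.6, 0.85, 0.55, 0.9, 0.8, 0.6,
                                0.75, 0.9, 0.5, 0.8, 0.85, 0.9, 0.6, 0.8])

    score_class1 = channel_weights[channel_decisions == 1].sum()
    score_class0 = channel_weights[channel_decisions == 0].sum()
    fused_label = int(score_class1 >= score_class0)
    print(fused_label)
    ```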

  5. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics.

    Science.gov (United States)

    Yin, Jian; Fenley, Andrew T; Henriksen, Niel M; Gilson, Michael K

    2015-08-13

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by nonoptimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery.

  6. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    Science.gov (United States)

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-01-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.

  7. Effectiveness of Practices for Improving the Diagnostic Accuracy of Non-ST Elevation Myocardial Infarction in the Emergency Department: A Laboratory Medicine Best Practices Systematic Review

    Science.gov (United States)

    Layfield, Christopher; Rose, John; Alford, Aaron; Snyder, Susan R.; Apple, Fred S.; Chowdhury, Farah M.; Kontos, Michael C.; Newby, L. Kristin; Storrow, Alan B.; Tanasijevic, Milenko; Leibach, Elizabeth; Liebow, Edward B.; Christenson, Robert H.

    2016-01-01

    Objectives This article presents evidence from a systematic review of the effectiveness of four practices (assay selection, decision point cardiac troponin (cTn) threshold selection, serial testing, and point of care testing) for improving the diagnostic accuracy for Non-ST-Segment Elevation Myocardial Infarction (NSTEMI) in the Emergency Department. Design and Methods The CDC-funded Laboratory Medicine Best Practices (LMBP™) Initiative systematic review A6 Method for Laboratory Best Practices was used. Results The current guidelines (e.g., ACC/AHA) recommend using cardiac troponin assays with a 99th percentile upper reference limit (URL) diagnostic threshold to diagnose NSTEMI. The evidence in this systematic review indicates that contemporary sensitive cTn assays meet the assay profile requirements (sensitivity, specificity, PPV, and NPV) to more accurately diagnose NSTEMI than alternate tests. Additional biomarkers did not increase diagnostic effectiveness of cTn assays. Sensitivity, specificity, and negative predictive value (NPV) were consistently high and low positive predictive value (PPV) improved with serial sampling. Evidence for use of cTn point of care testing (POCT) was insufficient to make recommendations, though some evidence suggests cTn POCT may result in reduction to patient length of stay and costs. Conclusions Two best practice recommendations emerged from the systematic review and meta-analysis of literature conducted using the LMBP™ A6 Method criteria: Testing with cardiac troponin assays, using the 99th percentile URL as the clinical diagnostic threshold for the diagnosis of NSTEMI and without additional biomarkers, is recommended. Also recommended is serial cardiac troponin sampling with one sample at presentation and at least one additional sample taken a minimum of 6 hours later to identify a rise or fall in the troponin level. Testing with high-sensitivity cardiac troponin assays, at presentation and again within 6 hours, is the

  8. Iterative metal artifact reduction improves dose calculation accuracy. Phantom study with dental implants

    Energy Technology Data Exchange (ETDEWEB)

    Maerz, Manuel; Mittermair, Pia; Koelbl, Oliver; Dobler, Barbara [Regensburg University Medical Center, Department of Radiotherapy, Regensburg (Germany); Krauss, Andreas [Siemens Healthcare GmbH, Forchheim (Germany)

    2016-06-15

    Metallic dental implants cause severe streaking artifacts in computed tomography (CT) data, which affect the accuracy of dose calculations in radiation therapy. The aim of this study was to investigate the benefit of the iterative metal artifact reduction (iMAR) algorithm in terms of correct representation of Hounsfield units (HU) and dose calculation accuracy. Heterogeneous phantoms consisting of different types of tissue equivalent material surrounding metallic dental implants were designed. Artifact-containing CT data of the phantoms were corrected using iMAR. Corrected and uncorrected CT data were compared to synthetic CT data to evaluate accuracy of HU reproduction. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated in Oncentra v4.3 on corrected and uncorrected CT data and compared to Gafchromic™ EBT3 films to assess accuracy of dose calculation. The use of iMAR increased the accuracy of HU reproduction. The average deviation of HU decreased from 1006 HU to 408 HU in areas including metal and from 283 HU to 33 HU in tissue areas excluding metal. Dose calculation accuracy could be significantly improved for all phantoms and plans: the mean passing rate for gamma evaluation with 3% dose tolerance and 3 mm distance to agreement increased from 90.6% to 96.2% if artifacts were corrected by iMAR. The application of iMAR allows metal artifacts to be removed to a great extent, which leads to a significant increase in dose calculation accuracy. (orig.) Metallic implants cause streak artifacts in CT images, which affect dose calculation. This study investigates the benefit of the iterative metal artifact reduction algorithm iMAR with regard to the fidelity of Hounsfield units (HU) and the accuracy of dose calculations. Heterogeneous phantoms made of different types of tissue-equivalent material with
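
    The 3%/3 mm gamma passing rate quoted above can be computed, in simplified one-dimensional form with a global dose criterion, as in the sketch below. The dose profiles are synthetic and the brute-force implementation is an illustration rather than a clinical tool.

    ```python
    import numpy as np

    def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_tol=0.03, dta_mm=3.0):
        x = np.arange(dose_ref.size) * spacing_mm
        dose_norm = dose_tol * dose_ref.max()          # global 3% dose criterion
        gammas = []
        for xi, d in zip(x, dose_eval):
            dd = (dose_ref - d) / dose_norm
            dx = (x - xi) / dta_mm
            gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
        return np.mean(np.array(gammas) <= 1.0)

    x = np.linspace(0.0, 100.0, 201)                      # 0.5 mm grid
    dose_ref = np.exp(-0.5 * ((x - 50.0) / 15.0) ** 2)
    dose_eval = np.exp(-0.5 * ((x - 50.8) / 15.0) ** 2)   # slightly shifted calculated dose
    print(gamma_pass_rate(dose_ref, dose_eval, spacing_mm=0.5))
    ```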

  9. Evaluation of an improved bioluminescence assay for the detection of bacteria in soy milk.

    Science.gov (United States)

    Shinozaki, Yohei; Sato, Jun; Igarashi, Toshinori; Suzuki, Shigeya; Nishimoto, Kazunori; Harada, Yasuhiro

    2013-01-01

    Because soy milk is nutrient rich and nearly neutral in pH, it favors the growth of microbial contaminants. To ensure that soy milk meets food-safety standards, it must be pasteurized and have its sterility confirmed. The ATP bioluminescence assay has become a widely accepted means of detecting food microorganisms. However, the high background bioluminescence intensity of soy milk has rendered it unsuitable for ATP analysis. Here, we tested the efficacy of an improved bioluminescence assay with a pre-treatment step on soy milk. By comparing background bioluminescence intensities obtained by the conventional and improved methods, we demonstrated that our method significantly reduces soy milk background bioluminescence. The dose-response curve of the assay was tested with serial dilutions of a Bacillus sp. culture. An extremely strong log-linear relation between bioluminescence intensity (relative light units, RLU) and colony-forming units (CFU/ml) emerged for the tested strain. The detection limit of the assay was estimated as 5.2×10³ CFU/ml from the dose-response curve, with the signal limit set at three times the background level. The results showed that contaminated samples could be easily detected within 24 h using our improved bioluminescence assay.
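
    The detection-limit estimate follows from the log-linear dose-response fit together with the three-times-background signal criterion; the sketch below reproduces that arithmetic with invented data points, so the resulting number is illustrative only.

    ```python
    import numpy as np

    cfu_per_ml = np.array([1e4, 1e5, 1e6, 1e7, 1e8])
    rlu = np.array([3.2e2, 2.9e3, 3.1e4, 2.8e5, 3.0e6])    # relative light units
    background_rlu = 55.0

    slope, intercept = np.polyfit(np.log10(cfu_per_ml), np.log10(rlu), deg=1)

    signal_limit = 3.0 * background_rlu                    # three times the background level
    detection_limit = 10 ** ((np.log10(signal_limit) - intercept) / slope)
    print(f"estimated detection limit: {detection_limit:.1e} CFU/ml")
    ```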

  10. Limits of diagnostic accuracy of anti-hepatitis C virus antibodies detection by ELISA and immunoblot assay.

    Science.gov (United States)

    Suslov, Anatoly P; Kuzin, Stanislav N; Golosova, Tatiana V; Shalunova, Nina V; Malyshev, Nikolai A; Sadikova, Natalia V; Vavilova, Lubov M; Somova, Anna V; Musina, Elena E; Ivanova, Maria V; Kipor, Tatiana T; Timonin, Igor M; Kuzina, Lubov E; Godkov, Mihail A; Bajenov, Alexei I; Nesterenko, Vladimir G

    2002-07-01

    When human sera samples are tested for anti-hepatitis C virus (HCV) antibodies using different ELISA kits as well as immunoblot assay kits, discrepant results often occur. As a result, the diagnosis of HCV infection in such sera remains unclear. The purpose of this investigation is to define the limits of HCV serodiagnostics. Overall, 7 different test kits from domestic and foreign manufacturers were used for testing the sampled sera. A preliminary comparative study using the seroconversion panels PHV905, PHV907 and PHV908 was performed, and a reference kit (Murex anti-HCV version 4) was chosen as the most sensitive kit on the basis of the results of this study. Overall, 1640 sera samples were screened using different anti-HCV ELISA kits, and 667 of them gave discrepant results in at least two kits. These sera were then tested using three anti-HCV ELISA kits (first set of 377 samples) or four anti-HCV ELISA kits (second set of 290 samples) under reference laboratory conditions. In the first set 17.2% of samples remained discrepant, and in the second set 13.4%. The "discrepant" sera were further tested with the RIBA 3.0 and INNO-LIA immunoblot confirmatory assays, but approximately 5-7% of them remained undetermined after all the tests. For samples with a signal-to-cutoff ratio higher than 3.0, a high rate of consistency among the reference kit, routine ELISA and INNO-LIA immunoblot assay results was observed. On the other hand, the results of testing 27 "problematic" sera in RIBA 3.0 and INNO-LIA were consistent in only 55.5% of cases. Analysis of the antigen spectrum reactive with antibodies in "problematic" sera demonstrated a predominance of Core, NS3 and NS4 antigens for sera positive in RIBA 3.0, and of Core and NS3 antigens for sera positive in INNO-LIA. To overcome the problem of undetermined sera, methods based on other principles, as well as alternative criteria for diagnosing HCV infection, are discussed.

  11. Hyaluronidase treatment of synovial fluid to improve assay precision for biomarker research using multiplex immunoassay platforms.

    Science.gov (United States)

    Jayadev, Chethan; Rout, Raj; Price, Andrew; Hulley, Philippa; Mahoney, David

    2012-12-14

    Synovial fluid (SF) is a difficult biological matrix to analyse due to its complex non-Newtonian nature. This can result in poor assay repeatability and potentially inefficient use of precious samples. This study assessed the impact of SF treatment by hyaluronidase and/or dilution on intra-assay precision using the Luminex and Meso Scale Discovery (MSD) multiplex platforms. SF was obtained from patients with knee osteoarthritis at the time of joint replacement surgery. Aliquots derived from the same sample were left untreated (neat), 2-fold diluted, 4-fold diluted or treated with 2 mg/ml testicular hyaluronidase (with 2-fold dilution). Preparation methods were compared in a polystyrene-bead Luminex 10-plex (N=16), magnetic-bead Luminex singleplex (N=7) and MSD 4-plex (N=7). Each method was assessed for coefficient of variation (CV) of replicate measurements, number of bead events (for Luminex assays) and dilution-adjusted analyte concentration. Percentage recovery was calculated for dilutions and hyaluronidase treatment. Hyaluronidase treatment significantly increased the number of wells with satisfactory bead events/region (95%) compared to neat (48%). The magnetic-bead Luminex assay achieved ≥50 bead events irrespective of treatment method. Hyaluronidase treatment also resulted in lower intra-assay CVs for detectable ligands in the Luminex assays. In addition, measured sample concentrations were higher and recovery was poor (elevated) after hyaluronidase treatment. In the MSD 4-plex, within-group comparison of the intra-assay CV or concentration was not conclusively influenced by SF preparation. However, only hyaluronidase treatment resulted in CV<25% for all samples for TNF-α. There was no effect on analyte concentrations or recovery. Hyaluronidase treatment can improve intra-assay precision and assay signal of SF analysis by multiplex immunoassays and should be recommended for SF biomarker research, particularly using the Luminex platform. PMID:22955210
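
    The precision and recovery metrics used above are simple to compute. The sketch below shows the intra-assay coefficient of variation of replicate wells and the percentage recovery of a diluted sample; the replicate readings are illustrative assumptions, not values from the study.

```python
import numpy as np

def intra_assay_cv(replicates):
    """Coefficient of variation (%) of replicate measurements of one sample."""
    replicates = np.asarray(replicates, dtype=float)
    return 100.0 * replicates.std(ddof=1) / replicates.mean()

def percent_recovery(measured, expected):
    """Recovery (%) of a diluted/treated sample against its expected value."""
    return 100.0 * measured / expected

# Illustrative duplicate readings (pg/ml) for one analyte under two preparations.
print(intra_assay_cv([148.0, 231.0]))   # neat SF: high CV
print(intra_assay_cv([168.0, 175.0]))   # hyaluronidase-treated: low CV
# Dilution-adjusted concentration vs. the neat value gives recovery.
print(percent_recovery(measured=2 * 171.5, expected=190.0))
```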

  12. Increasing cutaneous afferent feedback improves proprioceptive accuracy at the knee in patients with sensory ataxia.

    Science.gov (United States)

    Macefield, Vaughan G; Norcliffe-Kaufmann, Lucy; Goulding, Niamh; Palma, Jose-Alberto; Fuente Mora, Cristina; Kaufmann, Horacio

    2016-02-01

    Hereditary sensory and autonomic neuropathy type III (HSAN III) features disturbed proprioception and a marked ataxic gait. We recently showed that joint angle matching error at the knee is positively correlated with the degree of ataxia. Using intraneural microelectrodes, we also documented that these patients lack functional muscle spindle afferents but have preserved large-diameter cutaneous afferents, suggesting that patients with better proprioception may be relying more on proprioceptive cues provided by tactile afferents. We tested the hypothesis that enhancing cutaneous sensory feedback by stretching the skin at the knee joint using unidirectional elasticity tape could improve proprioceptive accuracy in patients with a congenital absence of functional muscle spindles. Passive joint angle matching at the knee was used to assess proprioceptive accuracy in 25 patients with HSAN III and 9 age-matched control subjects, with and without taping. Angles of the reference and indicator knees were recorded with digital inclinometers and the absolute error, gradient, and correlation coefficient between the two sides calculated. Patients with HSAN III performed poorly on the joint angle matching test [mean matching error 8.0 ± 0.8° (±SE); controls 3.0 ± 0.3°]. Following application of tape bilaterally to the knee in an X-shaped pattern, proprioceptive performance improved significantly in the patients (mean error 5.4 ± 0.7°) but not in the controls (3.0 ± 0.2°). Across patients, but not controls, significant increases in gradient and correlation coefficient were also apparent following taping. We conclude that taping improves proprioception at the knee in HSAN III, presumably via enhanced sensory feedback from the skin.
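
    The passive joint angle matching metrics described above (absolute matching error, gradient and correlation coefficient between the reference and indicator knees) can be computed as in the sketch below; the angle values are illustrative assumptions, not patient data.

```python
import numpy as np

def matching_metrics(reference_deg, indicator_deg):
    """Absolute matching error, gradient and correlation between the two knees.

    reference_deg: angles set on the reference knee (digital inclinometer).
    indicator_deg: angles reproduced with the indicator knee.
    """
    ref = np.asarray(reference_deg, dtype=float)
    ind = np.asarray(indicator_deg, dtype=float)
    mean_abs_error = np.mean(np.abs(ind - ref))
    gradient, intercept = np.polyfit(ref, ind, 1)   # slope of indicator vs reference
    r = np.corrcoef(ref, ind)[0, 1]
    return mean_abs_error, gradient, r

# Illustrative trial: target angles and one subject's reproductions (degrees).
ref = [10, 20, 30, 40, 50, 60]
ind = [14, 17, 36, 45, 44, 68]
print(matching_metrics(ref, ind))
```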

  13. SU-E-J-101: Improved CT to CBCT Deformable Registration Accuracy by Incorporating Multiple CBCTs

    International Nuclear Information System (INIS)

    Purpose: Combining prior day CBCT contours with STAPLE was previously shown to improve automated prostate contouring. These accurate STAPLE contours are now used to guide the planning CT to pre-treatment CBCT deformable registration. Methods: Six IGRT prostate patients with daily kilovoltage CBCT had their original planning CT and 9 CBCTs contoured by the same physician. These physician contours for the planning CT and each prior CBCT are deformed to match the current CBCT anatomy, producing multiple contour sets. These sets are then combined using STAPLE into one optimal set (e.g. for day 3 CBCT, combine contours produced using the plan plus day 1 and 2 CBCTs). STAPLE computes a probabilistic estimate of the true contour from this collection of contours by maximizing sensitivity and specificity. The deformation field from planning CT to CBCT registration is then refined by matching its deformed contours to the STAPLE contours. ADMIRE (Elekta Inc.) was used for this. The refinement does not force perfect agreement of the contours (typically a Dice's coefficient (DC) of > 0.9 is obtained), and the image difference metric remains in the optimization of the deformable registration. Results: The average DC between physician-delineated CBCT contours and deformed planning CT contours for the bladder, rectum and prostate was 0.80, 0.79 and 0.75, respectively. The accuracy significantly improved to 0.89, 0.84 and 0.84 (P<0.001 for all) when using the refined deformation field. The average time to run STAPLE with five scans and refine the planning CT deformation was 66 seconds on a Tesla K20c GPU. Conclusion: Accurate contours generated from multiple CBCTs provided guidance for CT to CBCT deformable registration, significantly improving registration accuracy as measured by contour DC. A more accurate deformation field is now available for transferring dose or electron density to the CBCT for adaptive planning. Research grant from Elekta
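
    Registration accuracy above is reported as Dice's coefficient between contour sets. A minimal sketch of that metric on binary organ masks is given below; the toy masks are assumptions, not the study's contours.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice's coefficient between two binary contour masks (True = inside organ)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy example: two overlapping "prostate" masks on a small grid.
a = np.zeros((50, 50), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((50, 50), dtype=bool); b[12:32, 11:31] = True
print(f"DC = {dice_coefficient(a, b):.2f}")
```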

  14. Knee joint secondary motion accuracy improved by quaternion-based optimizer with bony landmark constraints.

    Science.gov (United States)

    Wang, Hongsheng; Zheng, Naiqaun Nigel

    2010-12-01

    Skin marker-based motion analysis has been widely used in biomechanical studies and clinical applications. Unfortunately, the accuracy of knee joint secondary motions is largely limited by the nonrigid nature of human body segments. Numerous studies have investigated the characteristics of soft tissue movement. Utilizing these characteristics, we may improve the accuracy of knee joint motion measurement. An optimizer was developed by incorporating the soft tissue movement patterns at special bony landmarks into constraint functions. Bony landmark constraints were assigned to the skin markers at the femur epicondyles, tibial plateau edges, and tibial tuberosity in a motion analysis algorithm by limiting their allowed position space relative to the underlying bone. The rotation matrix was represented by a quaternion, and the constrained optimization problem was solved by Fletcher's version of the Levenberg-Marquardt optimization technique. The algorithm was validated by using motion data from both skin-based markers and bone-mounted markers attached to fresh cadavers. By comparing the results with the ground truth bone motion generated from the bone-mounted markers, the new algorithm had a significantly higher accuracy (root-mean-square (RMS) error: 0.7 ± 0.1 deg in axial rotation and 0.4 ± 0.1 deg in varus-valgus) in estimating the knee joint secondary rotations than algorithms without bony landmark constraints (RMS error: 1.7 ± 0.4 deg in axial rotation and 0.7 ± 0.1 deg in varus-valgus). Also, it predicts a more accurate medial-lateral translation (RMS error: 0.4 ± 0.1 mm) than the conventional techniques (RMS error: 1.2 ± 0.2 mm). The new algorithm, using bony landmark constraints, estimates more accurate secondary rotations and medial-lateral translation of the underlying bone.

  15. Prediction of soil properties using imaging spectroscopy: Considering fractional vegetation cover to improve accuracy

    Science.gov (United States)

    Franceschini, M. H. D.; Demattê, J. A. M.; da Silva Terra, F.; Vicente, L. E.; Bartholomeus, H.; de Souza Filho, C. R.

    2015-06-01

    Spectroscopic techniques have become attractive to assess soil properties because they are fast, require little labor and may reduce the amount of laboratory waste produced when compared to conventional methods. Imaging spectroscopy (IS) can have further advantages compared to laboratory or field proximal spectroscopic approaches such as providing spatially continuous information with a high density. However, the accuracy of IS-derived predictions decreases when the spectral mixture of soil with other targets occurs. This paper evaluates the use of spectral data obtained by an airborne hyperspectral sensor (ProSpecTIR-VS - Aisa dual sensor) for prediction of physical and chemical properties of Brazilian highly weathered soils (i.e., Oxisols). A methodology to assess the soil spectral mixture is adapted and a progressive spectral dataset selection procedure, based on bare soil fractional cover, is proposed and tested. Satisfactory performances are obtained especially for the quantification of clay, sand and CEC using airborne sensor data (R2 of 0.77, 0.79 and 0.54; RPD of 2.14, 2.22 and 1.50, respectively), after spectral data selection is performed; although results obtained for laboratory data are more accurate (R2 of 0.92, 0.85 and 0.75; RPD of 3.52, 2.62 and 2.04, for clay, sand and CEC, respectively). Most importantly, predictions based on airborne-derived spectra for which the bare soil fractional cover is not taken into account show considerably lower accuracy, for example for clay, sand and CEC (RPD of 1.52, 1.64 and 1.16, respectively). Therefore, hyperspectral remotely sensed data can be used to predict topsoil properties of highly weathered soils, although spectral mixture of bare soil with vegetation must be considered in order to achieve an improved prediction accuracy.
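
    The reported R2 and RPD statistics can be computed from paired laboratory and predicted values as in the sketch below; the clay-content numbers are illustrative assumptions, not the study's dataset.

```python
import numpy as np

def r2_rpd(observed, predicted):
    """Coefficient of determination (R2) and ratio of performance to deviation (RPD)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    rpd = obs.std(ddof=1) / rmse
    return r2, rpd

# Illustrative clay-content values (g/kg): laboratory reference vs. spectral prediction.
lab = np.array([120, 250, 380, 430, 510, 620, 700])
pred = np.array([150, 230, 360, 470, 480, 650, 660])
print(r2_rpd(lab, pred))
```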

  16. Improving Cartosat-1 DEM accuracy using synthetic stereo pair and triplet

    Science.gov (United States)

    Giribabu, D.; Srinivasa Rao, S.; Krishna Murthy, Y. V. N.

    2013-03-01

    Cartosat-1 is the first Indian Remote Sensing Satellite capable of providing along-track stereo images. Cartosat-1 provides forward stereo images with look angles of +26° and -5° with respect to nadir for generating Digital Elevation Models (DEMs), orthoimages and value-added products for various applications. A pitch bias of -21° applied to the satellite yields a reverse-tilt-mode stereo pair with look angles of +5° and -26° with respect to nadir. This paper compares DEMs generated using forward, reverse and other possible synthetic stereo pairs for two different types of topography. A stereo triplet was used to generate a DEM for Himalayan mountain topography to overcome the problem of occlusions. For flat to undulating topography, it was shown that using a Cartosat-1 synthetic stereo pair with look angles of -26° and +26° produces an improved DEM. Planimetric and height accuracies (Root Mean Square Error (RMSE)) of less than 2.5 m and 2.95 m, respectively, were obtained, and qualitative analysis shows finer details in comparison with other DEMs. For the rugged terrain and steep slopes of Himalayan mountain topography, simple stereo pairs may not provide reliable DEM accuracies due to occlusions and shadows. A stereo triplet from Cartosat-1 was used to generate a DEM for mountainous topography. This DEM shows better reconstruction of the elevation model, even in occluded regions, when compared with a simple stereo-pair-based DEM. Planimetric and height accuracies (RMSE) of nearly 3 m were obtained, and qualitative analysis shows a reduction of outliers in occluded regions.

  17. Establishment of laser-induced breakdown spectroscopy in a vacuum atmosphere for accuracy improvement

    International Nuclear Information System (INIS)

    This report describes the fundamentals of laser-induced breakdown spectroscopy (LIBS) and a quantitative analysis method under vacuum conditions for obtaining high measurement accuracy. The LIBS system employs the following major components: a pulsed laser, a gas chamber, an emission spectrometer, a detector, and a computer. When the output from a pulsed laser is focused onto a small spot on a sample, an optically induced plasma, called a laser-induced plasma (LIP), is formed at the surface. LIBS is a laser-based, sensitive optical technique used to detect certain atomic and molecular species by monitoring the emission signals from a LIP. The report covers the fundamentals of LIBS and the current state of research, and then describes the optimization of the measurement conditions and a characteristic analysis of the LIP based on measurements of elemental metals. The LIBS system shows measurement errors of about 0.63-5.82% and calibration curves for Cu, Cr and Ni. It also shows measurement errors of less than about 5% and calibration curves for Nd and Sm. As a result, the LIBS accuracy was partly improved over previous results under the optimized conditions.

  18. Improving the accuracy of skin elasticity measurement by using Q-parameters in Cutometer.

    Science.gov (United States)

    Qu, Di; Seehra, G Paul

    2016-01-01

    The skin elasticity parameters (Ue, Uv, Uf, Ur, Ua, and R0 through R9) in the Cutometer are widely used for in vivo measurement of skin elasticity. Their accuracy, however, is impaired by the inadequacy of the definition of a key parameter, the time point of 0.1 s, which separates the elastic and viscoelastic responses of human skin. This study shows why an inflection point (t(IP)) should be calculated from each individual response curve to define skin elasticity, and how the Q-parameters are defined in the Cutometer. By analyzing the strain versus time curves of some purely elastic standards and of a population of 746 human volunteers, a method of determining the t(IP) from each mode 1 response curve was established. The results showed a wide distribution of this parameter ranging from 0.11 to 0.19 s, demonstrating that the current single-valued empirical parameter of 0.1 s was not adequate to represent this property of skin. A set of area-based skin viscoelastic parameters was also defined. The biological elasticity thus obtained showed a statistically significant correlation with the study volunteers' chronological age. We conclude that the Q-parameters are more accurate than the U and R parameters and should be used to improve the measurement accuracy of human skin elasticity. PMID:27319059
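
    One plausible way to estimate the per-curve inflection point t(IP) described above is to locate a sign change in the numerically estimated second derivative of the strain-time curve; the paper's exact definition may differ, and the synthetic response below is an assumption used only for illustration.

```python
import numpy as np

def inflection_time(t, strain, skip=5):
    """Estimate the inflection point t_IP of a Cutometer mode-1 strain curve.

    Sketch under one plausible definition: the first time (ignoring the first
    few samples) at which the numerical second derivative changes sign.
    """
    d2 = np.gradient(np.gradient(strain, t), t)
    sign_change = np.where(np.diff(np.sign(d2[skip:])) != 0)[0]
    return t[skip + sign_change[0]] if sign_change.size else np.nan

# Synthetic response: a fast, sigmoid-like elastic rise followed by slow creep.
t = np.linspace(0, 2.0, 400)
strain = 0.35 / (1 + np.exp(-(t - 0.12) / 0.02)) + 0.25 * (1 - np.exp(-t / 0.8))
print(f"t_IP ~ {inflection_time(t, strain):.3f} s")
```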

  19. A metrological approach to improve accuracy and reliability of ammonia measurements in ambient air

    Science.gov (United States)

    Pogány, Andrea; Balslev-Harder, David; Braban, Christine F.; Cassidy, Nathan; Ebert, Volker; Ferracci, Valerio; Hieta, Tuomas; Leuenberger, Daiana; Martin, Nicholas A.; Pascale, Céline; Peltola, Jari; Persijn, Stefan; Tiebe, Carlo; Twigg, Marsailidh M.; Vaittinen, Olavi; van Wijk, Janneke; Wirtz, Klaus; Niederhauser, Bernhard

    2016-11-01

    The environmental impacts of ammonia (NH3) in ambient air have become more evident in the recent decades, leading to intensifying research in this field. A number of novel analytical techniques and monitoring instruments have been developed, and the quality and availability of reference gas mixtures used for the calibration of measuring instruments have also increased significantly. However, recent inter-comparison measurements show significant discrepancies, indicating that the majority of the newly developed devices and reference materials require further thorough validation. There is a clear need for more intensive metrological research focusing on quality assurance, intercomparability and validations. MetNH3 (Metrology for ammonia in ambient air) is a three-year project within the framework of the European Metrology Research Programme (EMRP), which aims to bring metrological traceability to ambient ammonia measurements in the 0.5–500 nmol mol^-1 amount fraction range. This is addressed by working in three areas: (1) improving accuracy and stability of static and dynamic reference gas mixtures, (2) developing an optical transfer standard and (3) establishing the link between high-accuracy metrological standards and field measurements. In this article we describe the concept, aims and first results of the project.

  20. Accuracy Improvement of Discharge Measurement with Modification of Distance Made Good Heading

    Directory of Open Access Journals (Sweden)

    Jongkook Lee

    2016-01-01

    Full Text Available Remote control boats equipped with an Acoustic Doppler Current Profiler (ADCP) are widely accepted and have been welcomed by many hydrologists for water discharge, velocity profile, and bathymetry measurements. The advantages of this technique include high productivity, fast measurements, operator safety, and high accuracy. However, there are concerns about controlling and operating a remote boat to achieve measurement goals, especially during extreme events such as floods. When performing river discharge measurements, the main error source stems from the boat path. Due to the rapid flow in a flood condition, the boat path is not regular and this can cause errors in discharge measurements. Therefore, improvement of discharge measurements requires modification of the boat path. The measurement errors in flood flow conditions are 12.3–21.8% before the modification of the boat path, but 1.2–3.7% after the distance-made-good (DMG) modification of the boat path, and the modified discharges are considered very close to the observed discharge in flood flow conditions. In this study, through the DMG modification of the boat path, a comprehensive discharge measurement with high accuracy can be achieved.

  1. Improving accuracy of protein-protein interaction prediction by considering the converse problem for sequence representation

    Directory of Open Access Journals (Sweden)

    Wang Yong

    2011-10-01

    Full Text Available Abstract Background: With the development of genome-sequencing technologies, protein sequences are readily obtained by translating the measured mRNAs. Therefore, predicting protein-protein interactions from sequences is in great demand. The reason lies in the fact that identifying protein-protein interactions is becoming a bottleneck for eventually understanding the functions of proteins, especially for those organisms that are barely characterized. Although a few methods have been proposed, the converse problem, namely whether the features used extract sufficient and unbiased information from protein sequences, is almost untouched. Results: In this study, we interrogate this problem theoretically by an optimization scheme. Motivated by the theoretical investigation, we find novel encoding methods for both protein sequences and protein pairs. Our new methods exploit the information in protein sequences sufficiently and reduce artificial bias and computational cost. Thus, our approach significantly outperforms the available methods regarding sensitivity, specificity, precision, and recall under cross-validation evaluation, and reaches ~80% and ~90% accuracy in Escherichia coli and Saccharomyces cerevisiae, respectively. Our findings hold important implications for other sequence-based prediction tasks because the representation of a biological sequence is always the first step in computational biology. Conclusions: By considering the converse problem, we propose new representation methods for both protein sequences and protein pairs. The results show that our method significantly improves the accuracy of protein-protein interaction predictions.

  2. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium.

    Science.gov (United States)

    Ramstein, Guillaume P; Evans, Joseph; Kaeppler, Shawn M; Mitchell, Robert B; Vogel, Kenneth P; Buell, C Robin; Casler, Michael D

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families' parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  3. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    Directory of Open Access Journals (Sweden)

    José Neuman de Souza

    2011-10-01

    Full Text Available This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction.
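
    The multivariate prediction idea above amounts to a multiple linear regression of one sensed quantity on others, in place of a regression on time alone. The sketch below fits and applies such a model; the sensor readings and variable choice (humidity from temperature and light) are assumptions for illustration, not the paper's dataset.

```python
import numpy as np

# Illustrative sensor history: predict humidity from temperature and light
# (multivariate correlation) instead of from time alone. Values are made up.
temp  = np.array([21.0, 21.4, 22.1, 22.9, 23.5, 24.2, 24.8])
light = np.array([180., 220., 300., 410., 520., 600., 650.])
humid = np.array([58.0, 57.1, 55.4, 53.0, 51.2, 49.5, 48.1])

# Multiple linear regression: humid ~ b0 + b1*temp + b2*light (least squares).
X = np.column_stack([np.ones_like(temp), temp, light])
coef, *_ = np.linalg.lstsq(X, humid, rcond=None)

# The sink node predicts the next reading from the latest covariates and only
# requests a transmission when the sensor's actual value deviates too much.
next_temp, next_light = 25.3, 700.0
predicted = coef @ np.array([1.0, next_temp, next_light])
print(f"predicted humidity: {predicted:.1f} %")
```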

  4. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    Directory of Open Access Journals (Sweden)

    M. Mosleh E. Abu Samak

    2016-04-01

    Full Text Available This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI-FDTD) method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD-FDTD) method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  5. Improved radioenzymatic assay for plasma norepinephrine using purified phenylethanolamine n-methyltransferase

    International Nuclear Information System (INIS)

    Radioenzymatic assays have been developed for catecholamines using either catechol O-methyltransferase (COMT) or phenylethanolamine N-methyltransferase (PNMT). Assays using PNMT are specific for norepinephrine (NE) and require minimal manipulative effort but until now have been less sensitive than the more complex procedures using COMT. The authors report an improved purification scheme for bovine PNMT which has permitted development of an NE assay with dramatically improved sensitivity (0.5 pg), specificity and reproducibility (C.V. < 5%). PNMT was purified by sequential pH 5.0 treatment and dialysis and by column chromatographic procedures using DEAE-Sephacel, Sephacryl S-200 and Phenyl-Boronate Agarose. Recovery of PNMT through the purification scheme was 50%, while blank recovery was <.001%. NE can be directly quantified in 25 ul of human plasma and an 80-tube assay can be completed within 4 h. The capillary to venous plasma NE gradient was examined in 8 normotensive male subjects. Capillary plasma NE (211.2 +/- 61.3 pg/ml) was lower than venous plasma NE (366.6 +/- 92.5 pg/ml) in all subjects (p < 0.005). This difference suggests that capillary NE may be a unique indicator of sympathetic nervous system activity in vivo. In conclusion, purification of PNMT has facilitated development of an improved radioenzymatic assay for NE with significantly improved sensitivity

  6. The use of imprecise processing to improve accuracy in weather and climate prediction

    International Nuclear Information System (INIS)

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
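
    The bit-flip fault model mentioned above can be emulated in software by flipping low-order mantissa bits of floating-point values at a given fault rate, restricting the faults to the small-scale components. The sketch below shows one such emulation; the fault rate, number of affected bits and state vectors are assumptions, not the study's configuration.

```python
import struct
import numpy as np

def flip_random_bit(x, rng, fault_rate, mantissa_bits=32):
    """Emulate a stochastic processor: with probability `fault_rate`, flip one
    randomly chosen bit in the low-order mantissa of a float64 value."""
    if rng.random() >= fault_rate:
        return x
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    bits ^= 1 << rng.integers(0, mantissa_bits)      # touch low-significance bits only
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

# Apply faults only to the "small-scale" part of a state vector, leaving the
# large scales in exact arithmetic, and inspect how much the result degrades.
rng = np.random.default_rng(0)
large_scale = np.sin(np.linspace(0, 2 * np.pi, 8))   # kept exact (no faults applied)
small_scale = 0.01 * rng.standard_normal(64)
faulty_small = np.array([flip_random_bit(v, rng, fault_rate=0.1) for v in small_scale])
print("max small-scale perturbation:", np.max(np.abs(faulty_small - small_scale)))
```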

  7. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    Science.gov (United States)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models have challenges with displacement. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for real-time updating and accuracy improvement of flood forecasts, one that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. Then the non-orographic rainfall fields are processed by the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. Then, the additional ensemble information is applied to a hydrologic model for post-processed flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean of the ensemble flood forecast. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located on the Kii peninsula, Japan.

  8. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    Science.gov (United States)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) has been performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18 % were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments that were delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorous experimentally and Monte-Carlo-determined correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a

  9. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    Directory of Open Access Journals (Sweden)

    Marta Torralba

    2016-01-01

    Full Text Available Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is here presented. Its specifications (e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid body behavior, which is demonstrated after a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively.
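
    An error budget of the kind described above is commonly evaluated by combining independent 1-sigma error contributions in quadrature. The sketch below shows such a root-sum-square combination; the listed sources and magnitudes are assumptions for illustration, not the NanoPla figures.

```python
import numpy as np

# Illustrative error budget for one axis of a positioning stage: independent
# 1-sigma contributions (nm) are combined in quadrature (root sum of squares).
# The labels and magnitudes below are assumptions, not the NanoPla values.
contributions_nm = {
    "laser interferometer":        5.0,
    "mirror flatness":            20.0,
    "thermal drift (±0.1 °C)":    25.0,
    "Abbe/assembly misalignment": 15.0,
    "electronics noise":           4.0,
}
total = np.sqrt(sum(v ** 2 for v in contributions_nm.values()))
for name, v in contributions_nm.items():
    print(f"{name:28s} {v:5.1f} nm")
print(f"{'combined (RSS)':28s} {total:5.1f} nm")
```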

  10. Diagnostic accuracy of an IgM enzyme-linked immunosorbent assay and comparison with 2 polymerase chain reactions for early diagnosis of human leptospirosis.

    Science.gov (United States)

    Vanasco, N B; Jacob, P; Landolt, N; Chiani, Y; Schmeling, M F; Cudos, C; Tarabla, H; Lottersberger, J

    2016-04-01

    Enzyme-linked immunosorbent assay (ELISA) tests and polymerase chain reaction (PCR) may play a key role in early detection and treatment of human leptospirosis in developing countries. The aims of this study were to develop and validate an IgM ELISA under field conditions and to compare the diagnostic accuracy among IgG and IgM ELISAs, conventional PCR (cPCR), and real-time PCR (rtPCR) for early detection of human leptospirosis. The overall accuracy of the IgM ELISA was: sensitivity 87.9%, specificity 97.0%, and area under the curve 0.940. When the 4 methods were compared, the IgM ELISA showed the greatest diagnostic accuracy (J=0.6), followed by rtPCR (J=0.4), cPCR (J=0.2) and IgG ELISA (J=0.1). Our results support the use of the IgM ELISA and rtPCR for early diagnosis of the disease. Moreover, due to their high specificity, they could also be useful to replace or supplement the microscopic agglutination test as a confirmatory test, allowing more confirmations.
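
    Assuming the reported J values are Youden's index, the sensitivity, specificity and J statistic can be computed from paired test results and reference diagnoses as in the sketch below; the toy cohort is an assumption, not the study's data.

```python
import numpy as np

def diagnostic_summary(test_positive, disease_present):
    """Sensitivity, specificity and Youden's J from paired binary results."""
    t = np.asarray(test_positive, dtype=bool)
    d = np.asarray(disease_present, dtype=bool)
    sensitivity = np.logical_and(t, d).sum() / d.sum()
    specificity = np.logical_and(~t, ~d).sum() / (~d).sum()
    return sensitivity, specificity, sensitivity + specificity - 1.0

# Toy cohort: 10 confirmed leptospirosis cases and 10 controls (illustrative).
disease = np.array([1]*10 + [0]*10, dtype=bool)
igm_elisa = np.array([1,1,1,1,1,1,1,1,1,0] + [0]*9 + [1], dtype=bool)
sens, spec, j = diagnostic_summary(igm_elisa, disease)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, Youden J={j:.2f}")
```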

  11. Optimal Cutoff and Accuracy of an IgM Enzyme-Linked Immunosorbent Assay for Diagnosis of Acute Scrub Typhus in Northern Thailand: an Alternative Reference Method to the IgM Immunofluorescence Assay

    Science.gov (United States)

    Blacksell, Stuart D.; Tanganuchitcharnchai, Ampai; Jintaworn, Suthatip; Kantipong, Pacharee; Richards, Allen L.; Day, Nicholas P. J.

    2016-01-01

    The enzyme-linked immunosorbent assay (ELISA) has been proposed as an alternative serologic diagnostic test to the indirect immunofluorescence assay (IFA) for scrub typhus. Here, we systematically determine the optimal sample dilution and cutoff optical density (OD) and estimate the accuracy of IgM ELISA using Bayesian latent class models (LCMs). Data from 135 patients with undifferentiated fever were reevaluated using Bayesian LCMs. Every patient was evaluated for the presence of an eschar and tested with a blood culture for Orientia tsutsugamushi, three different PCR assays, and an IgM IFA. The IgM ELISA was performed for every sample at sample dilutions from 1:100 to 1:102,400 using crude whole-cell antigens of the Karp, Kato, and Gilliam strains of O. tsutsugamushi developed by the Naval Medical Research Center. We used Bayesian LCMs to generate unbiased receiver operating characteristic curves and found that the sample dilution of 1:400 was optimal for the IgM ELISA. With the optimal cutoff OD of 1.474 at a sample dilution of 1:400, the IgM ELISA had a sensitivity of 85.7% (95% credible interval [CrI], 77.4% to 86.7%) and a specificity of 98.1% (95% CrI, 97.2% to 100%) using paired samples. For the ELISA, the OD could be determined objectively and quickly, in contrast to the reading of IFA slides, which was both subjective and labor-intensive. The IgM ELISA for scrub typhus has high diagnostic accuracy and is less subjective than the IgM IFA. We suggest that the IgM ELISA may be used as an alternative reference test to the IgM IFA for the serological diagnosis of scrub typhus. PMID:27008880

  12. Optimal Cutoff and Accuracy of an IgM Enzyme-Linked Immunosorbent Assay for Diagnosis of Acute Scrub Typhus in Northern Thailand: an Alternative Reference Method to the IgM Immunofluorescence Assay.

    Science.gov (United States)

    Blacksell, Stuart D; Lim, Cherry; Tanganuchitcharnchai, Ampai; Jintaworn, Suthatip; Kantipong, Pacharee; Richards, Allen L; Paris, Daniel H; Limmathurotsakul, Direk; Day, Nicholas P J

    2016-06-01

    The enzyme-linked immunosorbent assay (ELISA) has been proposed as an alternative serologic diagnostic test to the indirect immunofluorescence assay (IFA) for scrub typhus. Here, we systematically determine the optimal sample dilution and cutoff optical density (OD) and estimate the accuracy of IgM ELISA using Bayesian latent class models (LCMs). Data from 135 patients with undifferentiated fever were reevaluated using Bayesian LCMs. Every patient was evaluated for the presence of an eschar and tested with a blood culture for Orientia tsutsugamushi, three different PCR assays, and an IgM IFA. The IgM ELISA was performed for every sample at sample dilutions from 1:100 to 1:102,400 using crude whole-cell antigens of the Karp, Kato, and Gilliam strains of O. tsutsugamushi developed by the Naval Medical Research Center. We used Bayesian LCMs to generate unbiased receiver operating characteristic curves and found that the sample dilution of 1:400 was optimal for the IgM ELISA. With the optimal cutoff OD of 1.474 at a sample dilution of 1:400, the IgM ELISA had a sensitivity of 85.7% (95% credible interval [CrI], 77.4% to 86.7%) and a specificity of 98.1% (95% CrI, 97.2% to 100%) using paired samples. For the ELISA, the OD could be determined objectively and quickly, in contrast to the reading of IFA slides, which was both subjective and labor-intensive. The IgM ELISA for scrub typhus has high diagnostic accuracy and is less subjective than the IgM IFA. We suggest that the IgM ELISA may be used as an alternative reference test to the IgM IFA for the serological diagnosis of scrub typhus.

  13. A method for improved accuracy in three dimensions for determining wheel/rail contact points

    Science.gov (United States)

    Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang

    2015-11-01

    Searching for the contact points between wheels and rails is important because these points represent the points of exerted contact forces. In order to obtain an accurate contact point and an in-depth description of the wheel/rail contact behaviours on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail when considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, which requires no curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. The range iteration algorithm is used to improve the computation efficiency and reduce the calculation required. The present computation method is applied to the analysis of the contact between CHINA (CHN) 75 kg/m rails and the wear-type tread wheel sets of China's freight cars. In addition, the results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, and the maximum deviation in the wheel/rail contact patch area between the two methods is approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, including both non-worn and worn wheel and rail profiles.

  14. Coval: improving alignment quality and variant calling accuracy for next-generation sequencing data.

    Directory of Open Access Journals (Sweden)

    Shunichi Kosugi

    Full Text Available Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in 'targeted' alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/.

  15. Deconvolution improves the accuracy and depth sensitivity of time-resolved measurements

    Science.gov (United States)

    Diop, Mamadou; St. Lawrence, Keith

    2013-03-01

    Time-resolved (TR) techniques have the potential to distinguish early- from late-arriving photons. Since light travelling through superficial tissue is detected earlier than photons that penetrate the deeper layers, time-windowing can in principle be used to improve the depth sensitivity of TR measurements. However, TR measurements also contain instrument contributions - referred to as the instrument-response-function (IRF) - which cause temporal broadening of the measured temporal-point-spread-function (TPSF). In this report, we investigate the influence of the IRF on pathlength-resolved absorption changes (Δμa) retrieved from TR measurements using the microscopic Beer-Lambert law (MBLL). TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF and TPSFs were deconvolved to recover the distribution of times-of-flight (DTOFs) of the detected photons. The microscopic Beer-Lambert law was applied to early and late time-windows of the TPSFs and DTOFs to assess the effects of the IRF on pathlength-resolved Δμa. The analysis showed that the late part of the TPSFs contains substantial contributions from early-arriving photons, due to the smearing effects of the IRF, which reduced its sensitivity to absorption changes occurring in deep layers. We also demonstrated that the effects of the IRF can be efficiently eliminated by applying a robust deconvolution technique, thereby improving the accuracy and sensitivity of TR measurements to deep-tissue absorption changes.
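
    The study's specific deconvolution technique is not detailed in the abstract; the sketch below uses a minimal Wiener-style frequency-domain deconvolution to illustrate recovering a DTOF from a measured TPSF and IRF. The synthetic curves and regularisation constant are assumptions.

```python
import numpy as np

def deconvolve_irf(tpsf, irf, noise_level=1e-3):
    """Recover the distribution of times-of-flight (DTOF) from a measured TPSF.

    Minimal Wiener-style frequency-domain deconvolution; the regularisation
    (noise_level) is an assumption, not the method used in the cited study.
    """
    n = len(tpsf)
    H = np.fft.rfft(irf, n)
    Y = np.fft.rfft(tpsf, n)
    dtof = np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + noise_level), n)
    return np.clip(dtof, 0, None)

# Synthetic test: a known DTOF blurred by a Gaussian IRF, then recovered.
t = np.arange(512) * 10.0                      # picoseconds
true_dtof = np.exp(-(t - 1500) ** 2 / (2 * 300 ** 2))
irf = np.exp(-(t - 500) ** 2 / (2 * 80 ** 2)); irf /= irf.sum()
tpsf = np.convolve(true_dtof, irf)[:len(t)]
recovered = deconvolve_irf(tpsf, irf)
print("peak shift after deconvolution (ps):", t[np.argmax(recovered)] - t[np.argmax(tpsf)])
```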

  16. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    Science.gov (United States)

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one in making the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as they might, given that the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481

  17. Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF

    Science.gov (United States)

    Hutson, Malcolm; Reiners, Dirk

    2010-01-01

    Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.

  18. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    Science.gov (United States)

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  19. An improved high-throughput screening assay for tunicamycin sensitivity in Arabidopsis seedlings

    Directory of Open Access Journals (Sweden)

    Maggie E McCormack

    2015-08-01

    Full Text Available Tunicamycin sensitivity assays are a useful method for studies of endoplasmic reticulum stress and the unfolded protein response in eukaryotic cells. While tunicamycin sensitivity and tunicamycin recovery assays have been previously described, these existing methods are time-consuming, labor-intensive and subject to mechanical wounding. This study shows an improved method of testing tunicamycin sensitivity in Arabidopsis using liquid Murashige and Skoog medium versus the traditional solid agar plates. Liquid medium bypasses the physical manipulation of seedlings, thereby eliminating the risk of potential mechanical damage and additional unwanted stress to seedlings. Seedlings were subjected to comparative treatments with various concentrations of tunicamycin on both solid and liquid media and allowed to recover. Fresh weight determination, chlorophyll content analysis and qRT-PCR results confirm the efficacy of using liquid medium to perform quantitative tunicamycin stress assays.

  20. Improving the assessment of ICESat water altimetry accuracy accounting for autocorrelation

    Science.gov (United States)

    Abdallah, Hani; Bailly, Jean-Stéphane; Baghdadi, Nicolas; Lemarquand, Nicolas

    2011-11-01

    Given that water resources are scarce and are strained by competing demands, it has become crucial to develop and improve techniques to observe the temporal and spatial variations in the inland water volume. Due to the lack of data and the heterogeneity of water level stations, remote sensing, and especially altimetry from space, appears as a complementary technique for water level monitoring. In addition to spatial resolution and sampling rates in space or time, one of the most relevant criteria for satellite altimetry on inland water is the accuracy of the elevation data. Here, the accuracy of the ICESat LIDAR altimetry product is assessed over the Great Lakes in North America. The accuracy assessment method used in this paper emphasizes autocorrelation in high-temporal-frequency ICESat measurements. It also considers uncertainties resulting from the in situ lake level reference data. A probabilistic upscaling process was developed. This process is based on several successive ICESat shots averaged in a spatial transect accounting for autocorrelation between successive shots. The method also applies pre-processing of the ICESat data, with saturation correction of ICESat waveforms, spatial filtering to avoid measurement disturbance from the land-water transition effects on waveform saturation, and data selection to avoid trends in water elevations across space. Initially this paper analyzes 237 collected ICESat transects, consistent with the available hydrometric ground stations for four of the Great Lakes. By adapting a geostatistical framework, a high-frequency autocorrelation between successive shot elevation values was observed and then modeled for 45% of the 237 transects. The modeled autocorrelation was therefore used to estimate water elevations at the transect scale and the resulting uncertainty for the 117 transects without trend. This uncertainty was 8 times greater than the uncertainty usually computed when no temporal correlation is taken into account. This
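
    The effect of shot-to-shot autocorrelation on the transect-mean uncertainty can be illustrated with a simple effective-sample-size argument. The sketch below assumes an AR(1) correlation between successive shots, whereas the study fits a geostatistical model; the elevation values and correlation coefficient are assumptions.

```python
import numpy as np

def transect_mean_uncertainty(elevations, rho):
    """Standard error of a transect-mean water elevation when successive ICESat
    shots are correlated. Sketch assuming a simple AR(1) correlation `rho`
    between neighbouring shots (the study fits a geostatistical model instead).
    """
    z = np.asarray(elevations, dtype=float)
    n = z.size
    n_eff = n * (1 - rho) / (1 + rho)            # effective number of independent shots
    return z.std(ddof=1) / np.sqrt(max(n_eff, 1.0))

# Illustrative transect of 40 shot elevations (m) with strong shot-to-shot correlation.
rng = np.random.default_rng(1)
shots = 183.60 + np.cumsum(0.01 * rng.standard_normal(40)) * 0.3
print("naive SE:     ", shots.std(ddof=1) / np.sqrt(len(shots)))
print("correlated SE:", transect_mean_uncertainty(shots, rho=0.8))
```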

  1. Study on Improvement of Accuracy in Inertial Photogrammetry by Combining Images with Inertial Measurement Unit

    Science.gov (United States)

    Kawasaki, Hideaki; Anzai, Shojiro; Koizumi, Toshio

    2016-06-01

    Inertial photogrammetry is defined as photogrammetry that involves using a camera on which an inertial measurement unit (IMU) is mounted. In inertial photogrammetry, the position and inclination of a shooting camera are calculated using the IMU. An IMU is characterized by error growth caused by time accumulation because acceleration is integrated with respect to time. This study examines a procedure to estimate the position of the camera accurately while shooting, using the IMU and structure from motion (SfM) technology, which is applied in many fields, such as computer vision. When neither the coordinates of the camera position nor those of the feature points are known, SfM recovers the positional relationship between the camera and the feature points only up to a similarity transform. Therefore, the actual scale of the positional coordinates is not determined. Even if the actual scale of the camera position is unknown, the camera acceleration can be obtained by calculating the second-order differential of the camera position with respect to the shooting time. The authors had previously determined the actual scale by assigning the IMU-derived position to the SfM-calculated position. Hence, accuracy decreased because of the error growth that is characteristic of the IMU. In order to solve this problem, a new calculation method is proposed. Using this method, the difference between the IMU-measured acceleration and the camera-derived acceleration is minimized by the method of least squares, and the magnification required for converting the camera position to actual dimensions is obtained. The actual scale can then be recovered by multiplying all the SfM point groups by the obtained magnification factor. This calculation method suppresses the error growth due to time accumulation in the IMU and improves the accuracy of inertial photogrammetry.
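
    The least-squares magnification step described above can be sketched as follows: differentiate the unscaled SfM camera positions twice, then solve for the scale factor that best matches the IMU accelerations. The trajectory, time step and true scale below are assumptions used only to check the idea.

```python
import numpy as np

def sfm_scale_from_imu(sfm_positions, imu_accel, dt):
    """Estimate the metric scale of an SfM trajectory from IMU accelerations.

    Sketch of the idea described above: differentiate the (unit-less) SfM camera
    positions twice, then find the magnification s minimising
    ||imu_accel - s * sfm_accel||^2 in the least-squares sense.
    """
    sfm_accel = np.gradient(np.gradient(sfm_positions, dt, axis=0), dt, axis=0)
    a = sfm_accel.ravel()
    b = np.asarray(imu_accel).ravel()
    return float(a @ b / (a @ a))               # closed-form least-squares solution

# Synthetic check: a metric trajectory, its IMU accelerations, and the same
# trajectory handed to "SfM" at an arbitrary 1/7.5 scale.
dt = 0.1
t = np.arange(0, 10, dt)
true_pos = np.column_stack([np.sin(t), 0.5 * t, np.cos(0.5 * t)])   # metres
imu_accel = np.gradient(np.gradient(true_pos, dt, axis=0), dt, axis=0)
sfm_pos = true_pos / 7.5                                            # unknown scale
print("recovered scale:", sfm_scale_from_imu(sfm_pos, imu_accel, dt))
```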

  2. On the use of Numerical Weather Models for improving SAR geolocation accuracy

    Science.gov (United States)

    Nitti, D. O.; Chiaradia, M.; Nutricato, R.; Bovenga, F.; Refice, A.; Bruno, M. F.; Petrillo, A. F.; Guerriero, L.

    2013-12-01

    Precise estimation and correction of the Atmospheric Path Delay (APD) is needed to ensure sub-pixel accuracy of geocoded Synthetic Aperture Radar (SAR) products, in particular for the new generation of high resolution side-looking SAR satellite sensors (TerraSAR-X, COSMO/SkyMED). The present work aims to assess the performances of operational Numerical Weather Prediction (NWP) Models as tools to routinely estimate the APD contribution, according to the specific acquisition beam of the SAR sensor for the selected scene on ground. The Regional Atmospheric Modeling System (RAMS) has been selected for this purpose. It is a finite-difference, primitive equation, three-dimensional non-hydrostatic mesoscale model, originally developed at Colorado State University [1]. In order to appreciate the improvement in target geolocation when accounting for APD, we need to rely on the SAR sensor orbital information. In particular, TerraSAR-X data are well-suited for this experiment, since recent studies have confirmed the few centimeter accuracy of their annotated orbital records (Science level data) [2]. A consistent dataset of TerraSAR-X stripmap images (Pol.:VV; Look side: Right; Pass Direction: Ascending; Incidence Angle: 34.0÷36.6 deg) acquired in Daunia in Southern Italy has been hence selected for this study, thanks also to the availability of six trihedral corner reflectors (CR) recently installed in the area covered by the imaged scenes and properly directed towards the TerraSAR-X satellite platform. The geolocation of CR phase centers is surveyed with cm-level accuracy using differential GPS (DGPS). The results of the analysis are shown and discussed. Moreover, the quality of the APD values estimated through NWP models will be further compared to those annotated in the geolocation grid (GEOREF.xml), in order to evaluate whether annotated corrections are sufficient for sub-pixel geolocation quality or not. Finally, the analysis will be extended to a limited number of
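
    For orientation only, here is a hedged sketch of the simplest way to project a zenith atmospheric path delay onto the SAR line of sight, using a flat-earth 1/cos(incidence) obliquity factor. The study itself derives the slant delay from the RAMS NWP fields along the actual acquisition geometry, which this toy function does not attempt; the numerical values are illustrative.

```python
import numpy as np

def slant_path_delay(zenith_delay_m, incidence_deg):
    """Project a zenith atmospheric path delay onto the radar line of sight
    with a simple 1/cos(incidence) obliquity factor (flat-earth mapping)."""
    return zenith_delay_m / np.cos(np.radians(incidence_deg))

# e.g. a 2.4 m zenith delay at 35 deg incidence maps to roughly 2.9 m of
# slant delay, i.e. several range pixels for a high-resolution sensor
print(slant_path_delay(2.4, 35.0))
```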

  3. Improving Decision Speed, Accuracy and Group Cohesion through Early Information Gathering in House-Hunting Ants

    OpenAIRE

    Nathalie Stroeymeyt; Martin Giurfa; Franks, Nigel R.

    2010-01-01

    BACKGROUND: Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations...

  4. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    Science.gov (United States)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  5. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    Science.gov (United States)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
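
    The sketch below illustrates the general edge-fitting idea with a smooth hyperbolic-tangent shadow profile fitted by nonlinear least squares; the exact functional form, parameterization and the subsequent depth-of-field calibration used in the paper may differ, so treat this purely as an illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge_profile(r, i_bg, depth, radius, width):
    """Smooth shadow edge: dark inside the particle, bright background,
    with a finite transition width caused by defocus blur."""
    return i_bg - depth * 0.5 * (1.0 - np.tanh((r - radius) / width))

def fit_diameter(r, intensity):
    """Fit the edge model to a radial intensity profile I(r) measured from
    the particle centre and return the apparent diameter."""
    p0 = (intensity.max(), intensity.max() - intensity.min(),
          0.5 * r.max(), 0.05 * r.max())
    popt, _ = curve_fit(edge_profile, r, intensity, p0=p0)
    return 2.0 * abs(popt[2])       # diameter = 2 * fitted edge radius
```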

  6. Improving Sensing Accuracy in Cognitive PANs through Modulation of Sensing Probability

    Directory of Open Access Journals (Sweden)

    Vojislav B. Mišić

    2009-01-01

    Full Text Available Cognitive radio technology necessitates accurate and timely sensing of primary users' activity on the chosen set of channels. The simplest selection procedure is a simple random choice of channels to be sensed, but the impact of sensing errors with respect to primary user activity or inactivity differs considerably. In order to improve sensing accuracy and increase the likelihood of finding channels which are free from primary user activity, the selection procedure is modified by assigning different sensing probabilities to active and inactive channels. The paper presents a probabilistic analysis of this policy and investigates the range of values in which the modulation of sensing probability is capable of maintaining an accurate view of the status of the working channel set. We also present a modification of the probability modulation algorithm that allows for even greater reduction of sensing error in a limited range of the duty cycle of primary users' activity. Finally, we give some guidelines as to the optimum application ranges for the original and modified algorithm, respectively.
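
    A small sketch of the sensing-probability modulation idea: channels last seen as inactive are sampled with a higher probability than channels last seen as active. The two probability weights and the sampling-without-replacement loop are illustrative choices, not the values analyzed in the paper.

```python
import random

def choose_channels_to_sense(status, n_sense, w_active=0.3, w_inactive=0.7):
    """Pick n_sense channels to sense next round, weighting channels last
    seen as inactive more heavily than those last seen as active.
    status: dict {channel: 'active' | 'inactive'} from the previous round."""
    channels = list(status)
    weights = [w_inactive if status[ch] == 'inactive' else w_active
               for ch in channels]
    chosen = []
    while channels and len(chosen) < n_sense:
        ch = random.choices(channels, weights=weights, k=1)[0]
        i = channels.index(ch)
        channels.pop(i)
        weights.pop(i)
        chosen.append(ch)
    return chosen
```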

  7. Free Form Deformation–Based Image Registration Improves Accuracy of Traction Force Microscopy

    Science.gov (United States)

    Jorge-Peñas, Alvaro; Izquierdo-Alvarez, Alicia; Aguilar-Cuenca, Rocio; Vicente-Manzanares, Miguel; Garcia-Aznar, José Manuel; Van Oosterwyck, Hans; de-Juan-Pardo, Elena M.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate

    2015-01-01

    Traction Force Microscopy (TFM) is a widespread method used to recover cellular tractions from the deformation that they cause in their surrounding substrate. Particle Image Velocimetry (PIV) is commonly used to quantify the substrate’s deformations, due to its simplicity and efficiency. However, PIV relies on a block-matching scheme that easily underestimates the deformations. This is especially relevant in the case of large, locally non-uniform deformations as those usually found in the vicinity of a cell’s adhesions to the substrate. To overcome these limitations, we formulate the calculation of the deformation of the substrate in TFM as a non-rigid image registration process that warps the image of the unstressed material to match the image of the stressed one. In particular, we propose to use a B-spline -based Free Form Deformation (FFD) algorithm that uses a connected deformable mesh to model a wide range of flexible deformations caused by cellular tractions. Our FFD approach is validated in 3D fields using synthetic (simulated) data as well as with experimental data obtained using isolated endothelial cells lying on a deformable, polyacrylamide substrate. Our results show that FFD outperforms PIV providing a deformation field that allows a better recovery of the magnitude and orientation of tractions. Together, these results demonstrate the added value of the FFD algorithm for improving the accuracy of traction recovery. PMID:26641883

  8. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Science.gov (United States)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for the accuracy improvement of calibration of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out, in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performances of the vision system, showing a satisfactory behavior if the uncertainty measurement is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system allowed to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  9. Accuracy improvement of T-history method for measuring heat of fusion of various materials

    Energy Technology Data Exchange (ETDEWEB)

    Hiki Hong [KyungHee University (Korea). School of Mechanical and Industrial Systems Engineering; Sun Kuk Kim [KyungHee University (Korea). School of Architecture and Civil Engineering; Yong-Shik Kim [University of Incheon (Korea). Dept. of Architectural Engineering

    2004-06-01

    The T-history method, developed for measuring the heat of fusion of phase change materials (PCM) in sealed tubes, has the advantages of a simple experimental device and convenience, with no sampling process required. However, some improper assumptions in the original method, such as using the degree of supercooling as the end of the latent heat period and neglecting sensible heat during phase change, can cause significant errors in determining the heat of fusion. We have addressed this problem in order to obtain better predictions. The present study shows that the modified T-history method is successfully applied to a variety of PCMs, such as paraffin and lauric acid, having no or a low degree of supercooling. It also turned out that the periods selected for sensible and latent heat do not significantly affect the accuracy of the heat of fusion. As a result, the method can provide an appropriate means to assess a newly developed PCM by a cycle test even if a very accurate value cannot be obtained. (author)

  10. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review about ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the occurrence of temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. Discussed is the compatibility of these results with the continuous character of temperature-driven phase transition when L → +∞.
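
    As a pointer to what "reweighting" means here, the snippet below implements the simpler single-histogram version of the Ferrenberg-Swendsen idea (the paper uses the multihistogram variant): samples generated at one inverse temperature are reweighted to estimate an observable at a nearby one.

```python
import numpy as np

def reweighted_mean(energies, observable, beta_sim, beta_new):
    """Single-histogram reweighting: estimate <A> at inverse temperature
    beta_new from microstates sampled at beta_sim."""
    e = np.asarray(energies, float)
    a = np.asarray(observable, float)
    logw = -(beta_new - beta_sim) * e
    w = np.exp(logw - logw.max())       # subtract max exponent for stability
    return float(np.sum(w * a) / np.sum(w))
```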

  11. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    Science.gov (United States)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  12. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study.

    Science.gov (United States)

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-01-01

    Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, there is still a lack of objective endoscopic criteria. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. Thus, we investigated the clinical application of LCI and further analyzed pixel brightness for the RGB color model. All the lesions were observed by white light endoscopy (WLE), LCI and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for red (R), green (G) and blue color (B). Of the endoscopic images for lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was quite consistent with pathological diagnosis (P = 0.000) and the color could predict certain kinds of lesions. The ROC curve demonstrated that at the cutoff of R/(G+B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773, respectively. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit target biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images. PMID:27641243
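
    A short sketch of how the proposed composite marker could be computed from an endoscopic RGB image; the epsilon guard and the use of a whole-image mean (rather than a lesion region of interest) are assumptions of this illustration.

```python
import numpy as np

def lci_composite_marker(rgb_image):
    """Mean R/(G+B) pixel-brightness ratio over an image (or a lesion ROI)."""
    img = np.asarray(rgb_image, float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-6                          # guard against division by zero
    return float(np.mean(r / (g + b + eps)))

# a lesion would then be flagged when the marker exceeds a chosen cutoff,
# e.g. the 0.646 threshold reported above
```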

  13. Accuracy, Effectiveness and Improvement of Vibration-Based Maintenance in Paper Mills: Case Studies

    Science.gov (United States)

    AL-NAJJAR, B.

    2000-01-01

    Many current vibration-based maintenance (VBM) policies for rolling element bearings do not make full use of the bearings' useful lives. Evidence and indications that the bearings' mean effective lives can be prolonged by more accurate diagnosis and prognosis are confirmed in cases of faulty bearing installation, faulty machinery design, harsh environmental conditions, and when a bearing is replaced as soon as its vibration level exceeds the normal level. Analysis of data from roller bearings at two paper mills suggests that longer bearing lives can be safely achieved by increasing the accuracy of the vibration data. This paper relates bearing failure modes to the observed vibration spectra and their development patterns over the bearings' lives. A systematic approach, which describes the objectives and performance of studies in two Swedish paper mills, is presented. Explanations of the mechanisms behind some frequent modes of early failure and ways to avoid them are suggested. It is shown theoretically, and partly confirmed by the analysis of (unfortunately incomplete) data from two paper mills over many years, that accurate prediction of remaining bearing life requires: (a) enough vibration measurements, (b) numerate records of operating conditions, (c) better discrimination between frequencies in the spectrum and (d) correlation of (b) and (c). This is because life prediction depends on precise knowledge of primary, harmonic and side-band frequency amplitudes and their development over time. Further, the available data, which are collected from relevant plant activities, can be utilized to perform cyclic improvements in diagnosis, prognosis, experience and economy.

  14. Multimodal nonlinear optical microscopy improves the accuracy of early diagnosis of squamous intraepithelial neoplasia

    Science.gov (United States)

    Teh, Seng Khoon; Zheng, Wei; Li, Shuxia; Li, Dong; Zeng, Yan; Yang, Yanqi; Qu, Jianan Y.

    2013-03-01

    We explore the diagnostic utility of multicolor excitation multimodal nonlinear optical (NLO) microscopy for noninvasive detection of squamous epithelial precancer in vivo. The 7,12-dimethylbenz(a)anthracene treated hamster cheek pouch was used as an animal model of carcinogenesis. The NLO microscope system employed was able to collect multiple endogenous tissue NLO signals, such as two-photon excited fluorescence of keratin, nicotinamide adenine dinucleotide, collagen, and tryptophan, and second harmonic generation of collagen, in the spectral and time domains simultaneously. A total of 34 (11 control and 23 treated) Golden Syrian hamsters with 62 in vivo spatially distinct measurement sites were assessed in this study. High-resolution label-free NLO images were acquired from the stratum corneum, stratum granulosum-stratum basale, and stroma for all tissue measurement sites. A total of nine and eight features from the 745 and 600 nm excitation wavelengths, respectively, involving tissue structural and intrinsic biochemical properties, were found to contain significant diagnostic information for precancer detection (p<0.05). In particular, the 600 nm excited tryptophan fluorescence signal emanating from the stratum corneum was revealed to provide remarkable diagnostic utility. Multivariate statistical techniques confirmed that the integration of diagnostically significant features from multiple excitation wavelengths yielded improved diagnostic accuracy as compared to using an individual wavelength alone.

  15. Improving Estimation Accuracy of Quasars’ Photometric Redshifts by Integration of KNN and SVM

    Science.gov (United States)

    Han, Bo; Ding, Hongpeng; Zhang, Yanxia; Zhao, Yongheng

    2015-08-01

    The massive photometric data collected from multiple large-scale sky surveys offer significant opportunities for measuring distances of many celestial objects by photometric redshifts zphot over a wide coverage of the sky. However, catastrophic failure, a long-unsolved problem, exists in current photometric redshift estimation approaches (such as k-nearest-neighbor). In this paper, we propose a novel two-stage approach that integrates the k-nearest-neighbor (KNN) and support vector machine (SVM) methods. In the first stage, we apply the KNN algorithm to the photometric data and estimate the corresponding zphot. By analysis, we observe two dense regions with catastrophic failure, one in the range zphot [0.1,1.1], the other in the range zphot [1.5,2.5]. In the second stage, we map the photometric multiband input pattern of points falling into the two ranges from the original attribute space into a high dimensional feature space using the Gaussian kernel function of the SVM. In the high dimensional feature space, many bad estimates resulting from catastrophic failure of the simple Euclidean distance computation in KNN can be identified by the SVM classification hyperplane and subsequently corrected. Experimental results based on SDSS data for quasars showed that the two-stage fusion approach can significantly mitigate catastrophic failure and improve the estimation accuracy of photometric redshifts.
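
    A hedged sketch of the two-stage idea using scikit-learn stand-ins: a KNN regressor produces the first-pass zphot, and an RBF-kernel SVM trained on sources whose first-pass estimate falls in the failure-prone ranges flags likely catastrophic failures for correction. The neighbor count, failure threshold and flag-only output are illustrative simplifications of the published approach.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

def in_failure_zone(z):
    """The two zphot ranges where catastrophic failures cluster."""
    return ((z > 0.1) & (z < 1.1)) | ((z > 1.5) & (z < 2.5))

def two_stage_zphot(X_train, z_train, X_test, fail_thresh=0.3):
    X_train, X_test = np.asarray(X_train, float), np.asarray(X_test, float)
    z_train = np.asarray(z_train, float)

    # stage 1: KNN regression on the photometric magnitudes/colours
    knn = KNeighborsRegressor(n_neighbors=20).fit(X_train, z_train)
    z1_train, z1_test = knn.predict(X_train), knn.predict(X_test)

    # stage 2: RBF-kernel SVM flags catastrophic failures inside the zones
    # (assumes both classes occur among the in-zone training sources)
    zone = in_failure_zone(z1_train)
    is_fail = (np.abs(z1_train - z_train) > fail_thresh).astype(int)
    svm = SVC(kernel="rbf", gamma="scale").fit(X_train[zone], is_fail[zone])

    suspect = np.zeros(len(X_test), dtype=bool)
    zone_test = in_failure_zone(z1_test)
    if zone_test.any():
        suspect[zone_test] = svm.predict(X_test[zone_test]) == 1
    return z1_test, suspect   # suspect sources would then be corrected/re-fit
```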

  16. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    Science.gov (United States)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols released into the atmosphere cause scattering and absorption of incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are therefore important in climate change research. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD contributed by particulates with diameters smaller than 1 μm relative to all particulates, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method is used to improve the accuracy of the AMFs compared with the constant truncation radius method. We find a good correlation for the parameterization method, with a squared correlation coefficient of 0.96 and a mean deviation of the AMFs of 0.028. The parameterization method also effectively resolves the AMF underestimation in winter. It is suggested that variations of the Angstrom indexes in the coarse mode have significant impacts on the AMF inversions.

  17. Accuracy Improvement of ASTER Stereo Satellite Generated DEM Using Texture Filter

    Institute of Scientific and Technical Information of China (English)

    Mandla V. Ravibabu; Kamal Jain; Surendra Pal Singh; Naga Jyothi Meeniga

    2010-01-01

    The grid DEM (digital elevation model) generation can be from any of a number of sources: for instance, analogue to digital conversion of contour maps followed by application of the TIN model, or direct elevation point modelling via digital photogrammetry applied to airborne images or satellite images. Currently, apart from the deployment of point-clouds from LiDAR data acquisition, the generally favoured approach refers to applications of digital photogrammetry. One of the most important steps in such deployment is the stereo matching process for conjugation point (pixel) establishment: very difficult in modelling any homogeneous areas like water cover or forest canopied areas due to the lack of distinct spatial features. As a result, application of automated procedures is sure to generate erroneous elevation values. In this paper, we present and apply a method for improving the quality of stereo DEMs generated via utilization of an entropy texture filter. The filter was applied for extraction of homogeneous areas before stereo matching, so that a statistical texture filter could then be applied for removing anomalous elevation values prior to interpolation and accuracy assessment via deployment of a spatial correlation technique. For exemplification, we used a stereo pair of ASTER 1B images.
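
    A minimal sketch of an entropy texture filter of the kind described above, using a local grey-level-histogram entropy to mask homogeneous regions (e.g. water) before stereo matching. Window size, bin count and threshold are illustrative, and a Python callback in generic_filter is slow compared with optimized implementations.

```python
import numpy as np
from scipy.ndimage import generic_filter

def _local_entropy(window, bins=16):
    """Shannon entropy (bits) of the grey-level histogram in one window."""
    hist, _ = np.histogram(window, bins=bins)
    p = hist[hist > 0] / window.size
    return -np.sum(p * np.log2(p))

def homogeneous_mask(image, size=9, threshold=2.0):
    """True where the local entropy is low, i.e. texture-poor regions such
    as water or closed canopy that tend to defeat stereo matching."""
    ent = generic_filter(np.asarray(image, float), _local_entropy, size=size)
    return ent < threshold   # mask these pixels before matching/interpolation
```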

  18. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    International Nuclear Information System (INIS)

    Anthropogenic aerosols are released into the atmosphere, which cause scattering and absorption of incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are important in the research of climate changes. Accumulation-Mode Fractions (AMFs) as an anthropogenic aerosol parameter, which are the fractions of AODs between the particulates with diameters smaller than 1μm and total particulates, could be calculated by AOD spectral deconvolution algorithm, and then the anthropogenic AODs are obtained using AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All of data are derived from AErosol RObotic NETwork (AERONET) website. The parameterization method is used to improve the accuracies of AMFs compared with constant truncation radius method. We find a good correlation using parameterization method with the square relation coefficient of 0.96, and mean deviation of AMFs is 0.028. The parameterization method could also effectively solve AMF underestimate in winter. It is suggested that the variations of Angstrom indexes in coarse mode have significant impacts on AMF inversions

  19. A multi breed reference improves genotype imputation accuracy in Nordic Red cattle

    DEFF Research Database (Denmark)

    Brøndum, Rasmus Froberg; Ma, Peipei; Lund, Mogens Sandø;

    2012-01-01

    The objective of this study was to investigate if a multi breed reference would improve genotype imputation accuracy from 50K to high density (HD) single nucleotide polymorphism (SNP) marker data in Nordic Red Dairy Cattle, compared to using only a single breed reference. 612,615 SNPs on chromosomes 1-29 remained for analysis. Validation was done by masking markers in true HD data and imputing them using Beagle v. 3.3 and a reference group of either national Red, combined Red or combined Red and Holstein bulls. Results show a decrease in allele error rate from 2.64, 1.39 and 0.87 percent to 1.75, 0.59 and 0.54 percent for Danish, Swedish and Finnish Red, respectively, when going from a single national reference to a combined Red reference. The larger error rate in the Danish population was caused by a subgroup of 10 animals showing a large proportion of Holstein genetics…

  1. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    Science.gov (United States)

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of the gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described, and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using the steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have a similar performance in the static condition. In the dynamic condition, the test results showed that the first-order Markov process model could reduce the dynamic errors by 20% more than the random walk model.
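
    A compact sketch of the fusion idea: a scalar Kalman filter whose process model is a first-order Gauss-Markov rate (rather than a random walk), updated with the simultaneous outputs of an N-gyroscope array. The correlation time and noise levels are placeholders, and the published filter's exact formulation may differ.

```python
import numpy as np

def fuse_gyro_array(Z, dt, tau=1.0, sigma_rate=1.0, sigma_gyro=0.1):
    """Fuse the outputs of an N-gyroscope array into a single rate estimate
    with a scalar Kalman filter whose process model is a first-order
    Gauss-Markov (Markov) process rather than a random walk.

    Z : (T, N) array of raw gyro outputs, one column per gyroscope."""
    T, N = Z.shape
    a = np.exp(-dt / tau)               # Markov transition coefficient
    q = sigma_rate**2 * (1.0 - a**2)    # process noise (keeps steady variance)
    R = np.eye(N) * sigma_gyro**2       # identical, independent gyro noise
    H = np.ones((N, 1))                 # every gyro observes the same rate

    x, P = 0.0, sigma_rate**2
    out = np.empty(T)
    for k in range(T):
        # prediction with the first-order Markov model
        x, P = a * x, a * a * P + q
        # update with the N simultaneous measurements
        S = P * (H @ H.T) + R
        K = P * H.T @ np.linalg.inv(S)              # 1 x N Kalman gain
        x = x + float(K @ (Z[k].reshape(N, 1) - H * x))
        P = float((1.0 - K @ H) * P)
        out[k] = x
    return out
```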

  2. Improving the accuracy of vehicle emissions profiles for urban transportation greenhouse gas and air pollution inventories.

    Science.gov (United States)

    Reyna, Janet L; Chester, Mikhail V; Ahn, Soyoung; Fraser, Andrew M

    2015-01-01

    Metropolitan greenhouse gas and air emissions inventories can better account for the variability in vehicle movement, fleet composition, and infrastructure that exists within and between regions, to develop more accurate information for environmental goals. With emerging access to high quality data, new methods are needed for informing transportation emissions assessment practitioners of the relevant vehicle and infrastructure characteristics that should be prioritized in modeling to improve the accuracy of inventories. The sensitivity of light and heavy-duty vehicle greenhouse gas (GHG) and conventional air pollutant (CAP) emissions to speed, weight, age, and roadway gradient is examined with second-by-second velocity profiles on freeway and arterial roads under free-flow and congestion scenarios. By creating upper and lower bounds for each factor, the potential variability which could exist in transportation emissions assessments is estimated. When comparing the effects of changes in these characteristics across U.S. cities against average characteristics of the U.S. fleet and infrastructure, significant variability in emissions is found to exist. GHGs from light-duty vehicles could vary by -2%-11% and CAP by -47%-228% when compared to the baseline. For heavy-duty vehicles, the variability is -21%-55% and -32%-174%, respectively. The results show that cities should more aggressively pursue the integration of emerging big data into regional transportation emissions modeling, and the integration of these data is likely to impact GHG and CAP inventories and how aggressively policies should be implemented to meet reductions. A web-tool is developed to aid cities in reducing emissions uncertainty.

  3. Improving the accuracy of brain tumor surgery via Raman-based technology

    Science.gov (United States)

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W.; Xie, X. Sunney; Orringer, Daniel A.

    2016-01-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors. PMID:26926067

  4. Improve accuracy and sensibility in glycan structure prediction by matching glycan isotope abundance

    Energy Technology Data Exchange (ETDEWEB)

    Xu Guang [College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan (China); National Research Council Canada, Ottawa, Ont., K1A 0R6 (Canada); Liu Xin [College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan (China); Liu Qingyan [National Research Council Canada, Ottawa, Ont., Canada K1A 0R6 (Canada); Zhou Yanhong, E-mail: yhzhou@mail.hust.edu.cn [College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan (China); Li Jianjun, E-mail: Jianjun.Li@nrc-cnrc.gc.ca [National Research Council Canada, Ottawa, Ont., Canada K1A 0R6 (Canada)

    2012-09-19

    Highlights: • A glycan isotope pattern recognition strategy for glycomics. • A new data preprocessing procedure to detect ion peaks in a given MS spectrum. • A linear soft margin SVM classification for isotope pattern recognition. - Abstract: Mass Spectrometry (MS) is a powerful technique for the determination of glycan structures and is capable of providing qualitative and quantitative information. Recent development in computational method offers an opportunity to use glycan structure databases and de novo algorithms for extracting valuable information from MS or MS/MS data. However, detecting low-intensity peaks that are buried in noisy data sets is still a challenge and an algorithm for accurate prediction and annotation of glycan structures from MS data is highly desirable. The present study describes a novel algorithm for glycan structure prediction by matching glycan isotope abundance (mGIA), which takes isotope masses, abundances, and spacing into account. We constructed a comprehensive database containing 808 glycan compositions and their corresponding isotope abundance. Unlike most previously reported methods, not only did we take into account the m/z values of the peaks but also the logarithmic Euclidean distance between the calculated and detected isotope vectors. Evaluation against a linear classifier, obtained by training the mGIA algorithm with datasets of three different human tissue samples from the Consortium for Functional Glycomics (CFG) in association with a Support Vector Machine (SVM), was proposed to improve the accuracy of automatic glycan structure annotation. In addition, an effective data preprocessing procedure, including baseline subtraction, smoothing, peak centroiding and composition matching for extracting correct isotope profiles from MS data was incorporated. The algorithm was validated by analyzing the mouse kidney MS data from CFG, resulting in the
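
    To make the matching step concrete, here is a hedged sketch that ranks candidate glycan compositions by the Euclidean distance between log-scaled theoretical and observed isotope-abundance vectors after an m/z gate. The database layout (name -> (monoisotopic m/z, abundance list)) and the tolerance are assumptions of this illustration, and the paper's full mGIA scoring and SVM step are not reproduced.

```python
import numpy as np

def match_composition(observed_mz, observed_abund, database, mz_tol=0.05):
    """Rank candidate glycan compositions against one observed isotope
    cluster: gate on the monoisotopic m/z, then score by the Euclidean
    distance between log-scaled theoretical and observed abundance vectors."""
    obs = np.asarray(observed_abund, float)
    obs = obs / obs.sum()
    hits = []
    for name, (theo_mz, theo_abund) in database.items():
        if abs(theo_mz - observed_mz) > mz_tol:
            continue
        m = min(obs.size, len(theo_abund))
        theo = np.asarray(theo_abund[:m], float)
        theo = theo / theo.sum()
        dist = np.linalg.norm(np.log(theo + 1e-9) - np.log(obs[:m] + 1e-9))
        hits.append((dist, name))
    return sorted(hits)     # smallest distance = best isotope-pattern match
```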

  5. 4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers

    Science.gov (United States)

    Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Shen, Liangbo; Todorich, Bozho; Shieh, Christine; Kuo, Anthony; Toth, Cynthia; Izatt, Joseph A.

    2016-03-01

    Ophthalmic surgeons manipulate micron-scale tissues using stereopsis through an operating microscope and instrument shadowing for depth perception. While ophthalmic microsurgery has benefitted from rapid advances in instrumentation and techniques, the basic principles of the stereo operating microscope have not changed since the 1930's. Optical Coherence Tomography (OCT) has revolutionized ophthalmic imaging and is now the gold standard for preoperative and postoperative evaluation of most retinal and many corneal procedures. We and others have developed initial microscope-integrated OCT (MIOCT) systems for concurrent OCT and operating microscope imaging, but these are limited to 2D real-time imaging and require offline post-processing for 3D rendering and visualization. Our previously presented 4D MIOCT system can record and display the 3D surgical field stereoscopically through the microscope oculars using a dual-channel heads-up display (HUD) at up to 10 micron-scale volumes per second. In this work, we show that 4D MIOCT guidance improves the accuracy of depth-based microsurgical maneuvers (with statistical significance) in mock surgery trials in a wet lab environment. Additionally, 4D MIOCT was successfully performed in 38/45 (84%) posterior and 14/14 (100%) anterior eye human surgeries, and revealed previously unrecognized lesions that were invisible through the operating microscope. These lesions, such as residual and potentially damaging retinal deformation during pathologic membrane peeling, were visualized in real-time by the surgeon. Our integrated system provides an enhanced 4D surgical visualization platform that can improve current ophthalmic surgical practice and may help develop and refine future microsurgical techniques.

  6. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    Science.gov (United States)

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skill development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  7. Improving protein-protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model.

    Science.gov (United States)

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu

    2016-10-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential to construct the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvement includes (1) Protein sequences are represented using the Bi-gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) For reducing the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) The powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments were executed on yeast and Helicobacter pylori datasets, achieving very high accuracies of 94.57 and 90.57%, respectively. Experimental results are significantly better than previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that of the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future
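
    A sketch of the feature-extraction side of the method: bi-gram probabilities accumulated from a PSSM into a 400-dimensional descriptor, followed by PCA. scikit-learn has no relevance vector machine, so in practice an SVM (or an external RVM package) would be substituted for the final classifier; the matrix layout and component count below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def bigram_features(pssm):
    """Bi-gram probability descriptor: for an (L, 20) PSSM of per-position
    residue probabilities, accumulate products of consecutive positions into
    a 20 x 20 = 400-dimensional vector."""
    P = np.asarray(pssm, float)
    B = P[:-1].T @ P[1:]        # B[i, j] = sum_k P[k, i] * P[k+1, j]
    return B.ravel()

def reduce_features(feature_matrix, n_components=150):
    """Noise-suppression step before classification; n_components must not
    exceed min(n_samples, 400)."""
    return PCA(n_components=n_components).fit_transform(feature_matrix)
```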

  8. Improving protein–protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model

    Science.gov (United States)

    An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu

    2016-01-01

    Abstract Predicting protein–protein interactions (PPIs) is a challenging task and essential to construct the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvement includes (1) Protein sequences are represented using the Bi-gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) For reducing the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) The powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments were executed on yeast and Helicobacter pylori datasets, achieving very high accuracies of 94.57 and 90.57%, respectively. Experimental results are significantly better than previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that of the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic

  10. Improvement in accuracy of defect size measurement by automatic defect classification

    Science.gov (United States)

    Samir, Bhamidipati; Pereira, Mark; Paninjath, Sankaranarayanan; Jeon, Chan-Uk; Chung, Dong-Hoon; Yoon, Gi-Sung; Jung, Hong-Yul

    2015-10-01

    accurately estimating the size of the defect from the inspection images automatically. The sensitivity to weak defect signals, filtering out noise to identify the defect signals and locating the defect in the images are key success factors. The performance of the tool is assessed on programmable defect masks and production masks from HVM production flow. Implementation of Calibre® MDPAutoClassify™ is projected to improve the accuracy of defect size as compared to what is reported by inspection machine, which is very critical for production, and the classification of defects will aid in arriving at appropriate dispositions like SEM review, repair and scrap.

  11. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Science.gov (United States)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
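
    A toy version of the routing-and-blending logic, using the SiO2 ranges quoted above. The models are assumed to be already-trained regressors exposing .predict (e.g. scikit-learn PLSRegression), and the blend weights are the simple linear ramps described in the abstract; the ChemCam team's production implementation is not reproduced here.

```python
import numpy as np

def submodel_predict(x, full_model, low_model, mid_model, high_model):
    """Route one spectrum x to the appropriate composition-range submodel
    (SiO2 example ranges: low 0-50, mid 30-70, high 60-100 wt.%), blending
    linearly where the ranges overlap."""
    pred = lambda m: float(np.ravel(m.predict(x))[0])
    full = pred(full_model)
    low, mid, high = pred(low_model), pred(mid_model), pred(high_model)

    if full < 30.0:                    # firmly inside the low submodel
        return low
    if full < 50.0:                    # low/mid overlap: linear blend
        w = (full - 30.0) / 20.0
        return (1.0 - w) * low + w * mid
    if full < 60.0:                    # firmly inside the mid submodel
        return mid
    if full < 70.0:                    # mid/high overlap: linear blend
        w = (full - 60.0) / 10.0
        return (1.0 - w) * mid + w * high
    return high
```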

  12. Hybrid Indoor-Based WLAN-WSN Localization Scheme for Improving Accuracy Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Zahid Farid

    2016-01-01

    Full Text Available In indoor environments, WiFi (RSS)-based localization is sensitive to various indoor fading effects and noise during transmission, which are the main causes of localization errors that affect its accuracy. Keeping in view those fading effects, positioning systems based on a single technology are ineffective in performing accurate localization. For this reason, the trend is toward the use of hybrid positioning systems (combination of two or more wireless technologies) in indoor/outdoor localization scenarios for getting better position accuracy. This paper presents a hybrid technique to implement indoor localization that adopts fingerprinting approaches in both WiFi and Wireless Sensor Networks (WSNs). This model exploits machine learning, in particular Artificial Neural Network (ANN) techniques, for position calculation. The experimental results show that the proposed hybrid system improved the accuracy, reducing the average distance error to 1.05 m by using the ANN. Applying a Genetic Algorithm (GA)-based optimization technique did not incur any further improvement to the accuracy. Compared to the performance of the GA optimization, the nonoptimized ANN performed better in terms of accuracy, precision, stability, and computational time. The above results show that the proposed hybrid technique is promising for achieving better accuracy in real-world positioning applications.
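
    A minimal fingerprinting sketch with scikit-learn: an MLP regressor mapping a combined WiFi/WSN RSS vector to 2-D coordinates. The network size, scaling and iteration count are illustrative and would need tuning against a real radio map.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_fingerprint_ann(rss_fingerprints, positions):
    """Train an ANN mapping a combined WiFi + WSN RSS fingerprint
    (one value per access point / sensor node) to a 2-D position."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    )
    model.fit(np.asarray(rss_fingerprints, float), np.asarray(positions, float))
    return model

# locating a device is then a single forward pass:
# x, y = model.predict([rss_vector])[0]
```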

  13. A Hybrid Data Mining Technique for Improving the Classification Accuracy of Microarray Data Set

    Directory of Open Access Journals (Sweden)

    Sujata Dash

    2012-04-01

    Full Text Available A major challenge in biomedical studies in recent years has been the classification of gene expression profiles into categories, such as cases and controls. This is done by first training a classifier by using a labeled training set containing labeled samples from the two populations, and then using that classifier to predict the labels of new samples. Such predictions have recently been shown to improve the diagnosis and treatment selection practices for several diseases. This procedure is complicated, however, by the high dimensionality of the data. While microarrays can measure the levels of thousands of genes per sample, case-control microarray studies usually involve no more than several dozen samples. Standard classifiers do not work well in these situations, where the number of features (gene expression levels measured in these microarrays) far exceeds the number of samples. Selecting only the features that are most relevant for discriminating between the two categories can help construct better classifiers, in terms of both accuracy and efficiency. This paper provides a comparison between a dimension reduction technique, namely the Partial Least Squares (PLS) method, and a hybrid feature selection scheme, and evaluates the relative performance of four different supervised classification procedures, namely Radial Basis Function Network (RBFN), Multilayer Perceptron Network (MLP), Support Vector Machine using a polynomial kernel function (Polynomial-SVM) and Support Vector Machine using an RBF kernel function (RBF-SVM), incorporating those methods. Experimental results show that the Partial Least Squares (PLS) regression method is an appropriate feature selection method, and a combined use of different classification and feature selection approaches makes it possible to construct high performance classification models for microarray data.
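
    A small sketch of the PLS-then-classify pipeline evaluated by cross-validation, with an RBF-SVM standing in for the several classifiers compared in the paper; the component count and fold number are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold

def pls_svm_accuracy(X, y, n_components=5, folds=5):
    """Cross-validated accuracy of 'PLS dimension reduction + RBF-SVM' on a
    case/control expression matrix X (samples x genes) with 0/1 labels y."""
    X, y = np.asarray(X, float), np.asarray(y)
    scores = []
    for tr, te in StratifiedKFold(n_splits=folds, shuffle=True,
                                  random_state=0).split(X, y):
        pls = PLSRegression(n_components=n_components).fit(X[tr], y[tr])
        svm = SVC(kernel="rbf", gamma="scale").fit(pls.transform(X[tr]), y[tr])
        scores.append(svm.score(pls.transform(X[te]), y[te]))
    return float(np.mean(scores))
```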

  14. A knowledge-guided strategy for improving the accuracy of scoring functions in binding affinity prediction

    Directory of Open Access Journals (Sweden)

    Wang Renxiao

    2010-04-01

    Full Text Available Abstract — Background: Current scoring functions are not very successful in protein-ligand binding affinity prediction despite their popularity in structure-based drug design. Here, we propose a general knowledge-guided scoring (KGS) strategy to tackle this problem. Our KGS strategy computes the binding constant of a given protein-ligand complex based on the known binding constant of an appropriate reference complex. A good training set that includes a sufficient number of protein-ligand complexes with known binding data needs to be supplied for finding the reference complex. The reference complex is required to share a similar pattern of key protein-ligand interactions to that of the complex of interest. Thus, some uncertain factors in protein-ligand binding may cancel out, resulting in a more accurate prediction of absolute binding constants. Results: In our study, an automatic algorithm was developed for summarizing key protein-ligand interactions as a pharmacophore model and identifying the reference complex with a maximal similarity to the query complex. Our KGS strategy was evaluated in combination with two scoring functions (X-Score and PLP) on three test sets, containing 112 HIV protease complexes, 44 carbonic anhydrase complexes, and 73 trypsin complexes, respectively. Our results obtained on crystal structures as well as computer-generated docking poses indicated that application of the KGS strategy produced more accurate predictions especially when X-Score or PLP alone did not perform well. Conclusions: Compared to other targeted scoring functions, our KGS strategy does not require any re-parameterization or modification on current scoring methods, and its application is not tied to certain systems. The effectiveness of our KGS strategy is in theory proportional to the ever-increasing knowledge of experimental protein-ligand binding data. Our KGS strategy may serve as a more practical remedy for current scoring functions to improve their
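
    The general KGS flow can be caricatured in a few lines: pick the training complex with the highest interaction-pattern similarity to the query and anchor the prediction to its known affinity, corrected by the scoring-function difference. The similarity and base_score callables, the .pkd attribute and the additive correction rule are all assumptions of this sketch, not the published KGS formula.

```python
def kgs_predict(query, training_set, similarity, base_score):
    """Anchor the prediction for `query` to the most similar reference
    complex: pKd(query) ~ pKd(ref) + [score(query) - score(ref)].
    `similarity`, `base_score` and the `.pkd` attribute are assumed
    interfaces; the additive correction is an illustrative rule only."""
    ref = max(training_set, key=lambda c: similarity(query, c))
    return ref.pkd + (base_score(query) - base_score(ref)), ref
```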

  15. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA

    Energy Technology Data Exchange (ETDEWEB)

    Mareuil, Fabien [Institut Pasteur, Cellule d' Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)

    2015-08-15

    In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape of the distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting, (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.

  16. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    Directory of Open Access Journals (Sweden)

    Kyuman Lee

    2016-08-01

    Full Text Available The airborne relay-based positioning system (ARPS, which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference
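
    The core of the positioning step described above is a least-squares solution from range-type measurements to known anchor positions (airborne relays plus a ground reference station). The Python sketch below shows a generic Gauss-Newton solver for that sub-problem only; the relay geometry and the noise-free ranges are invented for illustration, and the actual ARPS algorithm additionally forms virtual reference-station measurements and compares expected position errors before re-estimating.

        # Minimal Gauss-Newton range-based positioning sketch. The relay/reference
        # coordinates and measurement model are invented for illustration.
        import numpy as np

        def estimate_position(anchors, ranges, x0, iterations=10):
            """Least-squares user position from ranges to known anchor positions."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iterations):
                diffs = x - anchors                     # (N, 3)
                predicted = np.linalg.norm(diffs, axis=1)
                residuals = ranges - predicted
                H = diffs / predicted[:, None]          # Jacobian of range w.r.t. position
                dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
                x += dx
            return x

        anchors = np.array([[0, 0, 10_000.0],           # airborne relays
                            [20_000, 0, 10_000.0],
                            [0, 20_000, 10_000.0],
                            [5_000, 5_000, 0.0]])       # ground reference station
        truth = np.array([7_000.0, 6_000.0, 100.0])
        ranges = np.linalg.norm(anchors - truth, axis=1)
        print(estimate_position(anchors, ranges, x0=[0, 0, 0]))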

  17. IMPROVING THE POSITIONING ACCURACY OF TRAIN ON THE APPROACH SECTION TO THE RAILWAY CROSSING

    Directory of Open Access Journals (Sweden)

    Havryliuk V. I.

    2016-02-01

    Full Text Available Purpose. In the paper it is necessary to analyze possibility of improving the positioning accuracy of train on the approach section to crossing for traffic safety control at railway crossings. Methodology. Researches were performed using developed mathematical model, describing dependence of the input impedance of the coded and audio frequency track circuits on a train coordinate at various values of ballast isolation resistances and for all usable frequencies. Findings. The paper presents the developed mathematical model, describing dependence of the input impedance of the coded and audio-frequency track circuits on the train coordinate at various values of ballast isolation resistances and for all frequencies used in track circuits. The relative error determination of train coordinate by input impedance caused by variation of the ballast isolation resistance for the coded track circuits was investigated. The values of relative error determination of train coordinate can achieve up to 40-50 % and these facts do not allow using this method directly for coded track circuits. For short audio frequency track circuits on frequencies of continuous cab signaling (25, 50 Hz the relative error does not exceed acceptable values, this allow using the examined method for determination of train location on the approach section to railway crossing. Originality. The developed mathematical model allowed determination of the error dependence of train coordinate by using input impedance of the track circuit for coded and audio-frequency track circuits at various frequencies of the signal current and at different ballast isolation resistances. Practical value. The authors proposethe method for train location determination on approach section to the crossing, equipped with audio-frequency track circuits, which is a combination of discrete and continuous monitoring of the train location.

  18. Alaska Case Study: Scientists Venturing Into Field with Journalists Improves Accuracy

    Science.gov (United States)

    Ekwurzel, B.; Detjen, J.; Hayes, R.; Nurnberger, L.; Pavangadkar, A.; Poulson, D.

    2008-12-01

    Issues such as climate change, stem cell research, public health vaccination, etc., can be fraught with public misunderstanding, myths, as well as deliberate distortions of the fundamental science. Journalists are adept at creating print, radio, and video content that can be both compelling and informative to the public. Yet most scientists have little time or training to devote to developing media content for the public and spend little time with journalists who cover science stories. We conducted a case study to examine whether the time and funding invested in exposing journalists to scientists in the field over several days would improve accuracy of media stories about complex scientific topics. Twelve journalists were selected from the 70 who applied for a four-day environmental journalism fellowship in Alaska. The final group achieved the goal of a broad geographic spectrum of the media outlets (small regional to large national organizations), medium (print, radio, online), and experience (early career to senior producers). Reporters met with a diverse group of scientists. The lessons learned and successful techniques will be presented. Initial results demonstrate that stories were highly accurate and rich with audio or visual content for lay audiences. The journalists have also maintained contact with the scientists, asking for leads on emerging stories and seeking new experts that can assist in their reporting. Science-based institutions should devote more funding to foster direct journalist-scientist interactions in the lab and field. These positive goals can be achieved: (1) more accurate dissemination of science information to the public; (2) a broader portion of the scientific community will become a resource to journalists instead of the same eloquent few in the community; (3) scientists will appreciate the skill and pressures of those who survive the media downsizing and provide media savvy content; and (4) the public may incorporate science evidence

  19. Improvement of brain segmentation accuracy by optimizing non-uniformity correction using N3.

    Science.gov (United States)

    Zheng, Weili; Chee, Michael W L; Zagorodnov, Vitali

    2009-10-15

    Smoothly varying and multiplicative intensity variations within MR images that are artifactual can reduce the accuracy of automated brain segmentation. Fortunately, these can be corrected. Among existing correction approaches, the nonparametric non-uniformity intensity normalization method N3 (Sled, J.G., Zijdenbos, A.P., Evans, A.C., 1998. Nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans. Med. Imag. 17, 87-97.) is one of the most frequently used. However, at least one recent study (Boyes, R.G., Gunter, J.L., Frost, C., Janke, A.L., Yeatman, T., Hill, D.L.G., Bernstein, M.A., Thompson, P.M., Weiner, M.W., Schuff, N., Alexander, G.E., Killiany, R.J., DeCarli, C., Jack, C.R., Fox, N.C., 2008. Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils. NeuroImage 39, 1752-1762.) suggests that its performance on 3 T scanners with multichannel phased-array receiver coils can be improved by optimizing a parameter that controls the smoothness of the estimated bias field. The present study not only confirms this finding, but additionally demonstrates the benefit of reducing the relevant parameter value to 30-50 mm (default value 200 mm) on white matter surface estimation as well as the measurement of cortical and subcortical structures using FreeSurfer (Martinos Imaging Centre, Boston, MA). This finding can help enhance precision in studies where estimation of cerebral cortex thickness is critical for making inferences. PMID:19559796
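
    In practice the smoothing distance discussed above is a command-line parameter of the N3 implementation. The snippet below is a hedged sketch of how such a run might be scripted in Python, assuming the MINC nu_correct tool is installed; the exact flag names and file formats should be verified against the local installation.

        # Sketch of running N3 with a smaller smoothing distance than the 200 mm
        # default, as suggested above. Assumes the MINC `nu_correct` implementation
        # of N3 is installed and that file names/flags match the local installation;
        # treat the exact command line as an assumption to verify.
        import subprocess

        def run_n3(in_vol, out_vol, distance_mm=50):
            cmd = ["nu_correct", "-distance", str(distance_mm), in_vol, out_vol]
            subprocess.run(cmd, check=True)

        run_n3("t1_3T.mnc", "t1_3T_nuc.mnc", distance_mm=50)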

  20. Empowering cash managers to achieve cost savings by improving predictive accuracy

    OpenAIRE

    Salas-Molina, Francisco; Martin, Francisco J.; Rodríguez-Aguilar, Juan A.; Serrà, Joan; Arcos, Josep Ll.

    2016-01-01

    Cash management is concerned with optimizing the short-term funding requirements of a company. To this end, different optimization strategies have been proposed to minimize costs using daily cash flow forecasts as the main input to the models. However, the effect of the accuracy of such forecasts on cash management policies has not been studied. In this article, using two real data sets from the textile industry, we show that predictive accuracy is highly correlated with cost savings when usi...

  1. Improving the Accuracy of Industrial Robots by offline Compensation of Joints Errors

    OpenAIRE

    OLABI, Adel; Damak, Mohamed; BEAREE, Richard; Gibaru, Olivier; LELEU, Stéphane

    2012-01-01

    International audience The use of industrial robots in many fields of industry like prototyping, pre-machining and end milling is limited because of their poor accuracy. Robot joints are mainly responsible for this poor accuracy. The flexibility of robots joints and the kinematic errors in the transmission systems produce a significant error of position in the level of the end-effector. This paper presents these two types of joint errors. Identification methods are presented with experimen...

  2. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    Science.gov (United States)

    Monaghan, Kieran A

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body-size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices applying the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes are a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh-size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an

  3. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    Science.gov (United States)

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  4. Two-orders of magnitude improvement detection limit of lateral flow assays using isotachophoresis

    CERN Document Server

    Moghadam, Babak Y; Posner, Jonathan D

    2014-01-01

    Lateral flow (LF) immunoassays are one of the most prevalent point-of-care (POC) diagnostics due to their simplicity, low cost, and robust operation. A common criticism of LF tests is that they have poor detection limits compared to analytical techniques, like ELISA, which confines their application as a diagnostic tool. The low detection limit of LF assays and associated long equilibration times is due to kinetically limited surface reactions that result from low target concentrations. Here we use isotachophoresis (ITP), a powerful electrokinetic preconcentration and separation technique, to focus target analytes into a thin band and transport them to the LF capture line resulting is a dramatic increase in the surface reaction rate and equilibrium binding. We show that ITP is able to improve limit of detection (LOD) of LF assays by 400-fold for 90 second assay time and by 160-fold for a longer 5 minutes time scale. ITP-enhanced LF (ITP-LF) also shows up to 30% target extraction from 100 uL of the sample, whi...

  5. Signal Processing of MEMS Gyroscope Arrays to Improve Accuracy Using a 1st Order Markov for Rate Signal Modeling

    Science.gov (United States)

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve angular rate accuracy of the gyroscope by combining the outputs of an array of MEMS gyroscope. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. Especially, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and affecting factors were analyzed using a steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the estimated rate signal by the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. It revealed that both models could improve the angular rate accuracy and have a similar performance in static condition. In dynamic condition, the test results showed that the first-order Markov process model could reduce the dynamic errors 20% more than the random walk model. PMID:22438734

  6. Signal Processing of MEMS Gyroscope Arrays to Improve Accuracy Using a 1st Order Markov for Rate Signal Modeling

    Directory of Open Access Journals (Sweden)

    Weizheng Yuan

    2012-02-01

    Full Text Available This paper presents a signal processing technique to improve angular rate accuracy of the gyroscope by combining the outputs of an array of MEMS gyroscope. A mathematical model for the accuracy improvement was described and a Kalman filter (KF was designed to obtain optimal rate estimates. Especially, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and affecting factors were analyzed using a steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the estimated rate signal by the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. It revealed that both models could improve the angular rate accuracy and have a similar performance in static condition. In dynamic condition, the test results showed that the first-order Markov process model could reduce the dynamic errors 20% more than the random walk model.
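
    As a rough illustration of the approach reported in the two records above, the following Python sketch fuses a simulated six-gyroscope array with a Kalman filter in which the rate is modeled as a first-order Markov (Gauss-Markov) process rather than a random walk. The noise levels, correlation time and sample rate are invented and much simpler than the published model.

        # Illustrative Kalman filter that fuses an array of gyroscope outputs, with
        # the rate modeled as a first-order Markov (Gauss-Markov) process. Noise
        # levels, correlation time and array size are assumptions for the sketch.
        import numpy as np

        rng = np.random.default_rng(0)
        n_gyro, dt, tau = 6, 0.01, 1.0          # array size, sample time, correlation time
        q, r = 0.05, 0.5                        # process / measurement noise std devs

        phi = np.exp(-dt / tau)                 # first-order Markov transition
        H = np.ones((n_gyro, 1))                # every gyro observes the same rate
        R = (r ** 2) * np.eye(n_gyro)
        Q = np.array([[q ** 2 * dt]])

        x, P = np.zeros((1, 1)), np.eye(1)      # state estimate and covariance
        true_rate, estimates = 0.0, []
        for _ in range(1000):
            true_rate = phi * true_rate + q * np.sqrt(dt) * rng.standard_normal()
            z = true_rate + r * rng.standard_normal((n_gyro, 1))

            # Predict with the Markov model, then update with the gyro-array measurement.
            x, P = phi * x, phi * P * phi + Q
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(1) - K @ H) @ P
            estimates.append(float(x[0, 0]))

        print("final estimate vs truth:", estimates[-1], true_rate)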

  7. The diagnostic accuracy of the MTBDRplus and MTBDRsl assays for drug-resistant TB detection when performed on sputum and culture isolates.

    Science.gov (United States)

    Tomasicchio, Michele; Theron, Grant; Pietersen, Elize; Streicher, Elizabeth; Stanley-Josephs, Danielle; van Helden, Paul; Warren, Rob; Dheda, Keertan

    2016-01-01

    Although molecular tests for drug-resistant TB perform well on culture isolates, their accuracy using clinical samples, particularly from TB and HIV-endemic settings, requires clarification. The MTBDRplus and MTBDRsl line probe assays were evaluated in 181 sputum samples and 270 isolates from patients with culture-confirmed drug-sensitive-TB, MDR-TB, or XDR-TB. Phenotypic culture-based testing was the reference standard. Using sputum, the sensitivities for resistance was 97.7%, 95.4%, 58.9%, 61.6% for rifampicin, isoniazid, ofloxacin, and amikacin, respectively, whereas the specificities were 91.8%, 89%, 100%, and 100%, respectively. MTBDRsl sensitivity differed in smear-positive vs. smear-negative samples (79.2% vs. 20%, p HIV status. If used sequentially, MTBDRplus and MTBDRsl could rule-in XDR-TB in 78.5% (22/28) and 10.5% (2/19) of smear-positive and smear-negative samples, respectively. On culture isolates, the sensitivity for resistance to rifampicin, isoniazid, ofloxacin, and amikacin was 95.1%, 96.1%, 72.3% and 76.6%, respectively, whereas the specificities exceeded 96%. Using a sequential testing approach, rapid sputum-based diagnosis of fluoroquinolone or aminoglycoside-resistant TB is feasible only in smear-positive samples, where rule-in value is good. Further investigation is required in samples that test susceptible in order to rule-out second-line drug resistance. PMID:26860462

  8. Information transmission via movement behaviour improves decision accuracy in human groups

    NARCIS (Netherlands)

    Clément, Romain J.G.; Wolf, Max; Snijders, Lysanne; Krause, Jens; Kurvers, Ralf H.J.M.

    2015-01-01

    A major advantage of group living is increased decision accuracy. In animal groups information is often transmitted via movement. For example, an individual quickly moving away from its group may indicate approaching predators. However, individuals also make mistakes which can initiate information c

  9. Information transmission via movement behaviour improves decision accuracy in human groups

    NARCIS (Netherlands)

    Clément, R.J.G.; Wolf, Max; Snijders, Lysanne; Krause, Jens; Kurvers, R.H.J.M.

    2015-01-01

    A major advantage of group living is increased decision accuracy. In animal groups information is often transmitted via movement. For example, an individual quickly moving away from its group may indicate approaching predators. However, individuals also make mistakes which can initiate informatio

  10. Improved mass spectrometry assay for plasma hepcidin: detection and characterization of a novel hepcidin isoform.

    Directory of Open Access Journals (Sweden)

    Coby M M Laarakkers

    Full Text Available Mass spectrometry (MS)-based assays for the quantification of the iron regulatory hormone hepcidin are pivotal to discriminate between the bioactive 25-amino acid form that can effectively block the sole iron transporter ferroportin and other naturally occurring smaller isoforms without a known role in iron metabolism. Here we describe the design, validation and use of a novel stable hepcidin-25(+40) isotope as internal standard for quantification. Importantly, the relatively large mass shift of 40 Da makes this isotope also suitable for easy-to-use medium resolution linear time-of-flight (TOF) platforms. As expected, implementation of hepcidin-25(+40) as internal standard in our weak cation exchange (WCX) TOF MS method yielded very low inter/intra run coefficients of variation. Surprisingly, however, in samples from kidney disease patients, we detected a novel peak (m/z 2673.9) with low intensity that could be identified as hepcidin-24 and had previously remained unnoticed due to peak interference with the formerly used internal standard. Using a cell-based bioassay it was shown that synthetic hepcidin-24 was, like the -22 and -20 isoforms, a significantly less potent inducer of ferroportin degradation than hepcidin-25. During prolonged storage of plasma at room temperature, we observed that a decrease in plasma hepcidin-25 was paralleled by an increase in the levels of the hepcidin-24, -22 and -20 isoforms. This provides first evidence that all determinants for the conversion of hepcidin-25 to smaller inactive isoforms are present in the circulation, which may contribute to the functional suppression of hepcidin-25, that is significantly elevated in patients with renal impairment. The present update of our hepcidin TOF MS assay together with improved insights in the source and preparation of the internal standard, and sample stability will further improve our understanding of circulating hepcidin and pave the way towards further optimization and

  11. Improved detection of Tritrichomonas foetus in bovine diagnostic specimens using a novel probe-based real time PCR assay.

    Science.gov (United States)

    McMillen, Lyle; Lew, Ala E

    2006-11-01

    A Tritrichomonas foetus-specific 5' Taq nuclease assay using a 3' minor groove binder-DNA probe (TaqMan MGB) targeting conserved regions of the internal transcribed spacer-1 (ITS-1) was developed and compared to established diagnostic procedures. Specificity of the assay was evaluated using bovine venereal microflora and a range of related trichomonad species. Assay sensitivity was evaluated with log(10) dilutions of known numbers of cells, and compared to that for microscopy following culture (InPouch TF test kit) and the conventional TFR3-TFR4 PCR assay. The 5' Taq nuclease assay detected a single cell per assay from smegma or mucus which was 2500-fold or 250-fold more sensitive than microscopy following selective culture from smegma or mucus respectively, and 500-fold more sensitive than culture followed by conventional PCR assay. The sensitivity of the conventional PCR assay was comparable to the 5' Taq nuclease assay when testing purified DNA extracted from clinical specimens, whereas the 5' Taq nuclease assay sensitivity improved using crude cell lysates, which were not suitable as template for the conventional PCR assay. Urine was evaluated as a diagnostic specimen providing improved and equivalent levels of T. foetus detection in spiked urine by both microscopy following culture and direct 5' Taq nuclease detection, respectively, compared with smegma and mucus, however inconclusive results were obtained with urine samples from the field study. Diagnostic specimens (n=159) were collected from herds with culture positive animals and of the 14 animals positive by 5' Taq nuclease assay, 3 were confirmed by selective culture/microscopy detection (Fisher's exact test P<0.001). The 5' Taq nuclease assay described here demonstrated superior sensitivity to traditional culture/microscopy and offers advantages over the application of conventional PCR for the detection of T. foetus in clinical samples. PMID:16860481

  12. Colony color assay coupled with 5FOA negative selection greatly improves yeast three-hybrid library screening efficiency

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The recently developed yeast three-hybrid system is a powerful tool for analyzing RNA-protein interactions in vivo. However, large numbers of false positives are frequently encountered due to bait RNA-independent activation of the reporter gene in library screening using this system. In this report, we coupled the colony color assay with 5FOA (5-fluoroorotic acid) negative selection in the library screening, and found that this coupled method effectively eliminated bait RNA-independent false positives and hence greatly improved library screening efficiency. We used this method successfully to isolate the cDNA of an RNA-binding protein that might play important roles in certain cellular processes. This improvement will facilitate the use of the yeast three-hybrid system in analyzing RNA-protein interactions.

  13. COARSE-MESH-ACCURACY IMPROVEMENT OF BILINEAR Q4-PLANE ELEMENT BY THE COMBINED HYBRID FINITE ELEMENT METHOD

    Institute of Scientific and Technical Information of China (English)

    谢小平; 周天孝

    2003-01-01

    The combined hybrid finite element method has an intrinsic mechanism for enhancing the coarse-mesh accuracy of lower-order displacement schemes. It was confirmed that the combined hybrid scheme without energy error leads to enhancement of accuracy at coarse meshes, and that the combination parameter plays an important role in the enhancement. As an improvement of the conforming bilinear Q4 plane element, the combined hybrid method adopted the most convenient quadrilateral displacement-stress mode, i.e., the mode of compatible isoparametric bilinear displacements and pure constant stresses. By adjusting the combination parameter, the optimized version of the combined hybrid element was obtained, and numerical tests indicated that this parameter-adjusted version behaves much better than the Q4 element and is of high accuracy at coarse meshes. Due to elimination of stress parameters at the elemental level, this combined hybrid version is of the same computational cost as the Q4 element.

  14. Power outage estimation for tropical cyclones: improved accuracy with simpler models.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth; Quiring, Steven M

    2014-06-01

    In this article, we discuss an outage-forecasting model that we have developed. This model uses very few input variables to estimate hurricane-induced outages prior to landfall with great predictive accuracy. We also show the results for a series of simpler models that use only publicly available data and can still estimate outages with reasonable accuracy. The intended users of these models are emergency response planners within power utilities and related government agencies. We developed our models based on the method of random forest, using data from a power distribution system serving two states in the Gulf Coast region of the United States. We also show that estimates of system reliability based on wind speed alone are not sufficient for adequately capturing the reliability of system components. We demonstrate that a multivariate approach can produce more accurate power outage predictions.
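
    A toy version of the modeling idea, a random forest driven by a small number of publicly available predictors, can be sketched as follows. The feature set and the synthetic storm data are placeholders and bear no relation to the utility data used in the study.

        # Toy sketch of the few-variable outage model idea using a random forest.
        # Feature names and the synthetic data are placeholders, not the study's data.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 500
        X = np.column_stack([
            rng.uniform(20, 70, n),    # max sustained wind speed (m/s)
            rng.uniform(0, 300, n),    # distance of grid cell to storm track (km)
            rng.uniform(0, 1, n),      # fraction of overhead lines in the cell
        ])
        outages = (X[:, 0] ** 2) * X[:, 2] / (X[:, 1] + 30) + rng.normal(0, 5, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, outages, random_state=0)
        model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        print("R^2 on held-out storms:", model.score(X_te, y_te))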

  15. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID we evaluate the accuracy of Bayesian modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
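
    The Bayesian step described above amounts to multiplying the match-confidence likelihoods from the photographs by site-specific sighting priors and renormalizing. A minimal sketch, with invented numbers, is given below.

        # Minimal Bayes update for a non-unique photo-ID: combine match-confidence
        # likelihoods with site-specific sighting priors. All numbers are invented.
        def posterior_id(match_likelihood, site_prior):
            """match_likelihood / site_prior: dicts mapping candidate ID -> value."""
            unnorm = {i: match_likelihood[i] * site_prior.get(i, 0.0) for i in match_likelihood}
            total = sum(unnorm.values())
            return {i: v / total for i, v in unnorm.items()} if total else unnorm

        likelihood = {"A": 0.5, "B": 0.3, "C": 0.2}        # from photo matching alone
        prior = {"A": 0.1, "B": 0.7, "C": 0.2}             # previous sightings at this site
        print(posterior_id(likelihood, prior))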

  16. Proposed Technique to Improve VANET’s Vehicle Localization Accuracy in Multipath Environment

    OpenAIRE

    Mr.Ashitosh A. Salunkhe*1; Mrs.Sunita S. Shinde2

    2014-01-01

    Localization (location estimation) of a vehicle in a Vehicular Ad-hoc Network (VANET) has been studied in many fields, since it has the ability to provide a variety of services such as navigation, vehicle tracking and collision detection. The Global Positioning System (GPS) and the Inertial Navigation System (INS) are both very useful methods of localization. By using a Kalman filter it is possible to combine these two systems to get better localization accuracy. Nowadays, typical local...

  17. Use of the correlation coefficient between plots in order to improve the accuracy of forest inventories

    OpenAIRE

    Daniela Cunha da Sé; José Márcio de Mello; João Domingos Scalon; Joel Augusto Muniz; Marcelo Silva de Oliveira; José Roberto Soares Scolforo

    2013-01-01

    Forest inventories are usually compiled without taking into account the existing correlations between sampling units, which is debatable particularly where the calculations involve environmental variables. When the potential correlations between sampling units are overlooked, the accuracy of such inventories becomes distorted in terms of the confidence interval range for the variable of interest, which is volume in cubic meters. The magnitude and form of such distortion will vary according to...

  18. Using super-resolution images to improve the measurement accuracy of DIC

    OpenAIRE

    Wang, Yueqi; Lava, Pascal; Debruyne, Dimitri

    2015-01-01

    DIC measurements highly depend on the intensity interpolation of images to achieve subpixel accuracies. The intensity interpolation at subpixel positions is based on the grey levels sampled at integer pixels. Therefore, the sampling rate is crucial to the interpolated intensities. The sampling rate is restricted by the camera resolution. With insufficient resolution, the interpolated intensities at subpixel positions evidently differ from the reality, and significantly degrade the measurement...

  19. Improving Inverse Dynamics Accuracy in a Planar Walking Model Based on Stable Reference Point

    OpenAIRE

    Alaa Abdulrahman; Kamran Iqbal; Gannon White

    2014-01-01

    Physiologically and biomechanically, the human body represents a complicated system with an abundance of degrees of freedom (DOF). When developing mathematical representations of the body, a researcher has to decide on how many of those DOF to include in the model. Though accuracy can be enhanced at the cost of complexity by including more DOF, their necessity must be rigorously examined. In this study a planar seven-segment human body walking model with single DOF joints was developed. A ref...

  20. Ways to help Chinese Students in Senior High School improve language accuracy in writing

    Institute of Scientific and Technical Information of China (English)

    潘惠红

    2015-01-01

    Introduction In Chinese ELT (English language teaching), as in other countries, both fluency and accuracy are considered important in both the teaching and the assessment of writing. In this respect, the last decade has seen reforms in the College Entrance Examination in Guangdong Province. With two writing tasks being set as assessment, task one requires students to summarise Chinese language information into five English sentences while the

  1. Travel-time source-specific station correction improves location accuracy

    Science.gov (United States)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance to the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events in the context of the CTBT verification is particularly critical in order to trigger a possible On Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2, and its larger linear dimension cannot be larger than 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel times corrections based on a set of well located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network and using the standard IASPEI91 travel times can be effectively removed by applying source-specific station corrections.
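
    A simple way to picture source-specific station corrections is as per-station average travel-time residuals computed from well-located calibration events and subtracted from new observations before relocation. The sketch below illustrates that bookkeeping only; the pick values are invented, and the real procedure is region- and phase-specific.

        # Sketch of a source-specific station correction: average the travel-time
        # residuals of well-located calibration events (per station and region) and
        # subtract them from new observations. Data values are illustrative only.
        from collections import defaultdict

        def build_corrections(calibration_picks):
            """calibration_picks: list of (station, observed_tt, predicted_tt) tuples
            from well-located events in the source region."""
            residuals = defaultdict(list)
            for station, obs, pred in calibration_picks:
                residuals[station].append(obs - pred)
            return {sta: sum(r) / len(r) for sta, r in residuals.items()}

        def correct_pick(station, observed_tt, corrections):
            # Remove the systematic path effect before relocating the new event.
            return observed_tt - corrections.get(station, 0.0)

        picks = [("ABC", 112.4, 111.1), ("ABC", 98.7, 97.6), ("XYZ", 205.2, 206.0)]
        corr = build_corrections(picks)
        print(corr, correct_pick("ABC", 120.0, corr))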

  2. Improved accuracy for finite element structural analysis via an integrated force method

    Science.gov (United States)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  3. Improving the accuracy of myocardial perfusion scintigraphy results by machine learning method

    International Nuclear Information System (INIS)

    Full text: Machine learning (ML), a rapidly growing subfield of artificial intelligence, has proven over the last decade to be a useful tool in many fields of decision making, including some fields of medicine. Its decision accuracy usually exceeds that of humans. The aim was to assess the applicability of ML in interpreting the results of stress myocardial perfusion scintigraphy for CAD diagnosis. Data from 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were then digitized and the decision procedure repeated with the ML program 'Naive Bayesian classifier'. Since ML can handle any number of variables simultaneously, all available disease-related data (regarding history, habitus, risk factors and stress results) were added. The sensitivity, specificity and accuracy of scintigraphy were expressed in this way, and the results of both decision procedures were compared. With the ML method, 19 more patients out of 327 (5.8 %) were correctly diagnosed by stress myocardial perfusion scintigraphy. ML could be an important tool for decision making in myocardial perfusion scintigraphy. (author)
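
    For readers unfamiliar with the classifier mentioned above, the following Python sketch applies a Gaussian naive Bayes model to synthetic scintigraphy-plus-risk-factor data. The features, coding and data are placeholders, not the study's patient data.

        # Toy Gaussian naive Bayes sketch in the spirit of the study: scintigraphy
        # scores plus clinical risk factors predicting angiography-confirmed CAD.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n = 327
        X = np.column_stack([
            rng.integers(0, 4, n),        # summed stress perfusion score (coded)
            rng.integers(30, 85, n),      # age
            rng.integers(0, 2, n),        # diabetes
            rng.integers(0, 2, n),        # typical angina
        ])
        y = (X[:, 0] + 0.02 * X[:, 1] + X[:, 2] + rng.normal(0, 1, n) > 3).astype(int)

        clf = GaussianNB()
        print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())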

  4. An improved assay for the determination of Huntington's disease allele size

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, C.; Klinger, K.; Miller, G. [Integrated Genetics, Framingham, MA (United States)

    1994-09-01

    The hallmark of Huntington's disease (HD) is the expansion of a polymorphic (CAG)n repeat. Several methods have been published describing PCR amplification of this region. Most of these assays require a complex PCR reaction mixture to amplify this GC-rich region. A consistent problem with trinucleotide repeat PCR amplification is the presence of a number of "stutter bands" which may be caused by primer or amplicon slippage during amplification or insufficient polymerase processivity. Most assays for HD arbitrarily select a particular band for diagnostic purposes. Without a clear choice for band selection such an arbitrary selection may result in inconsistent intra- or inter-laboratory findings. We present an improved protocol for the amplification of the HD trinucleotide repeat region. This method simplifies the PCR reaction buffer and results in a set of easily identifiable bands from which to determine allele size. HD alleles were identified by selecting bands of clearly greater signal intensity. Stutter banding was much reduced thus permitting easy identification of the most relevant PCR product. A second set of primers internal to the CCG polymorphism was used in selected samples to confirm allele size. The mechanism of action of N,N,N trimethylglycine in the PCR reaction is not clear. It may be possible that the minimal isostabilizing effect of N,N,N trimethylglycine at 2.5 M is significant enough to affect primer specificity. The use of N,N,N trimethylglycine in the PCR reaction facilitated identification of HD alleles and may be appropriate for use in other assays of this type.

  5. Simulated single-cycle kinetics improves the design of surface plasmon resonance assays.

    Science.gov (United States)

    Palau, William; Di Primo, Carmelo

    2013-09-30

    Instruments based on the surface plasmon resonance (SPR) principle are widely used to monitor in real time molecular interactions between a partner, immobilized on a sensor chip surface and another one injected in a continuous flow of buffer. In a classical SPR experiment, several cycles of binding and regeneration of the surface are performed in order to determine the rate and the equilibrium constants of the reaction. In 2006, Karlsson and co-workers introduced a new method named single-cycle kinetics (SCK) to perform SPR assays. The method consists in injecting sequentially increasing concentrations of the partner in solution, with only one regeneration step performed at the end of the complete binding cycle. A 10 base-pair DNA duplex was characterized kinetically to show how simulated sensorgrams generated by the BiaEvaluation software provided by Biacore™ could really improve the design of SPR assays performed with the SCK method. The DNA duplex was investigated at three temperatures, 10, 20 and 30 °C, to analyze fast and slow rate constants. The results show that after a short obligatory preliminary experiment, simulations provide users with the best experimental conditions to be used, in particular, the maximum concentration used to reach saturation, the dilution factor for the serial dilutions of the sample injected and the duration of the dissociation and association phases. The use of simulated single-cycle kinetics saves time and reduces sample consumption. Simulations can also be used to design SPR experiments with ternary complexes.
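
    The single-cycle kinetics curves that such simulations produce can be generated from a simple 1:1 Langmuir binding model integrated over sequential injections of increasing concentration, followed by a single dissociation phase. The rate constants, concentrations and timings in the sketch below are arbitrary examples rather than the values used with the BiaEvaluation software.

        # Simple 1:1 Langmuir simulation of a single-cycle kinetics sensorgram:
        # five sequential injections of increasing concentration followed by one
        # dissociation phase. Rate constants and timings are arbitrary examples.
        import numpy as np

        def simulate_sck(kon, koff, rmax, concentrations, t_inj, t_diss, dt=0.1):
            response, r = [], 0.0
            for c in list(concentrations) + [0.0]:          # last segment = dissociation
                duration = t_inj if c > 0 else t_diss
                for _ in range(int(duration / dt)):
                    r += dt * (kon * c * (rmax - r) - koff * r)
                    response.append(r)
            return np.array(response)

        curve = simulate_sck(kon=1e5, koff=1e-3, rmax=100.0,
                             concentrations=[2e-9, 6e-9, 18e-9, 54e-9, 162e-9],
                             t_inj=120, t_diss=600)
        print("response after each injection:", curve[::1200][1:6])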

  6. IMPROVE THE ZY-3 HEIGHT ACCURACY USING ICESAT/GLAS LASER ALTIMETER DATA

    OpenAIRE

    LI, GUOYUAN; Tang, Xinming; Gao, Xiaoming; Zhang, Chongyang; Li, Tao

    2016-01-01

    ZY-3 is the first civilian high-resolution stereo mapping satellite, launched on 9 January 2012. The aim of the ZY-3 satellite is to obtain high-resolution stereo images and support 1:50000-scale national surveying and mapping. Although ZY-3 has very high accuracy for direct geo-location without GCPs (Ground Control Points), the use of some GCPs is still indispensable for highly precise stereo mapping. The GLAS (Geo-science Laser Altimetry System) loaded on the ICESat (Ice Cloud and...

  7. Accuracy improvement in the isochronous mass measurement using a cavity doublet

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X., E-mail: x.chen@gsi.de; Sanjari, M. S. [GSI Helmholtzzentrum für Schwerionenforschung (Germany); Piotrowski, J. [AGH University of Science and Technology (Poland); Hülsmann, P.; Litvinov, Yu. A.; Nolden, F.; Steck, M.; Stöhlker, Th. [GSI Helmholtzzentrum für Schwerionenforschung (Germany)

    2015-11-15

    The accuracy of the isochronous mass measurement in a storage ring is subject to the isochronous condition γ = γ_t. It is obvious that this condition cannot be fulfilled for all kinds of nuclides since their velocities certainly differ from each other. However, the non-isochronicity effect can be corrected for by additionally measuring transverse positions of charged particles in the ring. To this end, we outline in this paper the correction method with an arrangement of a cavity doublet, which consists of a position cavity and a reference cavity.

  8. Evaluation of an improved orthognathic articulator system: 1. Accuracy of cast orientation.

    Science.gov (United States)

    Paul, P E; Barbenel, J C; Walker, F S; Khambay, B S; Moos, K F; Ayoub, A F

    2012-02-01

    A systematic study was carried out using plastic model skulls to quantify the accuracy of the transfer of face bow registration to the articulator. A standard Dentatus semi-adjustable articulator system was compared to a purpose built orthognathic articulator system by measuring the maxillary occlusal plane angles of plastic model skulls and of dental casts mounted on the two different types of articulators. There was a statistically significant difference between the two systems; the orthognathic system showed small random errors, but the standard system showed systematic errors of up to 28°.

  9. Use of the Isabel Decision Support System to Improve Diagnostic Accuracy of Pediatric Nurse Practitioner and Family Nurse Practitioner Students

    OpenAIRE

    John, Rita Marie; Hall, Elizabeth; Bakken, Suzanne

    2012-01-01

    Patient safety is a priority for healthcare today. Although a large proportion of malpractice claims result from diagnostic error, diagnostic decision support to improve diagnostic accuracy has not been widely used among healthcare professionals. Moreover, while the use of diagnostic decision support has been studied in attending physicians, residents, medical students and advanced practice nurses, the use of decision support among Advanced Practice Nurse (APN) students has not be...

  10. Validation of a primer optimisation matrix to improve the performance of reverse transcription – quantitative real-time PCR assays

    Directory of Open Access Journals (Sweden)

    Dobrovic Alexander

    2009-06-01

    Full Text Available Abstract Background The development of reverse transcription – quantitative real-time PCR (RT-qPCR) platforms that can simultaneously measure the expression of multiple genes is dependent on robust assays that function under identical thermal cycling conditions. The use of a primer optimisation matrix to improve the performance of RT-qPCR assays is often recommended in technical bulletins and manuals. Despite this recommendation, a comprehensive introduction to and evaluation of this approach has been absent from the literature. Therefore, we investigated the impact of varying the primer concentration, leaving all the other reaction conditions unchanged, on a large number of RT-qPCR assays which in this case were designed to be monitored using hydrolysis probes from the Universal Probe Library (UPL) library. Findings Optimal RT-qPCR conditions were determined for 60 newly designed assays. The calculated Cq (Quantification Cycle) difference, non-specific amplification, and primer dimer formation for a given assay was often dependent on primer concentration. The chosen conditions were further optimised by testing two different probe concentrations. Varying the primer concentrations had a greater effect on the performance of an RT-qPCR assay than varying the probe concentrations. Conclusion Primer optimisation is important for improving the performance of RT-qPCR assays monitored by UPL probes. This approach would also be beneficial to the performance of other RT-qPCR assays such as those using other types of probes or fluorescent intercalating dyes.
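
    Conceptually, a primer optimisation matrix is a grid of forward/reverse primer concentration pairs run under identical cycling conditions, after which the pair giving the lowest Cq without primer-dimer formation is retained. The sketch below encodes that selection rule only; the concentration series and readouts are invented.

        # Sketch of a primer optimisation matrix: test all forward/reverse primer
        # concentration pairs and keep the pair with the lowest Cq among those free
        # of primer dimers. The scoring inputs are placeholders for instrument readouts.
        from itertools import product

        concentrations_nM = [100, 200, 400, 900]

        def best_primer_pair(results):
            """results: dict {(fwd_nM, rev_nM): (Cq, dimer_detected)} from test runs."""
            usable = {pair: cq for pair, (cq, dimer) in results.items() if not dimer}
            return min(usable, key=usable.get) if usable else None

        # Example readout for a 4 x 4 matrix (invented numbers).
        example = {pair: (30 - 0.004 * sum(pair), sum(pair) > 1500)
                   for pair in product(concentrations_nM, repeat=2)}
        print("chosen (forward, reverse) concentrations:", best_primer_pair(example))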

  11. Improvements are needed in reporting of accuracy studies for diagnostic tests used for detection of finfish pathogens.

    Science.gov (United States)

    Gardner, Ian A; Burnley, Timothy; Caraguel, Charles

    2014-12-01

    Indices of test accuracy, such as diagnostic sensitivity and specificity, are important considerations in test selection for a defined purpose (e.g., screening or confirmation) and affect the interpretation of test results. Many biomedical journals recommend that authors clearly and transparently report test accuracy studies following the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines ( www.stard-statement.org ). This allows readers to evaluate overall study validity and assess potential bias in diagnostic sensitivity and specificity estimates. The purpose of the present study was to evaluate the reporting quality of studies evaluating test accuracy for finfish diseases using the 25 items in the STARD checklist. Based on a database search, 11 studies that included estimates of diagnostic accuracy were identified for independent evaluation by three reviewers. For each study, STARD checklist items were scored as "yes," "no," or "not applicable." Only 10 of the 25 items were consistently reported in most (≥80%) papers, and reporting of the other items was highly variable (mostly between 30% and 60%). Three items ("number, training, and expertise of readers and testers"; "time interval between index tests and reference standard"; and "handling of indeterminate results, missing data, and outliers of the index tests") were reported in less than 10% of papers. Two items ("time interval between index tests and reference standard" and "adverse effects from testing") were considered minimally relevant to fish health because test samples usually are collected postmortem. Modification of STARD to fit finfish studies should increase use by authors and thereby improve the overall reporting quality regardless of how the study was designed. Furthermore, the use of STARD may lead to the improved design of future studies.

  12. Improvements are needed in reporting of accuracy studies for diagnostic tests used for detection of finfish pathogens.

    Science.gov (United States)

    Gardner, Ian A; Burnley, Timothy; Caraguel, Charles

    2014-12-01

    Indices of test accuracy, such as diagnostic sensitivity and specificity, are important considerations in test selection for a defined purpose (e.g., screening or confirmation) and affect the interpretation of test results. Many biomedical journals recommend that authors clearly and transparently report test accuracy studies following the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines ( www.stard-statement.org ). This allows readers to evaluate overall study validity and assess potential bias in diagnostic sensitivity and specificity estimates. The purpose of the present study was to evaluate the reporting quality of studies evaluating test accuracy for finfish diseases using the 25 items in the STARD checklist. Based on a database search, 11 studies that included estimates of diagnostic accuracy were identified for independent evaluation by three reviewers. For each study, STARD checklist items were scored as "yes," "no," or "not applicable." Only 10 of the 25 items were consistently reported in most (≥80%) papers, and reporting of the other items was highly variable (mostly between 30% and 60%). Three items ("number, training, and expertise of readers and testers"; "time interval between index tests and reference standard"; and "handling of indeterminate results, missing data, and outliers of the index tests") were reported in less than 10% of papers. Two items ("time interval between index tests and reference standard" and "adverse effects from testing") were considered minimally relevant to fish health because test samples usually are collected postmortem. Modification of STARD to fit finfish studies should increase use by authors and thereby improve the overall reporting quality regardless of how the study was designed. Furthermore, the use of STARD may lead to the improved design of future studies. PMID:25252270

  13. A sun-tracking method to improve the pointing accuracy of weather radar

    Directory of Open Access Journals (Sweden)

    X. Muth

    2011-08-01

    Full Text Available Accurate positioning of data collected by a weather radar is of primary importance for their appropriate georeferencing, which in turn makes it possible to combine those with additional sources of information (topography, land cover maps, meteorological simulations from numerical weather models, to list a few). This issue is especially acute for mobile radar systems, for which accurate and stable levelling might be difficult to ensure. The sun is a source of microwave radiation, which can be detected by weather radars and used for the accurate positioning of the radar data. This paper presents a technique based on the sun echoes to quantify and hence correct for the instrumental errors which can affect the pointing accuracy of radar antenna. The proposed method is applied to data collected in the Swiss Alps using a mobile X-band radar system. The obtained instrumental bias values are evaluated by comparing the locations of the ground echoes predicted using these bias estimates with the observed ground echo locations. The very good agreement between the two confirms the good accuracy of the proposed method.

  14. PCA3 and PCA3-Based Nomograms Improve Diagnostic Accuracy in Patients Undergoing First Prostate Biopsy

    Directory of Open Access Journals (Sweden)

    Virginie Vlaeminck-Guillem

    2013-08-01

    Full Text Available While now recognized as an aid to predict repeat prostate biopsy outcome, the urinary PCA3 (prostate cancer gene 3) test has also been recently advocated to predict initial biopsy results. The objective is to evaluate the performance of the PCA3 test in predicting results of initial prostate biopsies and to determine whether its incorporation into specific nomograms reinforces its diagnostic value. A prospective study included 601 consecutive patients addressed for initial prostate biopsy. The PCA3 test was performed before ≥12-core initial prostate biopsy, along with standard risk factor assessment. Diagnostic performance of the PCA3 test was evaluated. The three available nomograms (Hansen's and Chun's nomograms, as well as the updated Prostate Cancer Prevention Trial risk calculator, PCPT) were applied to the cohort, and their predictive accuracies were assessed in terms of biopsy outcome: the presence of any prostate cancer (PCa) and high-grade prostate cancer (HGPCa). The PCA3 score provided significant predictive accuracy. While the PCPT risk calculator appeared less accurate, both Chun's and Hansen's nomograms provided good calibration and high net benefit on decision curve analyses. When applying nomogram-derived PCa probability thresholds ≤30%, ≤6% of HGPCa would have been missed, while avoiding up to 48% of unnecessary biopsies. The urinary PCA3 test and PCA3-incorporating nomograms can be considered as reliable tools to aid in the initial biopsy decision.

  15. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    Science.gov (United States)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  16. Quantification of terrestrial laser scanner (TLS) elevation accuracy in oil palm plantation for IFSAR improvement

    Science.gov (United States)

    Muhadi, N. A.; Abdullah, A. F.; Kassim, M. S. M.

    2016-06-01

    In order to ensure high oil palm productivity, the plantation site should be chosen wisely. Slope is one of the essential factors that need to be taken into consideration during site selection. A high-quality plantation area map with elevation information is needed for decision-making, especially when dealing with hilly and steep areas. Therefore, accurate digital elevation models (DEMs) are required. This research aims to increase the accuracy of Interferometric Synthetic Aperture Radar (IFSAR) by integrating Terrestrial Laser Scanner (TLS) data to generate DEMs. The focus of this paper, however, is to evaluate the z-value accuracy of TLS data, with Real-Time Kinematic GPS (RTK-GPS) as a reference. In addition, this paper studied the importance of the filtering process in developing accurate DEMs. From this study, it was concluded that the differences in z-values between TLS and IFSAR were small when the points were located on the route and when the TLS data had been filtered. The paper also concludes that the laser scanner (TLS) should be set up on the route to reduce elevation error.

  17. Improvement of registration accuracy of a handheld augmented reality system for urban landscape simulation

    Directory of Open Access Journals (Sweden)

    Tomohiro Fukuda

    2014-12-01

    Full Text Available The need for visual landscape assessment in large-scale projects for the evaluation of the effects of a particular project on the surrounding landscape has grown in recent years. Augmented reality (AR has been considered for use as a landscape simulation system in which a landscape assessment object created by 3D models is included in the present surroundings. With the use of this system, the time and the cost needed to perform a 3DCG modeling of present surroundings, which is a major issue in virtual reality, are drastically reduced. This research presents the development of a 3D map-oriented handheld AR system that achieves geometric consistency using a 3D map to obtain position data instead of GPS, which has low position information accuracy, particularly in urban areas. The new system also features a gyroscope sensor to obtain posture data and a video camera to capture live video of the present surroundings. All these components are mounted in a smartphone and can be used for urban landscape assessment. Registration accuracy is evaluated to simulate an urban landscape from a short- to a long-range scale. The latter involves a distance of approximately 2000 m. The developed AR system enables users to simulate a landscape from multiple and long-distance viewpoints simultaneously and to walk around the viewpoint fields using only a smartphone. This result is the tolerance level of landscape assessment. In conclusion, the proposed method is evaluated as feasible and effective.

  18. Improving Accuracy of Authentication Process via Short Free Text using Bayesian Network

    Directory of Open Access Journals (Sweden)

    Charoon Chantan

    2012-03-01

    Full Text Available Internet security problems are a crucial threat to all users in the cyber world. One important internet security problem concerns user classification and authentication, for which there are multiple approaches: the first uses a username/password, and the second uses an OTP or token. This paper presents a novel method, the Classify User via Short-text and IP Model (CUSIM), which grants or rejects a user during authentication. CUSIM is a Bayesian network model that uses Bayesian inference to authenticate the user. The objective of this paper is to use a model based on conditional independence together with prior knowledge, i.e., keystroke dynamics, the location used to connect to the internet, and the IP address. Finally, a numerical example is provided to illustrate the probability of incorrect authentication, and a machine learning algorithm is used to test the efficiency and to determine the accuracy, FAR, and FRR. The model gave better values of accuracy, FAR, and FRR.
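
    The record does not include the model details, so the following is only a minimal sketch, assuming conditionally independent evidence sources (a keystroke-dynamics match and an IP/location match) combined with Bayes' rule; the likelihood values and acceptance threshold are invented for illustration and are not the authors' CUSIM parameters.

```python
# Illustrative sketch (not the authors' CUSIM model): combine independent
# evidence sources (keystroke dynamics, IP match) with Bayes' rule to decide
# whether a login attempt belongs to the claimed user. All numbers are made up.

def posterior_genuine(prior, likelihood_genuine, likelihood_impostor):
    """P(genuine | evidence) for conditionally independent evidence sources."""
    p_e_genuine = 1.0
    p_e_impostor = 1.0
    for lg, li in zip(likelihood_genuine, likelihood_impostor):
        p_e_genuine *= lg
        p_e_impostor *= li
    num = prior * p_e_genuine
    den = num + (1.0 - prior) * p_e_impostor
    return num / den

# Evidence 1: keystroke-timing profile matched the claimed user's profile.
# Evidence 2: the login came from a previously seen IP subnet.
post = posterior_genuine(
    prior=0.5,
    likelihood_genuine=[0.80, 0.90],   # P(evidence | genuine), assumed
    likelihood_impostor=[0.20, 0.30],  # P(evidence | impostor), assumed
)
print(f"P(genuine | evidence) = {post:.3f}")
print("accept" if post > 0.95 else "reject")   # threshold is also an assumption
```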

  19. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    Science.gov (United States)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.

  20. Development of an Automated Bone Mineral Density Software Application: Facilitation Radiologic Reporting and Improvement of Accuracy.

    Science.gov (United States)

    Tsai, I-Ta; Tsai, Meng-Yuan; Wu, Ming-Ting; Chen, Clement Kuen-Huang

    2016-06-01

    The conventional method of bone mineral density (BMD) report production by dictation and transcription is time consuming and prone to error. We developed an automated BMD reporting system based on the raw data from a dual energy X-ray absorptiometry (DXA) scanner to facilitate report generation. The automated BMD reporting system, a web application, digests the DXA raw data and automatically generates preliminary reports. In Jan. 2014, 500 examinations were randomized into an automatic group (AG) and a manual group (MG), and the speed of report generation was compared. For evaluation of the accuracy and analysis of errors, 5120 examinations performed between Jan. 2013 and Dec. 2013 were enrolled retrospectively, and the content of the automatically generated reports (AR) was compared with the formal manual reports (MR). The average time spent on report generation in AG and in MG was 264 and 1452 s, respectively, a statistically significant difference; the accuracy of the Z scores in AR was 100 %. The overall accuracy of AR and MR was 98.8 and 93.7 %, respectively (p < 0.001). The mis-categorization rate in AR and MR was 0.039 and 0.273 %, respectively (p = 0.0013). Errors in AR can be grouped into key-in errors by technicians and cases requiring additional judgement. We constructed an efficient and reliable automated BMD reporting system. It facilitates current clinical service and potentially prevents human errors by technicians, transcriptionists, and radiologists.
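
    The mis-categorization errors discussed above concern the mapping of BMD scores to diagnostic categories. As a point of reference, a minimal sketch of the widely used WHO T-score thresholds is shown below; this is not the logic of the authors' software, only the standard categorization such a report generator would have to reproduce.

```python
# Minimal sketch of the standard WHO T-score categorization used in BMD
# reporting; not the authors' software logic.

def categorize_t_score(t_score: float) -> str:
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:          # between -1.0 and -2.5 (exclusive)
        return "osteopenia (low bone mass)"
    return "osteoporosis"

for t in (0.3, -1.7, -2.5, -3.1):
    print(f"T-score {t:+.1f}: {categorize_t_score(t)}")
```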

  1. [Improvement of sensitivity in the second generation HCV core antigen assay by a novel concentration method using polyethylene glycol (PEG)].

    Science.gov (United States)

    Higashimoto, Makiko; Takahashi, Masahiko; Jokyu, Ritsuko; Syundou, Hiromi; Saito, Hidetsugu

    2007-11-01

    A HCV core antigen (Ag) detection assay system, Lumipulse Ortho HCV Ag, has been developed and is commercially available in Japan, with a lower detection limit of 50 fmol/l, which is equivalent to 20 KIU/ml in the quantitative PCR assay. The HCV core Ag assay has the advantage of a broader dynamic range compared with the PCR assay; however, its sensitivity is lower. We developed a novel HCV core Ag concentration method using polyethylene glycol (PEG), which improves the sensitivity about fivefold over the original assay. Reproducibility was examined by five consecutive measurements of HCV patient serum, in which the results of the original and concentrated HCV core Ag methods were 56.8 +/- 8.1 fmol/l (mean +/- SD), CV 14.2%, and 322.9 +/- 45.5 fmol/l, CV 14.0%, respectively. The results of HCV-negative samples were all 0.1 fmol/l in the original HCV core Ag assay, and the results were the same with the concentration method. The results of the concentration method were 5.7 times higher than the original assay, almost equal to the theoretically expected ratio. The assay results of serially diluted samples also matched the expected values in both the original and the concentration assay. We confirmed that the concentration method had almost the same sensitivity as the PCR high-range assay in a comparative study using serially monitored samples from five HCV patients during interferon therapy. This novel concentration method using PEG in the HCV core Ag assay system seems useful for assessing and monitoring interferon treatment for HCV.
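
    A quick arithmetic check of the figures quoted above (concentration factor and coefficients of variation), using only the numbers given in the abstract:

```python
# Quick check of the reported concentration factor and CVs, using only the
# mean and SD values quoted in the abstract above.

original     = (56.8, 8.1)     # mean, SD (fmol/L), original assay
concentrated = (322.9, 45.5)   # mean, SD (fmol/L), PEG-concentrated assay

factor = concentrated[0] / original[0]
cv_orig = 100 * original[1] / original[0]
cv_conc = 100 * concentrated[1] / concentrated[0]
print(f"concentration factor ~ {factor:.1f}x")                 # ~5.7x, as reported
print(f"CV original {cv_orig:.1f}%, concentrated {cv_conc:.1f}%")
```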

  2. Improved Accuracy of Density Functional Theory Calculations for CO2 Reduction and Metal-Air Batteries

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    ... i.e. the electrocatalytic reduction of CO2 and metal-air batteries. In theoretical studies of electrocatalytic CO2 reduction, calculated DFT-level enthalpies of reaction for CO2 reduction to various products are significantly different from experimental values [1-3]. In theoretical studies of metal-air battery reactions ... errors in DFT-level computational electrocatalytic CO2 reduction are hence identified. The new insight adds increased accuracy, e.g., for the reaction to formic acid, where the experimental enthalpy of reaction is 0.15 eV. Previously, this enthalpy has been calculated without and with correctional approaches ... [truncated reference list, including: ..., Nano Lett., 14, 1016 (2014); [6] J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B, 85, 235149 (2012)] [Figure 1 caption: Calculated enthalpies of reaction from CO2 to CH3OH (x axis) and HCOOH (y axis). Functional variations ...]

  3. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    Science.gov (United States)

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  4. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker

    Science.gov (United States)

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse value of root sum square of Cramér-Rao bound and focal length drift errors of the tracking star as weight, to enhance the pointing accuracy of a star tracker. In this technique, the stars that are brighter, or at low angular rate, or located towards the center of star field will be given a higher weight in the attitude determination process, and thus, the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, which is confirmed particularly for the sky zones with non-uniform distribution of stars. Moreover, by using the iteratively weighted center of gravity algorithm as the newly centroiding method for the QUEST algorithm, the current attitude uncertainty can be further reduced to 44% with a negligible additional computing load.
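
    The abstract describes weighting each tracked star by the inverse of its error bound inside the QUEST estimator. The sketch below is not the authors' QUEST or centroiding code; it is a generic weighted attitude solution (Davenport's q-method, which QUEST approximates), shown only to illustrate where per-star weights enter, with toy star vectors and placeholder weights.

```python
import numpy as np

# Generic weighted attitude determination (Davenport's q-method). This is not
# the authors' QUEST/centroiding implementation; the star directions and the
# weights below are toy placeholders.

def q_method(body_vecs, ref_vecs, weights):
    """Return the unit quaternion (x, y, z, w) minimizing Wahba's loss."""
    B = sum(w * np.outer(b, r) for b, r, w in zip(body_vecs, ref_vecs, weights))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)
    q = eigvecs[:, np.argmax(eigvals)]     # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Unit star directions in the body and reference frames (toy data). Brighter,
# slower-moving, or more central stars would receive larger weights, in the
# spirit of the inverse Cramer-Rao-bound weighting described above.
refs = [unit([1, 0, 0]), unit([0, 1, 0]), unit([0, 0, 1])]
body = [unit([0.99, 0.10, 0.0]), unit([-0.10, 0.99, 0.0]), unit([0.0, 0.0, 1.0])]
weights = [1.0, 0.6, 0.3]
print(q_method(body, refs, weights))
```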

  6. Shape Optimization of the Turbomachine Channel by a Gradient Method -Accuracy Improvement

    Institute of Scientific and Technical Information of China (English)

    Marek Rabiega

    2003-01-01

    An algorithm for gradient-based channel shape optimization has been built on the basis of the 3D equations of mass, momentum, and energy conservation in the fluid flow. The gradient of the functional posed for minimization has been calculated by two methods: via sensitivities and, for comparison, by finite difference approximation. The equations for the sensitivities have been generated through a differentiate-then-discretize approach. An exemplary optimization of the blade shape of a centrifugal compressor wheel has been carried out for inviscid gas flow governed by the Euler equations, with a non-uniform mass flow distribution as the inlet boundary condition. Mixing losses downstream of the centrifugal wheel outlet have been minimized in this exemplary optimization. The results of the optimization problem accomplished by the two above-mentioned methods are presented. In cases where sparse grids were used, the method with the gradient approximated by finite differences was found to be more consistent. The discretization accuracy turned out to be crucial for the consistency of the gradient method via sensitivities.
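
    For reference, the finite-difference baseline mentioned above can be illustrated with a generic central-difference gradient; the objective function here is a stand-in analytic expression, not the mixing-loss functional evaluated by the flow solver.

```python
import numpy as np

# Generic central-difference gradient of an objective J with respect to design
# parameters. J here is a toy analytic function standing in for the mixing-loss
# functional, which in the paper would require a flow solve per evaluation.

def finite_difference_gradient(J, x, h=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (J(x + e) - J(x - e)) / (2.0 * h)   # central difference
    return g

J = lambda x: np.sum(x**2) + np.prod(np.sin(x))     # toy objective
x0 = np.array([0.3, -0.7, 1.2])
print(finite_difference_gradient(J, x0))
```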

  7. Improving Inverse Dynamics Accuracy in a Planar Walking Model Based on Stable Reference Point

    Directory of Open Access Journals (Sweden)

    Alaa Abdulrahman

    2014-01-01

    Full Text Available Physiologically and biomechanically, the human body represents a complicated system with an abundance of degrees of freedom (DOF). When developing mathematical representations of the body, a researcher has to decide how many of those DOF to include in the model. Though accuracy can be enhanced at the cost of complexity by including more DOF, their necessity must be rigorously examined. In this study a planar seven-segment human body walking model with single-DOF joints was developed. A reference point was added to the model to track the body's global position while moving. Due to the kinematic instability of the pelvis, the top of the head was selected as the reference point, which also approximates the position of the vestibular sensors. Inverse dynamics methods were used to formulate and solve the equations of motion based on Newton-Euler formulae. The torques and ground reaction forces generated by the planar model during a regular gait cycle were compared with similar results from a more complex three-dimensional OpenSim model with muscles, which resulted in correlation coefficients in the range of 0.9–0.98. The close comparison between the two torque outputs supports the use of planar models in gait studies.

  8. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics

    Science.gov (United States)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    General-purpose, centrally space-differenced implicit finite difference codes in two and three dimensions were introduced in 1977 and 1978. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since the introduction of the ARC2D and ARC3D codes, overall computational efficiency has been improved through a number of algorithmic changes. These changes relate to the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, implementation of various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The present investigation describes these improvements and quantifies their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained which is competitive with specialized codes.

  9. Electron Microprobe Analysis of Hf in Zircon: Suggestions for Improved Accuracy of a Difficult Measurement

    Science.gov (United States)

    Fournelle, J.; Hanchar, J. M.

    2013-12-01

    It is not commonly recognized as such, but the accurate measurement of Hf in zircon is not a trivial analytical issue. This is important to assess because Hf is often used as an internal standard for trace element analyses of zircon by LA-ICPMS. The issues pertaining to accuracy revolve around: (1) whether the Hf Ma or the La line is used; (2) what accelerating voltage is applied if Zr La is also measured; and (3) what standard for Hf is used. Weidenbach et al.'s (2004) study of the 91500 zircon demonstrated the spread (in accuracy) of possible EPMA values for six EPMA labs, 2 of which used Hf Ma, 3 used Hf La, and one used Hf Lb, and the standards included HfO2, a ZrO2-HfO2 compound, Hf metal, and hafnon. Weidenbach et al. used the ID-TIMS value (0.695 wt.% Hf) as the correct value, and not one of the EPMA labs came close to it (3 were low and 3 were high). Those data suggest: (1) that there is a systematic underestimation of the 0.695 wt% Hf (ID-TIMS) value if Hf Ma is used, most likely an issue with the matrix correction, as the analytical lines and absorption edges of Zr La, Si Ka and Hf Ma are rather tightly packed in the electromagnetic spectrum. Mass absorption coefficients are easily in error (e.g., Donovan's determination of the MAC of Hf by Si Ka of 5061 differs from the typically used Henke value of 5449 (Donovan et al., 2002)); and (2) for utilization of the Hf La line, the second-order Zr Ka line interferes with Hf La if the accelerating voltage is greater than 17.99 keV. If this higher keV is used and differential-mode PHA is applied, only a portion of the interference is removed (e.g., removal of escape peaks), causing an overestimation of the Hf content. Unfortunately, it is virtually impossible to apply an interference correction in this case, as it is impossible to locate an Hf-free Zr probe standard. We have examined many of the combinations used by those six EPMA labs and concluded that the optimal EPMA is done with Hf

  10. Improved PCR-Based Detection of Soil Transmitted Helminth Infections Using a Next-Generation Sequencing Approach to Assay Design

    Science.gov (United States)

    Pilotte, Nils; Papaiakovou, Marina; Grant, Jessica R.; Bierwert, Lou Ann; Llewellyn, Stacey; McCarthy, James S.; Williams, Steven A.

    2016-01-01

    Background The soil transmitted helminths are a group of parasitic worms responsible for extensive morbidity in many of the world’s most economically depressed locations. With growing emphasis on disease mapping and eradication, the availability of accurate and cost-effective diagnostic measures is of paramount importance to global control and elimination efforts. While real-time PCR-based molecular detection assays have shown great promise, to date, these assays have utilized sub-optimal targets. By performing next-generation sequencing-based repeat analyses, we have identified high copy-number, non-coding DNA sequences from a series of soil transmitted pathogens. We have used these repetitive DNA elements as targets in the development of novel, multi-parallel, PCR-based diagnostic assays. Methodology/Principal Findings Utilizing next-generation sequencing and the Galaxy-based RepeatExplorer web server, we performed repeat DNA analysis on five species of soil transmitted helminths (Necator americanus, Ancylostoma duodenale, Trichuris trichiura, Ascaris lumbricoides, and Strongyloides stercoralis). Employing high copy-number, non-coding repeat DNA sequences as targets, novel real-time PCR assays were designed, and assays were tested against established molecular detection methods. Each assay provided consistent detection of genomic DNA at quantities of 2 fg or less, demonstrated species-specificity, and showed an improved limit of detection over the existing, proven PCR-based assay. Conclusions/Significance The utilization of next-generation sequencing-based repeat DNA analysis methodologies for the identification of molecular diagnostic targets has the ability to improve assay species-specificity and limits of detection. By exploiting such high copy-number repeat sequences, the assays described here will facilitate soil transmitted helminth diagnostic efforts. We recommend similar analyses when designing PCR-based diagnostic tests for the detection of other

  11. An Approach to Improving the Retrieval Accuracy of Oceanic Constituents in Case Ⅱ Waters

    Institute of Scientific and Technical Information of China (English)

    ZHANG Tinglu; Frank Fell

    2004-01-01

    In the present paper, a method is proposed to improve the performance of Artificial Neural Network (ANN)-based algorithms for the retrieval of oceanic constituents in Case II waters. The ANN-based algorithms have been developed based on a constraint condition, which represents, to a certain degree, the correlation between suspended particulate matter (SPM) and pigment (CHL), coloured dissolved organic matter (CDOM) and CHL, as well as CDOM and SPM, found in Case II waters. Compared with the ANN-based algorithm developed without a constraint condition, the performance of the ANN-based algorithms developed with a constraint condition is much better for the retrieval of CHL and CDOM, especially at high noise levels; however, there is no significant improvement for the retrieval of SPM.

  12. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    Science.gov (United States)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows a more monotonous convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.

  13. Collaboration between radiological technologists (radiographers) and junior doctors during image interpretation improves the accuracy of diagnostic decisions

    International Nuclear Information System (INIS)

    Rationale and Objectives: In Emergency Departments (ED) junior doctors regularly make diagnostic decisions based on radiographic images. This study investigates whether collaboration between junior doctors and radiographers impacts on diagnostic accuracy. Materials and Methods: Research was carried out in the ED of a university teaching hospital and included 10 pairs of participants. Radiographers and junior doctors were shown 42 wrist radiographs and 40 CT Brains and were asked for their level of confidence of the presence or absence of distal radius fractures or fresh intracranial bleeds respectively using ViewDEX software, first working alone and then in pairs. Receiver Operating Characteristic was used to analyze performance. Results were compared using one-way analysis of variance. Results: The results showed statistically significant improvements in the Area Under the Curve (AUC) of the junior doctors when working with the radiographers for both sets of images (wrist and CT) treated as random readers and cases (p ≤ 0.008 and p ≤ 0.0026 respectively). While the radiographers’ results saw no significant changes, their mean Az values did show an increasing trend when working in collaboration. Conclusion: Improvement in performance of junior doctors following collaboration strongly suggests changes in the potential to improve accuracy of patient diagnosis and therefore patient care. Further training for junior doctors in the interpretation of diagnostic images should also be considered. Decision making of junior doctors was positively impacted on after introducing the opinion of a radiographer. Collaboration exceeds the sum of the parts; the two professions are better together.

  14. Integrating machine learning and physician knowledge to improve the accuracy of breast biopsy.

    Science.gov (United States)

    Dutra, I; Nassif, H; Page, D; Shavlik, J; Strigel, R M; Wu, Y; Elezaby, M E; Burnside, E

    2011-01-01

    In this work we show that combining physician rules and machine learned rules may improve the performance of a classifier that predicts whether a breast cancer is missed on percutaneous, image-guided breast core needle biopsy (subsequently referred to as "breast core biopsy"). Specifically, we show how advice in the form of logical rules, derived by a sub-specialty, i.e. fellowship trained breast radiologists (subsequently referred to as "our physicians") can guide the search in an inductive logic programming system, and improve the performance of a learned classifier. Our dataset of 890 consecutive benign breast core biopsy results along with corresponding mammographic findings contains 94 cases that were deemed non-definitive by a multidisciplinary panel of physicians, from which 15 were upgraded to malignant disease at surgery. Our goal is to predict upgrade prospectively and avoid surgery in women who do not have breast cancer. Our results, some of which trended toward significance, show evidence that inductive logic programming may produce better results for this task than traditional propositional algorithms with default parameters. Moreover, we show that adding knowledge from our physicians into the learning process may improve the performance of the learned classifier trained only on data. PMID:22195087

  15. Improving Accuracy and Coverage of Data Mining Systems that are Built from Noisy Datasets: A New Model

    Directory of Open Access Journals (Sweden)

    Luai A. Al Shalabi

    2009-01-01

    Full Text Available Problem statement: Noise within datasets has to be dealt with under most circumstances. This noise includes misclassified data as well as missing data or information; simple human error is a common source of misclassification. Such errors decrease the accuracy of a data mining system, making it less likely to be used. The objective was to propose an effective algorithm to deal with noise, represented here by missing data in datasets. Approach: A model for improving the accuracy and coverage of data mining systems was proposed, and the algorithm of this model was constructed. The algorithm deals with missing values in datasets by splitting the original dataset into two new datasets: one containing tuples with no missing values and one containing tuples with missing values. The proposed algorithm is applied to each of the two new datasets, finds the reduct of each, and then merges the new reducts into one new dataset ready for training. Results: The results were promising, as the approach increased the accuracy and coverage of the tested dataset compared to traditional models. Conclusion: The proposed algorithm performs effectively and generates better results than previous ones.
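
    The first step of the described algorithm, splitting the original dataset into complete and incomplete tuples, can be sketched as follows with pandas; the subsequent reduct computation and merging are not shown, and the toy table is invented.

```python
import numpy as np
import pandas as pd

# First step of the described algorithm: split the dataset into tuples with no
# missing values and tuples containing missing values. The reduct computation
# and the final merge are not shown; the table below is a toy example.

df = pd.DataFrame({
    "age":    [25, 40, np.nan, 31, 58],
    "income": [30_000, np.nan, 42_000, 51_000, 60_000],
    "class":  ["low", "mid", "mid", "high", "high"],
})

complete   = df.dropna()                 # tuples with no missing values
incomplete = df[df.isna().any(axis=1)]   # tuples with at least one missing value

print(len(complete), "complete rows;", len(incomplete), "rows with missing values")
```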

  16. Study on the improved accuracy of strip profile using numerical formula model in continuous cold rolling with 6-high mill

    International Nuclear Information System (INIS)

    The quality requirements for thickness accuracy in cold rolling continue to become more stringent. In a cold rolling mill, it is very important that the rolling force calculation considers the rolling conditions. The rolled strip thickness is predicted using the calculated rolling force. However, predicting the strip thickness in cold rolling is very difficult; in particular, for a 6-high mill with a shifted intermediate roll (IMR), the thickness accuracy is poor. In this study, to improve the accuracy of the rolled strip thickness, the roll gap flattening is described on the basis of Hertz contact theory, treating the elastic deformation at the contact between rolls as that of smooth cylinders. The distribution of the roll gap flattening is then calculated using the contact force per unit transverse length. The strip profile in continuous cold rolling is calculated using a numerical analysis model that considers the initial strip profile before cold rolling. We therefore propose that the numerical model can predict the rolled strip profile quickly and accurately and is applicable in the field. The results of the proposed numerical model were verified by FE simulation and cold rolling experiments on a 6-high mill with five stands.
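
    The roll-flattening idea can be illustrated with a textbook Hertz line-contact calculation for two parallel elastic cylinders; this is only a sketch of the underlying contact theory, and the load and roll dimensions below are assumed values, not parameters from the study.

```python
import math

# Textbook Hertz line-contact estimate for two parallel elastic cylinders
# (e.g. work roll against intermediate roll). The load and geometry are
# illustrative placeholders, not parameters from the paper.

def hertz_line_contact(w, R1, R2, E1, nu1, E2, nu2):
    """w: load per unit length [N/m]; returns (contact half-width b [m], peak pressure p0 [Pa])."""
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                       # effective radius
    E_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)  # effective modulus
    b = math.sqrt(4.0 * w * R_eff / (math.pi * E_star))
    p0 = 2.0 * w / (math.pi * b)
    return b, p0

b, p0 = hertz_line_contact(
    w=5.0e6,            # 5 MN per metre of roll barrel (assumed)
    R1=0.25, R2=0.35,   # roll radii [m] (assumed)
    E1=210e9, nu1=0.3,  # steel rolls
    E2=210e9, nu2=0.3,
)
print(f"contact half-width b = {b*1e3:.2f} mm, peak pressure p0 = {p0/1e9:.2f} GPa")
```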

  17. New possibilities for improving the accuracy of parameter calculations for cascade gamma-ray decay of heavy nuclei

    CERN Document Server

    Sukhovoj, A M; Khitrov, V A

    2001-01-01

    The level density and radiative strength functions which accurately reproduce the experimental intensity of two- step cascades after thermal neutron capture and the total radiative widths of the compound states were applied to calculate the total gamma-ray spectra from the (n,gamma) reaction. In some cases, analysis showed far better agreement with experiment and gave insight into possible ways in which these parameters need to be corrected for further improvement of calculation accuracy for the cascade gamma-decay of heavy nuclei.

  18. New possibilities for improving the accuracy of parameter calculations for cascade gamma-ray decay of heavy nuclei

    International Nuclear Information System (INIS)

    The level density and radiative strength functions which accurately reproduce the experimental intensity of two- step cascades after thermal neutron capture and the total radiative widths of the compound states were applied to calculate the total γ-ray spectra from the (n,γ) reaction. In some cases, analysis showed far better agreement with experiment and gave insight into possible ways in which these parameters need to be corrected for further improvement of calculation accuracy for the cascade γ-decay of heavy nuclei. (author)

  19. Beam steering and coordinate system rotation improves accuracy of ultrasonic measurements of tissue displacement vector and lateral displacement

    Directory of Open Access Journals (Sweden)

    Sumi C

    2011-11-01

    Full Text Available Chikayoshi Sumi, Kento Ichimaru, Yusuke Shinozuka (Department of Information and Communication Science and Department of Electrical and Electronics Engineering, Faculty of Science and Technology, Sophia University, Kioi-cho, Chiyoda-ku, Tokyo, Japan). Abstract: With the proper use of beam steering and apodization, a higher-resolution lateral echo image is obtained than with conventional imaging. This is achieved by superimposing crossed, steered beams, which is referred to as "lateral modulation" (LM). This type of beamforming achieves almost the same accuracy in lateral displacement measurements as in axial displacement measurements, i.e., displacement vector measurements. A method using a single steering angle (ASTA), i.e., only beams steered at one angle, can also be used instead of LM. In this report, the accuracy of displacement vector and lateral displacement measurements for LM and ASTA was evaluated using simulations and agar phantom experiments. The parameters examined were the direction of the displacement vector, the steering angles, and the rotation angle of the coordinate system. Changes in the steering angle and in the rotation angle of the coordinate system permit control of the frequencies along the respective coordinate axes. As shown, when performing displacement vector measurement with a simple ASTA, a spectral frequency division should be performed using a previously developed multidimensional autocorrelation or Doppler method instead of block-matching methods. In this version of ASTA, the combination of non-steering and rotation of the coordinate system is also effective, because the lateral bandwidth does not decrease. In such a case, transmission of a laterally wide wave will also be effective, particularly for three-dimensional measurement/imaging using a two-dimensional array transducer. ASTA can also be used for accurate lateral displacement measurements. Although a proper beam steering and/or a proper coordinate rotation improves the

  20. IMPROVING THE GRAMMATICAL ACCURACY OF THE SPOKEN ENGLISH OF INDONESIAN INTERNATIONAL KINDERGARTEN STUDENTS

    Directory of Open Access Journals (Sweden)

    IMELDA GOZALI

    2014-07-01

    Full Text Available The need to improve the spoken English of kindergarten students in an international preschool in Surabaya prompted this Classroom Action Research (CAR. It involved the implementation of Form-Focused Instruction (FFI strategy coupled with Corrective Feedback (CF in Grammar lessons. Four grammar topics were selected, namely Regular Plural form, Subject Pronoun, Auxiliary Verbs Do/Does, and Irregular Past Tense Verbs as they were deemed to be the morpho-syntax which children acquire early in life based on the order of acquisition in Second Language Acquisition. The results showed that FFI and CF contributed to the improvement of the spoken grammar in varying degrees, depending on the academic performance, personality, and specific linguistic traits of the students. Students with high academic achievement could generally apply the grammar points taught after the FFI lessons in their daily speech. Students who were rather talkative were sensitive to the CF and could provide self-repair when prompted. Those with lower academic performance generally did not benefit much from the FFI lessons nor the CF.

  1. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    Science.gov (United States)

    Starn, J. J.

    2013-12-01

    -flow finite-difference transport simulations (MT3DMS). Results show more accurate simulation of pumping-well BTCs for a given grid cell size when using analytical solutions. The code base is extended to transient flow, and BTCs are compared to results from MT3DMS simulations. Results show the particle-based solutions can resolve transient behavior using coarser model grids with far less computational effort than MT3DMS. The effect of simulation accuracy on parameter estimates (porosity) is also investigated. Porosity estimates obtained using the more accurate analytical solutions are less biased than those from synthetic finite-difference transport simulations, which tend to be biased by the coarseness of the grid. Eliminating the bias by using a finer grid comes at the expense of much larger computational effort. Finally, the code base was applied to an actual groundwater-flow model of Salt Lake Valley, Utah. Particle simulations using the Python code base compare well with finite-difference simulations, but with less computational effort, and have the added advantage of delineating flow paths, thus explicitly connecting solute source areas with receptors, and producing complete particle-age distributions. Knowledge of source areas and age distribution greatly enhances the analysis of dissolved-solids data in Salt Lake Valley.
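
    As an example of the kind of embedded analytical solution described, the sketch below computes the exact travel time of a particle in steady radial flow toward a fully penetrating pumping well; it is not the author's code base, and the aquifer parameters are assumed.

```python
import math

# Minimal example of an embedded analytical solution: exact travel time of a
# particle in steady radial flow toward a fully penetrating pumping well.
# Seepage velocity toward the well: v(r) = Q / (2 * pi * r * b * n).
# Integrating dr/v from r_start to r_well gives
#   t = pi * b * n * (r_start**2 - r_well**2) / Q.
# Not the author's code; all parameter values are assumed.

def travel_time_to_well(r_start, r_well, Q, b, n):
    """Time [d] for a particle to travel from radius r_start to r_well [m]."""
    return math.pi * b * n * (r_start**2 - r_well**2) / Q

Q = 500.0      # pumping rate [m^3/d] (assumed)
b = 20.0       # aquifer thickness [m] (assumed)
n = 0.25       # effective porosity (assumed)
print(f"{travel_time_to_well(100.0, 0.1, Q, b, n):.1f} days from r = 100 m")
```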

  2. Improving ECG classification accuracy using an ensemble of neural network modules.

    Directory of Open Access Journals (Sweden)

    Mehrdad Javadi

    Full Text Available This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
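
    The "Modified Stacked Generalization" idea of feeding the original input pattern to the combiner alongside the base classifiers' outputs corresponds to what scikit-learn exposes as passthrough=True in StackingClassifier. The sketch below uses synthetic data and generic base learners, not the authors' ECG features or neural network modules.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Sketch of "add the input pattern to the combiner's inputs" via
# StackingClassifier(passthrough=True). Synthetic data stands in for ECG beat
# features; this is not the authors' pipeline or architecture.

X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0))]

for passthrough in (False, True):   # False = conventional stacking, True = "modified"
    clf = StackingClassifier(estimators=base,
                             final_estimator=LogisticRegression(max_iter=1000),
                             passthrough=passthrough)
    clf.fit(X_tr, y_tr)
    print(f"passthrough={passthrough}: accuracy = {clf.score(X_te, y_te):.3f}")
```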

  3. Characterization of Smart Phone Received Signal Strength Indication for WLAN Indoor Positioning Accuracy Improvement

    Directory of Open Access Journals (Sweden)

    Jiayou Luo

    2014-03-01

    Full Text Available Indoor positioning applications based on wireless local area network location fingerprinting are expected to be used mainly on mobile devices. This paper therefore investigates the differences in received signal strength indication (RSSI) between different smart phones and analyzes the distributions of RSSI. Statistical analysis of the experimental results shows that the differences in RSSI between different smart phones are not trivial. Nearly 65% of the RSSI histograms are significantly peaked relative to a Gaussian distribution, and 65% of them are left-skewed. Therefore, taking the skewness and kurtosis coefficients into account, a Gaussian distribution is not sufficient to ensure accurate modeling of the RSSI. The impacts of human behavior on the RSSI distribution are explored, and two types of human behavior are revealed to be the cause of bi-modal distributions. This statistical data analysis could enable designers of smart phone indoor positioning systems to improve positioning performance and to model location fingerprinting based indoor positioning systems.
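
    A minimal sketch of the kind of distributional check described (skewness, kurtosis, and a normality test on RSSI samples at one fingerprinting point); the samples are simulated, not measurements from the paper.

```python
import numpy as np
from scipy import stats

# Distribution check on RSSI samples collected at one fingerprinting point:
# skewness, excess kurtosis and a normality test. The samples are simulated
# (left-skewed, peaked), not measured data from the paper.

rng = np.random.default_rng(0)
rssi = -60 - rng.gamma(shape=2.0, scale=3.0, size=500)   # simulated RSSI [dBm]

print(f"mean     = {rssi.mean():.1f} dBm")
print(f"skewness = {stats.skew(rssi):.2f}")       # negative -> left-skewed
print(f"kurtosis = {stats.kurtosis(rssi):.2f}")   # > 0 -> more peaked than Gaussian
stat, p = stats.normaltest(rssi)                  # D'Agostino-Pearson test
print(f"normality test p-value = {p:.3g} (small p -> reject the Gaussian model)")
```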

  4. Improving the accuracy: volatility modeling and forecasting using high-frequency data and the variational component

    Directory of Open Access Journals (Sweden)

    Manish Kumar

    2010-06-01

    Full Text Available In this study, we predict the daily volatility of the S&P CNX NIFTY market index of India using the basic heterogeneous autoregressive (HAR) model and its variants. In doing so, we estimated several HAR and log-form HAR models using different regressors. The different regressors were obtained by extracting the jump and continuous components and the threshold jump and continuous components from the realized volatility. We also investigated whether dividing volatility into simple and threshold jumps and continuous variation yields a substantial improvement in volatility forecasting. The results provide evidence that the inclusion of realized bipower variation in the HAR models helps in predicting future volatility.
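
    For reference, the basic HAR-RV regression (next-day realized volatility on daily, weekly, and monthly averages) can be sketched as below; the series is simulated, and the paper's jump, continuous, and threshold components are not reproduced here.

```python
import numpy as np

# Basic HAR-RV regression: tomorrow's realized volatility on its daily value
# and its weekly (5-day) and monthly (22-day) trailing averages. The RV series
# is simulated; the jump/threshold regressors used in the paper are omitted.

rng = np.random.default_rng(1)
rv = np.exp(rng.normal(-9.5, 0.5, size=1500))        # simulated daily RV

def trailing_mean(x, window):
    return np.convolve(x, np.ones(window) / window, mode="valid")

lag = 22
rv_d = rv[lag - 1:-1]                                # daily component
rv_w = trailing_mean(rv, 5)[lag - 5:-1]              # weekly average
rv_m = trailing_mean(rv, 22)[:-1]                    # monthly average
y = rv[lag:]                                         # next-day RV

X = np.column_stack([np.ones_like(rv_d), rv_d, rv_w, rv_m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("beta0, beta_d, beta_w, beta_m =", np.round(beta, 4))
```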

  5. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2peak)

    Science.gov (United States)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities that optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
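
    One common submaximal estimation approach, shown only for illustration and not necessarily the protocol used in flight, is to fit the linear heart-rate/VO2 relationship across submaximal stages and extrapolate to an age-predicted maximal heart rate; the stage data and the 220 - age formula below are assumptions of this sketch.

```python
import numpy as np

# Illustrative submaximal VO2peak estimate: fit the linear HR-VO2 relationship
# from submaximal stages, then extrapolate to an age-predicted HRmax. Stage
# data and the HRmax formula (220 - age) are assumptions of this sketch only.

hr  = np.array([105.0, 125.0, 145.0])   # submaximal stage heart rates [bpm]
vo2 = np.array([1.4, 2.0, 2.6])         # measured VO2 at those stages [L/min]

slope, intercept = np.polyfit(hr, vo2, deg=1)
hr_max = 220.0 - 40.0                   # age-predicted HRmax for a 40-year-old
vo2_peak_est = slope * hr_max + intercept
print(f"estimated VO2peak ~ {vo2_peak_est:.2f} L/min")
```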

  7. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Karaiskos, Pantelis, E-mail: pkaraisk@med.uoa.gr [Medical Physics Laboratory, Medical School, University of Athens (Greece); Gamma Knife Department, Hygeia Hospital, Athens (Greece); Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos [Medical Physics Laboratory, Medical School, University of Athens (Greece); Roussakis, Arkadios [CT and MRI Department, Hygeia Hospital, Athens (Greece); Torrens, Michael [Gamma Knife Department, Hygeia Hospital, Athens (Greece); Seimenis, Ioannis [Medical Physics Laboratory, Medical School, Democritus University of Thrace, Alexandroupolis (Greece)

    2014-12-01

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.

  8. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    International Nuclear Information System (INIS)

    The bone suppression technique, based on advanced image processing, can suppress the conspicuity of bones on chest radiographs, creating images similar to the soft tissue images obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules, and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential measure for respiratory displacement of the target. (note)
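
    A minimal sketch of template-matching target tracking of the kind described, using OpenCV; the file names and region of interest are placeholders, not data from the study.

```python
import cv2
import numpy as np

# Sketch of template-matching target tracking on a sequence of fluoroscopic
# (or bone-suppressed) frames using OpenCV. The file names and the manually
# selected template are placeholders, not data from the study.

template = cv2.imread("nodule_template.png", cv2.IMREAD_GRAYSCALE)  # manually selected ROI
positions = []

for i in range(100):                                   # loop over frames
    frame = cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE)
    if frame is None or template is None:
        break
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)     # best-match location
    positions.append(max_loc)                          # (x, y) of the match's top-left corner

if positions:
    positions = np.array(positions)
    print("mean tracked position:", positions.mean(axis=0))
```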

  9. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass, where Δm is the difference between the theoretical mass of the calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from the measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. The method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change the initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
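
    The described correction maps directly onto a quadratic polynomial fit: fit a parabola to (mass, Δm) for the calibrant peaks and subtract its value from each measured mass. A sketch with synthetic numbers (not the paper's data):

```python
import numpy as np

# The described correction: fit a parabola to (mass, delta_m) for calibrant
# peaks, then subtract the fitted deviation from measured masses. The peptide
# masses and deviations below are synthetic, not the paper's data.

calib_mass   = np.array([1046.5, 1296.7, 1672.9, 2093.1, 2465.2])   # theoretical m/z
calib_deltam = np.array([0.12, 0.21, 0.35, 0.46, 0.60])             # measured - theoretical

coeffs = np.polyfit(calib_mass, calib_deltam, deg=2)   # quadratic delta_m(m)
correction = np.poly1d(coeffs)

measured = np.array([1534.8, 1988.3])                  # unknown peaks (measured m/z)
corrected = measured - correction(measured)            # subtract the predicted deviation
for m, c in zip(measured, corrected):
    print(f"measured {m:.3f} -> corrected {c:.3f}")
```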

  10. Improved accuracy of acute graft-versus-host disease staging among multiple centers.

    Science.gov (United States)

    Levine, John E; Hogan, William J; Harris, Andrew C; Litzow, Mark R; Efebera, Yvonne A; Devine, Steven M; Reshef, Ran; Ferrara, James L M

    2014-01-01

    The clinical staging of acute graft-versus-host disease (GVHD) varies significantly among bone marrow transplant (BMT) centers, but adherence to long-standing practices poses formidable barriers to standardization among centers. We have analyzed the sources of variability and developed a web-based remote data entry system that can be used by multiple centers simultaneously and that standardizes data collection in key areas. This user-friendly, intuitive interface resembles an online shopping site and eliminates error-prone entry of free text with drop-down menus and pop-up detailed guidance available at the point of data entry. Standardized documentation of symptoms and therapeutic response reduces errors in grade assignment and allows creation of confidence levels regarding the diagnosis. Early review and adjudication of borderline cases improves consistency of grading and further enhances consistency among centers. If this system achieves widespread use it may enhance the quality of data in multicenter trials to prevent and treat acute GVHD. PMID:25455279

  11. Improving patient care and accuracy of given doses in radiation therapy using in vivo dosimetry verification*

    Institute of Scientific and Technical Information of China (English)

    Ahmed Shawky Shawata; Tarek El Nimr; Khaled M. Elshahat

    2015-01-01

    Objective: This work aims to verify and improve the dose given to cancer patients in radiation therapy by using diodes for routine in vivo patient dosimetry. Some characteristics of two available semiconductor diode dosimetry systems were evaluated. Methods: The diodes were calibrated to read the dose at Dmax below the surface. Correction factors of clinical relevance were quantified to convert the diode readings into patient dose. The diode was irradiated at various gantry angles (increments of 45°), various field sizes, and various source-to-surface distances (SSDs). Results: The maximal variation in the angular response with respect to an arbitrary angle of 0° was 1.9%, and the minimum variation was 0.5%. With respect to field size, the minimum and maximum variations between the dose measured by the diode and the calculated dose were -1.6% (for a 5 cm x 5 cm field size) and 6.6% (for a 40 cm x 40 cm field size). The diode exhibited a significant perturbation in its response, which decreased with increasing SSD. No discrepancies larger than 5% were detected between the expected dose and the measured dose. Conclusion: The results indicate that the diodes exhibit excellent linearity, dose reproducibility, and minimal anisotropy, and that they can be used with confidence for patient dose verification. Furthermore, diodes allow real-time verification of the dose delivered to patients.

  12. Drift Removal for Improving the Accuracy of Gait Parameters Using Wearable Sensor Systems

    Directory of Open Access Journals (Sweden)

    Ryo Takeda

    2014-12-01

    Full Text Available Accumulated signal noise causes integrated values to drift from the true value when measuring the orientation angles of wearable sensors. This work proposes a novel method to reduce the effect of this drift and thereby accurately measure human gait using wearable sensors. Firstly, an infinite impulse response (IIR) digital 4th-order Butterworth filter was implemented to remove noise from the raw gyro sensor data. Secondly, the mode value of the static-state gyro sensor data was subtracted from the measured data to remove offset values. Thirdly, a robust double derivative and integration method was introduced to remove any remaining drift error from the data. Lastly, sensor attachment errors were minimized by establishing the gravitational acceleration vector from the acceleration data in upright standing and sitting postures. The proposed improvements removed the drift effect and yielded average differences of 2.1°, 33.3°, and 15.6° for the hip, knee, and ankle joint flexion/extension angles, respectively, compared with the results without implementation. Kinematic and spatio-temporal gait parameters were also calculated from the heel-contact and toe-off timing of the foot. The data provided in this work show the potential of using wearable sensors in the clinical evaluation of patients with gait-related diseases.
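
    The first two pre-processing steps described (a 4th-order Butterworth IIR filter followed by subtraction of the static-state mode) can be sketched as follows; the signal, sampling rate, and cut-off frequency are simulated or assumed, not taken from the paper.

```python
import numpy as np
from scipy import signal

# Sketch of the first two pre-processing steps described: a 4th-order
# Butterworth IIR low-pass filter on the raw gyro signal, followed by removal
# of the static-state offset (mode of a stationary segment). The signal,
# sampling rate and cut-off frequency are simulated/assumed, not the paper's.

fs = 100.0                                     # sampling rate [Hz] (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
motion = np.where(t >= 2.0, 30.0 * np.sin(2 * np.pi * 1.0 * (t - 2.0)), 0.0)
gyro = motion + 0.8 + rng.normal(0.0, 2.0, t.size)     # deg/s: motion + offset + noise

b, a = signal.butter(N=4, Wn=5.0, btype="low", fs=fs)  # 4th-order Butterworth, 5 Hz cut-off
filtered = signal.filtfilt(b, a, gyro)                 # zero-phase filtering

static = np.round(filtered[: int(2 * fs)], 1)          # first 2 s recorded while static
values, counts = np.unique(static, return_counts=True)
offset = values[np.argmax(counts)]                     # mode of the static segment
corrected = filtered - offset

angle = np.cumsum(corrected) / fs                      # naive integration to angle [deg]
print(f"estimated offset = {offset:.2f} deg/s, final angle = {angle[-1]:.1f} deg")
```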

  13. Improvement of the accuracy of the imaging bolometer foil laser calibration

    International Nuclear Information System (INIS)

    An imaging bolometer with a single graphite-coated metal foil is a diagnostic for measuring plasma radiation from magnetic fusion plasmas. The local foil properties (the thermal diffusivity, κ, and the product of the thermal conductivity, k, and the thickness, tf) of the metal imaging bolometer foil can be obtained by analyzing the calibration data. To improve the IRVB, a tantalum (Ta) foil is proposed because of its strength, low neutron cross-section, and high sensitivity; however, there is a large discrepancy between the experimentally determined foil thickness and the nominal value. To calibrate the foil, the He-Ne laser beam is focused on 63 different locations, which are determined using the marks on the frame. The parameters of the foil are determined by comparing the measured thermal radiation data from an IR camera (FLIR/SC500) (60 Hz, 320x240 pixels, 7.5-13 μm) with the corresponding results of a finite element model. (author)

  14. Analyses to Verify and Improve the Accuracy of the Manufactured Home Energy Audit (MHEA)

    Energy Technology Data Exchange (ETDEWEB)

    Ternes, Mark P [ORNL; Gettings, Michael B [ORNL

    2008-12-01

    A series of analyses were performed to determine the reasons that the Manufactured Home Energy Audit (MHEA) over predicted space-heating energy savings as measured in a recent field test and to develop appropriate corrections to improve its performance. The study used the Home Energy Rating System (HERS) Building Energy Simulation Test (BESTEST) to verify that MHEA accurately calculates the UA-values of mobile home envelope components and space-heating energy loads as compared with other, well-accepted hourly energy simulation programs. The study also used the Procedures for Verification of RESNET Accredited HERS Software Tools to determine that MHEA accurately calculates space-heating energy consumptions for gas furnaces, heat pumps, and electric-resistance furnaces. Even though MHEA's calculations were shown to be correct from an engineering point of view, three modifications to MHEA's algorithms and use of a 0.6 correction factor were incorporated into MHEA to true-up its predicted savings to values measured in a recent field test. A simulated use of the revised version of MHEA in a weatherization program revealed that MHEA would likely still recommend a significant number of cost-effective weatherization measures in mobile homes (including ceiling, floor, and even wall insulation and far fewer storm windows). Based on the findings from this study, it was recommended that a revised version of MHEA with all the changes and modifications outlined in this report should be finalized and made available to the weatherization community as soon as possible, preferably in time for use within the 2009 Program Year.

  15. Accuracy of intermediate dose of furosemide injection to improve multidetector row CT urography

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Catherine [Department of Radiology B, Universitary Hospital of Strasbourg-Civil Hospital, 1, Place de l' hopital BP 426, 67091 Strasbourg Cedex (France)], E-mail: catherine.roy@chru-strasbourg.fr; Jeantroux, Jeremy; Irani, Farah G.; Sauer, Benoit [Department of Radiology B, Universitary Hospital of Strasbourg-Civil Hospital, 1, Place de l' hopital BP 426, 67091 Strasbourg Cedex (France); Lang, Herve; Saussine, Christian [Department of Urology, Universitary Hospital of Strasbourg-Civil Hospital, 1, Place de l' hopital BP 426, 67091 Strasbourg Cedex (France)

    2008-05-15

    Objective: To evaluate the usefulness of an intermediate dose of furosemide to improve visualization of the intrarenal collecting system and ureter using MDCTU. Materials and methods: Two groups of 100 patients without urinary tract disease or major abdominal pathology underwent MDCTU. Group I (various abdominal indications) was imaged without any additional preparation and Group II (suspicion of urinary tract disease) 10 min after injection of furosemide (20 mg). MIP images of the excretory phase were post-processed. The maximal short-axis diameters of the pelvis and ureter were measured on axial images for all phases. Visualization of the collecting system wall and identification of the whole ureter were assessed. Results: The mean pelvic diameter before contrast was (7.4 mm, S.D. ± 2.7; 13.4 mm, S.D. ± 4.1), on the cortico-medullary phase (8.4 mm, S.D. ± 4.2; 14.3 mm, S.D. ± 4), on the nephrographic phase (8.1 mm, S.D. ± 2.5; 14.8 mm, S.D. ± 4) and on the excretory phase (9.7 mm, S.D. ± 3.4; 14.9 mm, S.D. ± 4.5), respectively, for Groups I and II. The intrarenal collecting system wall was clearly identified on both the corticomedullary and nephrographic phases in 91% of Group II against 20% of Group I. Opacification of the entire ureter was excellent on the excretory phase in 96% of Group II against 13% of Group I. The difference between the mean values for the two groups was statistically significant for all phases (p < 10^-9). Conclusion: Intermediate-dose furosemide (20 mg) before MDCTU is a very simple add-on for accurate depiction of pelvicalyceal details and the collecting system wall without artefacts. The procedure is associated with constant and complete visualization of the entire ureter.

  16. Treatment Planning to Improve Delivery Accuracy and Patient Throughput in Helical Tomotherapy

    International Nuclear Information System (INIS)

    Purpose: To investigate delivery quality assurance (DQA) discrepancies observed for a subset of helical tomotherapy patients. Methods and Materials: Six tomotherapy patient plans were selected for analysis. Three had passing DQA ion chamber (IC) measurements, whereas 3 had measurements deviating from the expected dose by more than 3.0%. All plans used similar parameters, including: 2.5 cm field-width, 15-s gantry period, and pitch values ranging from 0.143 to 0.215. Preliminary analysis suggested discrepancies were associated with plans having predominantly small leaf open times (LOTs). To test this, patients with failing DQA measurements were replanned using an increased pitch of 0.287. New DQA plans were generated and IC measurements performed. Exit fluence data were also collected during DQA delivery for dose reconstruction purposes. Results: Sinogram analysis showed increases in mean LOTs ranging from 29.8% to 83.1% for the increased pitch replans. IC measurements for these plans showed a reduction in dose discrepancies, bringing all measurements within ±3.0%. The replans were also more efficient to deliver, resulting in reduced treatment times. Dose reconstruction results were in excellent agreement with IC measurements, illustrating the impact of leaf-timing inaccuracies on plans having predominantly small LOTs. Conclusions: The impact of leaf-timing inaccuracies on plans with small mean LOTs can be considerable. These inaccuracies result from deviations in multileaf collimator latency from the linear approximation used by the treatment planning system and can be important for plans having a 15-s gantry period. The ability to reduce this effect while improving delivery efficiency by increasing the pitch is demonstrated.

  17. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    Science.gov (United States)

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers.
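
    The core of the approach, an interpolated Markov model that scores candidate sequences against the statistical profile of known CRMs, can be sketched in a few lines of Python. The sketch below is a generic IMM scorer for illustration only; the interpolation weights, pseudocounts and toy training sequences are arbitrary choices, and it does not reproduce the authors' released program.

    # Generic interpolated Markov model (IMM) scorer - an illustrative sketch of
    # the idea, not the authors' implementation.
    from collections import defaultdict
    import math

    ALPHABET = "ACGT"

    def train_imm(seqs, max_order=3):
        """Count k-mer contexts up to max_order from known CRM sequences."""
        counts = [defaultdict(lambda: defaultdict(int)) for _ in range(max_order + 1)]
        for s in seqs:
            for i, base in enumerate(s):
                for k in range(max_order + 1):
                    if i >= k:
                        counts[k][s[i - k:i]][base] += 1
        return counts

    def imm_prob(counts, context, base, max_order=3, threshold=20):
        """Interpolate conditional probabilities across Markov orders."""
        prob = 1.0 / len(ALPHABET)  # order "-1" (uniform) fallback
        for k in range(max_order + 1):
            ctx = context[-k:] if k else ""
            ctx_counts = counts[k].get(ctx, {})
            total = sum(ctx_counts.values())
            if total == 0:
                continue
            weight = min(1.0, total / threshold)  # trust well-observed contexts more
            p_k = (ctx_counts.get(base, 0) + 1) / (total + len(ALPHABET))
            prob = (1 - weight) * prob + weight * p_k
        return prob

    def log_score(counts, seq, max_order=3):
        return sum(math.log(imm_prob(counts, seq[max(0, i - max_order):i], b, max_order))
                   for i, b in enumerate(seq))

    known_crms = ["ACGTGACGTGTTTACG", "ACGTGTTACGTGACGA"]  # toy training set
    model = train_imm(known_crms)
    print(log_score(model, "ACGTGACGTG"), log_score(model, "CCCCCCCCCC"))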

  18. Identifying the procedural gap and improved methods for maintaining accuracy during total hip arthroplasty.

    Science.gov (United States)

    Gross, Allan; Muir, Jeffrey M

    2016-09-01

    Osteoarthritis is a ubiquitous condition, affecting 26 million Americans each year, with up to 17% of adults over age 75 suffering from one variation of arthritis. The hip is one of the most commonly affected joints and while there are conservative options for treatment, as symptoms progress, many patients eventually turn to surgery to manage their pain and dysfunction. Early surgical options such as osteotomy or arthroscopy are reserved for younger, more active patients with less severe disease and symptoms. Total hip arthroplasty offers a viable solution for patients with severe degenerative changes; however, post-surgical discrepancies in leg length, offset and component malposition are common and cause significant complications. Such discrepancies are associated with consequences such as low back pain, neurological deficits, instability and overall patient dissatisfaction. Current methods for managing leg length and offset during hip arthroplasty are either inaccurate and susceptible to error or are cumbersome, expensive and lengthen surgical time. There is currently no viable option that provides accurate, real-time data to surgeons regarding leg length, offset and cup position in a cost-effective manner. As such, we hypothesize that a procedural gap exists in hip arthroplasty, a gap into which fall a large majority of arthroplasty patients who are at increased risk of complications following surgery. These complications and associated treatments place significant stress on the healthcare system. The costs associated with addressing leg length and offset discrepancies can be minor, requiring only heel lifts and short-term rehabilitation, but can also be substantial, with revision hip arthroplasty costs of up to $54,000 per procedure. The need for a cost-effective, simple to use and unobtrusive technology to address this procedural gap in hip arthroplasty and improve patient outcomes is of increasing importance. Given the aging of the population, the projected

  19. Dose metric considerations in in vitro assays to improve quantitative in vitro-in vivo dose extrapolations.

    Science.gov (United States)

    Groothuis, Floris A; Heringa, Minne B; Nicol, Beate; Hermens, Joop L M; Blaauboer, Bas J; Kramer, Nynke I

    2015-06-01

    Challenges to improve toxicological risk assessment to meet the demands of the EU chemical's legislation, REACH, and the EU 7th Amendment of the Cosmetics Directive have accelerated the development of non-animal based methods. Unfortunately, uncertainties remain surrounding the power of alternative methods such as in vitro assays to predict in vivo dose-response relationships, which impedes their use in regulatory toxicology. One issue reviewed here, is the lack of a well-defined dose metric for use in concentration-effect relationships obtained from in vitro cell assays. Traditionally, the nominal concentration has been used to define in vitro concentration-effect relationships. However, chemicals may differentially and non-specifically bind to medium constituents, well plate plastic and cells. They may also evaporate, degrade or be metabolized over the exposure period at different rates. Studies have shown that these processes may reduce the bioavailable and biologically effective dose of test chemicals in in vitro assays to levels far below their nominal concentration. This subsequently hampers the interpretation of in vitro data to predict and compare the true toxic potency of test chemicals. Therefore, this review discusses a number of dose metrics and their dependency on in vitro assay setup. Recommendations are given on when to consider alternative dose metrics instead of nominal concentrations, in order to reduce effect concentration variability between in vitro assays and between in vitro and in vivo assays in toxicology. PMID:23978460

  20. Improving accuracy in shallow-landslide susceptibility analyses at regional scale

    Science.gov (United States)

    Iovine, Giulio G. R.; Rago, Valeria; Frustaci, Francesco; Bruno, Claudia; Giordano, Stefania; Muto, Francesco; Gariano, Stefano L.; Pellegrino, Annamaria D.; Conforti, Massimo; Pascale, Stefania; Distilo, Daniela; Basile, Vincenzo; Soleri, Sergio; Terranova, Oreste G.

    2015-04-01

    Calabria (southern Italy) is particularly exposed to geo-hydrological risk. In recent decades, slope instabilities, mainly related to rainfall-induced landslides, have repeatedly affected its territory. Shallow landslides, characterized by abrupt onset and extremely rapid movements, are among the most destructive and dangerous phenomena for people and infrastructure. In this study, a susceptibility analysis for shallow landslides has been performed by refining a method recently applied in Costa Viola, central Calabria (Iovine et al., 2014), focusing only on landslide source activations (regardless of their possible evolution into debris flows). A multivariate approach has been applied to estimate the presence/absence of sources, based on linear statistical relationships with a set of causal variables. The classes of the numeric causal variables have been determined by means of a data clustering method designed to find the best arrangement. A multi-temporal inventory map of sources, mainly obtained from the interpretation of air photographs taken in 1954-1955 and in 2000, has been adopted to select the training and validation sets. Owing to the wide extent of the territory, the analysis has been performed iteratively with a step-by-step decreasing cell size, adopting greater spatial resolutions and thematic detail (e.g. lithology, land use, soil, morphometry, rainfall) for highly susceptible sectors. Through a sensitivity analysis, the weight of the considered factors in predisposing shallow landslides has been evaluated. The best set of variables has been identified by iteratively including one variable at a time and comparing the results in terms of performance. Furthermore, susceptibility evaluations obtained through logistic regression have been compared to those obtained by applying neural networks. The results obtained may be useful to improve land-use planning and to select proper mitigation measures in shallow
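
    The presence/absence modelling step can be illustrated with a minimal scikit-learn sketch of a logistic-regression susceptibility model over grid cells. The predictors and data below are synthetic stand-ins; the study's variable clustering and multi-scale iteration are not reproduced.

    # Illustrative presence/absence susceptibility model (not the authors' code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_cells = 5000
    X = np.column_stack([
        rng.uniform(0, 45, n_cells),    # slope angle (degrees)
        rng.uniform(0, 2000, n_cells),  # annual rainfall (mm)
        rng.integers(0, 5, n_cells),    # lithology class (coded)
    ])
    # Synthetic "truth": steeper, wetter cells are more likely landslide sources.
    p = 1 / (1 + np.exp(-(0.12 * X[:, 0] + 0.002 * X[:, 1] - 6)))
    y = rng.binomial(1, p)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    susceptibility = model.predict_proba(X_te)[:, 1]  # per-cell susceptibility score
    print("validation AUC:", round(roc_auc_score(y_te, susceptibility), 3))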

  1. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    Science.gov (United States)

    Justh, H. L.; Justus, C. G.; Badger, A. M.

    2009-12-01

    at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai(2008). Desai has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai’s reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon EDL. The solution of this problem would be important to the Mars Program since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.

  2. An improved assay for antibody dependent cellular cytotoxicity based on time resolved fluorometry.

    Science.gov (United States)

    Patel, A K; Boyd, P N

    1995-07-17

    A new and faster assay for antibody-dependent cellular cytotoxicity (ADCC) based on the release of europium from target cells is described. This has a number of important advantages over the traditional assays based on release of chromium-51 (51Cr). The new method involves labelling of Wein 133 target cells (B cell non-Hodgkin's lymphoma cells), which express the antigen CDw52, with the chelate europium diethylenetriaminepentaacetic acid (EuDTPA) according to the method of Blomberg et al. (1986). Labelled cells are sensitised (coated) with the anti-lymphocytic monoclonal antibody Campath-1H. Human peripheral blood mononuclear cells are added to mediate lysis of the EuDTPA-labelled Wein 133 cells by ADCC. Release of EuDTPA from lysed cells is determined by mixing supernatants with an enhancement solution containing 2-naphthoyl trifluoroacetone (2-NTA) to form a highly fluorescent chelate, which is measured using time-resolved fluorometry. Results obtained with the new EuDTPA release assay were comparable to traditional assays based on the release of the radioisotope 51Cr. It is anticipated that this assay will have widespread application among laboratories performing ADCC assays. The method is non-hazardous and has been used routinely for over 2 years to monitor production and purification of Campath-1H. PMID:7622867

  3. Can physiological endpoints improve the sensitivity of assays with plants in the risk assessment of contaminated soils?

    Directory of Open Access Journals (Sweden)

    Ana Gavina

    Full Text Available Site-specific risk assessment of contaminated areas identifies priority areas for intervention and provides helpful information for risk managers. This study was conducted in the Ervedosa mine area (Bragança, Portugal), where both underground and open pit exploration of tin and arsenic minerals were performed for about one century (1857-1969). We aimed at obtaining ecotoxicological information with terrestrial and aquatic plant species to integrate in the risk assessment of this mine area. Furthermore, we also intended to evaluate whether the assessment of other parameters, in standard assays with terrestrial plants, can improve the identification of phytotoxic soils. For this purpose, soil samples were collected at 16 sampling sites distributed along four transects defined within the mine area, and at one reference site. General soil physical and chemical parameters and total and extractable metal contents were analyzed. Assays were performed for soil elutriates and for the whole soil matrix following standard guidelines for the growth inhibition assay with Lemna minor and the emergence and seedling growth assay with Zea mays. At the end of the Z. mays assay, relative water content, membrane permeability, leaf area, content of photosynthetic pigments (chlorophylls and carotenoids), malondialdehyde levels, proline content, and chlorophyll fluorescence (Fv/Fm and ΦPSII) parameters were evaluated. In general, the soils near the exploration area revealed high levels of Al, Mn, Fe and Cu. Almost all the soils from transects C, D and F presented total concentrations of arsenic well above available soil screening benchmark values. Elutriates of several soils from sampling sites near the exploration and ore treatment areas were toxic to L. minor, suggesting that the retention function of these soils was seriously compromised. In the Z. mays assay, plant performance parameters (other than those recommended by standard protocols) allowed the identification of more phytotoxic soils

  4. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker.

    Science.gov (United States)

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse value of root sum square of Cramér-Rao bound and focal length drift errors of the tracking star as weight, to enhance the pointing accuracy of a star tracker. In this technique, the stars that are brighter, or at low angular rate, or located towards the center of star field will be given a higher weight in the attitude determination process, and thus, the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, which is confirmed particularly for the sky zones with non-uniform distribution of stars. Moreover, by using the iteratively weighted center of gravity algorithm as the newly centroiding method for the QUEST algorithm, the current attitude uncertainty can be further reduced to 44% with a negligible additional computing load. PMID:27370431
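
    For illustration, the sketch below computes per-star weights as the inverse root-sum-square of two error terms, as described above, and feeds them into Davenport's q-method (the eigenvalue problem that QUEST approximates) to obtain the attitude quaternion. The error values and star vectors are made up; this is a generic sketch, not the authors' flight code.

    # Weighted attitude determination via Davenport's q-method (generic sketch).
    import numpy as np

    def weighted_attitude(body_vecs, ref_vecs, crb_err, drift_err):
        # Weight each star by the inverse root-sum-square of its error terms.
        w = 1.0 / np.sqrt(np.asarray(crb_err) ** 2 + np.asarray(drift_err) ** 2)
        w = w / w.sum()
        B = sum(wi * np.outer(b, r) for wi, b, r in zip(w, body_vecs, ref_vecs))
        S = B + B.T
        sigma = np.trace(B)
        Z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
        K = np.zeros((4, 4))
        K[0, 0] = sigma
        K[0, 1:] = Z
        K[1:, 0] = Z
        K[1:, 1:] = S - sigma * np.eye(3)
        vals, vecs = np.linalg.eigh(K)
        q = vecs[:, np.argmax(vals)]  # optimal quaternion (scalar first)
        return q / np.linalg.norm(q)

    b = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
    r = b  # identity attitude for the toy example
    print(weighted_attitude(b, r, crb_err=[0.5, 1.0, 2.0], drift_err=[0.1, 0.1, 0.3]))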

  5. Photoplethysmogram intensity ratio: A potential indicator for improving the accuracy of PTT-based cuffless blood pressure estimation.

    Science.gov (United States)

    Ding, Xiao-Rong; Zhang, Yuan-Ting

    2015-01-01

    The most commonly used method for cuffless blood pressure (BP) measurement is based on pulse transit time (PTT), which relies on the Moens-Korteweg (M-K) equation under the assumption that arterial geometry, such as the arterial diameter, remains unchanged. However, the arterial diameter is dynamic: it varies over the cardiac cycle and is regulated through the contraction or relaxation of the vascular smooth muscle, innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the change in arterial diameter. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects to assess parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, spectral analysis revealed that the dominant frequency component of PIR, HR and SBP shifted significantly from high frequency (HF) to low frequency (LF), whereas this shift was not obvious for PTT. These results demonstrate that PIR can potentially be used to evaluate the smooth muscle tone that modulates arterial BP in the LF range. PTT-based BP measurement that takes the PIR into account could therefore achieve improved estimation accuracy. PMID:26736283
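
    A minimal sketch of how PIR is typically computed from a PPG trace, as the ratio of peak to valley intensity within each cardiac cycle, is shown below. The synthetic waveform and the peak-detection settings are illustrative only and do not reproduce the study's processing.

    # Beat-by-beat PIR from a synthetic PPG trace (illustrative sketch).
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                               # sampling rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)
    ppg = 1.0 + 0.05 * np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)

    peaks, _ = find_peaks(ppg, distance=int(0.5 * fs))     # systolic peaks
    valleys, _ = find_peaks(-ppg, distance=int(0.5 * fs))  # diastolic valleys

    pir = []
    for p in peaks:
        prior_valleys = valleys[valleys < p]
        if prior_valleys.size:               # pair each peak with the preceding valley
            pir.append(ppg[p] / ppg[prior_valleys[-1]])
    print("beat-by-beat PIR:", np.round(pir, 3))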

  6. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points - A Review.

    Science.gov (United States)

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, an important factor in improving the accuracy of InSAR image co-registration, are reviewed in detail, including the spacing of tie points, the extraction of feature points, the window size for tie-point matching, and measures of interferogram quality. PMID:22399966
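
    Window-based tie-point matching of the kind whose window size the review discusses can be illustrated with a simple normalized cross-correlation search for the integer offset between a master chip and a slave chip. The chip size, search range and synthetic images below are arbitrary, and real co-registration additionally requires sub-pixel refinement.

    # Tie-point matching by normalized cross-correlation (illustrative sketch).
    import numpy as np

    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom else 0.0

    def match_tie_point(master, slave, row, col, win=16, search=5):
        """Find the integer offset that best aligns a slave chip with the master chip."""
        ref = master[row:row + win, col:col + win]
        best = (0, 0, -1.0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                chip = slave[row + dr:row + dr + win, col + dc:col + dc + win]
                score = ncc(ref, chip)
                if score > best[2]:
                    best = (dr, dc, score)
        return best

    rng = np.random.default_rng(1)
    master = rng.normal(size=(128, 128))
    slave = np.roll(master, shift=(2, -3), axis=(0, 1))    # slave offset by (2, -3)
    print(match_tie_point(master, slave, row=50, col=50))  # expect offset (2, -3)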

  7. Improved Activity Assay Method for Arginine Kinase Based on a Ternary Heteropolyacid System

    Institute of Scientific and Technical Information of China (English)

    陈宝玉; 郭勤; 郭智; 王希成

    2003-01-01

    This paper presents a new system for the activity assay of arginine kinase (AK), based on the spectrophotometric determination of an ascorbic acid-reduced blue ternary heteropolyacid composed of bismuth, molybdate and the phosphate released from N-phospho-L-arginine (PArg) in the forward catalytic reaction. The assay conditions, including the formulation of the phosphate determination reagent (PDR), the assay timing, and the linear activity range of the enzyme concentration, have been tested and optimized. Under these conditions, the ternary heteropolyacid color is completely developed within 1 min and is stable for at least 15 min, with an absorbance maximum at 700 nm and a molar extinction coefficient of 15.97 (mmol/L)⁻¹·cm⁻¹ for the phosphate. Standard curves for phosphate show good linearity (0.999). Compared with previous activity assay methods for AK, this system exhibits superior sensitivity, reproducibility, and adaptability to various conditions in enzymological studies. This method also reduces the assay time and avoids the use of some expensive instruments and reagents.
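
    As a back-of-envelope illustration of how the colorimetric reading translates into enzyme activity, the sketch below applies the Beer-Lambert law with the reported extinction coefficient to convert A700 into released phosphate and then into activity units. The cuvette path length, volumes, times and absorbances are assumed values, not data from the paper.

    # Beer-Lambert conversion of A700 into AK activity (illustrative values).
    EPSILON = 15.97     # (mmol/L)^-1 cm^-1 at 700 nm, from the abstract
    PATH_LENGTH = 1.0   # cm, assumed cuvette path length

    def phosphate_mmol_per_l(a700, blank_a700=0.0):
        """Convert background-corrected absorbance to phosphate concentration."""
        return (a700 - blank_a700) / (EPSILON * PATH_LENGTH)

    def ak_activity_units(a700, blank_a700, assay_volume_ml, reaction_min, enzyme_mg):
        """Units = micromol phosphate released per minute per mg enzyme."""
        conc_mmol_l = phosphate_mmol_per_l(a700, blank_a700)
        micromol = conc_mmol_l * assay_volume_ml  # mmol/L x mL = micromol
        return micromol / (reaction_min * enzyme_mg)

    print(ak_activity_units(a700=0.48, blank_a700=0.03, assay_volume_ml=1.0,
                            reaction_min=5.0, enzyme_mg=0.02))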

  8. Indexing Large Visual Vocabulary by Randomized Dimensions Hashing for High Quantization Accuracy: Improving the Object Retrieval Quality

    Science.gov (United States)

    Yang, Heng; Wang, Qing; He, Zhoucan

    The bag-of-visual-words approach, inspired by text retrieval methods, has proven successful in achieving high performance in object retrieval on large-scale databases. A key step of these methods is the quantization stage, which maps the high-dimensional image feature vectors to discriminatory visual words. In this paper, we treat the quantization step as a nearest neighbor search in a large visual vocabulary, and we propose a randomized dimensions hashing (RDH) algorithm to efficiently index and search the large visual vocabulary. The experimental results demonstrate that the proposed algorithm can effectively increase the quantization accuracy compared to vocabulary-tree-based methods, which represent the state of the art. Consequently, object retrieval performance on large-scale databases can be significantly improved by our method.
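
    The abstract does not give the details of the RDH algorithm, so the sketch below illustrates the general idea with a plain random-hyperplane hashing index: descriptors are hashed into buckets and the nearest visual word is searched only within the matching bucket. Vocabulary size, bit count and data are arbitrary, and this should not be read as the authors' exact method.

    # Generic random-hyperplane hashing for fast visual-word assignment (sketch).
    import numpy as np

    rng = np.random.default_rng(42)
    dim, n_words, n_bits = 128, 10000, 16
    vocabulary = rng.normal(size=(n_words, dim)).astype(np.float32)  # toy visual words
    planes = rng.normal(size=(n_bits, dim)).astype(np.float32)

    def hash_code(vectors):
        bits = (vectors @ planes.T) > 0
        return bits.astype(np.int64) @ (1 << np.arange(n_bits))  # pack bits into a key

    buckets = {}
    for idx, code in enumerate(hash_code(vocabulary)):
        buckets.setdefault(int(code), []).append(idx)

    def quantize(descriptor):
        """Map a descriptor to the nearest visual word within its hash bucket."""
        candidates = buckets.get(int(hash_code(descriptor[None, :])[0]), range(n_words))
        candidates = np.fromiter(candidates, dtype=int)
        dists = np.linalg.norm(vocabulary[candidates] - descriptor, axis=1)
        return int(candidates[np.argmin(dists)])

    print(quantize(rng.normal(size=dim).astype(np.float32)))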

  9. General formula for on-axis sun-tracking system and its application in improving tracking accuracy of solar collector

    Energy Technology Data Exchange (ETDEWEB)

    Chong, K.K.; Wong, C.W. [Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Off Jalan Genting Kelang, Setapak, 53300 Kuala Lumpur (Malaysia)

    2009-03-15

    Azimuth-elevation and tilt-roll tracking mechanisms are among the most commonly used sun-tracking methods for aiming a solar collector towards the sun at all times. For many decades, each of these two sun-tracking methods has had its own specific sun-tracking formula, and the two have not been interrelated. In this paper, the most general form of sun-tracking formula, which embraces all possible on-axis tracking methods, is presented. The general sun-tracking formula not only provides a general mathematical solution, but, more significantly, it can improve the sun-tracking accuracy by accounting for the installation error of the solar collector. (author)
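
    For orientation, the sketch below implements the textbook azimuth-elevation sun-position calculation (Cooper's declination formula plus the standard altitude and azimuth relations) that an on-axis tracker evaluates at each time step. It is not the general formula proposed in the paper and it omits the installation-error correction the paper addresses.

    # Textbook azimuth-elevation sun position (illustrative, not the paper's formula).
    import math

    def sun_position(day_of_year, solar_time_hours, latitude_deg):
        lat = math.radians(latitude_deg)
        decl = math.radians(23.45 * math.sin(math.radians(360 * (284 + day_of_year) / 365)))
        hour_angle = math.radians(15.0 * (solar_time_hours - 12.0))  # negative before noon

        sin_el = (math.sin(lat) * math.sin(decl)
                  + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
        elevation = math.asin(max(-1.0, min(1.0, sin_el)))

        cos_az = (math.sin(decl) - math.sin(elevation) * math.sin(lat)) / (
            math.cos(elevation) * math.cos(lat))
        azimuth = math.acos(max(-1.0, min(1.0, cos_az)))  # measured from north
        if hour_angle > 0:                                # afternoon: sun west of north
            azimuth = 2 * math.pi - azimuth
        return math.degrees(azimuth), math.degrees(elevation)

    print(sun_position(day_of_year=172, solar_time_hours=10.0, latitude_deg=3.1))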

  10. Dynamic sea surface topography, gravity and improved orbit accuracies from the direct evaluation of SEASAT altimeter data

    Science.gov (United States)

    Marsh, J. G.; Lerch, F.; Koblinsky, C. J.; Klosko, S. M.; Robbins, J. W.; Williamson, R. G.; Patel, G. B.

    1989-01-01

    A method for the simultaneous solution of dynamic ocean topography, gravity and orbits using satellite altimeter data is described. A GEM-T1 based gravitational model called PGS-3337 that incorporates Seasat altimetry, surface gravimetry and satellite tracking data has been determined complete to degree and order 50. The altimeter data is utilized as a dynamic observation of the satellite's height above the sea surface with a degree 10 model of dynamic topography being recovered simultaneously with the orbit parameters, gravity and tidal terms in this model. PGS-3337 has a geoid uncertainty of 60 cm root-mean-square (RMS) globally, with the uncertainty over the altimeter tracked ocean being in the 25 cm range. Doppler determined orbits for Seasat, show large improvements, with the sub-30 cm radial accuracies being achieved. When altimeter data is used in orbit determination, radial orbital accuracies of 20 cm are achieved. The RMS of fit to the altimeter data directly gives 30 cm fits for Seasat when using PGS-3337 and its geoid and dynamic topography model. This performance level is two to three times better than that achieved with earlier Goddard earth models (GEM) using the dynamic topography from long-term oceanographic averages. The recovered dynamic topography reveals the global long wavelength circulation of the oceans with a resolution of 1500 km. The power in the dynamic topography recovery is now found to be closer to that of oceanographic studies than for previous satellite solutions. This is attributed primarily to the improved modeling of the geoid which has occurred. Study of the altimeter residuals reveals regions where tidal models are poor and sea state effects are major limitations.

  11. Improved sensitivity of an acid sphingomyelinase activity assay using a C6:0 sphingomyelin substrate.

    Science.gov (United States)

    Chuang, Wei-Lien; Pacheco, Joshua; Cooper, Samantha; Kingsbury, Jonathan S; Hinds, John; Wolf, Pavlina; Oliva, Petra; Keutzer, Joan; Cox, Gerald F; Zhang, Kate

    2015-06-01

    Short-chain C6-sphingomyelin is an artificial substrate that was used in an acid sphingomyelinase activity assay for a pilot screening study of patients with Niemann-Pick disease types A and B. Using previously published multiplex and single assay conditions, normal acid sphingomyelinase activity levels (i.e. false negative results) were observed in two sisters with Niemann-Pick B who were compound heterozygotes for two missense mutations, p.C92W and p.P184L, in the SMPD1 gene. Increasing the sodium taurocholate detergent concentration in the assay buffer lowered the activity levels of these two patients into the range observed with other patients with clear separation from normal controls. PMID:26937397

  12. Improving the accuracy of estimates of animal path and travel distance using GPS drift-corrected dead reckoning.

    Science.gov (United States)

    Dewhirst, Oliver P; Evans, Hannah K; Roskilly, Kyle; Harvey, Richard J; Hubel, Tatjana Y; Wilson, Alan M

    2016-09-01

    Route taken and distance travelled are important parameters for studies of animal locomotion. They are often measured using a collar equipped with GPS. Collar weight restrictions limit battery size, which leads to a compromise between collar operating life and GPS fix rate. In studies that rely on linear interpolation between intermittent GPS fixes, path tortuosity will often lead to inaccurate path and distance travelled estimates. Here, we investigate whether GPS-corrected dead reckoning can improve the accuracy of localization and distance travelled estimates while maximizing collar operating life. Custom-built tracking collars were deployed on nine freely exercising domestic dogs to collect high fix rate GPS data. Simulations were carried out to measure the extent to which combining accelerometer-based speed and magnetometer heading estimates (dead reckoning) with low fix rate GPS drift correction could improve the accuracy of path and distance travelled estimates. In our study, median 2-dimensional root-mean-squared (2D-RMS) position error was between 158 and 463 m (median path length 16.43 km) and distance travelled was underestimated by between 30% and 64% when a GPS position fix was taken every 5 min. Dead reckoning with GPS drift correction (1 GPS fix every 5 min) reduced 2D-RMS position error to between 15 and 38 m and distance travelled to between an underestimation of 2% and an overestimation of 5%. Achieving this accuracy from GPS alone would require approximately 12 fixes every minute and result in a battery life of approximately 11 days; dead reckoning reduces the number of fixes required, enabling a collar life of approximately 10 months. Our results are generally applicable to GPS-based tracking studies of quadrupedal animals and could be applied to studies of energetics, behavioral ecology, and locomotion. This low-cost approach overcomes the limitation of low fix rate GPS and enables the long-term deployment of lightweight GPS collars
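
    The processing idea, integrating speed and heading between fixes and then spreading the error observed at each GPS fix back over the preceding interval, can be sketched as follows. The sampling rate, the simple linear error distribution and the synthetic track are illustrative assumptions rather than the study's actual pipeline.

    # Dead reckoning with GPS drift correction (conceptual sketch).
    import numpy as np

    def dead_reckon(speed_mps, heading_rad, dt_s):
        """Cumulative east/north track from speed and heading samples."""
        east = np.cumsum(speed_mps * np.sin(heading_rad) * dt_s)
        north = np.cumsum(speed_mps * np.cos(heading_rad) * dt_s)
        return np.column_stack([east, north])

    def drift_correct(track, fix_indices, gps_positions):
        """Linearly distribute the error seen at each GPS fix over the samples
        since the previous fix, then shift the rest of the track."""
        corrected = track.copy()
        prev = 0
        for idx, fix in zip(fix_indices, gps_positions):
            error = fix - corrected[idx]
            ramp = np.linspace(0.0, 1.0, idx - prev + 1)[1:, None]
            corrected[prev + 1:idx + 1] += ramp * error
            corrected[idx + 1:] += error
            prev = idx
        return corrected

    dt = 1.0                                 # one sample per second (assumed)
    n = 600
    speed = np.full(n, 2.0)                  # m/s
    heading = np.linspace(0, np.pi / 2, n)   # slowly turning east
    track = dead_reckon(speed, heading, dt)
    fixes = [299, 599]                       # one GPS fix every 5 minutes
    gps = [track[299] + [12.0, -8.0], track[599] + [20.0, -15.0]]  # "true" positions
    print(drift_correct(track, fixes, np.array(gps))[[299, 599]])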

  14. Application of immunomagnetic particles to enzyme-linked immunosorbent assay (ELISA) for improvement of detection sensitivity of HCG.

    Science.gov (United States)

    Kuo, Hsiao-Ting; Yeh, Jay Z; Wu, Po-Hua; Jiang, Chii-Ming; Wu, Ming-Chang

    2012-01-01

    This investigation aimed to apply superparamagnetic particles in an enzyme-linked immunosorbent assay (SPIO-ELISA) for human chorionic gonadotropin (hCG) in order to enhance the detection sensitivity of hCG. We found that N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC) was the best cross-linking reagent for linking the anti-hCG α antibody to the superparamagnetic particles (SPIO-anti-hCG α antibody immunomagnetic particles). To improve the specificity of the assay, a horseradish peroxidase (HRP)-labeled anti-hCG β monoclonal antibody was used to detect the captured hCG in a double-antibody sandwich ELISA. Application of SPIO-ELISA to hCG determination increased the sensitivity to 1 mIU/mL, a level that enables the diagnosis of pregnancy during the early gestational period.

  15. An Improved Neutral α-Glucosidase Assay for Assessment of Epididymal Function—Validation and Comparison to the WHO Method

    Directory of Open Access Journals (Sweden)

    Frank Eertmans

    2014-01-01

    Full Text Available Neutral α-glucosidase (NAG) activity in human seminal plasma is an important indicator of epididymal function. In the present study, the classic World Health Organization (WHO) method has been adapted to enhance assay robustness. Changes include a modified enzyme reaction buffer composition and the use of an alternative enzyme inhibitor for background correction (glucose instead of castanospermine). Both methods were tested in parallel on 144 semen samples, obtained from 94 patients/donors and 50 vasectomized men (negative controls), respectively. Passing-Bablok regression analysis demonstrated equal assay performance. In terms of assay validation, analytical specificity, detection limit, measuring range, precision, and cut-off values have been calculated. These data confirm that the adapted method is a reliable, improved tool for NAG analysis in human semen.

  16. Improved sensitivity of an acid sphingomyelinase activity assay using a C6:0 sphingomyelin substrate

    OpenAIRE

    Wei-Lien Chuang; Joshua Pacheco; Samantha Cooper; Kingsbury, Jonathan S.; John Hinds; Pavlina Wolf; Petra Oliva; Joan Keutzer; Cox, Gerald F.; Kate Zhang

    2015-01-01

    Short-chain C6-sphingomyelin is an artificial substrate that was used in an acid sphingomyelinase activity assay for a pilot screening study of patients with Niemann–Pick disease types A and B. Using previously published multiplex and single assay conditions, normal acid sphingomyelinase activity levels (i.e. false negative results) were observed in two sisters with Niemann–Pick B who were compound heterozygotes for two missense mutations, p.C92W and p.P184L, in the SMPD1 gene. Increasing the...

  17. Diagnostic Accuracy of the GenoType MTBDRsl Assay for Rapid Diagnosis of Extensively Drug-Resistant Tuberculosis in HIV-Coinfected Patients

    OpenAIRE

    Kontsevaya, Irina; Ignatyeva, Olga; Nikolayevskyy, Vladyslav; Balabanova, Yanina; Kovalyov, Alexander; Kritsky, Andrey; Matskevich, Olesya; Drobniewski, Francis

    2013-01-01

    The Russian Federation is a high-tuberculosis (TB)-burden country with high rates of Mycobacterium tuberculosis multidrug resistance (MDR) and extensive drug resistance (XDR), especially in HIV-coinfected patients. Rapid and reliable diagnosis for detection of resistance to second-line drugs is vital for adequate patient management. We evaluated the performance of the GenoType MTBDRsl (Hain Lifescience GmbH, Nehren, Germany) assay on smear-positive sputum specimens obtained from 90 HIV-infect...

  18. Application of an Improved Enzyme-Linked Immunosorbent Assay Method for Serological Diagnosis of Canine Leishmaniasis

    NARCIS (Netherlands)

    N. Santarem; R. Silvestre; L. Cardoso; H. Schallig; S.G. Reed; A. Cordeiro-da-Silva

    2010-01-01

    Accurate diagnosis of canine leishmaniasis (CanL) is essential toward a more efficient control of this zoonosis, but it remains problematic due to the high incidence of asymptomatic infections. In this study, we present data on the development of enzyme-linked immunosorbent assay (ELISA)-based techn

  19. Improved mass spectrometry assay for plasma hepcidin: detection and characterization of a novel hepcidin isoform

    NARCIS (Netherlands)

    Laarakkers, C.M.; Wiegerinck, E.T.G.; Klaver, S.; Kolodziejczyk, M.; Gille, H.; Hohlbaum, A.M.; Tjalsma, H.; Swinkels, D.W.

    2013-01-01

    Mass spectrometry (MS)-based assays for the quantification of the iron regulatory hormone hepcidin are pivotal to discriminate between the bioactive 25-amino acid form that can effectively block the sole iron transporter ferroportin and other naturally occurring smaller isoforms without a known role

  20. Improvements in dose accuracy delivered with static-MLC IMRT on an integrated linear accelerator control system

    Energy Technology Data Exchange (ETDEWEB)

    Li Ji; Wiersma, Rodney D.; Stepaniak, Christopher J.; Farrey, Karl J.; Al-Hallaq, Hania A. [Department of Radiation and Cellular Oncology, University of Chicago, 5758 South Maryland Avenue, MC9006, Chicago, Illinois 60637 (United States)

    2012-05-15

    Trilogy and the TrueBeam up to 10 MU/segment, at all dose rates greater than 100 MU/min. The linear trend of decreasing dose accuracy as a function of increasing dose rate on the Trilogy is no longer apparent on TrueBeam, even for dose rates as high as 2400 MU/min. Dose inaccuracy averaged over all ten segments in each beam delivery sequence was larger for Trilogy than TrueBeam, with the largest discrepancy (0.2% vs 3%) occurring for 1 MU/segment beams at both 300 and 600 MU/min. Conclusions: Earlier generations of Varian LINACs exhibited large dose variations for small MU segments in SMLC-IMRT delivery. Our results confirmed these findings. The dose delivery accuracy for SMLC-IMRT is significantly improved on TrueBeam compared to Trilogy for every combination of low MU/segment (1-10) and high dose rate (200-600 MU/min), in part due to the faster sampling rate (100 vs 20 Hz) and enhanced electronic integration of the MLC controller with the LINAC. SMLC-IMRT can be implemented on TrueBeam with higher dose accuracy per beam (±0.2% vs ±3%) than previous generations of Varian C-series LINACs for 1 MU/segment delivered at 600 MU/min.

  1. Evaluating Landsat 8 Satellite Sensor Data for Improved Vegetation Mapping Accuracy of the New Hampshire Coastal Watershed Area

    Science.gov (United States)

    Ledoux, Lindsay

    the previous Landsat sensor (Landsat 7). Once classification had been performed, traditional and area-based accuracy assessments were implemented. Comparison measures were also calculated (e.g., Kappa and the Z test statistic). The results indicate that, while using Landsat 8 imagery is useful, the additional spectral bands provided by the Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) did not provide an improvement in vegetation classification accuracy in this study.

  2. Applying machine learning approaches to improving the accuracy of breast-tumour diagnosis via fine needle aspiration

    Institute of Scientific and Technical Information of China (English)

    YUAN Qian-fei; CAI Cong-zhong; XIAO Han-guang; LIU Xing-hua

    2007-01-01

    Diagnosis and treatment of breast cancer have improved during the last decade; however, breast cancer is still a leading cause of death among women worldwide. Early detection and accurate diagnosis of this disease have been shown to be key to long-term patient survival. As an attempt to develop a reliable diagnostic method for breast cancer, we integrated support vector machine (SVM), k-nearest neighbor and probabilistic neural network classifiers into a combined machine learning approach to detect malignant breast tumours using a set of indicators consisting of age and ten cellular features of fine-needle aspirates of the breast, which were ranked according to signal-to-noise ratio to identify the determinants distinguishing benign breast tumours from malignant ones. The method significantly improved the diagnosis, with a sensitivity of 94.04%, a specificity of 97.37%, and an overall accuracy of up to 96.24% when SVM was adopted with the sigmoid kernel function under 5-fold cross-validation. The results suggest that SVM is a promising methodology that could be further developed into a practical adjunct tool to help discern benign from malignant breast tumours and thus reduce the incidence of misdiagnosis.
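
    A minimal scikit-learn analogue of the classifier evaluation described above is shown below: an SVM with a sigmoid kernel assessed by 5-fold cross-validation. It uses the library's built-in Wisconsin diagnostic fine-needle-aspirate dataset (30 cellular features, no age variable), so it mirrors the approach rather than reproducing the study's data or results.

    # SVM with sigmoid kernel under 5-fold cross-validation (illustrative sketch).
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    model = make_pipeline(StandardScaler(), SVC(kernel="sigmoid", gamma="scale"))
    scores = cross_val_score(model, X, y, cv=5)
    print("5-fold accuracies:", scores.round(3), "mean:", scores.mean().round(3))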

  3. Improving neutron activation analysis accuracy for the measurement of gold in the characterization of heterogeneous catalysts using a TRIGA reactor

    International Nuclear Information System (INIS)

    To measure the gold content of a catalyst accurately, neutron activation analysis (NAA) is one of the methods of choice. NAA is preferred for such heterogeneous catalysts because: (1) it requires minimal sample preparation; (2) NAA provides consistent and accurate results; and (3) in most cases results are obtained much quicker than competing methods. NAA is also used as a referee for the other elemental techniques when results do not fall within expected statistical uncertainties. However, at very high gold concentrations, applying NAA to determine the gold in a heterogeneous catalyst is more challenging than a routine NAA procedure. On the one hand, the neutron absorption cross section for gold is very high, resulting in significant self-shielding related errors. On the other hand, gold exhibits low energy resonance neutron absorptions. In this application the self-shielding minimization effort was handled more rigorously than the classic suppression of neutron flux within a specimen. This non-routine approach was used because: (1) for most applications, high accuracy, <3 % relative, is desired, (2) the low energy resonances of gold make its neutron reaction rate complex and (3) the TRIGA reactor flux profile used in this study contains both thermal and significant epithermal neutron fluxes. Accuracy and precision, using this new approach, are expected to improve from 15 % to better than 3 % relative uncertainty. This has been accomplished through a rigorous assessment of the observed effects of low energy resonance on the neutron flux spectral shape within the sample and designing an experiment to minimize the effects. (author)

  4. Deriving bio-equivalents from in vitro bioassays: assessment of existing uncertainties and strategies to improve accuracy and reporting.

    Science.gov (United States)

    Wagner, Martin; Vermeirssen, Etiënne L M; Buchinger, Sebastian; Behr, Maximilian; Magdeburg, Axel; Oehlmann, Jörg

    2013-08-01

    Bio-equivalents (e.g., 17β-estradiol or dioxin equivalents) are commonly employed to quantify the in vitro effects of complex human or environmental samples. However, there is no generally accepted data analysis strategy for estimating and reporting bio-equivalents. Therefore, the aims of the present study are to 1) identify common mathematical models for the derivation of bio-equivalents from the literature, 2) assess the ability of those models to correctly predict bio-equivalents, and 3) propose measures to reduce uncertainty in their calculation and reporting. We compiled a database of 234 publications that report bio-equivalents. From the database, we extracted 3 data analysis strategies commonly used to estimate bio-equivalents. These models are based on linear or nonlinear interpolation, and the comparison of effect concentrations (ECX ). To assess their accuracy, we employed simulated data sets in different scenarios. The results indicate that all models lead to a considerable misestimation of bio-equivalents if certain mathematical assumptions (e.g., goodness of fit, parallelism of dose-response curves) are violated. However, nonlinear interpolation is most suitable to predict bio-equivalents from single-point estimates. Regardless of the model, subsequent linear extrapolation of bio-equivalents generates additional inaccuracy if the prerequisite of parallel dose-response curves is not met. When all these factors are taken into consideration, it becomes clear that data analysis introduces considerable uncertainty in the derived bio-equivalents. To improve accuracy and transparency of bio-equivalents, we propose a novel data analysis strategy and a checklist for reporting Minimum Information about Bio-equivalent ESTimates (MIBEST).
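
    The nonlinear-interpolation strategy favoured above can be sketched as fitting a four-parameter logistic curve to the reference standard and inverting it at the sample's observed effect to obtain the bio-equivalent concentration. The dose-response points, the 4PL parameterisation, the fit bounds and the dilution factor below are illustrative assumptions, not data from the study.

    # Bio-equivalent via nonlinear interpolation on a fitted 4PL curve (sketch).
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, bottom, top, ec50, hill):
        return bottom + (top - bottom) / (1.0 + (conc / ec50) ** (-hill))

    # Reference dose-response (concentration in ng/L, effect in % of maximum).
    std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
    std_effect = np.array([4.0, 12.0, 35.0, 68.0, 90.0, 98.0])
    params, _ = curve_fit(four_pl, std_conc, std_effect, p0=[0, 100, 1.0, 1.0],
                          bounds=([-10, 50, 1e-3, 0.1], [20, 150, 100, 10]))

    def bioequivalent(sample_effect, dilution_factor=1.0):
        """Invert the fitted 4PL to get the equivalent standard concentration."""
        bottom, top, ec50, hill = params
        ratio = (top - bottom) / (sample_effect - bottom) - 1.0
        return ec50 * ratio ** (-1.0 / hill) * dilution_factor

    print("EEQ (ng/L):", round(bioequivalent(sample_effect=50.0, dilution_factor=2.0), 2))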

  5. Improved longitudinal length accuracy of gross tumor volume delineation with diffusion weighted magnetic resonance imaging for esophageal squamous cell carcinoma

    International Nuclear Information System (INIS)

    To analyze the longitudinal length accuracy of gross tumor volume (GTV) delineation with diffusion-weighted magnetic resonance imaging for esophageal squamous cell carcinoma (SCC). Forty-two patients with esophageal SCC who underwent radical surgery from December 2011 to June 2012 were analyzed. Routine computed tomography (CT), T2-weighted MRI and diffusion-weighted imaging (DWI) were performed before surgery. Diffusion-sensitive gradient b-values of 400, 600, and 800 s/mm² were used. Gross tumor volumes (GTVs) were delineated using CT, T2-weighted MRI and DWI at the different b-values. GTV longitudinal length measured with each imaging modality was compared with the pathologic lesion length to determine the most accurate modality. The CMS Xio radiotherapy planning system was used to fuse DWI scans and CT images to investigate the possibility of delineating the GTV on fused images. The differences between the GTV length according to CT or T2-weighted MRI and pathology were 3.63 ± 12.06 mm and 3.46 ± 11.41 mm, respectively. At diffusion-sensitive gradient b-values of 400, 600, and 800 s/mm², the differences between the GTV length using DWI and pathology were 0.73 ± 6.09 mm, −0.54 ± 6.03 mm and −1.58 ± 5.71 mm, respectively. DWI scans and CT images were fused accurately using the radiotherapy planning system, and GTV margins were depicted clearly on the fused images. DWI displays esophageal SCC lengths most precisely when compared with CT or conventional MRI. DWI scans fused with CT images can be used to improve the accuracy of GTV delineation in esophageal SCC.

  6. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Prashant, E-mail: p.patel@bham.ac.uk [University of Birmingham, School of Cancer Sciences, Vincent Drive (United Kingdom); Rangarajan, Balaji; Mangat, Kamarjit, E-mail: kamarjit.mangat@uhb.nhs.uk, E-mail: kamarjit.mangat@nhs.net [University Hospital Birmingham NHS Trust, Department of Radiology (United Kingdom)

    2015-08-15

    Purpose: Various methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease. Materials and Methods: Fifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique with the use of a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open within the stricture, the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively. Results: We report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication. Conclusion: We propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  7. Evaluation of the Diagnostic Accuracy of a New Dengue IgA Capture Assay (Platelia Dengue IgA Capture, Bio-Rad) for Dengue Infection Detection

    OpenAIRE

    Sophie De Decker; Muriel Vray; Viridiana Sistek; Bhety Labeau; Antoine Enfissi; Dominique Rousset; Séverine Matheus

    2015-01-01

    Considering the short lifetime of IgA antibodies in serum and the key advantages of antibody detection ELISAs in terms of sensitivity and specificity, Bio-Rad has just developed a new ELISA test based on the detection of specific anti-dengue IgA. This study has been carried out to assess the performance of this Platelia Dengue IgA Capture assay for dengue infection detection. A total of 184 well-characterized samples provided by the French Guiana NRC sera collection (Laboratory of Virology, I...

  8. Devices, systems, and methods for conducting assays with improved sensitivity using sedimentation

    Energy Technology Data Exchange (ETDEWEB)

    Schaff, Ulrich Y.; Koh, Chung-Yan; Sommer, Gregory J.

    2016-04-05

    Embodiments of the present invention are directed toward devices, systems, and methods for conducting assays using sedimentation. In one example, a method includes layering a mixture on a density medium, subjecting sedimentation particles in the mixture to sedimentation forces to cause the sedimentation particles to move to a detection area through the density medium, and detecting a target analyte in a detection region of the sedimentation channel. In some examples, the sedimentation particles and labeling agent may have like charges to reduce non-specific binding of the labeling agent and sedimentation particles. In some examples, the density medium is provided with a separation layer for stabilizing the assay during storage and operation. In some examples, the sedimentation channel may be provided with a generally flat sedimentation chamber for dispersing the particle pellet over a larger surface area.

  9. Dissolution Behavior and Content Uniformity of An Improved Tablet Formulation Assayed by Spectrofluorometric and RIA Methods

    Directory of Open Access Journals (Sweden)

    Morteza Rafiee-Tehrani

    1990-06-01

    Full Text Available Digoxin 0.25 mg tablets were manufactured by pregranulation of lactose-corn starch with 10% corn starch paste and deposition of solvent on the pregranules to make digoxin granules. In the preparation of tablet A, granules of lactose-corn starch were uniformly moistened with a 5% chloroform-ethanol solution (2:1 v/v) of digoxin by simple blending. Tablet B was produced by a spray granulation system in which the solvent was sprayed on the granules of lactose-corn starch using a laboratory-size fluidized bed drier (Uniglatt). The content uniformity and dissolution of both tablets were determined by spectrofluorometric and radioimmunoassay (RIA) methods modified for the assay of tablet solutions. One commercially available brand of digoxin tablet (C) was included in the dissolution study for comparison. The spectrofluorometric technique is based on the fluorometric measurement of the dehydration product of the cardiotonic steroid resulting from its reaction with hydrogen peroxide in concentrated hydrochloric acid. For the RIA method, the filtrate was diluted to a theoretical concentration of 2.5 ng/ml. Aliquots of this dilution were then assayed for digoxin content using a commercial digoxin 125I RIA kit. Results from both assay methods were extrapolated to the total tablet content and compared with the labeled amount of 20 individual tablets. All tablet assay results were within the USP standards for content uniformity and dissolution of individual tablets. The individual tablet deviations from the labeled amount were smaller with the RIA method than with the spectrofluorometric method. There was no significant difference between the release of digoxin from the three products, and thus it is suggested that procedure B could be easily applied to the manufacturing of digoxin tablets at industrial scale. It was also concluded that the RIA method could be used for digoxin tablet determination.

  10. Ultrasonication of pyrogenic microorganisms improves the detection of pyrogens in the Mono Mac 6 assay

    DEFF Research Database (Denmark)

    Moesby, Lise; Hansen, E W; Christensen, J D

    2000-01-01

    The monocytic cell line Mono Mac 6 is sensitive to pyrogens. When exposed to pyrogens, secretion of interleukin-6 is induced. However, some eukaryotic pyrogenic microorganisms are not detectable. The aim of this study is to introduce a pretreatment of samples to expand the detection range of the assay. The interleukin-6 inducing capacity of a broad spectrum of UV-killed and ultrasonicated microorganisms is examined in Mono Mac 6 cells. The interleukin-6 secretion is determined in a sandwich immunoassay (DELFIA). The Mono Mac 6 assay is able to detect UV-killed Bacillus subtilis, Staphylococcus [...]; ultrasonication of S. aureus results in a 100-fold increase in the interleukin-6 response. Even after ultrasonication, Streptococcus faecalis cannot be detected. Ultrasonication is an easy and simple method for expanding the detection range in the Mono Mac 6 assay.

  11. Diagnostic accuracy of the genotype MTBDRsl assay for rapid diagnosis of extensively drug-resistant tuberculosis in HIV-coinfected patients.

    Science.gov (United States)

    Kontsevaya, Irina; Ignatyeva, Olga; Nikolayevskyy, Vladyslav; Balabanova, Yanina; Kovalyov, Alexander; Kritsky, Andrey; Matskevich, Olesya; Drobniewski, Francis

    2013-01-01

    The Russian Federation is a high-tuberculosis (TB)-burden country with high rates of Mycobacterium tuberculosis multidrug resistance (MDR) and extensive drug resistance (XDR), especially in HIV-coinfected patients. Rapid and reliable diagnosis for detection of resistance to second-line drugs is vital for adequate patient management. We evaluated the performance of the GenoType MTBDRsl (Hain Lifescience GmbH, Nehren, Germany) assay on smear-positive sputum specimens obtained from 90 HIV-infected MDR TB patients from Russia. Test interpretability was over 98%. Specificity was over 86% for all drugs, while sensitivity varied, being the highest (71.4%) for capreomycin and lowest (9.4%) for kanamycin, probably due to the presence of mutations in the eis gene. The sensitivity of detection of XDR TB was 13.6%, increasing to 42.9% if kanamycin (not commonly used in Western Europe) was excluded. The assay is a highly specific screening tool for XDR detection in direct specimens from HIV-coinfected TB patients but cannot be used to rule out XDR TB. PMID:23152552

  12. Temporary shielding of hot spots in the drainage areas of cutaneous melanoma improves accuracy of lymphoscintigraphic sentinel lymph node diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Maza, S.; Valencia, R.; Geworski, L.; Zander, A.; Munz, D.L. [Clinic for Nuclear Medicine, University Hospital Charite, Humboldt University of Berlin, Schumannstrasse 20-21, 10117 Berlin (Germany); Draeger, E.; Winter, H.; Sterry, W. [Clinic for Dermatology, Venereology and Allergology, University Hospital Charite, Humboldt University of Berlin, Berlin (Germany)

    2002-10-01

    Detection of the ''true'' sentinel lymph nodes, permitting correct staging of regional lymph nodes, is essential for management and prognostic assessment in malignant melanoma. In this study, it was prospectively evaluated whether simple temporary shielding of hot spots in lymphatic drainage areas could improve the accuracy of sentinel lymph node diagnostics. In 100 consecutive malignant melanoma patients (45 women, 55 men; age 11-91 years), dynamic and static lymphoscintigraphy in various views was performed after strict intracutaneous application of technetium-99m nanocolloid (40-150 MBq; 0.05 ml/deposit) around the tumour (31 patients) or the biopsy scar (69 patients, safety distance 1 cm). The images were acquired with and without temporary lead shielding of the most prominent hot spots in the drainage area. In 33/100 patients, one or two additional sentinel lymph nodes that showed less tracer accumulation or were smaller (<1.5 cm) were detected after shielding. Four of these patients had metastases in the sentinel lymph nodes; the non-sentinel lymph nodes were tumour negative. In 3/100 patients, hot spots in the drainage area proved to be lymph vessels, lymph vessel intersections or lymph vessel ectasias after temporary shielding; hence, a node interpreted as a non-sentinel lymph node at first glance proved to be the real sentinel lymph node. In two of these patients, lymph node metastasis was histologically confirmed; the non-sentinel lymph nodes were tumour free. In 7/100 patients the exact course of lymph vessels could be mapped after shielding. In one of these patients, two additional sentinel lymph nodes (with metastasis) were detected. Overall, in 43/100 patients the temporary shielding yielded additional information, with sentinel lymph node metastases in 7%. In conclusion, when used in combination with dynamic acquisition in various views, temporary shielding of prominent hot spots in the drainage area of a malignant melanoma of the

  13. Validation of an improved enzyme-linked immunosorbent assay for the diagnosis of trypanosomal antibodies in Ghanaian cattle

    International Nuclear Information System (INIS)

    The validation of an enzyme-linked immunosorbent assay (Ab-ELISA) for the detection of antibodies to pathogenic trypanosomes in cattle is described. Two hundred known negative sera obtained from the tsetse-free zone of Dori (Burkina Faso) were analyzed using microtitre plates pre-coated with crude antigen lysates of Trypanosoma congolense and T. vivax. A pre-test optimization was carried out and a percent positivity (PP) of 50% was chosen (specificity: >82%) for assaying field sera. A total of 440 serum samples collected from cattle in areas of known and unknown disease prevalence were assayed. For all animals the packed red cell volume (PCV) was determined and the buffy coat technique (BCT) and blood smears were examined to detect trypanosomes at the species level. A comparison of the BCT and Ab-ELISA results showed there was a much higher prevalence of antibodies to both species than the parasite prevalence as shown by the BCT (10 fold). The rate of agreement between BCT-positive and Ab-ELISA-positive samples for both species was low (<10%). No conclusion could be drawn from this finding because of the low number of known BCT positive cases that were identified. There was a better, albeit highly variable, agreement between BCT-negative and Ab-ELISA-negative samples (30-70%). Proposals for further improvement of the Ab-ELISA and prospects for the use of the assay in the monitoring of trypanosomosis control in Ghana are discussed. (author)

  14. Reflections on Improving the Accuracy of Weather Forecasts%关于提高天气预报准确率的思考

    Institute of Scientific and Technical Information of China (English)

    李学欣

    2014-01-01

Weather forecasting is the most fundamental task in meteorological services. This paper analyzes the importance of weather forecast accuracy and the factors that affect it, and proposes several measures for improving the accuracy of weather forecasts, offered for reference.

  15. Non-predictive cueing improves accuracy judgments for voluntary and involuntary spatial and feature/shape attention independent of backward masking.

    OpenAIRE

    Pack, Weston David

    2013-01-01

    Many psychophysics investigations have implemented pre-cues to direct an observer's attention to a specific location or feature. There is controversy over the mechanisms of involuntary attention and whether perceptual or decision processes can enhance target detection and identification as measured by accuracy judgments. Through four main experiments, this dissertation research has indicated that both involuntary and voluntary attention improve target identification and localization accuracy ...

  16. Method to Improve the Accuracy of Electric Energy Measurement%提高电能计量精确度的方法探析

    Institute of Scientific and Technical Information of China (English)

    贾晓旺; 王雷

    2015-01-01

This paper analyzes several factors that influence the accuracy of electric energy metering and, on that basis, puts forward methods for improving the accuracy of electric energy measurement.

  17. Diagnostic Accuracy of Lateral Flow Urine LAM Assay for TB Screening of Adults with Advanced Immunosuppression Attending Routine HIV Care in South Africa.

    Directory of Open Access Journals (Sweden)

    Yasmeen Hanifa

Full Text Available We assessed the diagnostic accuracy of Determine TB-LAM (LF-LAM) to screen for tuberculosis among ambulatory adults established in HIV care in South Africa. A systematic sample of adults attending for HIV care, regardless of symptomatology, was enrolled in the XPHACTOR study, which tested a novel algorithm for prioritising investigation with Xpert MTB/RIF. In this substudy, restricted to participants with low enrolment CD4 counts, specificity exceeded 95% irrespective of diagnostic reference standard, CD4 stratum, or whether the grade 1 or grade 2 cut-off was used. Sensitivity of LF-LAM is too low to recommend as part of intensified case finding in ambulatory patients established in HIV care.

  18. Investigation of polymerase chain reaction assays to improve detection of bacterial involvement in bovine respiratory disease.

    Science.gov (United States)

    Bell, Colin J; Blackburn, Paul; Elliott, Mark; Patterson, Tony I A P; Ellison, Sean; Lahuerta-Marin, Angela; Ball, Hywel J

    2014-09-01

    Bovine respiratory disease (BRD) causes severe economic losses to the cattle farming industry worldwide. The major bacterial organisms contributing to the BRD complex are Mannheimia haemolytica, Histophilus somni, Mycoplasma bovis, Pasteurella multocida, and Trueperella pyogenes. The postmortem detection of these organisms in pneumonic lung tissue is generally conducted using standard culture-based techniques where the presence of therapeutic antibiotics in the tissue can inhibit bacterial isolation. In the current study, conventional and real-time polymerase chain reaction (PCR) assays were used to assess the prevalence of these 5 organisms in grossly pneumonic lung samples from 150 animals submitted for postmortem examination, and the results were compared with those obtained using culture techniques. Mannheimia haemolytica was detected in 51 cases (34%) by PCR and in 33 cases (22%) by culture, H. somni was detected in 35 cases (23.3%) by PCR and in 6 cases (4%) by culture, Myc. bovis was detected in 53 cases (35.3%) by PCR and in 29 cases (19.3%) by culture, P. multocida was detected in 50 cases (33.3%) by PCR and in 31 cases (20.7%) by culture, and T. pyogenes was detected in 42 cases (28%) by PCR and in 31 cases (20.7%) by culture, with all differences being statistically significant. The PCR assays indicated positive results for 111 cases (74%) whereas 82 cases (54.6%) were culture positive. The PCR assays have demonstrated a significantly higher rate of detection of all 5 organisms in cases of pneumonia in cattle in Northern Ireland than was detected by current standard procedures.
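
    Because PCR and culture were run on the same 150 lungs, the natural way to test whether the paired detection rates differ is McNemar's test on the discordant pairs. The sketch below is a hedged illustration only: the study reports marginal totals, not discordant-pair counts, so the counts used here are assumed values for M. haemolytica (51/150 PCR-positive vs. 33/150 culture-positive, assuming every culture-positive was also PCR-positive).

```python
# Exact McNemar test on paired PCR/culture detection results.
# The discordant-pair counts are assumptions, not figures reported in the study.
from scipy.stats import binomtest

def mcnemar_exact(b, c):
    """b = PCR-positive/culture-negative pairs, c = PCR-negative/culture-positive pairs."""
    return binomtest(b, b + c, 0.5).pvalue

# Hypothetical M. haemolytica counts: 51 PCR+ vs 33 culture+ out of 150,
# assuming all culture-positives were also PCR-positive (b = 18, c = 0).
print(f"exact McNemar p-value: {mcnemar_exact(18, 0):.2e}")
```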

  19. Novel molecular and computational methods improve the accuracy of insertion site analysis in Sleeping Beauty-induced tumors.

    Directory of Open Access Journals (Sweden)

    Benjamin T Brett

    Full Text Available The recent development of the Sleeping Beauty (SB system has led to the development of novel mouse models of cancer. Unlike spontaneous models, SB causes cancer through the action of mutagenic transposons that are mobilized in the genomes of somatic cells to induce mutations in cancer genes. While previous methods have successfully identified many transposon-tagged mutations in SB-induced tumors, limitations in DNA sequencing technology have prevented a comprehensive analysis of large tumor cohorts. Here we describe a novel method for producing genetic profiles of SB-induced tumors using Illumina sequencing. This method has dramatically increased the number of transposon-induced mutations identified in each tumor sample to reveal a level of genetic complexity much greater than previously appreciated. In addition, Illumina sequencing has allowed us to more precisely determine the depth of sequencing required to obtain a reproducible signature of transposon-induced mutations within tumor samples. The use of Illumina sequencing to characterize SB-induced tumors should significantly reduce sampling error that undoubtedly occurs using previous sequencing methods. As a consequence, the improved accuracy and precision provided by this method will allow candidate cancer genes to be identified with greater confidence. Overall, this method will facilitate ongoing efforts to decipher the genetic complexity of the human cancer genome by providing more accurate comparative information from Sleeping Beauty models of cancer.

  20. Does gadolinium-based contrast material improve diagnostic accuracy of local invasion in rectal cancer MRI? A multireader study.

    Science.gov (United States)

    Gollub, Marc J; Lakhman, Yulia; McGinty, Katrina; Weiser, Martin R; Sohn, Michael; Zheng, Junting; Shia, Jinru

    2015-02-01

    OBJECTIVE. The purpose of this study was to compare reader accuracy and agreement on rectal MRI with and without gadolinium administration in the detection of T4 rectal cancer. MATERIALS AND METHODS. In this study, two radiologists and one fellow independently interpreted all posttreatment MRI studies for patients with locally advanced or recurrent rectal cancer using unenhanced images alone or combined with contrast-enhanced images, with a minimum interval of 4 weeks. Readers evaluated involvement of surrounding structures on a 5-point scale and were blinded to pathology and disease stage. Sensitivity, specificity, negative predictive value, positive predictive value, and AUC were calculated and kappa statistics were used to describe interreader agreement. RESULTS. Seventy-two patients (38 men and 34 women) with a mean age of 61 years (range, 32-86 years) were evaluated. Fifteen patients had 32 organs invaded. Global AUCs without and with gadolinium administration were 0.79 and 0.77, 0.91 and 0.86, and 0.83 and 0.78 for readers 1, 2, and 3, respectively. AUCs before and after gadolinium administration were similar. Kappa values before and after gadolinium administration for pairs of readers ranged from 0.5 to 0.7. CONCLUSION. On the basis of pathology as a reference standard, the use of gadolinium during rectal MRI did not significantly improve radiologists' agreement or ability to detect T4 disease.
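
    For readers who want to reproduce this kind of analysis, the snippet below shows how the two summary statistics used here, AUC from a 5-point confidence score and interreader kappa, can be computed; the reader scores and pathology labels are invented placeholders, not data from this study.

```python
# Illustrative AUC and Cohen's kappa computation for two readers scoring a
# 5-point confidence scale against a pathology reference; all values are made up.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

truth   = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])   # 1 = T4 invasion at pathology
reader1 = np.array([5, 2, 1, 4, 3, 5, 1, 2, 4, 2])   # 5-point confidence scores
reader2 = np.array([4, 1, 1, 5, 2, 4, 2, 2, 5, 1])

print("reader 1 AUC:", round(roc_auc_score(truth, reader1), 2))
print("reader 1 vs reader 2 kappa (scores dichotomised at >= 3):",
      round(cohen_kappa_score(reader1 >= 3, reader2 >= 3), 2))
```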

  1. Improving the accuracy of whole genome prediction for complex traits using the results of genome wide association studies.

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    Full Text Available Utilizing the whole genomic variation of complex traits to predict the yet-to-be observed phenotypes or unobserved genetic values via whole genome prediction (WGP and to infer the underlying genetic architecture via genome wide association study (GWAS is an interesting and fast developing area in the context of human disease studies as well as in animal and plant breeding. Though thousands of significant loci for several species were detected via GWAS in the past decade, they were not used directly to improve WGP due to lack of proper models. Here, we propose a generalized way of building trait-specific genomic relationship matrices which can exploit GWAS results in WGP via a best linear unbiased prediction (BLUP model for which we suggest the name BLUP|GA. Results from two illustrative examples show that using already existing GWAS results from public databases in BLUP|GA improved the accuracy of WGP for two out of the three model traits in a dairy cattle data set, and for nine out of the 11 traits in a rice diversity data set, compared to the reference methods GBLUP and BayesB. While BLUP|GA outperforms BayesB, its required computing time is comparable to GBLUP. Further simulation results suggest that accounting for publicly available GWAS results is potentially more useful for WGP utilizing smaller data sets and/or traits of low heritability, depending on the genetic architecture of the trait under consideration. To our knowledge, this is the first study incorporating public GWAS results formally into the standard GBLUP model and we think that the BLUP|GA approach deserves further investigations in animal breeding, plant breeding as well as human genetics.
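
    As a rough sketch of the underlying idea, the snippet below builds a VanRaden-style genomic relationship matrix and a trait-specific counterpart in which each SNP is weighted by a stand-in GWAS result. The weighting scheme and all data are illustrative assumptions; the exact BLUP|GA construction in the paper is not reproduced.

```python
# Sketch of a standard vs. trait-specific (GWAS-weighted) genomic relationship
# matrix. Genotypes and SNP weights are randomly generated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_snp = 50, 500
M = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # genotypes coded 0/1/2
p = M.mean(axis=0) / 2.0                                    # allele frequencies
Z = M - 2.0 * p                                             # centred genotypes

w = rng.chisquare(1.0, n_snp)                               # stand-in GWAS-derived weights
w = w / w.sum() * n_snp                                     # scale so the mean weight is 1

denom = 2.0 * np.sum(p * (1.0 - p))
G_std = Z @ Z.T / denom                                     # standard VanRaden G
G_w = (Z * w) @ Z.T / denom                                 # trait-specific weighted G
print(G_std[:2, :2])
print(G_w[:2, :2])
```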

  2. Combination of physical exercise and adenosine improves accuracy of automatic calculation of stress LVEF in gated SPECT using QGS software

    International Nuclear Information System (INIS)

Combining exercise and adenosine during the stress phase of myocardial perfusion imaging (MPI) is known to reduce adverse effects and improve image quality. The aim of this study was to assess whether it can also improve the automatic calculation of left ventricular ejection fraction (LVEF) by the QGS software package during the stress phase of Gated SPECT. One hundred patients who had stress Gated SPECT were retrospectively included in this study. Gated data of those who had adenosine only (50 patients = group A) were compared with those obtained in another group of 50 patients who had added bicycle exercise (group B). All had an identical image acquisition protocol using 99mTc-tetrofosmin. Clinical adverse effects, changes in blood pressure (BP), heart rate (HR), and ECG were monitored. Visual assessment of subdiaphragmatic uptake and the accuracy of automatic regions of interest (ROIs) drawn by the software were noted. Regions of interest that involved subdiaphragmatic uptake and resulted in low LVEF were manually adjusted to include the left ventricle only, and the frequency of manual adjustment was noted. No significant difference was noted in age, sex, baseline BP and HR between groups A and B. Adverse effects occurred less often in group B compared to group A (12% vs. 24%, p = 0.118). Maximum HR and BP achieved during stress were significantly higher in group B compared to group A (p = 0.025 and p = 0.001, respectively). The number of patients who had faulty ROIs and low LVEF, and who needed manual adjustment of ROIs, was higher in group A compared to group B (16% vs. 6%, p = 0.025). The values of LVEF showed significant improvement following manual adjustment of ROIs, increasing from a mean of 19.63 ± 15.96 to 62.13 ± 7.55 (p = 0.0001) and from 17.33 ± 9.5 to 49.67 ± 7.7 (p = 0.0014) in groups A and B respectively. The addition of exercise to adenosine significantly improves the automatic calculation of LVEF by QGS software during Gated SPECT and reduces the need for manual adjustment of ROIs.

  3. Improved assay for determination of busulfan by liquid chromatography using postcolumn photolysis.

    Science.gov (United States)

    Jenke, Andreas; Renner, Ulf; Schuler, Ulrich S; Wauer, Sylvia; Leopold, Traugott; Schleyer, Eberhard; Ehninger, Gerhard

    2004-06-01

A highly sensitive and time-reduced HPLC assay for the quantitative analysis of busulfan in plasma and aqueous samples is described. The assay is based on a precolumn derivatization of busulfan to 1,4-diiodobutane and UV detection of iodide ions generated by a postcolumn photochemical dissociation of the derivative. The extraction and derivatization were carried out in a one-pot reaction without any solid phase extraction, and the assay is therefore suitable for high throughput analysis. Quantification was performed by using 1,5-pentanediol-bis-(methanesulfonate), a homologue of busulfan, as an internal standard. Linearity was demonstrated for concentrations from 50 to 10,000 ng/ml. The limit of detection was found at 10 ng/ml. Precision is indicated by an intra-day variability of 2.81% and an inter-day variability of 6.61% for aqueous samples, and 2.93% and 5.76%, respectively, for plasma samples. The recovery of busulfan in plasma was more than 95%. No coelution with metabolites of busulfan or other drugs used in cancer therapy was found. The method was developed for measurements of busulfan in aqueous or plasma samples and applied in therapeutic drug monitoring of busulfan. PMID:15113551
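
    The linearity and precision figures above come from a standard calibration-curve validation; the snippet below sketches that computation with invented concentration/response pairs and recovery replicates, purely to illustrate how such numbers are derived.

```python
# Illustrative calibration-curve linearity and intra-day precision check.
# Concentrations, responses and recovery replicates are made-up values.
import numpy as np

conc = np.array([50, 100, 500, 1000, 5000, 10000], dtype=float)  # ng/ml
resp = np.array([0.9, 2.1, 10.3, 19.8, 101.5, 198.7])            # peak-area ratio vs. IS

slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)

recoveries = np.array([99.1, 102.0, 97.5, 101.2])                 # % recovery replicates
cv_intra = 100 * recoveries.std(ddof=1) / recoveries.mean()

print(f"slope={slope:.5f}, intercept={intercept:.3f}, R^2={r2:.4f}")
print(f"intra-day CV = {cv_intra:.2f}%")
```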

  4. New methodological improvements in the Microtox® solid phase assay.

    Science.gov (United States)

    Burga Pérez, Karen F; Charlatchka, Rayna; Sahli, Leila; Férard, Jean-François

    2012-01-01

    The classic Microtox® solid phase assay (MSPA) based on the inhibition of light production of the marine bacteria recently renamed Aliivibrio fischeri suffers from various bias and interferences, mainly due to physico-chemical characteristics of the tested solid phase. To precisely assess ecotoxicity of sediments, we have developed an alternative method, named Microtox® leachate phase assay (MLPA), in order to measure the action of dissolved pollutants in the aqueous phase. Two hypotheses were formulated to explain the observed difference between MSPA and MLPA results: a real ecotoxicity of the solid phase or the fixation of bacteria to fine particles and/or organic matter. To estimate the latter, flow cytometry analyses were performed with two fluorochromes (known for their ability to stain bacterial DNA), allowing correction of MSPA measurements and generation of new (corrected) IC50. Comparison of results of MLPA with the new IC50 MSPA allows differentiating real ecotoxic and fixation effect in classic MSPA especially for samples with high amount of fines and/or organic matter. PMID:21962521

  5. Contrast-enhanced small-animal PET/CT in cancer research: strong improvement of diagnostic accuracy without significant alteration of quantitative accuracy and NEMA NU 4–2008 image quality parameters

    OpenAIRE

    Lasnon, Charline; Quak, Elske; Briand, Mélanie; Gu, Zheng; Louis, Marie-Hélène; Aide, Nicolas

    2013-01-01

    Background The use of iodinated contrast media in small-animal positron emission tomography (PET)/computed tomography (CT) could improve anatomic referencing and tumor delineation but may introduce inaccuracies in the attenuation correction of the PET images. This study evaluated the diagnostic performance and accuracy of quantitative values in contrast-enhanced small-animal PET/CT (CEPET/CT) as compared to unenhanced small animal PET/CT (UEPET/CT). Methods Firstly, a NEMA NU 4–2008 phantom (...

  6. Improving the Accuracy of the Water Surface Cover Type in the 30 m FROM-GLC Product

    Directory of Open Access Journals (Sweden)

    Luyan Ji

    2015-10-01

    Full Text Available The finer resolution observation and monitoring of the global land cover (FROM-GLC product makes it the first 30 m resolution global land cover product from which one can extract a global water mask. However, two major types of misclassification exist with this product due to spectral similarity and spectral mixing. Mountain and cloud shadows are often incorrectly classified as water since they both have very low reflectance, while more water pixels at the boundaries of water bodies tend to be misclassified as land. In this paper, we aim to improve the accuracy of the 30 m FROM-GLC water mask by addressing those two types of errors. For the first, we adopt an object-based method by computing the topographical feature, spectral feature, and geometrical relation with cloud for every water object in the FROM-GLC water mask, and set specific rules to determine whether a water object is misclassified. For the second, we perform a local spectral unmixing using a two-endmember linear mixing model for each pixel falling in the water-land boundary zone that is 8-neighborhood connected to water-land boundary pixels. Those pixels with big enough water fractions are determined as water. The procedure is automatic. Experimental results show that the total area of inland water has been decreased by 15.83% in the new global water mask compared with the FROM-GLC water mask. Specifically, more than 30% of the FROM-GLC water objects have been relabeled as shadows, and nearly 8% of land pixels in the water-land boundary zone have been relabeled as water, whereas, on the contrary, fewer than 2% of water pixels in the same zone have been relabeled as land. As a result, both the user’s accuracy and Kappa coefficient of the new water mask (UA = 88.39%, Kappa = 0.87 have been substantially increased compared with those of the FROM-GLC product (UA = 81.97%, Kappa = 0.81.
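
    The boundary-pixel step described above reduces to solving a two-endmember linear mixing model for the water fraction. The snippet below shows that closed-form least-squares solution; the endmember spectra, the pixel spectrum and the 0.5 decision threshold are assumptions for illustration, not values from the paper.

```python
# Two-endmember linear unmixing of a water-land boundary pixel:
# x = f * e_water + (1 - f) * e_land, solved for the water fraction f.
import numpy as np

e_water = np.array([0.06, 0.04, 0.03, 0.02, 0.01, 0.01])  # low reflectance in all bands
e_land  = np.array([0.08, 0.10, 0.12, 0.30, 0.35, 0.25])  # vegetation/soil-like spectrum
x       = np.array([0.07, 0.07, 0.07, 0.15, 0.17, 0.12])  # mixed boundary pixel (assumed)

# x - e_land = f * (e_water - e_land)  ->  closed-form least-squares estimate of f
d = e_water - e_land
f = float(np.dot(x - e_land, d) / np.dot(d, d))
f = min(max(f, 0.0), 1.0)                                  # clamp to a physical fraction
is_water = f >= 0.5                                        # decision threshold is an assumption
print(f"water fraction = {f:.2f}, classified as water: {is_water}")
```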

  7. Improved rapid molecular diagnosis of multidrug-resistant tuberculosis using a new reverse hybridization assay, REBA MTB-MDR

    Science.gov (United States)

    Bang, Hyeeun; Park, Sangjung; Hwang, Joohwan; Jin, Hyunwoo; Cho, Eunjin; Kim, Dae Yoon; Song, Taeksun; Shamputa, Isdore Chola; Via, Laura E.; Barry, Clifton E.; Cho, Sang-Nae

    2011-01-01

    Rapid diagnosis of multidrug-resistant tuberculosis (MDR-TB) is essential for the prompt initiation of effective second-line therapy to improve treatment outcome and limit transmission of this obstinate disease. A variety of molecular methods that enable the rapid detection of mutations implicated in MDR-TB have been developed. The sensitivity of the methods is dependent, in principle, on the repertoire of mutations being detected, which is typically limited to mutations in the genes rpoB, katG and the promoter region of inhA. In this study, a new reverse hybridization assay, REBA MTB-MDR (M&D), that probes mutations in the oxyR–ahpC intergenic region, in addition to those in rpoB, katG and the inhA promoter region, was evaluated. A set of 240 Mycobacterium tuberculosis clinical isolates from patients receiving retreatment regimens was subjected to conventional phenotypic drug-susceptibility testing (DST) and the REBA MTB-MDR assay. The nucleotide sequences of the loci known to be involved in drug resistance were determined for comparison. In brief, the results showed that the REBA MTB-MDR assay efficiently recognized nucleotide changes in the oxyR–ahpC intergenic region as well as those in rpoB, katG and the inhA promoter region with higher sensitivity, resulting in an 81.0 % detection rate for isoniazid resistance. Inclusion of the oxyR–ahpC intergenic region in the REBA MTB-MDR assay improved the overall sensitivity of molecular DST for MDR-TB from 73.1 to 79.9 %. PMID:21596910

  8. Measures of improving engineering budget compiling accuracy%提高工程预算编制准确性的措施

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

The thesis analyzes the necessity of improving the accuracy of building engineering budget compilation, studies the factors that influence it, and puts forward measures for improving it, such as raising the level of compilation, accurately calculating the bill of quantities (BOQ), keeping abreast of market conditions, and accurately determining the construction technologies to be used.

  9. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Directory of Open Access Journals (Sweden)

    Bailing Liu

    2015-04-01

    Full Text Available An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS is presented. It is composed of a visual sensor, an angle sensor and a series robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. Due to the higher accuracy of the multi-sensor, two efficient data fusion approaches, the Kalman filter (KF and multi-sensor optimal information fusion algorithm (MOIFA, are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically by 38%~78% with the multi-sensor data fusion. Comparing with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, increase of the motion constraints and the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 x 0.8 x 1 ~ 2 x 0.8 x 1  m in the field of view (FOV is indicated by the experimental results.
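
    As a minimal illustration of the Kalman-filter part of the fusion, the sketch below fuses two noisy readings of a single pose coordinate with sequential scalar updates; the measurement noise levels and readings are assumed values, and the paper's full MCMS/MOIFA formulation is not reproduced.

```python
# Scalar Kalman-filter fusion of two noisy measurements of one pose component.
# Readings and noise standard deviations are illustrative assumptions.
def kf_update(x, P, z, R):
    """One Kalman measurement update for a static scalar state."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P  # updated estimate and variance

x, P = 0.0, 1e6                          # vague prior on the coordinate (mm, mm^2)
measurements = [
    (102.3, 0.5 ** 2),                   # visual-sensor reading, sigma = 0.5 mm (assumed)
    (101.8, 0.2 ** 2),                   # second sensor reading, sigma = 0.2 mm (assumed)
]
for z, R in measurements:
    x, P = kf_update(x, P, z, R)
print(f"fused estimate: {x:.2f} mm, std: {P ** 0.5:.2f} mm")
```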

  10. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a series robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. Due to the higher accuracy of the multi-sensor, two efficient data fusion approaches, the Kalman filter (KF) and multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically by 38%∼78% with the multi-sensor data fusion. Comparing with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, increase of the motion constraints and the complicated procedures of the traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067

  11. msCentipede: Modeling Heterogeneity across Genomic Sites and Replicates Improves Accuracy in the Inference of Transcription Factor Binding.

    Science.gov (United States)

    Raj, Anil; Shim, Heejung; Gilad, Yoav; Pritchard, Jonathan K; Stephens, Matthew

    2015-01-01

    Understanding global gene regulation depends critically on accurate annotation of regulatory elements that are functional in a given cell type. CENTIPEDE, a powerful, probabilistic framework for identifying transcription factor binding sites from tissue-specific DNase I cleavage patterns and genomic sequence content, leverages the hypersensitivity of factor-bound chromatin and the information in the DNase I spatial cleavage profile characteristic of each DNA binding protein to accurately infer functional factor binding sites. However, the model for the spatial profile in this framework fails to account for the substantial variation in the DNase I cleavage profiles across different binding sites. Neither does it account for variation in the profiles at the same binding site across multiple replicate DNase I experiments, which are increasingly available. In this work, we introduce new methods, based on multi-scale models for inhomogeneous Poisson processes, to account for such variation in DNase I cleavage patterns both within and across binding sites. These models account for the spatial structure in the heterogeneity in DNase I cleavage patterns for each factor. Using DNase-seq measurements assayed in a lymphoblastoid cell line, we demonstrate the improved performance of this model for several transcription factors by comparing against the Chip-seq peaks for those factors. Finally, we explore the effects of DNase I sequence bias on inference of factor binding using a simple extension to our framework that allows for a more flexible background model. The proposed model can also be easily applied to paired-end ATAC-seq and DNase-seq data. msCentipede, a Python implementation of our algorithm, is available at http://rajanil.github.io/msCentipede.

  12. msCentipede: Modeling Heterogeneity across Genomic Sites and Replicates Improves Accuracy in the Inference of Transcription Factor Binding.

    Directory of Open Access Journals (Sweden)

    Anil Raj

    Full Text Available Understanding global gene regulation depends critically on accurate annotation of regulatory elements that are functional in a given cell type. CENTIPEDE, a powerful, probabilistic framework for identifying transcription factor binding sites from tissue-specific DNase I cleavage patterns and genomic sequence content, leverages the hypersensitivity of factor-bound chromatin and the information in the DNase I spatial cleavage profile characteristic of each DNA binding protein to accurately infer functional factor binding sites. However, the model for the spatial profile in this framework fails to account for the substantial variation in the DNase I cleavage profiles across different binding sites. Neither does it account for variation in the profiles at the same binding site across multiple replicate DNase I experiments, which are increasingly available. In this work, we introduce new methods, based on multi-scale models for inhomogeneous Poisson processes, to account for such variation in DNase I cleavage patterns both within and across binding sites. These models account for the spatial structure in the heterogeneity in DNase I cleavage patterns for each factor. Using DNase-seq measurements assayed in a lymphoblastoid cell line, we demonstrate the improved performance of this model for several transcription factors by comparing against the Chip-seq peaks for those factors. Finally, we explore the effects of DNase I sequence bias on inference of factor binding using a simple extension to our framework that allows for a more flexible background model. The proposed model can also be easily applied to paired-end ATAC-seq and DNase-seq data. msCentipede, a Python implementation of our algorithm, is available at http://rajanil.github.io/msCentipede.

  13. Improving the efficiency and accuracy of individual tree crown delineation from high-density LiDAR data

    Science.gov (United States)

    Hu, Baoxin; Li, Jili; Jing, Linhai; Judah, Aaron

    2014-02-01

    Canopy height model (CHM) derived from LiDAR (Light Detection And Ranging) data has been commonly used to generate segments of individual tree crowns for forest inventory and sustainable management. However, branches, tree crowns, and tree clusters usually have similar shapes and overlapping sizes, which cause current individual tree crown delineation methods to work less effectively on closed canopy, deciduous or mixedwood forests. In addition, the potential of 3-dimentional (3-D) LiDAR data is not fully realized by CHM-oriented methods. In this study, a framework was proposed to take advantage of the simplicity of a CHM-oriented method, detailed vertical structures of tree crowns represented in high-density LiDAR data, and any prior knowledge of tree crowns. The efficiency and accuracy of ITC delineation can be improved. This framework consists of five steps: (1) determination of dominant crown sizes; (2) generation of initial tree segments using a multi-scale segmentation method; (3) identification of “problematic” segments; (4) determination of the number of trees based on the 3-D LiDAR points in each of the identified segments; and (5) refinement of the “problematic” segments by splitting and merging operations. The proposed framework was efficient, since the detailed examination of 3-D LiDAR points was not applied to all initial segments, but only to those needed further evaluations based on prior knowledge. It was also demonstrated to be effective based on an experiment on natural forests in Ontario, Canada. The proposed framework and specific methods yielded crown maps having a good consistency with manual and visual interpretation. The automated method correctly delineated about 74% and 72% of the tree crowns in two plots with mixedwood and deciduous trees, respectively.
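
    A compact way to see the CHM-oriented starting point of such a framework is a treetop/watershed segmentation, sketched below on a synthetic two-crown canopy height model. The grid, crown shapes and height threshold are illustrative assumptions; the paper's multi-scale segmentation and 3-D point-cloud refinement are not reproduced here.

```python
# Treetop detection and marker-controlled watershed on a synthetic CHM.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic CHM: two overlapping Gaussian crowns on a 60 x 60 m grid (1 m cells).
yy, xx = np.mgrid[0:60, 0:60]
chm = 18 * np.exp(-((xx - 22) ** 2 + (yy - 30) ** 2) / 40.0) \
    + 15 * np.exp(-((xx - 38) ** 2 + (yy - 32) ** 2) / 30.0)

canopy = (chm > 2.0).astype(int)                  # ignore ground / low vegetation
tops = peak_local_max(chm, min_distance=5, labels=canopy)
markers = np.zeros(chm.shape, dtype=int)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

crowns = watershed(-chm, markers, mask=canopy.astype(bool))  # one label per crown
print("detected crowns:", crowns.max())
```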

  14. High accuracy mass spectrometry analysis as a tool to verify and improve gene annotation using Mycobacterium tuberculosis as an example

    Directory of Open Access Journals (Sweden)

    Prasad Swati

    2008-07-01

    Full Text Available Abstract Background While the genomic annotations of diverse lineages of the Mycobacterium tuberculosis complex are available, divergences between gene prediction methods are still a challenge for unbiased protein dataset generation. M. tuberculosis gene annotation is an example, where the most used datasets from two independent institutions (Sanger Institute and Institute of Genomic Research-TIGR differ up to 12% in the number of annotated open reading frames, and 46% of the genes contained in both annotations have different start codons. Such differences emphasize the importance of the identification of the sequence of protein products to validate each gene annotation including its sequence coding area. Results With this objective, we submitted a culture filtrate sample from M. tuberculosis to a high-accuracy LTQ-Orbitrap mass spectrometer analysis and applied refined N-terminal prediction to perform comparison of two gene annotations. From a total of 449 proteins identified from the MS data, we validated 35 tryptic peptides that were specific to one of the two datasets, representing 24 different proteins. From those, 5 proteins were only annotated in the Sanger database. In the remaining proteins, the observed differences were due to differences in annotation of transcriptional start sites. Conclusion Our results indicate that, even in a less complex sample likely to represent only 10% of the bacterial proteome, we were still able to detect major differences between different gene annotation approaches. This gives hope that high-throughput proteomics techniques can be used to improve and validate gene annotations, and in particular for verification of high-throughput, automatic gene annotations.

  15. A Method for Accuracy of Genetic Evaluation by Utilization of Canadian Genetic Evaluation Information to Improve Heilongjiang Holstein Herds

    Institute of Scientific and Technical Information of China (English)

    DING Ke-wei; TAKEO Kayaba

    2004-01-01

The objectives of this study were to set up a new genetic evaluation procedure to predict the breeding values of Holstein herds in Heilongjiang Province of China for milk and fat production by utilizing Canadian pedigree and genetic evaluation information and to compare the breeding values of the sires from different countries. The data used for evaluating young sires for the Chinese Holstein population consisted of records selected from 21 herds in Heilongjiang Province. The first lactation records of 2 496 daughters collected between 1989 and 2000 were analyzed. A single-trait animal model including a fixed herd-year effect and random animal and residual effects was used, utilizing Canadian pedigree and genetic evaluation information of 5 126 sires released from the Canadian Dairy Network in August 2000. The BLUP procedure was used to evaluate all cattle in this study, and the Estimated Breeding Values (EBV) for milk and fat production of 6 697 cattle (including 673 sires and 6 024 cows) were predicted. The genetic levels of the top 100 sires originating from different countries were compared. Unlike the BLUP procedure currently used in conjunction with the single-trait sire model in Heilongjiang Province of China, the genetic evaluation procedure used in this study can not only evaluate sires and cows simultaneously but also increases the accuracy of evaluation by using the relationships and genetic values of the Canadian-evaluated sires, which have more daughters. The results showed that the new procedure was useful for genetic evaluation of dairy herds, and the comparison of the breeding values of the sires imported from different countries showed that a significant genetic improvement has been achieved for milk production of the Heilongjiang Holstein dairy population by importing sires from foreign countries, especially from the United States due to the higher breeding values.
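
    The single-trait animal model referred to above is solved through Henderson's mixed model equations; the toy sketch below shows that machinery on four records, with an identity matrix standing in for the pedigree-derived inverse of the relationship matrix A. The records and variance ratio are placeholders, not data from this study.

```python
# Toy single-trait animal model y = Xb + Zu + e solved via Henderson's MME:
# [X'X  X'Z; Z'X  Z'Z + A^-1 * lambda] [b; u] = [X'y; Z'y]
import numpy as np

y = np.array([6800., 7200., 6500., 7500.])                # first-lactation milk yields (assumed)
X = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])    # two herd-year classes
Z = np.eye(4)                                             # one record per animal
Ainv = np.eye(4)                                          # placeholder for pedigree A^-1
lam = 2.0                                                 # sigma_e^2 / sigma_u^2 (assumed)

lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + Ainv * lam]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
print("herd-year effects:", sol[:2].round(1))
print("estimated breeding values:", sol[2:].round(1))
```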

  16. Maximum information with minimum complexity from a coincidence assay system

    International Nuclear Information System (INIS)

    Nuclear assays based on coincidence measurements can yield more useful information than is usually derived from them. The additional information can be used to improve assay accuracy and reliability with only a modest increase in the complexity of the electronics. A particular three-channel coincidence system that has had practical application is analyzed as an example. (author)

  17. Plasmodium serine hydroxymethyltransferase as a potential anti-malarial target: inhibition studies using improved methods for enzyme production and assay

    Directory of Open Access Journals (Sweden)

    Sopitthummakhun Kittipat

    2012-06-01

    Full Text Available Abstract Background There is an urgent need for the discovery of new anti-malarial drugs. Thus, it is essential to explore different potential new targets that are unique to the parasite or that are required for its viability in order to develop new interventions for treating the disease. Plasmodium serine hydroxymethyltransferase (SHMT, an enzyme in the dTMP synthesis cycle, is a potential target for such new drugs, but convenient methods for producing and assaying the enzyme are still lacking, hampering the ability to screen inhibitors. Methods Production of recombinant Plasmodium falciparum SHMT (PfSHMT and Plasmodium vivax SHMT (PvSHMT, using auto-induction media, were compared to those using the conventional Luria Bertani medium with isopropyl thio-β-D-galactoside (LB-IPTG induction media. Plasmodium SHMT activity, kinetic parameters, and response to inhibitors were measured spectrophotometrically by coupling the reaction to that of 5,10-methylenetetrahydrofolate dehydrogenase (MTHFD. The identity of the intermediate formed upon inactivation of Plasmodium SHMTs by thiosemicarbazide was investigated by spectrophotometry, high performance liquid chromatography (HPLC, and liquid chromatography-mass spectrometry (LC-MS. The active site environment of Plasmodium SHMT was probed based on changes in the fluorescence emission spectrum upon addition of amino acids and folate. Results Auto-induction media resulted in a two to three-fold higher yield of Pf- and PvSHMT (7.38 and 29.29 mg/L compared to that produced in cells induced in LB-IPTG media. A convenient spectrophotometric activity assay coupling Plasmodium SHMT and MTHFD gave similar kinetic parameters to those previously obtained from the anaerobic assay coupling SHMT and 5,10-methylenetetrahydrofolate reductase (MTHFR; thus demonstrating the validity of the new assay procedure. The improved method was adopted to screen for Plasmodium SHMT inhibitors, of which some were originally designed
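
    Kinetic parameters from a coupled spectrophotometric assay of this kind are typically obtained by fitting initial rates to the Michaelis-Menten equation; the sketch below illustrates such a fit with invented substrate/rate data, not measurements from this study.

```python
# Michaelis-Menten fit of initial-rate data from a coupled enzyme assay.
# Substrate concentrations and rates are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])   # substrate (mM)
v = np.array([0.8, 1.4, 2.6, 3.6, 4.4, 5.0, 5.3])     # initial rate (umol/min/mg)

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[5.0, 0.5])
print(f"Vmax = {vmax:.2f} umol/min/mg, Km = {km:.2f} mM")
```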

  18. Sensitivity and specificity of the empirical lymphocyte genome sensitivity (LGS) assay: implications for improving cancer diagnostics.

    Science.gov (United States)

    Anderson, Diana; Najafzadeh, Mojgan; Gopalan, Rajendran; Ghaderi, Nader; Scally, Andrew J; Britland, Stephen T; Jacobs, Badie K; Reynolds, P Dominic; Davies, Justin; Wright, Andrew L; Al-Ghazal, Shariff; Sharpe, David; Denyer, Morgan C

    2014-10-01

    Lymphocyte responses from 208 individuals: 20 with melanoma, 34 with colon cancer, and 4 with lung cancer (58), 18 with suspected melanoma, 28 with polyposis, and 10 with COPD (56), and 94 healthy volunteers were examined. The natural logarithm of the Olive tail moment (OTM) was plotted for exposure to UVA through 5 different agar depths (100 cell measurements/depth) and analyzed using a repeated measures regression model. Responses of patients with cancer plateaued after treatment with different UVA intensities, but returned toward control values for healthy volunteers. For precancerous conditions and suspected cancers, intermediate responses occurred. ROC analysis of mean log OTMs, for cancers plus precancerous/suspect conditions vs. controls, cancer vs. precancerous/suspect conditions plus controls, and cancer vs. controls, gave areas under the curve of 0.87, 0.89, and 0.93, respectively (P<0.001). Optimization allowed test sensitivity or specificity to approach 100% with acceptable complementary measures. This modified comet assay could represent a stand-alone test or an adjunct to other investigative procedures for detecting cancer.

  19. Nondestructive assays of 55-gallon drums containing uranium and transuranic waste using passive-active shufflers

    International Nuclear Information System (INIS)

    A passive-active neutron shuffler for 55-gal. drums of waste has been characterized using more than 1500 active and 500 passive assays on drums with 28 different matrices. Flux-monitor corrections have been improved, the assay accuracy with localized fissile materials in a drum has been characterized, and improvements have been suggested. Minimum detectable masses for 235U with active assays and 240Pueff with passive assays are presented for the various amounts of moderators and absorbers studied

  20. An improved behavioural assay demonstrates that ultrasound vocalizations constitute a reliable indicator of chronic cancer pain and neuropathic pain

    Directory of Open Access Journals (Sweden)

    Selvaraj Deepitha

    2010-03-01

    Full Text Available Abstract Background On-going pain is one of the most debilitating symptoms associated with a variety of chronic pain disorders. An understanding of mechanisms underlying on-going pain, i.e. stimulus-independent pain has been hampered so far by a lack of behavioural parameters which enable studying it in experimental animals. Ultrasound vocalizations (USVs have been proposed to correlate with pain evoked by an acute activation of nociceptors. However, literature on the utility of USVs as an indicator of chronic pain is very controversial. A majority of these inconsistencies arise from parameters confounding behavioural experiments, which include novelty, fear and stress due to restrain, amongst others. Results We have developed an improved assay which overcomes these confounding factors and enables studying USVs in freely moving mice repetitively over several weeks. Using this improved assay, we report here that USVs increase significantly in mice with bone metastases-induced cancer pain or neuropathic pain for several weeks, in comparison to sham-treated mice. Importantly, analgesic drugs which are known to alleviate tumour pain or neuropathic pain in human patients significantly reduce USVs as well as mechanical allodynia in corresponding mouse models. Conclusions We show that studying USVs and mechanical allodynia in the same cohort of mice enables comparing the temporal progression of on-going pain (i.e. stimulus-independent pain and stimulus-evoked pain in these clinically highly-relevant forms of chronic pain.

  1. Evaluation and improvement in the accuracy of a charge-coupled-device-based pyrometer for temperature field measurements of continuous casting billets

    Science.gov (United States)

    Bai, Haicheng; Xie, Zhi; Zhang, Yuzhong; Hu, Zhenwei

    2013-06-01

    This paper presents a radiometric high-temperature field measurement model based on a charge-coupled-device (CCD). According to the model, an intelligent CCD pyrometer with a digital signal processor as the core is developed and its non-uniformity correction algorithm for reducing the differences in accuracy between individual pixel sensors is established. By means of self-adaptive adjustment for the light-integration time, the dynamic range of the CCD is extended and its accuracy in low-temperature range is improved. The non-uniformity correction algorithm effectively reduces the accuracy differences between different pixel sensors. The performance of the system is evaluated through a blackbody furnace and an integrating sphere, the results of which show that the dynamic range of 400 K is obtained and the accuracy in low temperature range is increased by 7 times compared with the traditional method based on the fixed light-integration time. In addition, the differences of accuracy between the on-axis pixel and the most peripheral pixels are decreased from 19.1 K to 2.8 K. Therefore, this CCD pyrometer ensures that the measuring results of all pixels tend to be equal-accuracy distribution across the entire measuring ranges. This pyrometric system has been successfully applied to the temperature field measurements in continuous casting billets.
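
    The per-pixel correction described above is, in essence, a two-point non-uniformity correction: a gain and an offset are derived for each pixel from two uniform calibration exposures. The sketch below illustrates the generic idea on simulated data; it is not the pyrometer's actual algorithm.

```python
# Generic two-point non-uniformity correction on a simulated sensor array.
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 6)
gain_true = 1.0 + 0.05 * rng.standard_normal(shape)   # pixel-to-pixel gain spread
offset_true = 2.0 * rng.standard_normal(shape)        # pixel-to-pixel offset spread

def raw(radiance):
    """Simulated, non-uniform CCD response to a uniform radiance level."""
    return gain_true * radiance + offset_true

low, high = raw(100.0), raw(800.0)                    # two uniform calibration exposures
gain = (800.0 - 100.0) / (high - low)                 # per-pixel correction gain
offset = 100.0 - gain * low                           # per-pixel correction offset

corrected = gain * raw(400.0) + offset                # should be ~400 for every pixel
print("max residual non-uniformity:", float(np.max(np.abs(corrected - 400.0))))
```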

  2. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    Science.gov (United States)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and though several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions to mimic samples of three different ages, to be run as internal

  3. Evaluation of different operational strategies for lithium ion battery systems connected to a wind turbine for primary frequency regulation and wind power forecast accuracy improvement

    DEFF Research Database (Denmark)

    Swierczynski, Maciej Jozef; Stroe, Daniel Ioan; Stan, Ana-Irina;

    2012-01-01

High penetration levels of variable wind energy sources can cause problems with their grid integration. Energy storage systems connected to wind turbine/wind power plants can improve predictability of the wind power production and provide ancillary services to the grid. This paper investigates the economics of different operational strategies for Li-ion systems connected to wind turbines for wind power forecast accuracy improvement and primary frequency regulation.

  4. Short communication: Improving accuracy of Jersey genomic evaluations in the United States and Denmark by sharing reference population bulls

    Science.gov (United States)

    The effect on prediction accuracy for Jersey genomic evaluations in Denmark and the United States from using larger reference populations was assessed. Each country contributed genotypes from 1,157 Jersey bulls to the reference population of the other. Eight of 9 traits analyzed by Denmark (milk, fa...

  5. The Impact of Implicit Tasks on Improving the Learners' Writing in Terms of Autonomy and Grammatical Accuracy

    Science.gov (United States)

    Nazari, Nastaran

    2014-01-01

    This paper aims to explore the Iranian EFL (English as a Foreign Language) learners' ability to gain grammatical accuracy in their writing by noticing and correcting their own grammatical errors. Recent literature in language acquisition has emphasized the role of implicit tasks in encouraging learners to develop autonomous language learning…

  6. An Improved Droop Control Method for DC Microgrids Based on Low Bandwidth Communication with DC Bus Voltage Restoration and Enhanced Current Sharing Accuracy

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Guerrero, Josep M.; Sun, Kai;

    2014-01-01

First, due to the line resistance in a droop-controlled dc microgrid, the output voltage of each converter cannot be exactly the same, so the output current sharing accuracy is degraded. Second, the DC bus voltage deviation increases with the load due to the droop action. In this paper, in order to improve the performance...... information between converter units. The droop controller is employed to achieve independent operation, and average voltage and current controllers are used in each converter to simultaneously enhance the current sharing accuracy and restore the dc bus voltage. All of the controllers are realized locally...
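
    A quick numeric sketch of the first problem named above, the current-sharing error caused by unequal line resistances under plain droop control, is given below; the reference voltage, virtual resistance, line resistances and load are assumed values chosen only for illustration.

```python
# Current sharing under plain droop control with unequal line resistances.
# Each converter obeys v_i = v_ref - r_d * i_i; the bus sees v_i - r_line_i * i_i.
import numpy as np

v_ref, r_d = 48.0, 0.2              # droop reference voltage (V) and virtual resistance (ohm)
r_line = np.array([0.05, 0.15])     # unequal cable resistances to the bus (ohm)
i_load = 20.0                       # total load current (A)

# Each branch: i_i = (v_ref - v_bus) / (r_d + r_line_i), with sum(i_i) = i_load
g = 1.0 / (r_d + r_line)
v_bus = v_ref - i_load / g.sum()
i = (v_ref - v_bus) * g
print(f"bus voltage: {v_bus:.2f} V (deviation from 48 V due to droop)")
print(f"branch currents: {i.round(2)} A (equal sharing would be 10 A each)")
```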

  7. Development of C-reactive protein certified reference material NMIJ CRM 6201-b: optimization of a hydrolysis process to improve the accuracy of amino acid analysis.

    Science.gov (United States)

    Kato, Megumi; Kinumi, Tomoya; Yoshioka, Mariko; Goto, Mari; Fujii, Shin-Ichiro; Takatsu, Akiko

    2015-04-01

    To standardize C-reactive protein (CRP) assays, the National Metrology Institute of Japan (NMIJ) has developed a C-reactive protein solution certified reference material, CRM 6201-b, which is intended for use as a primary reference material to enable the SI-traceable measurement of CRP. This study describes the development process of CRM 6201-b. As a candidate material of the CRM, recombinant human CRP solution was selected because of its higher purity and homogeneity than the purified material from human serum. Gel filtration chromatography was used to examine the homogeneity and stability of the present CRM. The total protein concentration of CRP in the present CRM was determined by amino acid analysis coupled to isotope-dilution mass spectrometry (IDMS-AAA). To improve the accuracy of IDMS-AAA, we optimized the hydrolysis process by examining the effect of parameters such as the volume of protein samples taken for hydrolysis, the procedure of sample preparation prior to the hydrolysis, hydrolysis temperature, and hydrolysis time. Under optimized conditions, we conducted two independent approaches in which the following independent hydrolysis and liquid chromatography-isotope dilution mass spectrometry (LC-IDMS) were combined: one was vapor-phase acid hydrolysis (130 °C, 24 h) and hydrophilic interaction liquid chromatography-mass spectrometry (HILIC-MS) method, and the other was microwave-assisted liquid-phase acid hydrolysis (150 °C, 3 h) and pre-column derivatization liquid chromatography-tandem mass spectrometry (LC-MS/MS) method. The quantitative values of the two different amino acid analyses were in agreement within their uncertainties. The certified value was the weighted mean of the results of the two methods. Uncertainties from the value-assignment method, between-method variance, homogeneity, long-term stability, and short-term stability were taken into account in evaluating the uncertainty for a certified value. The certified value and the

  8. Improving the measurement of longitudinal change in renal function: automated detection of changes in laboratory creatinine assay

    Directory of Open Access Journals (Sweden)

    Norman Poh

    2015-04-01

Full Text Available Introduction: Renal function is reported using the estimates of glomerular filtration rate (eGFR). However, eGFR values are recorded without reference to the particular serum creatinine (SCr) assays used to derive them, and newer assays were introduced at different time points across the laboratories in the United Kingdom. These changes may cause systematic bias in eGFR reported in routinely collected data, even though laboratory-reported eGFR values have a correction factor applied. Design: An algorithm to detect changes in SCr that in turn affect the eGFR calculation method was developed. It compares the mapping of SCr values on to eGFR values across a time series of paired eGFR and SCr measurements. Setting: Routinely collected primary care data from 20,000 people with the richest renal function data from the quality improvement in chronic kidney disease trial. Results: The algorithm identified a change in eGFR calculation method in 114 (90%) of the 127 included practices. This change was identified in 4736 (23.7%) patient time series analysed. This change in calibration method was found to cause a significant step change in the reported eGFR values, producing a systematic bias. The eGFR values could not be recalibrated by applying the Modification of Diet in Renal Disease equation to the laboratory reported SCr values. Conclusions: This algorithm can identify laboratory changes in eGFR calculation methods and changes in SCr assay. Failure to account for these changes may misconstrue renal function changes over time. Researchers using routine eGFR data should account for these effects.
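
    The core of such an algorithm can be pictured as recomputing eGFR from the stored SCr values with one fixed equation and watching for a step change in the ratio of reported to recomputed eGFR. The sketch below uses the four-variable MDRD equation with invented data; the 5% jump threshold is an assumption, not the rule used in the paper.

```python
# Flag a possible assay/calculation change by comparing reported eGFR with
# eGFR recomputed from SCr via the four-variable MDRD equation.
import numpy as np

def mdrd_egfr(scr_umol_l, age, female):
    """Four-variable MDRD eGFR (IDMS-traceable, 175 coefficient), SCr in umol/L."""
    scr_mgdl = scr_umol_l / 88.4
    egfr = 175.0 * scr_mgdl ** -1.154 * age ** -0.203
    return egfr * (0.742 if female else 1.0)

scr      = np.array([110, 112, 108, 111, 109, 107, 110, 112.])  # umol/L over time (assumed)
reported = np.array([58, 57, 59, 58, 63, 64, 63, 62.])          # lab-reported eGFR (assumed)
recomputed = mdrd_egfr(scr, age=65, female=False)

ratio = reported / recomputed
jumps = np.abs(np.diff(ratio))
change_points = np.where(jumps > 0.05)[0] + 1   # 5% step threshold is an assumption
print("possible assay/calculation changes at time indices:", change_points)
```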

  9. An improved fluorescent substrate for assaying soluble and membrane-associated ADAM family member activities.

    Science.gov (United States)

    Moss, Marcia L; Minond, Dmitriy; Yoneyama, Toshie; Hansen, Hinrich P; Vujanovic, Nikola; Rasmussen, Fred H

    2016-08-15

    A fluorescent resonance energy transfer substrate with improved sensitivity for ADAM17, -10, and -9 (where ADAM represents a disintegrin and metalloproteinase) has been designed. The new substrate, Dabcyl-Pro-Arg-Ala-Ala-Ala-Homophe-Thr-Ser-Pro-Lys(FAM)-NH2, has specificity constants of 6.3 (±0.3) × 10(4) M(-1) s(-1) and 2.4 (±0.3) × 10(3) M(-1) s(-1) for ADAM17 and ADAM10, respectively. The substrate is more sensitive than widely used peptides based on the precursor tumor necrosis factor-alpha (TNF-alpha) cleavage site, PEPDAB010 or Dabcyl-Ser-Pro-Leu-Ala-Gln-Ala-Val-Arg-Ser-Ser-Lys(FAM)-NH2 and Mca-Pro-Leu-Ala-Gln-Ala-Val-Dpa-Arg-Ser-Ser-Arg-NH2. ADAM9 also processes the new peptide more than 18-fold better than the TNF-alpha-based substrates. The new substrate has a unique selectivity profile because it is processed less efficiently by ADAM8 and MMP1, -2, -3, -8, -9, -12, and -14. This substrate provides a unique tool in which to assess ADAM17, -10, and -9 activities. PMID:27177841

  10. SPH accuracy improvement through the combination of a quasi-Lagrangian shifting transport velocity and consistent ALE formalisms

    Science.gov (United States)

    Oger, G.; Marrone, S.; Le Touzé, D.; de Leffe, M.

    2016-05-01

    This paper addresses the accuracy of the weakly-compressible SPH method. Interpolation defects due to the presence of anisotropic particle structures inherent to the Lagrangian character of the Smoothed Particle Hydrodynamics (SPH) method are highlighted. To avoid the appearance of these structures which are detrimental to the quality of the simulations, a specific transport velocity is introduced and its inclusion within an Arbitrary Lagrangian Eulerian (ALE) formalism is described. Unlike most of existing particle disordering/shifting methods, this formalism avoids the formation of these anisotropic structures while a full consistency with the original Euler or Navier-Stokes equations is maintained. The gain in accuracy, convergence and numerical diffusion of this formalism is shown and discussed through its application to various challenging test cases.

  11. 肝素生物测定法(血浆法)方法学改进研究%Methodology Improvement for Biological Assay of Heparin

    Institute of Scientific and Technical Information of China (English)

    吴超权; 周智; 覃君良; 方珍文

    2015-01-01

Objective: To improve the plasma method for the biological assay of heparin given in the current edition of the China Pharmacopeia. Methods: The national standard for heparin and one batch of heparin sodium injection were chosen, and the coagulation times of the standard heparin and the heparin injection were measured on a platelet aggregation and coagulation analyzer, using rabbit plasma diluted 1:1 with sodium chloride injection, according to the method in appendix XII B of the China Pharmacopeia Volume II (2010 edition). Results: Compared with the known target potency, the recovery rate of the standard was in the range of 98.88%-101.86%, while the recovery rate of the heparin sodium injection was in the range of 102.3%-103.6%. Conclusion: The improved method is easy to perform, has an objective end-point, and shows good accuracy and repeatability, with confidence limits meeting the requirement; it is therefore recommended as the improved method for the biological assay of heparin potency.

  12. Use of Low-Level Sensor Data to Improve the Accuracy of Bluetooth-Based Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Christensen, Lars Tørholm; Krishnan, Rajesh;

    2013-01-01

    Bluetooth sensors have a large detection zone compared with other static vehicle reidentification systems. A larger detection zone increases the probability of detecting a Bluetooth-enabled device in a fast-moving vehicle, yet increases the probability of multiple detection events being triggered...... by Global Positioning System technology. The results showed that the accuracy of the combined and peak-peak methods was higher than that of the other methods and that the employment of the first detection event did not necessarily yield the best travel time estimation....

  13. Simulation Guided Navigation in cranio-maxillo-facial surgery: a new approach to improve intraoperative three-dimensional accuracy and reproducibility during surgery.

    OpenAIRE

    Bianchi, Alberto

    2014-01-01

    This PhD thesis, "Simulation Guided Navigation in cranio-maxillo-facial surgery: a new approach to improve intraoperative three-dimensional accuracy and reproducibility during surgery", centres on the various applications of a method introduced by our School in 2010; its theme is the growing interest in the reproducibility of surgical plans through methods that, in whole or in part, use intraoperative navigation. It was introduced in Ortho...

  14. Improving the accuracy of the structure prediction of the third hypervariable loop of the heavy chains of antibodies.

    KAUST Repository

    Messih, Mario Abdel

    2014-06-13

    MOTIVATION: Antibodies are able to recognize a wide range of antigens through their complementary determining regions formed by six hypervariable loops. Predicting the 3D structure of these loops is essential for the analysis and reengineering of novel antibodies with enhanced affinity and specificity. The canonical structure model allows high accuracy prediction for five of the loops. The third loop of the heavy chain, H3, is the hardest to predict because of its diversity in structure, length and sequence composition. RESULTS: We describe a method, based on the Random Forest automatic learning technique, to select structural templates for H3 loops among a dataset of candidates. These can be used to predict the structure of the loop with a higher accuracy than that achieved by any of the presently available methods. The method also has the advantage of being extremely fast and returning a reliable estimate of the model quality. AVAILABILITY AND IMPLEMENTATION: The source code is freely available at http://www.biocomputing.it/H3Loopred/ .
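
    As a rough illustration of the template-selection idea above, the sketch below (Python, scikit-learn) trains a Random Forest regressor on per-candidate descriptors and picks the template with the lowest predicted RMSD. The descriptor layout, the placeholder training arrays and the pick_template helper are illustrative assumptions, not the published method's actual features or code.

        # A minimal sketch, assuming each candidate H3 template is described by a small
        # numeric feature vector and that backbone RMSD to the true loop is known for
        # the training set; all data below are random placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X_train = rng.random((500, 3))        # e.g. length difference, sequence identity, cluster score
        y_train = rng.random(500) * 4.0       # backbone RMSD of each training template (angstroms)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        def pick_template(candidate_features):
            """Return the index and predicted RMSD of the best-ranked candidate template."""
            predicted = model.predict(candidate_features)
            best = int(np.argmin(predicted))
            return best, float(predicted[best])

        print(pick_template(rng.random((10, 3))))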

  15. Improving the accuracy of simulation of radiation-reaction effects with implicit Runge-Kutta-Nyström methods.

    Science.gov (United States)

    Elkina, N V; Fedotov, A M; Herzing, C; Ruhl, H

    2014-05-01

    The Landau-Lifshitz equation provides an efficient way to account for the effects of radiation reaction without acquiring the nonphysical solutions typical for the Lorentz-Abraham-Dirac equation. We solve the Landau-Lifshitz equation in its covariant four-vector form in order to control both the energy and momentum of radiating particles. Our study reveals that implicit time-symmetric collocation methods of the Runge-Kutta-Nyström type are superior in accuracy and better at maintaining the mass-shell condition than their explicit counterparts. We carry out an extensive study of numerical accuracy by comparing the analytical and numerical solutions of the Landau-Lifshitz equation. Finally, we present the results of the simulation of particle scattering by a focused laser pulse. Due to radiation reaction, particles are less capable of penetrating into the focal region compared to the case where radiation reaction is neglected. Our results are important for designing forthcoming experiments with high intensity laser fields. PMID:25353922

  16. Development of Phage Lysin LysA2 for Use in Improved Purity Assays for Live Biotherapeutic Products.

    Science.gov (United States)

    Dreher-Lesnick, Sheila M; Schreier, Jeremy E; Stibitz, Scott

    2015-12-01

    Live biotherapeutic products (LBPs), commonly referred to as probiotics, are typically preparations of live bacteria, such as Lactobacillus and Bifidobacterium species that are considered normal human commensals. Popular interest in probiotics has been increasing with general health benefits being attributed to their consumption, but there is also growing interest in evaluating such products for treatment of specific diseases. While over-the-counter probiotics are generally viewed as very safe, at least in healthy individuals, it must be remembered that clinical studies to assess these products may be done in individuals whose defenses are compromised, such as through a disease process, immunosuppressive clinical treatment, or an immature or aging immune system. One of the major safety criteria for LBPs used in clinical studies is microbial purity, i.e., the absence of extraneous, undesirable microorganisms. The main goal of this project is to develop recombinant phage lysins as reagents for improved purity assays for LBPs. Phage lysins are hydrolytic enzymes containing a cell binding domain that provides specificity and a catalytic domain responsible for lysis and killing. Our approach is to use recombinant phage lysins to selectively kill target product bacteria, which when used for purity assays will allow for outgrowth of potential contaminants under non-selective conditions, thus allowing an unbiased assessment of the presence of contaminants. To develop our approach, we used LysA2, a phage lysin with reported activity against a broad range of Lactobacillus species. We report the lytic profile of a non-tagged recombinant LysA2 against Lactobacillus strains in our collection. We also present a proof-of-concept experiment, showing that addition of partially purified LysA2 to a culture of Lactobacillus jensenii (L. jensenii) spiked with low numbers of Escherichia coli (E. coli) or Staphylococcus aureus (S. aureus) effectively eliminates or knocks down L. jensenii.

  17. Relative accuracy evaluation.

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. One necessary task for data quality management is therefore to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for this kind of accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the relative accuracy of the results. We also propose a method to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms.

  18. Relative accuracy evaluation.

    Science.gov (United States)

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. One necessary task for data quality management is therefore to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for this kind of accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the relative accuracy of the results. We also propose a method to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
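
    For records 17 and 18 above, the core notion of relative accuracy can be illustrated as the precision and recall of a query result against a verified reference set. The Python sketch below is a minimal illustration under that reading; the published framework is broader (statistical metrics, data updates, functional dependencies), and the function name and example row sets are assumptions.

        # Minimal sketch: relative accuracy of a query result as precision/recall
        # against rows known to be correct; toy row identifiers only.
        def relative_accuracy(query_result, reference):
            result, truth = set(query_result), set(reference)
            true_positives = len(result & truth)
            precision = true_positives / len(result) if result else 1.0
            recall = true_positives / len(truth) if truth else 1.0
            return precision, recall

        # Rows returned by a query vs. rows verified as correct.
        print(relative_accuracy({"r1", "r2", "r5"}, {"r1", "r2", "r3"}))  # (0.666..., 0.666...)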

  19. Improvement of the Mutation-Discrimination Threshold for Rare Point Mutations by a Separation-Free Ligase Detection Reaction Assay Based on Fluorescence Resonance Energy Transfer.

    Science.gov (United States)

    Hagihara, Kenta; Tsukagoshi, Kazuhiko; Nakajima, Chinami; Esaki, Shinsuke; Hashimoto, Masahiko

    2016-01-01

    We previously developed a separation-free ligase detection reaction assay based on fluorescence resonance energy transfer from a donor quantum dot to an acceptor fluorescent dye. This assay could successfully detect one cancer mutation among 10 wild-type templates. In the current study, the mutation-discrimination threshold was improved by one order of magnitude by replacing the original acceptor dye (Alexa Fluor 647) with another fluorescent dye (Cyanine 5) that was spectrally similar but more fluorescent. PMID:26960620

  20. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    International Nuclear Information System (INIS)

    Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. (paper)
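
    The Dice similarity coefficient used in the evaluation above has a simple closed form, 2|A∩B|/(|A|+|B|); the short Python sketch below computes it for two binary masks. The toy arrays are placeholders, not study data.

        # Minimal sketch of the Dice similarity coefficient for two binary masks.
        import numpy as np

        def dice(mask_a, mask_b):
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            denom = a.sum() + b.sum()
            return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

        automatic = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])   # toy automatic contour
        edited    = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])   # toy edited contour
        print(dice(automatic, edited))   # 0.857...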

  1. Improving the accuracy of the structure prediction of the third hypervariable loop of the heavy chains of antibodies

    DEFF Research Database (Denmark)

    Messih, M. A.; Lepore, R.; Marcatili, Paolo;

    2014-01-01

    Motivation: Antibodies are able to recognize a wide range of antigens through their complementary determining regions formed by six hypervariable loops. Predicting the 3D structure of these loops is essential for the analysis and reengineering of novel antibodies with enhanced affinity and specificity. The canonical structure model allows high accuracy prediction for five of the loops. The third loop of the heavy chain, H3, is the hardest to predict because of its diversity in structure, length and sequence composition. Results: We describe a method, based on the Random Forest automatic learning technique, to select structural templates for H3 loops among a dataset of candidates. These can be used to predict the structure of the loop with a higher accuracy than that achieved by any of the presently available methods. The method also has the advantage of being extremely fast and returning a reliable estimate of the model quality.

  2. The Sum of Tumour-to-Brain Ratios Improves the Accuracy of Diagnosing Gliomas Using 18F-FET PET.

    Directory of Open Access Journals (Sweden)

    Bogdan Malkowski

    Gliomas are common brain tumours, but obtaining tissue for definitive diagnosis can be difficult. There is, therefore, interest in the use of non-invasive methods to diagnose and grade the disease. Although positron emission tomography (PET) with 18F-fluoroethyltyrosine (18F-FET) can be used to differentiate between low-grade (LGG) and high-grade (HGG) gliomas, the optimal parameters to measure and their cut-points have yet to be established. We therefore assessed the value of single and dual time-point acquisition of 18F-FET PET parameters to differentiate between primary LGGs (n = 22) and HGGs (n = 24). PET examination was considered positive for glioma if the metabolic activity was 1.6-times higher than that of background (contralateral brain), and maximum tissue-brain ratios (TBRmax) were calculated 10 and 60 min after isotope administration, with their sums and differences calculated from the individual time-point values. Using a threshold-based method, the overall sensitivity of PET was 97%. Several analysed parameters were significantly different between LGGs and HGGs. However, in a receiver operating characteristic analysis, TBR sum had the best diagnostic accuracy of 87%, with sensitivity, specificity, and positive and negative predictive values of 100%, 72.7%, 80%, and 100%, respectively. 18F-FET PET is valuable for the non-invasive determination of glioma grade, especially when dual time-point metrics are used. TBR sum shows the greatest accuracy, sensitivity, and negative predictive value for tumour grade differentiation and is a simple method to implement. However, the cut-off may differ between institutions and calibration strategies would be useful.
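
    The dual time-point metrics described above reduce to simple ratios; the sketch below shows the arithmetic for TBRmax and the TBR sum with a hypothetical cut-off. The uptake values, the classify_grade helper and the cut-off of 4.0 are illustrative assumptions; the study itself stresses that cut-offs may differ between institutions.

        # Minimal sketch of the tumour-to-brain ratio (TBR) arithmetic.
        def tbr(tumour_suv_max, background_suv):
            return tumour_suv_max / background_suv

        def classify_grade(tbr_10min, tbr_60min, cutoff_sum=4.0):
            """Flag a lesion as HGG-suspect when the TBR sum exceeds the cut-off."""
            tbr_sum = tbr_10min + tbr_60min
            return ("HGG-suspect" if tbr_sum > cutoff_sum else "LGG-suspect"), tbr_sum

        t10 = tbr(3.2, 1.0)   # hypothetical uptake 10 min after injection
        t60 = tbr(2.4, 1.0)   # hypothetical uptake 60 min after injection
        print(classify_grade(t10, t60))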

  3. Improving optical fiber current sensor accuracy using artificial neural networks to compensate temperature and minor non-ideal effects

    Science.gov (United States)

    Zimmermann, Antonio C.; Besen, Marcio; Encinas, Leonardo S.; Nicolodi, Rosane

    2011-05-01

    This article presents a practical signal processing methodology, based on Artificial Neural Networks (ANN), to process the measurement signals of typical Fiber Optic Current Sensors (FOCS), achieving higher accuracy through temperature and non-linearity compensation. The proposed approach addresses the primary problems of FOCS, mainly when it is difficult to determine all error sources present in the physical phenomenon or when the measurement equation becomes too nonlinear to be applied over a wide measurement range. The great benefit of the ANN is that it yields a transfer function for the measurement system that takes into account all unknowns, even those from unwanted or unknown effects, providing a compensated output after the ANN training session. The ANN training is treated as a black box, based on experimental data, in which the transfer function of the measurement system, its unknowns and its non-idealities are processed and compensated at once, giving a fast and robust alternative to the theoretical FOCS method. A real FOCS system was built, and the signals acquired from the photo-detectors were processed both by the Faraday's-law formulas and by the ANN method, giving measurement results for both signal processing strategies. The coil temperature measurements are also included in the ANN signal processing. To compare these results, a current-measuring instrument standard was used together with a metrological calibration procedure. Preliminary results from a variable-temperature experiment show the higher accuracy, better than 0.2% maximum error, of the ANN methodology, making it a quick and robust way to handle the difficulties of non-ideality compensation in FOCS.
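
    The black-box compensation idea described above amounts to fitting a regressor from the raw sensor signals plus coil temperature to the reference current recorded during calibration. The sketch below uses scikit-learn's MLPRegressor as a stand-in ANN; the feature layout, network size and placeholder calibration data are assumptions, not the authors' implementation.

        # Minimal sketch: train an ANN on calibration sweeps, then use it to output a
        # temperature- and nonlinearity-compensated current. Placeholder data only.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X_cal = rng.random((2000, 3))     # columns: detector signal 1, detector signal 2, coil temperature
        i_ref = rng.random(2000)          # reference-standard current during calibration (A)

        ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
        ann.fit(X_cal, i_ref)

        def compensated_current(sig1, sig2, temp_c):
            return float(ann.predict([[sig1, sig2, temp_c]])[0])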

  4. A pilot study to determine whether using a lightweight, wearable micro-camera improves dietary assessment accuracy and offers information on macronutrients and eating rate.

    Science.gov (United States)

    Pettitt, Claire; Liu, Jindong; Kwasnicki, Richard M; Yang, Guang-Zhong; Preston, Thomas; Frost, Gary

    2016-01-14

    A major limitation in nutritional science is the lack of understanding of the nutritional intake of free-living people. There is an inverse relationship between accuracy of reporting of energy intake by all current nutritional methodologies and body weight. In this pilot study we aim to explore whether using a novel lightweight, wearable micro-camera improves the accuracy of dietary intake assessment. Doubly labelled water (DLW) was used to estimate energy expenditure and intake over a 14-d period, over which time participants (n 6) completed a food diary and wore a micro-camera on 2 of the days. Comparisons were made between the estimated energy intake from the reported food diary alone and together with the images from the micro-camera recordings. There was an average daily deficit of 3912 kJ using food diaries to estimate energy intake compared with estimated energy expenditure from DLW (P=0·0118), representing an under-reporting rate of 34 %. Analysis of food diaries alone showed a significant deficit in estimated daily energy intake compared with estimated intake from food diary analysis with images from the micro-camera recordings (405 kJ). Use of the micro-camera images in conjunction with food diaries improves the accuracy of dietary assessment and provides valuable information on macronutrient intake and eating rate. There is a need to develop this recording technique to remove user and assessor bias.

  5. Quality improvement process to assess tattoo alignment, set-up accuracy and isocentre reproducibility in pelvic radiotherapy patients

    OpenAIRE

    Elsner, Kelly; Francis, Kate; Hruby, George; Roderick, Stephanie

    2014-01-01

    Introduction This quality improvement study tested three methods of tattoo alignment and isocentre definition to investigate if aligning lateral tattoos to minimise pitch, roll and yaw decreased set-up error, and if defining the isocentre using the lateral tattoos for cranio-caudal (CC) position improved isocentre reproducibility. The study population was patients receiving curative external beam radiotherapy (EBRT) for prostate cancer. The results are applicable to all supine pelvic EBRT pat...

  6. A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method

    International Nuclear Information System (INIS)

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and by light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution data set was subsequently created. The correction method gave pass ratios in the dose-difference evaluation more than 10% better than when the correction was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical intensity-modulated radiation therapy (IMRT) dose verification if EBT2 is required to achieve accuracy similar to that of EDR2. The red/blue correction method may improve accuracy, but we recommend using it carefully and understanding the characteristics of EBT2 for both the red-only and the red/blue approaches. (author)

  7. Orthology inference in nonmodel organisms using transcriptomes and low-coverage genomes: improving accuracy and matrix occupancy for phylogenomics.

    Science.gov (United States)

    Yang, Ya; Smith, Stephen A

    2014-11-01

    Orthology inference is central to phylogenomic analyses. Phylogenomic data sets commonly include transcriptomes and low-coverage genomes that are incomplete and contain errors and isoforms. These properties can severely violate the underlying assumptions of orthology inference with existing heuristics. We present a procedure that uses phylogenies for both homology and orthology assignment. The procedure first uses similarity scores to infer putative homologs that are then aligned, constructed into phylogenies, and pruned of spurious branches caused by deep paralogs, misassembly, frameshifts, or recombination. These final homologs are then used to identify orthologs. We explore four alternative tree-based orthology inference approaches, of which two are new. These accommodate gene and genome duplications as well as gene tree discordance. We demonstrate these methods in three published data sets including the grape family, Hymenoptera, and millipedes with divergence times ranging from approximately 100 to over 400 Ma. The procedure significantly increased the completeness and accuracy of the inferred homologs and orthologs. We also found that data sets that are more recently diverged and/or include more high-coverage genomes had more complete sets of orthologs. To explicitly evaluate sources of conflicting phylogenetic signals, we applied serial jackknife analyses of gene regions keeping each locus intact. The methods described here can scale to over 100 taxa. They have been implemented in python with independent scripts for each step, making it easy to modify or incorporate them into existing pipelines. All scripts are available from https://bitbucket.org/yangya/phylogenomic_dataset_construction. PMID:25158799

  8. Reduction of Motion Artifacts and Improvement of R Peak Detecting Accuracy Using Adjacent Non-Intrusive ECG Sensors

    Science.gov (United States)

    Choi, Minho; Jeong, Jae Jin; Kim, Seung Hun; Kim, Sang Woo

    2016-01-01

    Non-intrusive electrocardiogram (ECG) monitoring has many advantages: it is easy to measure and to apply in daily life. However, motion noise in the measured signal is the major problem of non-intrusive measurement. This paper proposes a method to reduce this noise and to detect the R peaks of the ECG in a stable manner in a sitting arrangement using non-intrusive sensors. The method utilizes two capacitive ECG sensors (cECGs) to measure the ECG, and two additional cECGs located adjacent to the ECG sensors to obtain information on motion. An active noise cancellation technique and the motion information are then used to reduce motion noise. To verify the proposed method, ECG was measured indoors and during driving, and the accuracy of the detected R peaks was compared. After applying the method, the sum of sensitivity and positive predictivity increased by 8.39% on average and by up to 26.26% in the data. Based on the results, it was confirmed that the motion noise was reduced and that more reliable R-peak positions could be obtained by the proposed method. The robustness of the new ECG measurement method will benefit various health care systems that require noninvasive heart rate or heart rate variability measurements. PMID:27196910
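
    One common way to use a separate motion-reference channel, as the adjacent cECGs provide here, is adaptive noise cancellation with an LMS filter; the Python sketch below is a generic illustration of that idea, not the authors' algorithm. The filter length, step size and array names are assumptions.

        # Minimal LMS adaptive noise cancellation sketch: subtract the part of the
        # primary (ECG) channel that can be predicted from the motion reference.
        import numpy as np

        def lms_cancel(primary, reference, taps=16, mu=0.01):
            w = np.zeros(taps)
            cleaned = np.zeros(len(primary))
            for n in range(taps, len(primary)):
                x = np.asarray(reference[n - taps:n])[::-1]   # most recent reference samples
                noise_estimate = np.dot(w, x)
                e = primary[n] - noise_estimate               # error = motion-reduced ECG sample
                w += 2.0 * mu * e * x                         # LMS weight update
                cleaned[n] = e
            return cleaned

        # primary: cECG channel with motion noise; reference: adjacent motion-only cECG channel.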

  9. Smart Air Sampling Instruments Have the Ability to Improve the Accuracy of Air Monitoring Data Comparisons Among Nuclear Industry Facilities

    International Nuclear Information System (INIS)

    Valid inter-comparisons of operating performance parameters among all members of the nuclear industry are essential for the implementation of continuous improvement and for obtaining credibility among regulators and the general public. It is imperative that the comparison of performances among different industry facilities be as accurate as possible and normalized to industry-accepted reference standards

  10. An improved enzyme-linked immunosorbent assay for whole-cell determination of methanogens in samples from anaerobic reactors

    DEFF Research Database (Denmark)

    Sørensen, A.H.; Ahring, B.K.

    1997-01-01

    An enzyme-linked immunosorbent assay was developed for the detection of whole cells of methanogens in samples from anaerobic continuously stirred tank digesters treating slurries of solid waste. The assay was found to allow for quantitative analysis of the most important groups of methanogens in ...

  11. Magnifying endoscopy with narrow-band imaging may improve diagnostic accuracy of differentiated gastric intraepithelial neoplasia: a feasibility study

    Institute of Scientific and Technical Information of China (English)

    WANG Shu-fang; YANG Yun-sheng; YUAN Jing; LU Zhong-sheng; ZHANG Xiu-li; SUN Gang; PENG Li-hua; LING-HU En-qiang; MENG Jiang-yun

    2012-01-01

    Background Magnifying narrow-band imaging has enabled observation of the mucosal and vascular patterns of gastrointestinal lesions. This study investigated the potential value of magnifying endoscopy with narrow-band imaging for the classification of gastric intraepithelial neoplasia. Methods Seventy-six patients with gastric intraepithelial neoplasia (82 lesions) at People's Liberation Army General Hospital from December 2009 to November 2010 were analyzed. All patients underwent magnifying endoscopy with narrow-band imaging, and their lesions were differentiated into probable low-grade intraepithelial neoplasia or possible high-grade intraepithelial neoplasia on the basis of the imaging features. Pathologic proof was subsequently obtained by endoscopic submucosal dissection in every case. The validity of magnifying endoscopy with narrow-band imaging was calculated, considering histopathology to be the gold standard. Results Magnifying endoscopy with narrow-band imaging showed 22 low-grade intraepithelial neoplastic lesions and 60 high-grade intraepithelial neoplastic lesions. Of the 22 low-grade intraepithelial neoplastic lesions, 16 showed the same results on both imaging and pathology. Of the 60 high-grade intraepithelial neoplastic lesions, 53 showed the same results on both imaging and pathology. Thus, the sensitivity of magnifying endoscopy with narrow-band imaging for high-grade intraepithelial neoplasia was 89.83%, which was higher than that for low-grade intraepithelial neoplasia (69.57%). However, the specificity for high-grade intraepithelial neoplasia (69.57%) was lower than that for low-grade intraepithelial neoplasia (89.83%). The overall accuracy of magnifying endoscopy with narrow-band imaging was 84.15%. Conclusions Magnifying endoscopy with narrow-band imaging can distinguish between gastric low- and high-grade intraepithelial neoplasia. It may be a convenient and effective method for the classification of gastric intraepithelial neoplasia.

  12. Improving accuracy in coronary lumen segmentation via explicit calcium exclusion, learning-based ray detection and surface optimization

    Science.gov (United States)

    Lugauer, Felix; Zhang, Jingdan; Zheng, Yefeng; Hornegger, Joachim; Kelm, B. Michael

    2014-03-01

    Invasive cardiac angiography (catheterization) is still the standard in clinical practice for diagnosing coronary artery disease (CAD) but it involves a high amount of risk and cost. New generations of CT scanners can acquire high-quality images of coronary arteries which allow for an accurate identification and delineation of stenoses. Recently, computational fluid dynamics (CFD) simulation has been applied to coronary blood flow using geometric lumen models extracted from CT angiography (CTA). The computed pressure drop at stenoses proved to be indicative for ischemia-causing lesions, leading to non-invasive fractional flow reserve (FFR) derived from CTA. Since the diagnostic value of non-invasive procedures for diagnosing CAD relies on an accurate extraction of the lumen, a precise segmentation of the coronary arteries is crucial. As manual segmentation is tedious, time-consuming and subjective, automatic procedures are desirable. We present a novel fully-automatic method to accurately segment the lumen of coronary arteries in the presence of calcified and non-calcified plaque. Our segmentation framework is based on three main steps: boundary detection, calcium exclusion and surface optimization. A learning-based boundary detector enables a robust lumen contour detection via dense ray-casting. The exclusion of calcified plaque is assured through a novel calcium exclusion technique which allows us to accurately capture stenoses of diseased arteries. The boundary detection results are incorporated into a closed set formulation whose minimization yields an optimized lumen surface. On standardized tests with clinical data, a segmentation accuracy is achieved which is comparable to clinical experts and superior to current automatic methods.

  13. Using Clinical Decision Support and Dashboard Technology to Improve Heart Team Efficiency and Accuracy in a Transcatheter Aortic Valve Implantation (TAVI) Program.

    Science.gov (United States)

    Clarke, Sarah; Wilson, Marisa L; Terhaar, Mary

    2016-01-01

    Heart Team meetings are becoming the model of care for patients undergoing transcatheter aortic valve implantations (TAVI) worldwide. While Heart Teams have potential to improve the quality of patient care, the volume of patient data processed during the meeting is large, variable, and comes from different sources. Thus, consolidation is difficult. Also, meetings impose substantial time constraints on the members and financial pressure on the institution. We describe a clinical decision support system (CDSS) designed to assist the experts in treatment selection decisions in the Heart Team. Development of the algorithms and visualization strategy required a multifaceted approach and end-user involvement. An innovative feature is its ability to utilize algorithms to consolidate data and provide clinically useful information to inform the treatment decision. The data are integrated using algorithms and rule-based alert systems to improve efficiency, accuracy, and usability. Future research should focus on determining if this CDSS improves patient selection and patient outcomes. PMID:27332170

  14. Characterization of decommissioned reactor internals: Direct-assay method assessment

    International Nuclear Information System (INIS)

    This study describes the direct-assay technique for measuring activation levels of irradiated reactor component hardware. It also compares the direct-assay technique with calculational analysis methods that predict activation levels. Direct assay is performed in four steps: (a) planning and component selection, (b) onsite measurements, (c) radiochemical analysis, and (d) data analysis and classification. Uncertainties are estimated for each step of this process, and an overall uncertainty in the classification accuracy is calculated as about ±35%. Numerous research ideas are identified to help reduce the uncertainty level; many of these ideas would improve activation determinations performed by either direct assay or by calculational analysis methods

  15. Characterization of decommissioned reactor internals: Direct-assay method assessment

    Energy Technology Data Exchange (ETDEWEB)

    Cline, J.E.

    1993-03-01

    This study describes the direct-assay technique for measuring activation levels of irradiated reactor component hardware. It also compares the direct-assay technique with calculational analysis methods that predict activation levels. Direct assay is performed in four steps: (a) planning and component selection, (b) onsite measurements, (c) radiochemical analysis, and (d) data analysis and classification. Uncertainties are estimated for each step of this process, and an overall uncertainty in the classification accuracy is calculated as about ±35%. Numerous research ideas are identified to help reduce the uncertainty level; many of these ideas would improve activation determinations performed by either direct assay or by calculational analysis methods.

  16. Improving the accuracy of the transient plane source method by correcting probe heat capacity and resistance influences

    Science.gov (United States)

    Li, Yanning; Shi, Chunfeng; Liu, Jian; Liu, Errui; Shao, Jian; Chen, Zhi; Dorantes-Gonzalez, Dante J.; Hu, Xiaotang

    2014-01-01

    The transient plane source (TPS) method is a relatively newly developed transient approach for thermal conductivity measurement. Compared with the steady-state method, it is fast, and applicable to either solid, liquid or gas state materials; therefore, it has gained much popularity in recent years. However, during measurement, the measured power is influenced by the heat capacity of the electrical isolation films as well as the electrical resistance change of the metallic thin wire of the TPS probes. This further influences the measurement precision. Meanwhile, these two factors have been ignored in the traditional model of TPS developed by Gustafsson. In this paper, the influence of both the heat capacity and the resistance change of the TPS probe on the measured power is studied, and mathematical formulas relating the two factors and their respective corrections are deduced. Thereafter an improved model is suggested based on the traditional TPS model and the above theoretical models. Experiments on polymethylmethacrylate (PMMA) standard materials have been conducted using a home-made system, including TPS probes, data acquisition module and analysis software. The results show that the improved model can effectively improve the measurement precision of the TPS method by about 1.8-2.3% as evaluated by relative standard deviation.

  17. The use of multiple versus single assessment time points to improve screening accuracy in identifying children at risk for later serious antisocial behavior.

    Science.gov (United States)

    Petras, Hanno; Buckley, Jacquelyn A; Leoutsakos, Jeannie-Marie S; Stuart, Elizabeth A; Ialongo, Nicholas S

    2013-10-01

    Guided by Kraemer et al.'s (Psychological Methods, 3:257-271, 1999) framework for measuring the potency of risk factors, we sought to improve on the classification accuracy reported in Petras et al. (Journal of the American Academy of Child and Adolescent Psychiatry 43:88-96, 2004a) and Petras et al. (Journal of the American Academy of Child and Adolescent Psychiatry 44:790-797, 2005) by using multiple as opposed to single point in time assessments of early aggressive and disruptive behavior in the classification of youth who would likely benefit from targeted preventive interventions. Different from Petras et al. (2004a, 2005), the outcome used in this study included serious antisocial behavior in young adulthood as well as in adolescence. Among males, the use of multiple time points did not yield greater classification accuracy than the highest single time points, that is, third and fifth grades. For females, although fifth grade represented the best single time point in terms of classification accuracy, no significant association was found between earlier time points and the later outcome, rendering a test of the multiple time points hypothesis moot. The findings presented in this study have strong implications for the design of targeted intervention for violence prevention, indicating that the screening quality based on aggression ratings during the elementary years is rather modest, particularly for females. PMID:23408279

  18. Diagnosis of heart failure with preserved ejection fraction: improved accuracy with the use of markers of collagen turnover.

    LENUS (Irish Health Repository)

    Martos, Ramon

    2012-02-01

    AIMS: Heart failure with preserved ejection fraction (HF-PEF) can be difficult to diagnose in clinical practice. Myocardial fibrosis is a major determinant of diastolic dysfunction (DD), potentially contributing to the progression of HF-PEF. The aim of this study was to analyse whether serological markers of collagen turnover may predict HF-PEF and DD. METHODS AND RESULTS: We included 85 Caucasian treated hypertensive patients (DD n=65; both DD and HF-PEF n=32). Serum carboxy (PICP), amino (PINP), and carboxytelo (CITP) peptides of procollagen type I, amino (PIIINP) peptide of procollagen type III, matrix metalloproteinases (MMP-1, MMP-2, and MMP-9), and tissue inhibitor of MMP levels were assayed. Using receiver operating characteristic curve analysis, MMP-2 (AUC=0.91; 95% CI: 0.84, 0.98), CITP (0.83; 0.72, 0.92), PICP (0.82; 0.72, 0.92), B-type natriuretic peptide (BNP) (0.82; 0.73, 0.91), MMP-9 (0.79; 0.68, 0.89), and PIIINP (0.78; 0.66, 0.89) levels were significant predictors of HF-PEF (P<0.01 for all). Carboxytelo peptides of procollagen type I (AUC=0.74; 95% CI: 0.62, 0.86), MMP-2 (0.73; 0.62, 0.84), PIIINP (0.73; 0.60, 0.85), BNP (0.69; 0.55, 0.83) and PICP (0.66; 0.54, 0.78) levels were significant predictors of DD (P<0.05 for all). A cutoff of 1585 ng/mL for MMP-2 provided 91% sensitivity and 76% specificity for predicting HF-PEF and combinations of biomarkers could be used to adjust either sensitivity or specificity. CONCLUSION: Markers of collagen turnover identify patients with HF-PEF and DD. Matrix metalloproteinase 2 may be more useful than BNP in the identification of HF-PEF. This suggests that these new biochemical tools may assist in identifying patients with these diagnostically challenging conditions.
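
    The receiver operating characteristic analysis reported above can be reproduced in outline with scikit-learn, as in the sketch below: compute the AUC for one biomarker and read sensitivity/specificity at a chosen cut-off. The arrays are small placeholders, not the study cohort; only the 1585 ng/mL cut-off is taken from the abstract.

        # Minimal sketch of a single-biomarker ROC analysis (placeholder values).
        import numpy as np
        from sklearn.metrics import roc_auc_score

        mmp2  = np.array([900, 1200, 1650, 1800, 1400, 2100, 1000, 1750])  # ng/mL
        hfpef = np.array([0,   0,    1,    1,    0,    1,    0,    1])     # 1 = HF-PEF

        auc = roc_auc_score(hfpef, mmp2)
        cutoff = 1585.0
        sensitivity = ((mmp2 >= cutoff) & (hfpef == 1)).sum() / (hfpef == 1).sum()
        specificity = ((mmp2 <  cutoff) & (hfpef == 0)).sum() / (hfpef == 0).sum()
        print(auc, sensitivity, specificity)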

  19. Improved accuracy of cortical bone mineralization measured by polychromatic microcomputed tomography using a novel high mineral density composite calibration phantom

    International Nuclear Information System (INIS)

    Purpose: Microcomputed tomography (micro-CT) is increasingly used as a nondestructive alternative to ashing for measuring bone mineral content. Phantoms are utilized to calibrate the measured x-ray attenuation to discrete levels of mineral density, typically including levels up to 1000 mg HA/cm3, which encompasses levels of bone mineral density (BMD) observed in trabecular bone. However, levels of BMD observed in cortical bone and levels of tissue mineral density (TMD) in both cortical and trabecular bone typically exceed 1000 mg HA/cm3, requiring extrapolation of the calibration regression, which may result in error. Therefore, the objectives of this study were to investigate (1) the relationship between x-ray attenuation and an expanded range of hydroxyapatite (HA) density in a less attenuating polymer matrix and (2) the effects of the calibration on the accuracy of subsequent measurements of mineralization in human cortical bone specimens. Methods: A novel HA-polymer composite phantom was prepared comprising a less attenuating polymer phase (polyethylene) and an expanded range of HA density (0-1860 mg HA/cm3) inclusive of characteristic levels of BMD in cortical bone or TMD in cortical and trabecular bone. The BMD and TMD of cortical bone specimens measured using the new HA-polymer calibration phantom were compared to measurements using a conventional HA-polymer phantom comprising 0-800 mg HA/cm3 and the corresponding ash density measurements on the same specimens. Results: The HA-polymer composite phantom exhibited a nonlinear relationship between x-ray attenuation and HA density, rather than the linear relationship typically employed a priori, and obviated the need for extrapolation, when calibrating the measured x-ray attenuation to high levels of mineral density. The BMD and TMD of cortical bone specimens measured using the conventional phantom was significantly lower than the measured ash density by 19% (p<0.001, ANCOVA) and 33% (p<0.05, Tukey's HSD), on

  20. Improving the Absolute Accuracy of Ozone Intensities in the 9-11 μm Region via MW/IR Multi-Wavelength Spectroscopy

    Science.gov (United States)

    Yu, Shanshan; Drouin, Brian

    2016-06-01

    Ozone (O_3) is crucial for studies of air quality, human and crop health, and radiative forcing. Spectroscopic remote sensing techniques have been extensively employed to investigate ozone globally and regionally. Infrared intensities of ≤1% accuracy are desired by the remote sensing community. The accuracy of the current state-of-the-art infrared ozone intensities is on the order of 4-10%, resulting in ad hoc intensity scaling factors for consistent atmospheric retrievals. The large uncertainties on the infrared ozone intensities arise from the fact that pure ozone is very difficult to generate and sustain in the laboratory. Best estimates have employed IR/UV cross beam experiments to determine the accurate O_3 volume mixing ratio of the sample through its standard cross section value at 254 nm. This presentation reports our effort to improve the absolute accuracy of ozone intensities in the 9-11 μm region via a transfer of the precision of the rotational dipole moment onto the infrared measurement (MW/IR). Our approach was to use MW/IR cross beam experiments and determine the O_3 mixing ratio through alternately measuring pure rotation ozone lines from 692 to 779 GHz. The uncertainty of these pure rotation line intensities is better than 0.1%. The sample cell was a slow flow cross cell and the total pressure inside the sample cell was maintained constant through a proportional-integral-derivative (PID) flow control. Five infrared O_3 spectra were obtained, with a path length of 3.74 m, pressures ranging from 30 to 120 mTorr, and mixing ratio ranging from 0.5 to 0.9. A multi spectrum fitting technique was employed to fit all the FTS spectra simultaneously. The results show that we can determine intensities of the 9.6μm band with absolute accuracy better than 4%.

  1. Combination of pulse volume recording (PVR) parameters and ankle-brachial index (ABI) improves diagnostic accuracy for peripheral arterial disease compared with ABI alone.

    Science.gov (United States)

    Hashimoto, Tomoko; Ichihashi, Shigeo; Iwakoshi, Shinichi; Kichikawa, Kimihiko

    2016-06-01

    The ankle-brachial index (ABI) measurement is widely used as a screening tool to detect peripheral arterial disease (PAD). With the advent of the oscillometric ABI device incorporating a system for the measurement of pulse volume recording (PVR), not only ABI but also other parameters, such as the percentage of mean arterial pressure (%MAP) and the upstroke time (UT), can be obtained automatically. The purpose of the present study was to compare the diagnostic accuracy for PAD with ABI alone with that of a combination of ABI, %MAP and UT. This study included 108 consecutive patients on whom 216 limb measurements were performed. The sensitivity, specificity and positive and negative predictive values of ABI, %MAP, UT and their combination were evaluated and compared with CT angiography, which was used as the gold standard for the detection of PAD. The diagnostic accuracy as well as the optimal cutoff values of %MAP and UT were evaluated using receiver operating characteristic (ROC) curve analysis. The combination of ABI, %MAP and UT achieved higher sensitivity, negative predictive value and accuracy than ABI alone, particularly for mild stenosis. The areas under the ROC curve for the detection of 50% stenosis with UT and %MAP were 0.798 and 0.916, respectively. The optimal UT and %MAP values to detect arteries with ≥50% stenosis were 183 ms and 45%, respectively. The combination of ABI, %MAP and UT contributed to the improvement of the diagnostic accuracy for PAD. Consideration of the values of %MAP and UT in addition to ABI may have a significant impact on the detection of early PAD lesions.
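
    A combined screening rule of the kind evaluated above can be written as a few threshold checks, as in the sketch below. The UT (183 ms) and %MAP (45%) cut-offs come from the abstract; the 0.9 ABI threshold is the conventional screening value and is an assumption here, as is the pad_screen helper itself.

        # Minimal sketch of a combined ABI / %MAP / UT screening rule.
        def pad_screen(abi, pct_map, ut_ms):
            flags = []
            if abi < 0.9:
                flags.append("low ABI")
            if pct_map > 45.0:
                flags.append("high %MAP")
            if ut_ms > 183.0:
                flags.append("prolonged UT")
            return ("refer for further work-up", flags) if flags else ("screen negative", flags)

        print(pad_screen(abi=1.02, pct_map=48.0, ut_ms=190.0))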

  2. hARACNe: improving the accuracy of regulatory model reverse engineering via higher-order data processing inequality tests.

    Science.gov (United States)

    Jang, In Sock; Margolin, Adam; Califano, Andrea

    2013-08-01

    A key goal of systems biology is to elucidate molecular mechanisms associated with physiologic and pathologic phenotypes based on the systematic and genome-wide understanding of cell context-specific molecular interaction models. To this end, reverse engineering approaches have been used to systematically dissect regulatory interactions in a specific tissue, based on the availability of large molecular profile datasets, thus improving our mechanistic understanding of complex diseases, such as cancer. In this paper, we introduce high-order Algorithm for the Reconstruction of Accurate Cellular Network (hARACNe), an extension of the ARACNe algorithm for the dissection of transcriptional regulatory networks. ARACNe uses the data processing inequality (DPI), from information theory, to detect and prune indirect interactions that are unlikely to be mediated by an actual physical interaction. Whereas ARACNe considers only first-order indirect interactions, i.e. those mediated by only one extra regulator, hARACNe considers a generalized form of indirect interactions via two, three or more other regulators. We show that use of higher-order DPI resulted in significantly improved performance, based on transcription factor (TF)-specific ChIP-chip data, as well as on gene expression profile following RNAi-mediated TF silencing.
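
    The first-order data processing inequality test that hARACNe generalizes can be sketched as pruning the weakest edge of every fully connected triplet in a mutual-information network; the Python sketch below illustrates that first-order step only (higher-order paths through two or more regulators are not shown). The data structure and tolerance parameter are assumptions.

        # Minimal sketch of first-order DPI pruning over a mutual-information network.
        from itertools import combinations

        def dpi_prune(mi, genes, tolerance=0.0):
            """mi maps frozenset({gene_a, gene_b}) -> mutual information."""
            pruned = dict(mi)
            for g1, g2, g3 in combinations(genes, 3):
                edges = [frozenset((g1, g2)), frozenset((g2, g3)), frozenset((g1, g3))]
                if all(e in mi for e in edges):
                    weakest = min(edges, key=lambda e: mi[e])
                    others = [mi[e] for e in edges if e != weakest]
                    if mi[weakest] < min(others) * (1.0 - tolerance):
                        pruned.pop(weakest, None)    # likely an indirect interaction
            return pruned

        mi = {frozenset(("TF1", "G1")): 0.9, frozenset(("TF1", "G2")): 0.8, frozenset(("G1", "G2")): 0.3}
        print(dpi_prune(mi, ["TF1", "G1", "G2"]))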

  3. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2016-01-01

    Repeatability and reproducibility of parts produced by selective laser melting is a standing issue, and coupled with a lack of standardized quality control presents a major hindrance towards maturing of selective laser melting as an industrial scale process. Consequently, numerical process modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards establishing reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters...

  4. Flat panel X-ray detector with reduced internal scattering for improved attenuation accuracy and dynamic range

    Science.gov (United States)

    Smith, Peter D.; Claytor, Thomas N.; Berry, Phillip C.; Hills, Charles R.

    2010-10-12

    An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.

  5. An improved respiratory syncytial virus neutralization assay based on the detection of green fluorescent protein expression and automated plaque counting

    Directory of Open Access Journals (Sweden)

    van Remmerden Yvonne

    2012-10-01

    Background: Virus neutralizing antibodies against respiratory syncytial virus (RSV) are considered important correlates of protection for vaccine evaluation. The established plaque reduction assay is time consuming, labor intensive and highly variable. Methods: Here, a neutralization assay based on a modified RSV strain expressing the green fluorescent protein in combination with automated detection and quantification of plaques is described. Results: The fluorescence plaque reduction assay in microplate format requires only two days to complete and is simple and reproducible. A good correlation between visual and automated counting methods to determine RSV neutralizing serum antibody titers was observed. Conclusions: The developed virus neutralization assay is suitable for high-throughput testing and can be used for both animal studies and (large-scale) vaccine clinical trials.
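
    Automated counting of GFP-positive plaques, as used above, is commonly implemented by thresholding the fluorescence image and labelling connected components; the sketch below illustrates that generic approach with SciPy and is not the authors' pipeline. The threshold and minimum-area values are assumptions.

        # Minimal sketch: count fluorescent plaques in one well image.
        import numpy as np
        from scipy import ndimage

        def count_plaques(image, threshold=0.2, min_area=20):
            mask = image > threshold                            # GFP-positive pixels
            labels, n_components = ndimage.label(mask)          # connected components
            if n_components == 0:
                return 0
            areas = ndimage.sum(mask, labels, index=np.arange(1, n_components + 1))
            return int(np.sum(areas >= min_area))               # ignore speckle below min_area

        # 'image' would be a 2-D float array read from one well of the microplate scan.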

  6. An improved respiratory syncytial virus neutralization assay based on the detection of green fluorescent protein expression and automated plaque counting

    OpenAIRE

    van Remmerden Yvonne; Xu Fang; van Eldik Mandy; Heldens Jacco GM; Huisman Willem; Widjojoatmodjo Myra N

    2012-01-01

    Abstract Background Virus neutralizing antibodies against respiratory syncytial virus (RSV) are considered important correlates of protection for vaccine evaluation. The established plaque reduction assay is time consuming, labor intensive and highly variable. Methods Here, a neutralization assay based on a modified RSV strain expressing the green fluorescent protein in combination with automated detection and quantification of plaques is described. Results The fluorescence plaque reduction a...

  7. Improving Face Detection Accuracy in the Adaboost Algorithm with SVM

    Institute of Scientific and Technical Information of China (English)

    王志伟; 张晓龙; 梁文豪

    2011-01-01

    This paper presents an approach that uses SVM classification to improve the face detection accuracy of the Adaboost algorithm. The method first uses the Adaboost algorithm to find candidate face regions in the image, trains a support vector machine (SVM) classifier on the face and non-face samples in the training set, and then uses the SVM classifier to determine the final face regions from the candidates. Experimental results show that the SVM classification step improves detection accuracy and gives the detection algorithm better overall performance.
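
    The two-stage idea described above, in which an AdaBoost-based detector proposes candidate windows and an SVM accepts or rejects each one, can be sketched with OpenCV's pretrained Haar cascade (itself AdaBoost-trained) and a scikit-learn SVM. The patch size, the raw-pixel features and the random placeholder training data are assumptions; a real system would train the SVM on labelled face and non-face patches.

        # Minimal two-stage sketch: Haar/AdaBoost candidates filtered by an SVM.
        import cv2
        import numpy as np
        from sklearn.svm import SVC

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        # Placeholder training set so the sketch runs; replace with real patches.
        X_train = np.random.rand(40, 24 * 24)
        y_train = np.array([1] * 20 + [0] * 20)        # 1 = face, 0 = non-face
        svm = SVC(kernel="rbf").fit(X_train, y_train)

        def detect_faces(gray_image):
            candidates = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
            confirmed = []
            for (x, y, w, h) in candidates:
                patch = cv2.resize(gray_image[y:y + h, x:x + w], (24, 24))
                feature = patch.astype(np.float64).ravel() / 255.0
                if svm.predict([feature])[0] == 1:     # keep only SVM-confirmed windows
                    confirmed.append((x, y, w, h))
            return confirmed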

  8. Improving accuracy of medication identification in an older population using a medication bottle color symbol label system

    Directory of Open Access Journals (Sweden)

    Cardarelli Roberto

    2011-12-01

    Background: The purpose of this pilot study was to evaluate and refine an adjuvant system of color-specific symbols that are added to medication bottles and to assess whether this system would increase the ability of patients 65 years of age or older to match their medication to the indication for which it was prescribed. Methods: This study was conducted in two phases, consisting of three focus groups of patients from a family medicine clinic (n = 25) and a pre-post medication identification test in a second group of patient participants (n = 100). Results of focus group discussions were used to refine the medication label symbols according to themes and messages identified through qualitative triangulation mechanisms and data analysis techniques. A pre-post medication identification test was conducted in the second phase of the study to assess differences between standard labeling alone and the addition of the refined color-specific symbols. The pre-post test examined the impact of the added labels on participants' ability to accurately match their medication to the indication for which it was prescribed when placed in front of participants and then at a distance of two feet. Results: Participants appreciated the addition of a visual aid on existing medication labels because it would not be necessary to learn a completely new system of labeling, and generally found the colors and symbols used in the proposed labeling system easy to understand and relevant. Concerns were raised about space constraints on medication bottles, having too much information on the bottle, and having to remember what the colors meant. Symbols and colors were modified if they were found unclear or inappropriate by focus group participants. Pre-post medication identification test results in a second set of participants demonstrated that the addition of the symbol label significantly improved the ability of participants to match their medication to the appropriate

  9. Dynamic contrast-enhanced MRI improves accuracy for detecting focal splenic involvement in children and adolescents with Hodgkin disease

    International Nuclear Information System (INIS)

    Accurate assessment of splenic disease is important for staging Hodgkin lymphoma. The purpose of this study was to assess T2-weighted imaging with and without dynamic contrast-enhanced (DCE) MRI for evaluation of splenic Hodgkin disease. Thirty-one children with Hodgkin lymphoma underwent whole-body T2-weighted MRI with supplementary DCE splenic imaging, and whole-body PET-CT before and following chemotherapy. Two experienced nuclear medicine physicians derived a PET-CT reference standard for splenic disease, augmented by follow-up imaging. Unaware of the PET-CT, two experienced radiologists independently evaluated MRI exercising a locked sequential read paradigm (T2-weighted then DCE review) and recorded the presence/absence of splenic disease at each stage. Performance of each radiologist was determined prior to and following review of DCE-MRI. Incorrect MRI findings were ascribed to reader (lesion present on MRI but missed by reader) or technical (lesion not present on MRI) error. Seven children had splenic disease. Sensitivity/specificity of both radiologists for the detection of splenic involvement using T2-weighted images alone was 57%/100% and increased to 100%/100% with DCE-MRI. There were three instances of technical error on T2-weighted imaging; all lesions were visible on DCE-MRI. T2-weighted imaging when complemented by DCE-MRI imaging may improve evaluation of Hodgkin disease splenic involvement. (orig.)

  10. Dynamic contrast-enhanced MRI improves accuracy for detecting focal splenic involvement in children and adolescents with Hodgkin disease

    Energy Technology Data Exchange (ETDEWEB)

    Punwani, Shonit; Taylor, Stuart A.; Halligan, Steve [University College London, Centre for Medical Imaging, London (United Kingdom); University College London Hospital, Department of Radiology, London (United Kingdom); Cheung, King Kenneth; Skipper, Nicholas [University College London, Centre for Medical Imaging, London (United Kingdom); Bell, Nichola; Humphries, Paul D. [University College London Hospital, Department of Radiology, London (United Kingdom); Bainbridge, Alan [University College London, Department of Medical Physics and Bioengineering, London (United Kingdom); Groves, Ashley M.; Hain, Sharon F.; Ben-Haim, Simona [University College Hospital, Institute of Nuclear Medicine, London (United Kingdom); Shankar, Ananth; Daw, Stephen [University College London Hospital, Department of Paediatrics, London (United Kingdom)

    2013-08-15

    Accurate assessment of splenic disease is important for staging Hodgkin lymphoma. The purpose of this study was to assess T2-weighted imaging with and without dynamic contrast-enhanced (DCE) MRI for evaluation of splenic Hodgkin disease. Thirty-one children with Hodgkin lymphoma underwent whole-body T2-weighted MRI with supplementary DCE splenic imaging, and whole-body PET-CT before and following chemotherapy. Two experienced nuclear medicine physicians derived a PET-CT reference standard for splenic disease, augmented by follow-up imaging. Unaware of the PET-CT, two experienced radiologists independently evaluated MRI exercising a locked sequential read paradigm (T2-weighted then DCE review) and recorded the presence/absence of splenic disease at each stage. Performance of each radiologist was determined prior to and following review of DCE-MRI. Incorrect MRI findings were ascribed to reader (lesion present on MRI but missed by reader) or technical (lesion not present on MRI) error. Seven children had splenic disease. Sensitivity/specificity of both radiologists for the detection of splenic involvement using T2-weighted images alone was 57%/100% and increased to 100%/100% with DCE-MRI. There were three instances of technical error on T2-weighted imaging; all lesions were visible on DCE-MRI. T2-weighted imaging when complemented by DCE-MRI imaging may improve evaluation of Hodgkin disease splenic involvement. (orig.)

  11. An electron impact emission spectroscopy flux sensor for monitoring deposition rate at high background gas pressure with improved accuracy

    International Nuclear Information System (INIS)

    Electron impact emission spectroscopy (EIES) has been proven to be a critical tool for film composition control during codeposition processes for the fabrication of multicomponent thin film materials including the high-efficiency copper-indium-gallium-diselenide photovoltaic cells. This technique is highly specific to atomic species because the emission spectrum of each element is unique, and the typical width of atomic emission lines is very narrow. Noninterfering emission lines can generally be allocated to different atomic species. However, the electron impact emission spectra of many molecular species are often broadband in nature. When the optical emission from an EIES sensor is measured by using a wavelength selection device with a modest resolution, such as an optical filter or monochromator, the emissions from common residual gases may interfere with that from the vapor flux and cause erroneous flux measurement. The interference is most pronounced when measuring low flux density with the presence of gases such as in reactive deposition processes. This problem is solved by using a novel EIES sensor that has two electron impact excitation sources in separate compartments but with one common port for optical output. The vapor flux is allowed to pass through one compartment only. Using a tristate excitation scheme and appropriate signal processing technique, the interfering signals from residual gases can be completely eliminated from the output signal of the EIES monitor for process control. Data obtained from Cu and Ga evaporations with the presence of common residual gases such as CO2 and H2O are shown to demonstrate the improvement in sensor performance. The new EIES sensor is capable of eliminating the effect of interfering residual gases with pressure as high as in the upper 10^-5 Torr range.
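
    The gas-rejection step of the dual-compartment sensor reduces to a simple subtraction once the excitation states are read out separately. The sketch below illustrates that arithmetic with made-up photometer readings; the state ordering and signal names are assumptions for illustration, not the instrument's actual acquisition interface.

    ```python
    import numpy as np

    # Illustrative tri-state cycle (assumed, not the instrument's actual protocol):
    # state 0: both excitation sources off     -> dark / stray-light signal
    # state 1: flux-compartment source on      -> vapor flux + residual gas + dark
    # state 2: gas-only compartment source on  -> residual gas + dark

    def flux_signal(s_dark, s_flux_plus_gas, s_gas_only):
        """Recover the vapor-flux emission by removing dark and residual-gas terms."""
        flux_chan = s_flux_plus_gas - s_dark   # flux + gas contribution
        gas_chan = s_gas_only - s_dark         # gas contribution alone
        return flux_chan - gas_chan

    # Example with synthetic photometer readings (arbitrary units)
    readings = np.array([0.05, 1.30, 0.45])    # [dark, flux+gas, gas-only]
    print(flux_signal(*readings))              # 0.85 -> gas interference removed
    ```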

  12. Improving the Accuracy of the AFWA-NASA (ANSA) Blended Snow-Cover Product over the Lower Great Lakes Region

    Science.gov (United States)

    Hall, Dorothy K.; Foster, James L.; Kumar, Sujay; Chien, Janety Y. L.; Riggs, George A.

    2012-01-01

    The Air Force Weather Agency (AFWA) -- NASA blended snow-cover product, called ANSA, utilizes Earth Observing System standard snow products from the Moderate- Resolution Imaging Spectroradiometer (MODIS) and the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) to map daily snow cover and snow-water equivalent (SWE) globally. We have compared ANSA-derived SWE with SWE values calculated from snow depths reported at 1500 National Climatic Data Center (NCDC) co-op stations in the Lower Great Lakes Basin. Compared to station data, the ANSA significantly underestimates SWE in densely-forested areas. We use two methods to remove some of the bias observed in forested areas to reduce the root-mean-square error (RMSE) between the ANSA- and station-derived SWE. First, we calculated a 5- year mean ANSA-derived SWE for the winters of 2005-06 through 2009-10, and developed a five-year mean bias-corrected SWE map for each month. For most of the months studied during the five-year period, the 5-year bias correction improved the agreement between the ANSA-derived and station-derived SWE. However, anomalous months such as when there was very little snow on the ground compared to the 5-year mean, or months in which the snow was much greater than the 5-year mean, showed poorer results (as expected). We also used a 7-day running mean (7DRM) bias correction method using days just prior to the day in question to correct the ANSA data. This method was more effective in reducing the RMSE between the ANSA- and co-op-derived SWE values, and in capturing the effects of anomalous snow conditions.
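
    The 7-day running mean (7DRM) correction described above amounts to estimating the recent ANSA-minus-station bias and subtracting it from the gridded product. Below is a minimal numpy sketch of that idea with synthetic data; the array layout, station co-location, and clipping at zero are illustrative assumptions, not the ANSA processing chain.

    ```python
    import numpy as np

    def seven_day_running_bias(ansa_at_stations, station_swe):
        """Mean ANSA-minus-station bias over the 7 days preceding the target day.

        Both inputs: arrays of shape (days, stations); the last row is the day
        being corrected (illustrative layout, not the ANSA product format).
        """
        prior = slice(-8, -1)                      # the 7 days before the target day
        daily_bias = np.nanmean(ansa_at_stations[prior] - station_swe[prior], axis=1)
        return np.nanmean(daily_bias)              # single bias value (mm SWE)

    def correct_swe_grid(ansa_swe_today, bias):
        """Subtract the running-mean bias from the SWE field, clipping at zero."""
        return np.clip(ansa_swe_today - bias, 0.0, None)

    # Synthetic example: ANSA underestimates station SWE by ~20 mm in forest
    rng = np.random.default_rng(0)
    station = rng.uniform(40, 120, size=(8, 25))
    ansa = station - 20 + rng.normal(0, 5, size=(8, 25))
    bias = seven_day_running_bias(ansa, station)
    corrected = correct_swe_grid(ansa[-1], bias)   # ansa[-1] stands in for the target-day field
    print(round(bias, 1))                                      # about -20
    print(round(corrected.mean(), 1), round(station[-1].mean(), 1))  # means now agree closely
    ```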

  13. Nondestructive assay of SALE materials

    International Nuclear Information System (INIS)

    This paper covers three primary areas: (1) reasons for performing nondestructive assay on SALE materials; (2) techniques used; and (3) discussion of investigators' revised results. The study shows that nondestructive calorimetric assay of plutonium offers a viable alternative to traditional wet chemical techniques. For these samples, the precision ranged from 0.4 to 0.6% with biases less than 0.2%. Thus, for those materials where sampling errors are the predominant source of uncertainty, this technique can provide improved accuracy and precision while saving time and money as well as reducing the amount of liquid wastes to be handled. In addition, high resolution gamma-ray spectroscopy measurements of solids can provide isotopic analysis data in a cost effective and timely manner. The timeliness of the method can be especially useful to the plant operator for production control and quality control measurements

  14. Evaluation and improvement of real-time PCR assays targeting lytA, ply, and psaA genes for detection of pneumococcal DNA.

    Science.gov (United States)

    Carvalho, Maria da Gloria S; Tondella, Maria Lucia; McCaustland, Karen; Weidlich, Luciana; McGee, Lesley; Mayer, Leonard W; Steigerwalt, Arnold; Whaley, Melissa; Facklam, Richard R; Fields, Barry; Carlone, George; Ades, Edwin W; Dagan, Ron; Sampson, Jacquelyn S

    2007-08-01

    The accurate diagnosis of pneumococcal disease has frequently been hampered not only by the difficulties in obtaining isolates of the organism from patient specimens but also by the misidentification of pneumococcus-like viridans group streptococci (P-LVS) as Streptococcus pneumoniae. This is especially critical when the specimen comes from the respiratory tract. In this study, three novel real-time PCR assays designed for the detection of specific sequence regions of the lytA, ply, and psaA genes were developed (lytA-CDC, ply-CDC, and psaA, respectively). These assays showed high sensitivity (<10 copies for lytA-CDC and ply-CDC and an approximately twofold less sensitivity for psaA). Two additional real-time PCR assays for lytA and ply described previously for pneumococcal DNA detection were also evaluated. A panel of isolates consisting of 67 S. pneumoniae isolates (44 different serotypes and 3 nonencapsulated S. pneumoniae isolates from conjunctivitis outbreaks) and 104 nonpneumococcal isolates was used. The 67 S. pneumoniae isolates were reactive in all five assays. The new real-time detection assays targeting the lytA and psaA genes were the most specific for the detection of isolates confirmed to be S. pneumoniae, with lytA-CDC showing the greatest specificity. Both ply PCRs were positive for all isolates of S. pseudopneumoniae, along with 13 other isolates of other P-LVS isolates confirmed to be non-S. pneumoniae by DNA-DNA reassociation. Thus, the use of the ply gene for the detection of pneumococci can lead to false-positive reactions in the presence of P-LVS. The five assays were applied to 15 culture-positive cerebrospinal fluid specimens with 100% sensitivity; and serum and ear fluid specimens were also evaluated. Both the lytA-CDC and psaA assays, particularly the lytA-CDC assay, have improved specificities compared with those of currently available assays and should therefore be considered the assays of choice for the detection of pneumococcal DNA

  15. Evaluation and Improvement of Real-Time PCR Assays Targeting lytA, ply, and psaA Genes for Detection of Pneumococcal DNA

    Science.gov (United States)

    Carvalho, Maria da Gloria S.; Tondella, Maria Lucia; McCaustland, Karen; Weidlich, Luciana; McGee, Lesley; Mayer, Leonard W.; Steigerwalt, Arnold; Whaley, Melissa; Facklam, Richard R.; Fields, Barry; Carlone, George; Ades, Edwin W.; Dagan, Ron; Sampson, Jacquelyn S.

    2007-01-01

    The accurate diagnosis of pneumococcal disease has frequently been hampered not only by the difficulties in obtaining isolates of the organism from patient specimens but also by the misidentification of pneumococcus-like viridans group streptococci (P-LVS) as Streptococcus pneumoniae. This is especially critical when the specimen comes from the respiratory tract. In this study, three novel real-time PCR assays designed for the detection of specific sequence regions of the lytA, ply, and psaA genes were developed (lytA-CDC, ply-CDC, and psaA, respectively). These assays showed high sensitivity (<10 copies for lytA-CDC and ply-CDC and an approximately twofold less sensitivity for psaA). Two additional real-time PCR assays for lytA and ply described previously for pneumococcal DNA detection were also evaluated. A panel of isolates consisting of 67 S. pneumoniae isolates (44 different serotypes and 3 nonencapsulated S. pneumoniae isolates from conjunctivitis outbreaks) and 104 nonpneumococcal isolates was used. The 67 S. pneumoniae isolates were reactive in all five assays. The new real-time detection assays targeting the lytA and psaA genes were the most specific for the detection of isolates confirmed to be S. pneumoniae, with lytA-CDC showing the greatest specificity. Both ply PCRs were positive for all isolates of S. pseudopneumoniae, along with 13 other isolates of other P-LVS isolates confirmed to be non-S. pneumoniae by DNA-DNA reassociation. Thus, the use of the ply gene for the detection of pneumococci can lead to false-positive reactions in the presence of P-LVS. The five assays were applied to 15 culture-positive cerebrospinal fluid specimens with 100% sensitivity; and serum and ear fluid specimens were also evaluated. Both the lytA-CDC and psaA assays, particularly the lytA-CDC assay, have improved specificities compared with those of currently available assays and should therefore be considered the assays of choice for the detection of pneumococcal DNA

  16. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing unexpected machine breakdown and reducing economic loss, because gear cracks lead to gear tooth breakage. In this paper, an intelligent fault diagnosis method for identification of different gear crack levels under different working conditions is proposed. First, superhigh-dimensional statistical features are extracted from the continuous wavelet transform at different scales; the proposed method extracts 920 statistical features, so the feature set is superhigh dimensional. To reduce the dimensionality of the extracted statistical features and generate new significant low-dimensional statistical features, a simple and effective method called principal component analysis is used. To further improve identification accuracies of different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method. Comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracies among all existing methods.
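
    A minimal scikit-learn sketch of the pipeline described above (feature scaling, principal component analysis, then a support vector machine) is given below. The synthetic 920-dimensional features merely stand in for the wavelet statistics of the paper; the component count and SVM settings are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the 920-dimensional wavelet statistics
    # (4 crack levels x 50 samples); real features would come from the CWT of vibration signals.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(loc=level, scale=1.0, size=(50, 920)) for level in range(4)])
    y = np.repeat(np.arange(4), 50)

    model = make_pipeline(
        StandardScaler(),          # statistical features have very different scales
        PCA(n_components=10),      # compress 920 features into a few principal components
        SVC(kernel="rbf", C=10.0), # identify the crack level from the compressed features
    )
    print(cross_val_score(model, X, y, cv=5).mean())   # near-perfect on these well-separated synthetic classes
    ```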

  17. On Improving Accuracy of Finite-Element Solutions of the Effective-Mass Schrödinger Equation for Interdiffused Quantum Wells and Quantum Wires

    Science.gov (United States)

    Topalović, D. B.; Arsoski, V. V.; Pavlović, S.; Čukarić, N. A.; Tadić, M. Ž.; Peeters, F. M.

    2016-01-01

    We use the Galerkin approach and the finite-element method to numerically solve the effective-mass Schrödinger equation. The accuracy of the solution is explored as it varies with the range of the numerical domain. The model potentials are those of interdiffused semiconductor quantum wells and axially symmetric quantum wires. Also, the model of a linear harmonic oscillator is considered for comparison purposes. It is demonstrated that the absolute error of the electron ground state energy level exhibits a minimum at a certain domain range, which is thus considered to be optimal. This range is found to depend on the number of mesh nodes N approximately as α0 [ln(α2 N)]^α1, where the values of the constants α0, α1, and α2 are determined by fitting the numerical data. The optimal range is also found to be a weak function of the diffusion length. Moreover, it was demonstrated that a domain range adaptation to the optimal value leads to substantial improvement of accuracy of the solution of the Schrödinger equation. Supported by the Ministry of Education, Science, and Technological Development of Serbia and the Flemish fund for Scientific Research (FWO Vlaanderen)
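
    The quoted dependence of the optimal domain range on the number of mesh nodes N, α0 [ln(α2 N)]^α1, can be recovered by a straightforward nonlinear least-squares fit. The sketch below uses scipy.optimize.curve_fit on synthetic (N, range) pairs; the constants and data are placeholders, not the values reported in the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def optimal_range(N, a0, a1, a2):
        """Empirical form quoted in the abstract: a0 * ln(a2*N)**a1."""
        return a0 * np.log(a2 * N) ** a1

    # Synthetic (N, optimal-range) pairs standing in for the numerical data;
    # in the paper the constants come from fitting the finite-element results.
    N = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
    L_opt = optimal_range(N, 12.0, 0.8, 0.05) + np.random.default_rng(2).normal(0, 0.1, N.size)

    params, _ = curve_fit(optimal_range, N, L_opt, p0=(10.0, 1.0, 0.1))
    print(np.round(params, 3))   # fitted (a0, a1, a2); should land near the generating values
    ```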

  18. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    Science.gov (United States)

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366
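
    As a rough illustration of sub-pixel targeting, the sketch below locates a template by brute-force normalized cross-correlation and refines the integer peak with a parabolic fit. This is a generic sub-pixel refinement on assumed synthetic images, not the authors' combined correlation and least-squares image matching.

    ```python
    import numpy as np

    def ncc_surface(image, template):
        """Brute-force normalized cross-correlation of a small template over an image."""
        th, tw = template.shape
        t = (template - template.mean()) / template.std()
        out = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
        for r in range(out.shape[0]):
            for c in range(out.shape[1]):
                patch = image[r:r + th, c:c + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-12)
                out[r, c] = (p * t).mean()
        return out

    def subpixel_peak(corr):
        """Refine the integer correlation peak with a 1-D parabolic fit in each axis."""
        r, c = np.unravel_index(np.argmax(corr), corr.shape)
        def refine(f_m, f_0, f_p):
            denom = f_m - 2 * f_0 + f_p
            return 0.0 if denom == 0 else 0.5 * (f_m - f_p) / denom
        dr = refine(corr[r - 1, c], corr[r, c], corr[r + 1, c]) if 0 < r < corr.shape[0] - 1 else 0.0
        dc = refine(corr[r, c - 1], corr[r, c], corr[r, c + 1]) if 0 < c < corr.shape[1] - 1 else 0.0
        return r + dr, c + dc

    # Usage: locate a bright Gaussian target to sub-pixel precision
    yy, xx = np.mgrid[0:40, 0:40]
    img = np.exp(-(((yy - 20.3) ** 2 + (xx - 17.7) ** 2) / 8.0))
    tmpl = img[16:25, 13:22]                       # 9x9 template cut around the target
    print(subpixel_peak(ncc_surface(img, tmpl)))   # near (16, 13) plus small sub-pixel offsets
    ```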

  19. A strategy for multivariate calibration based on modified single-index signal regression: Capturing explicit non-linearity and improving prediction accuracy

    Science.gov (United States)

    Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun

    2013-11-01

    In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high accuracy. The mSISR method defines the optimal penalty tuning parameter in P-spline signal regression (PSR) as the initial tuning parameter and chooses the number of cycles by minimizing the root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergence, and it can characterize the non-linearity between spectra and responses more precisely than SISR. Two spectral data sets from basic research experiments, including nondestructive measurement of plant chlorophyll and noninvasive measurement of human blood glucose, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains a smooth and informative regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) adapts well to complex spectral models compared with other calibration methods. It is validated that mSISR is a promising nonlinear modeling strategy for multivariate calibration.

  20. Improvement of the BALB/c-3T3 cell transformation assay: a tool for investigating cancer mechanisms and therapies.

    Science.gov (United States)

    Poburski, Doerte; Thierbach, René

    2016-01-01

    The identification of cancer preventive or therapeutic substances as well as carcinogenic risk assessment of chemicals is nowadays mostly dependent on animal studies. In vitro cell transformation assays mimic different stages of the in vivo neoplastic process and represent an excellent alternative to study carcinogenesis and therapeutic options. In the BALB/c-3T3 two-stage transformation assay cells are chemically transformed by treatment with MCA and TPA, along with the final Giemsa staining of morphologically aberrant foci. In addition to the standard method we show that it is possible to apply other chemicals in parallel to identify potential preventive or therapeutic substances during the transformation process. Furthermore, we successfully combined the BALB/c cell transformation assay with several endpoint applications for protein analysis (immunoblot, subcellular fractionation and immunofluorescence) or energy parameter measurements (glucose and oxygen consumption) to elucidate cancer mechanisms in more detail. In our opinion the BALB/c cell transformation assay proves to be an excellent model to investigate alterations in key proteins or energy parameters during the different stages of transformation as well as therapeutic substances and their mode of action.

  1. Improvement of the BALB/c-3T3 cell transformation assay: a tool for investigating cancer mechanisms and therapies

    Science.gov (United States)

    Poburski, Doerte; Thierbach, René

    2016-01-01

    The identification of cancer preventive or therapeutic substances as well as carcinogenic risk assessment of chemicals is nowadays mostly dependent on animal studies. In vitro cell transformation assays mimic different stages of the in vivo neoplastic process and represent an excellent alternative to study carcinogenesis and therapeutic options. In the BALB/c-3T3 two-stage transformation assay cells are chemically transformed by treatment with MCA and TPA, along with the final Giemsa staining of morphologically aberrant foci. In addition to the standard method we show that it is possible to apply other chemicals in parallel to identify potential preventive or therapeutic substances during the transformation process. Furthermore, we successfully combined the BALB/c cell transformation assay with several endpoint applications for protein analysis (immunoblot, subcellular fractionation and immunofluorescence) or energy parameter measurements (glucose and oxygen consumption) to elucidate cancer mechanisms in more detail. In our opinion the BALB/c cell transformation assay proves to be an excellent model to investigate alterations in key proteins or energy parameters during the different stages of transformation as well as therapeutic substances and their mode of action. PMID:27611302

  3. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    Science.gov (United States)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using normal same-year ground-truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
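
    The majority-rule reclassification within CLU polygons mentioned above can be sketched in a few lines of numpy: every pixel inside a polygon is reassigned the most frequent class of that polygon. The array-based polygon representation and the tiny example scene are illustrative assumptions, not the study's data format.

    ```python
    import numpy as np

    def majority_filter_by_polygon(class_map, polygon_id):
        """Assign every pixel inside a polygon the most frequent class of that polygon.

        class_map  : 2-D integer array of per-pixel maximum-likelihood classes
        polygon_id : 2-D integer array of the same shape; 0 marks 'outside any polygon'
        """
        out = class_map.copy()
        for pid in np.unique(polygon_id):
            if pid == 0:
                continue
            mask = polygon_id == pid
            values, counts = np.unique(class_map[mask], return_counts=True)
            out[mask] = values[np.argmax(counts)]
        return out

    # Tiny example: a 4x4 scene with one polygon (id 1) that is mostly class 7
    classes = np.array([[7, 7, 2, 5],
                        [7, 3, 7, 5],
                        [7, 7, 7, 5],
                        [1, 1, 1, 5]])
    polys = np.array([[1, 1, 1, 0],
                      [1, 1, 1, 0],
                      [1, 1, 1, 0],
                      [0, 0, 0, 0]])
    print(majority_filter_by_polygon(classes, polys))   # polygon pixels all become 7
    ```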

  4. A practical method for improving the accuracy of rapid earthquake reporting with MSDP software

    Institute of Scientific and Technical Information of China (English)

    苏莉华; 赵晖; 李源; 魏玉霞

    2012-01-01

    Seismic events recorded by the Henan digital seismic network from 2008 to 2011, both inside the network and outside it (within 100 km of its boundary), were selected. These events were analysed and compared using the MSDP software and, combined with routine operational experience, practical methods for improving the accuracy of rapid earthquake reports were summarised.

  5. Insights and Improvements for Research on the Accuracy of Witness Testimony

    Institute of Scientific and Technical Information of China (English)

    陈欢

    2012-01-01

    In order to make research on the accuracy of witness testimony more targeted and applicable in China, the article argues that it can be advanced in two ways: one is to combine the research methods of the natural sciences and the humanities, thereby improving the external validity of experimental studies; the other is to look for broader psychological sources in local culture and dominant value orientations.

  6. Improved methods for capture, extraction, and quantitative assay of environmental DNA from Asian bigheaded carp (Hypophthalmichthys spp.).

    Directory of Open Access Journals (Sweden)

    Cameron R Turner

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species.

  7. Improved methods for capture, extraction, and quantitative assay of environmental DNA from Asian bigheaded carp (Hypophthalmichthys spp.).

    Science.gov (United States)

    Turner, Cameron R; Miller, Derryl J; Coyne, Kathryn J; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species.

  8. Improved Methods for Capture, Extraction, and Quantitative Assay of Environmental DNA from Asian Bigheaded Carp (Hypophthalmichthys spp.)

    Science.gov (United States)

    Turner, Cameron R.; Miller, Derryl J.; Coyne, Kathryn J.; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species. PMID:25474207

  9. VLSI Architecture for 8-Point AI-based Arai DCT having Low Area-Time Complexity and Power at Improved Accuracy

    Directory of Open Access Journals (Sweden)

    Jithra Adikari

    2012-03-01

    A low complexity digital VLSI architecture for the computation of an algebraic integer (AI) based 8-point Arai DCT algorithm is proposed. AI encoding schemes for exact representation of the Arai DCT transform based on a particularly sparse 2-D AI representation are reviewed, leading to the proposed novel architecture based on a new final reconstruction step (FRS) having lower complexity and higher accuracy compared to the state-of-the-art. This FRS is based on an optimization derived from expansion factors that leads to small integer constant-coefficient multiplications, which are realized with common sub-expression elimination (CSE) and Booth encoding. The reference circuit [1] as well as the proposed architectures for two expansion factors α† = 4.5958 and α′ = 167.2309 are implemented. The proposed circuits show 150% and 300% improvements in the number of DCT coefficients having error ≤ 0.1% compared to [1]. The three designs were realized using both 40 nm CMOS Xilinx Virtex-6 FPGAs and synthesized using 65 nm CMOS general purpose standard cells from TSMC. Post synthesis timing analysis of 65 nm CMOS realizations at 900 mV for all three designs of the 8-point DCT core for 8-bit inputs shows potential real-time operation at a 2.083 GHz clock frequency, leading to a combined throughput of 2.083 billion 8-point Arai DCTs per second. The expansion-factor designs show a 43% reduction in area (A) and a 29% reduction in dynamic power (PD) for FPGA realizations. An 11% reduction in area is observed for the ASIC design for α† = 4.5958 for an 8% reduction in total power (PT). Our second ASIC design having α′ = 167.2309 shows marginal improvements in area and power compared to our reference design but at significantly better accuracy.

  10. IMPROVEMENT OF ACCURACY OF RADIATIVE HEAT TRANSFER DIFFERENTIAL APPROXIMATION METHOD FOR MULTI DIMENSIONAL SYSTEMS BY MEANS OF AUTO-ADAPTABLE BOUNDARY CONDITIONS

    Directory of Open Access Journals (Sweden)

    K. V. Dobrego

    2015-01-01

    The differential approximation is derived from the radiation transfer equation by averaging over the solid angle. It is one of the more effective methods for engineering calculations of radiative heat transfer in complex three-dimensional thermal power systems with selective and scattering media. A new method for improving the accuracy of the differential approximation, based on auto-adaptable boundary conditions, is introduced in the paper. The efficiency of the method is demonstrated for test 2-D systems. Self-consistent auto-adaptable boundary conditions that take into consideration the non-orthogonal component of the radiation flux incident on the boundary are formulated. It is demonstrated that accounting for the non-orthogonal incident flux in multi-dimensional systems, such as furnaces, boilers and combustion chambers, improves the accuracy of radiant flux simulations, most of all in the zones adjacent to the edges of the chamber. Test simulations utilizing the differential approximation method with traditional boundary conditions, the new self-consistent boundary conditions and the "precise" discrete ordinates method were performed. The mean square errors of the resulting radiative fluxes calculated along the boundary of rectangular and triangular test areas were decreased 1.5–2 times by using auto-adaptable boundary conditions. Radiation flux gaps in the corner points of non-symmetric systems are revealed by using auto-adaptable boundary conditions, which cannot be obtained with the conventional boundary conditions.

  11. Diagnostic accuracy of real-time PCR assays targeting 16S rRNA and lipL32 genes for human leptospirosis in Thailand: a case-control study.

    Directory of Open Access Journals (Sweden)

    Janjira Thaipadungpanit

    BACKGROUND: Rapid PCR-based tests for the diagnosis of leptospirosis can provide information that contributes towards early patient management, but these have not been adopted in Thailand. Here, we compare the diagnostic sensitivity and specificity of two real-time PCR assays targeting rrs or lipL32 for the diagnosis of leptospirosis in northeast Thailand. METHODS/PRINCIPAL FINDINGS: A case-control study of 266 patients (133 cases of leptospirosis and 133 controls) was constructed to evaluate the diagnostic sensitivity and specificity (DSe & DSp) of both PCR assays. The median duration of illness prior to admission of cases was 4 days (IQR 2-5 days; range 1-12 days). DSe and DSp were determined using positive culture and/or microscopic agglutination test (MAT) as the gold standard. The DSe was higher for the rrs assay than the lipL32 assay (56% (95% CI 47-64%) versus 43% (95% CI 34-52%), p<0.001). No cases were positive for the lipL32 assay alone. There was borderline evidence to suggest that the DSp of the rrs assay was lower than the lipL32 assay (90% (95% CI 83-94%) versus 93% (95% CI 88-97%), p = 0.06). Nine controls gave positive reactions for both assays and 5 controls gave a positive reaction for the rrs assay alone. The DSe of the rrs and lipL32 assays were high in the subgroup of 39 patients who were culture positive for Leptospira spp. (95% and 87%, respectively, p = 0.25). CONCLUSIONS/SIGNIFICANCE: Early detection of Leptospira using PCR is possible for more than half of patients presenting with leptospirosis and could contribute to individual patient care.
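
    Diagnostic sensitivity and specificity with their confidence intervals follow directly from the 2x2 counts. The sketch below computes them with Wilson score intervals; the counts are reconstructed approximately from the percentages quoted above and are illustrative only.

    ```python
    from math import sqrt

    def wilson_ci(k, n, z=1.96):
        """Wilson score 95% CI for a binomial proportion k/n."""
        p = k / n
        denom = 1 + z * z / n
        centre = (p + z * z / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
        return centre - half, centre + half

    def sens_spec(tp, fn, tn, fp):
        """Return (sensitivity, its CI, specificity, its CI)."""
        return (tp / (tp + fn), wilson_ci(tp, tp + fn),
                tn / (tn + fp), wilson_ci(tn, tn + fp))

    # Counts reconstructed approximately from the reported percentages (illustrative only)
    rrs = sens_spec(tp=74, fn=59, tn=119, fp=14)     # roughly 56% sensitive, 90% specific
    lipl32 = sens_spec(tp=57, fn=76, tn=124, fp=9)   # roughly 43% sensitive, 93% specific
    print(rrs)
    print(lipl32)
    ```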

  12. Improved high-throughput virus neutralisation assay for antibody estimation against pandemic and seasonal influenza strains from 2009 to 2011.

    Science.gov (United States)

    Terletskaia-Ladwig, Elena; Meier, Silvia; Enders, Martin

    2013-05-01

    An automatable focus-reduction neutralisation test (AFRNT) for detecting influenza neutralising antibodies in serum was developed. The assay used immunoperoxidase staining and automated foci counting with AID Diagnostika ViruSpot software. Human serum samples (n=108) were collected before and after vaccination with Pandemrix or Begrivac and were tested by AFRNT and a haemagglutination inhibition assay (HI) using seasonal and pandemic influenza vaccine strains from 2009 to 2011. Much attention has been given to the factors that influence detection of neutralising titre, such as viral quantification and the use of receptor destroying enzyme (RDE) for serum treatment. Foci counting enabled precise virus quantification and the development of a highly sensitive assay. Pre-treatment of the human sera with RDE significantly reduced the neutralising titres against all strains, with the exception of the seasonal H1N1 (2009/2010) strain. An HI titre of 1:40, which is associated with a 50% clinical protection against influenza, was equivalent to an AFRNT titre of 1:100-1:200. In conclusion, the AFRNT is rapid, highly sensitive, and fully automatable; therefore, this test is perfectly suitable for the high-throughput detection of influenza-neutralising antibodies. PMID:23518398

  13. Improved Method for Rotational Accuracy of Flexure Hinges

    Institute of Scientific and Technical Information of China (English)

    裴旭; 宗光华; 于靖军

    2013-01-01

    A traditional notch-type flexure hinge exhibits drift of its center of rotation as it rotates. When the rotation angle of the flexure hinge is small and the axial displacement of the rotation center is much smaller than the displacement perpendicular to the axis, the center-shift model of the flexure hinge can be simplified. A design approach is proposed in which two notch-type hinges connecting the two rigid bodies are placed orthogonally with their rotation axes coincident, so that the center-shift of the flexure hinge is suppressed. A planar virtual-center-of-motion (VCM) mechanism is introduced to realize a virtual cross constraint between the hinges. With this approach, hinges with different notch shapes can be improved to raise their rotational accuracy. Combined hinges with circular-arc and right-angle notches were designed and compared with hinges of traditional form by finite-element simulation; the simulation results confirm the effectiveness of the method.

  14. Individualized nomogram improves diagnostic accuracy of stage I-II gallbladder cancer in chronic cholecystitis patients with gallbladder wall thickening

    Institute of Scientific and Technical Information of China (English)

    Di Zhou; Jian-Dong Wang; Yong Yang; Wen-Long Yu; Yong-Jie Zhang; Zhi-Wei Quan

    2016-01-01

    BACKGROUND: Early diagnosis of gallbladder cancer (GBC) can remarkably improve the prognosis of patients. This study aimed to develop a nomogram for individualized diagnosis of stage I-II GBC in chronic cholecystitis patients with gallbladder wall thickening. METHODS: The nomogram was developed using logistic regression analyses based on a retrospective cohort consisting of 89 consecutive patients with stage I-II GBC and 1240 patients with gallbladder wall thickening treated at one biliary surgery center in Shanghai between January 2009 and December 2011. The accuracy of the nomogram was validated by discrimination, calibration and a prospective cohort treated at another center between January 2012 and December 2014 (n=928). RESULTS: Factors included in the nomogram were advanced age, hazardous alcohol consumption, long-standing diagnosed gallstones, atrophic gallbladder, gallbladder wall calcification, intraluminal polypoid lesion, higher wall thickness ratio and mucosal line disruption. The nomogram had concordance indices of 0.889 and 0.856 for the two cohorts, respectively. Internal and external calibration curves fitted well. The area under the receiver-operating characteristic curves of the nomogram was higher than that of multidetector row computed tomography in the diagnosis of stage I-II GBC. CONCLUSION: The proposed nomogram improves individualized diagnosis of stage I-II GBC in chronic cholecystitis patients with gallbladder wall thickening, especially for those in whom the imaging features alone do not allow the diagnosis to be confirmed.

  15. Improving the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins by guided-learning through a two-layer neural network.

    Science.gov (United States)

    Faraggi, Eshel; Xue, Bin; Zhou, Yaoqi

    2009-03-01

    This article attempts to increase the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins through improved learning. Most methods developed for improving the backpropagation algorithm of artificial neural networks are limited to small neural networks. Here, we introduce a guided-learning method suitable for networks of any size. The method employs a part of the weights for guiding and the other part for training and optimization. We demonstrate this technique by predicting residue solvent accessibility and real-value backbone torsion angles of proteins. In this application, the guiding factor is designed to satisfy the intuitive condition that for most residues, the contribution of a residue to the structural properties of another residue is smaller for greater separation in the protein-sequence distance between the two residues. We show that the guided-learning method makes a 2-4% reduction in 10-fold cross-validated mean absolute errors (MAE) for predicting residue solvent accessibility and backbone torsion angles, regardless of the size of database, the number of hidden layers and the size of input windows. This together with introduction of two-layer neural network with a bipolar activation function leads to a new method that has a MAE of 0.11 for residue solvent accessibility, 36 degrees for psi, and 22 degrees for phi. The method is available as a Real-SPINE 3.0 server in http://sparks.informatics.iupui.edu.

  16. Improving the accuracy of ground-state correlation energies within a plane-wave basis set: The electron-hole exchange kernel.

    Science.gov (United States)

    Dixit, Anant; Ángyán, János G; Rocca, Dario

    2016-09-14

    A new formalism was recently proposed to improve random phase approximation (RPA) correlation energies by including approximate exchange effects [B. Mussard et al., J. Chem. Theory Comput. 12, 2191 (2016)]. Within this framework, by keeping only the electron-hole contributions to the exchange kernel, two approximations can be obtained: An adiabatic connection analog of the second order screened exchange (AC-SOSEX) and an approximate electron-hole time-dependent Hartree-Fock (eh-TDHF). Here we show how this formalism is suitable for an efficient implementation within the plane-wave basis set. The response functions involved in the AC-SOSEX and eh-TDHF equations can indeed be compactly represented by an auxiliary basis set obtained from the diagonalization of an approximate dielectric matrix. Additionally, the explicit calculation of unoccupied states can be avoided by using density functional perturbation theory techniques and the matrix elements of dynamical response functions can be efficiently computed by applying the Lanczos algorithm. As shown by several applications to reaction energies and weakly bound dimers, the inclusion of the electron-hole kernel significantly improves the accuracy of ground-state correlation energies with respect to RPA and semi-local functionals. PMID:27634249

  17. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    International Nuclear Information System (INIS)

    The purpose of this research was to develop a method to correct the cupping artifact caused from x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel-holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10–18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, −35, and −94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused from x-ray scattering and retain consistent HU values of breast tissues. (paper)
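
    At its core, the scatter correction divides each measured projection by one plus the estimated scatter-to-primary ratio. The sketch below shows that step on a toy projection; the smooth SPR field is a placeholder for the BPA-derived estimate, not the authors' SPR model.

    ```python
    import numpy as np

    def correct_projection(measured, spr):
        """Remove estimated scatter from a measured projection.

        measured : total signal T = P + S in a projection image (2-D array)
        spr      : scatter-to-primary ratio S/P estimated at each pixel
        The primary follows from T = P*(1 + SPR), so P = T / (1 + SPR).
        """
        return measured / (1.0 + spr)

    # Toy example: a flat primary of 100 with a smooth, centrally peaked scatter field
    x = np.linspace(-1, 1, 256)
    spr = 0.6 * np.exp(-(x[None, :] ** 2 + x[:, None] ** 2))   # SPR highest mid-breast (assumed shape)
    primary = np.full((256, 256), 100.0)
    measured = primary * (1.0 + spr)
    print(np.allclose(correct_projection(measured, spr), primary))   # True
    ```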

  18. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    Energy Technology Data Exchange (ETDEWEB)

    Lambron, Julien; Frampas, Eric; Toulgoat, Frederique [University Hospital, Department of Radiology, Nantes (France); Rakotonjanahary, Josue [University Hospital, Department of Pediatric Oncology, Angers (France); University Paris Diderot, INSERM CIE5 Robert Debre Hospital, Assistance Publique-Hopitaux de Paris (AP-HP), Paris (France); Loisel, Didier [University Hospital, Department of Radiology, Angers (France); Carli, Emilie de; Rialland, Xavier [University Hospital, Department of Pediatric Oncology, Angers (France); Delion, Matthieu [University Hospital, Department of Neurosurgery, Angers (France)

    2016-02-15

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm3 (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)
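
    The interobserver agreement reported above is quantified with the kappa coefficient. A minimal implementation of Cohen's kappa for two readers is sketched below; the ten hypothetical modified-Dodge stage assignments are made up for illustration.

    ```python
    import numpy as np

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two raters on categorical labels."""
        a = np.asarray(labels_a)
        b = np.asarray(labels_b)
        cats = np.union1d(a, b)
        n = a.size
        confusion = np.array([[np.sum((a == i) & (b == j)) for j in cats] for i in cats])
        p_obs = np.trace(confusion) / n                                   # observed agreement
        p_exp = np.sum(confusion.sum(axis=1) * confusion.sum(axis=0)) / n ** 2  # chance agreement
        return (p_obs - p_exp) / (1.0 - p_exp)

    # Hypothetical stage assignments by two readers for ten MRIs
    reader1 = [1, 1, 2, 3, 2, 1, 4, 2, 3, 1]
    reader2 = [1, 2, 2, 3, 2, 1, 4, 1, 3, 1]
    print(round(cohens_kappa(reader1, reader2), 2))   # 0.71 for this example
    ```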

  19. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    International Nuclear Information System (INIS)

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm3 (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)

  20. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    Science.gov (United States)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused from x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel-holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused from x-ray scattering and retain consistent HU values of breast tissues.

  1. A novel computer-assisted image analysis of [123I]β-CIT SPECT images improves the diagnostic accuracy of parkinsonian disorders

    Energy Technology Data Exchange (ETDEWEB)

    Goebel, Georg [Innsbruck Medical University, Department of Medical Statistics, Informatics and Health Economics, Innsbruck (Austria); Seppi, Klaus; Wenning, Gregor K.; Poewe, Werner; Scherfler, Christoph [Innsbruck Medical University, Department of Neurology, Innsbruck (Austria); Donnemiller, Eveline; Warwitz, Boris; Virgolini, Irene [Innsbruck Medical University, Department of Nuclear Medicine, Innsbruck (Austria)

    2011-04-15

    The purpose of this study was to develop an observer-independent algorithm for the correct classification of dopamine transporter SPECT images as Parkinson's disease (PD), multiple system atrophy parkinson variant (MSA-P), progressive supranuclear palsy (PSP) or normal. A total of 60 subjects with clinically probable PD (n = 15), MSA-P (n = 15) and PSP (n = 15), and 15 age-matched healthy volunteers, were studied with the dopamine transporter ligand [123I]β-CIT. Parametric images of the specific-to-nondisplaceable equilibrium partition coefficient (BPND) were generated. Following a voxel-wise ANOVA, cut-off values were calculated from the voxel values of the resulting six post-hoc t-test maps. The percentages of the volume of an individual BPND image remaining below and above the cut-off values were determined. The higher percentage of image volume from all six cut-off matrices was used to classify an individual's image. For validation, the algorithm was compared to a conventional region of interest analysis. The predictive diagnostic accuracy of the algorithm in the correct assignment of a [123I]β-CIT SPECT image was 83.3% and increased to 93.3% on merging the MSA-P and PSP groups. In contrast the multinomial logistic regression of mean region of interest values of the caudate, putamen and midbrain revealed a diagnostic accuracy of 71.7%. In contrast to a rater-driven approach, this novel method was superior in classifying [123I]β-CIT SPECT images as one of four diagnostic entities. In combination with the investigator-driven visual assessment of SPECT images, this clinical decision support tool would help to improve the diagnostic yield of [123I]β-CIT SPECT in patients presenting with parkinsonism at their initial visit. (orig.)
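
    A heavily simplified sketch of the cut-off-map voting idea is given below: for each pairwise group contrast, the fraction of voxels falling on either side of a voxel-wise cut-off votes for one of the two groups, and the largest fraction decides the label. The contrasts, cut-off maps and patient data are synthetic assumptions, and the logic is only an illustrative reading of the published algorithm.

    ```python
    import numpy as np

    def classify_bpnd(bpnd, cutoff_maps):
        """Simplified voting over voxel-wise cut-off maps.

        bpnd        : 1-D array of a patient's voxel BPND values (flattened image)
        cutoff_maps : dict mapping (group_low, group_high) -> voxel-wise cut-off array
        """
        scores = {}
        for (low, high), cut in cutoff_maps.items():
            frac_low = np.mean(bpnd < cut)          # fraction of voxels below the cut-off map
            scores[low] = max(scores.get(low, 0.0), frac_low)
            scores[high] = max(scores.get(high, 0.0), 1.0 - frac_low)
        return max(scores, key=scores.get)

    # Toy 1000-voxel example with made-up group cut-off maps
    rng = np.random.default_rng(3)
    voxels = 1000
    cutoffs = {("PD", "normal"): rng.uniform(1.0, 1.4, voxels),
               ("MSA-P", "normal"): rng.uniform(0.9, 1.3, voxels),
               ("PSP", "normal"): rng.uniform(0.9, 1.3, voxels)}
    patient = rng.uniform(1.4, 2.0, voxels)          # preserved striatal binding everywhere
    print(classify_bpnd(patient, cutoffs))           # -> "normal"
    ```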

  2. Method comparison and accuracy of 15 commercial serum total protein assays

    Institute of Scientific and Technical Information of China (English)

    曾洁; 任思楣; 汪静; 张传宝; 张江涛; 赵海建; 刘倩; 张天娇; 闫颖; 周伟燕

    2015-01-01

    Objective To evaluate the difference between the Doumas method and 15 commercial serum total protein (TP) methods based on EP9-A3. Methods Serum panels were quantified for TP with the Doumas method and measured in parallel with the 15 commercial methods. Linear regression analyses were performed, followed by calculation of the relative deviation and 95% CI between each commercial method and the Doumas method at three medical decision levels (45 g/L, 60 g/L, 80 g/L). We also calculated the relative deviation, 95% limits of agreement (LoA) and 95% CI based on the classical and improved Bland-Altman methods at the three medical decision levels. If both the relative deviation and the 95% CI were within 5%, the commercial serum total protein method was considered comparable to the Doumas method. Results (1) All assays presented high correlation (r>0.975, P<0.001) with the Doumas method, and all showed relative deviations and 95% CIs within the biological total error goal (5%) at the medical decision levels based on regression analysis. (2) Based on the classical and improved Bland-Altman methods, fourteen of the 15 commercial methods showed relative deviations and 95% CIs within ±5%. Conclusions All commercial assays are comparable to the Doumas method at the medical decision levels. There is no difference between regression analysis and the Bland-Altman method for the comparison study.
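
    The classical Bland-Altman evaluation used above reduces to the mean relative difference, its approximate 95% confidence interval and the 95% limits of agreement. A short numpy sketch on synthetic total-protein results follows; the simulated bias and noise are assumptions, not study data.

    ```python
    import numpy as np

    def bland_altman_relative(routine, reference):
        """Classical Bland-Altman statistics on relative differences (in percent)."""
        routine = np.asarray(routine, dtype=float)
        reference = np.asarray(reference, dtype=float)
        rel_diff = 100.0 * (routine - reference) / ((routine + reference) / 2.0)
        mean_bias = rel_diff.mean()
        sd = rel_diff.std(ddof=1)
        loa = (mean_bias - 1.96 * sd, mean_bias + 1.96 * sd)      # 95% limits of agreement
        ci_bias = (mean_bias - 1.96 * sd / np.sqrt(rel_diff.size),
                   mean_bias + 1.96 * sd / np.sqrt(rel_diff.size))  # approximate 95% CI of the bias
        return mean_bias, loa, ci_bias

    # Synthetic total-protein results (g/L): the routine method reads about 1% high
    rng = np.random.default_rng(4)
    ref = rng.uniform(45, 85, 46)                 # 46 sera, as in the study design
    routine = ref * 1.01 + rng.normal(0, 0.5, 46)
    bias, loa, ci = bland_altman_relative(routine, ref)
    print(f"bias {bias:.2f}%  LoA {loa[0]:.2f}% to {loa[1]:.2f}%")
    ```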

  3. Combining pseudo dinucleotide composition with the Z curve method to improve the accuracy of predicting DNA elements: a case study in recombination spots.

    Science.gov (United States)

    Dong, Chuan; Yuan, Ya-Zhou; Zhang, Fa-Zhan; Hua, Hong-Li; Ye, Yuan-Nong; Labena, Abraham Alemayehu; Lin, Hao; Chen, Wei; Guo, Feng-Biao

    2016-08-16

    Pseudo dinucleotide composition (PseDNC) and Z curve showed excellent performance in the classification issues of nucleotide sequences in bioinformatics. Inspired by the principle of Z curve theory, we improved PseDNC to give the phase-specific PseDNC (psPseDNC). In this study, we used the prediction of recombination spots as a case to illustrate the capability of psPseDNC and also PseDNC fused with Z curve theory based on a novel machine learning method named large margin distribution machine (LDM). We verified that combining the two widely used approaches could generate better performance compared to only using PseDNC with a support vector machine-based (SVM-based) model. The best Matthews correlation coefficient (MCC) achieved by our LDM-based model was 0.7037 through the rigorous jackknife test and improved by ∼6.6%, ∼3.2%, and ∼2.4% compared with three previous studies. Similarly, the accuracy was improved by 3.2% compared with our previous iRSpot-PseDNC web server through an independent data test. These results demonstrate that the joint use of PseDNC and Z curve enhances performance and can extract more information from a biological sequence. To facilitate research in this area, we constructed a user-friendly web server for predicting hot/cold spots, HcsPredictor, which can be freely accessed from . In summary, we provided a unified algorithm by integrating Z curve with PseDNC. We hope this unified algorithm could be extended to other classification issues in DNA elements. PMID:27410247
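    A minimal sketch of the kind of feature extraction that joins dinucleotide composition (the core of PseDNC) with cumulative Z-curve components for a DNA sequence is shown below. The normalization and the pseudo-correlation terms of the full PseDNC, as well as the phase-specific variant, are omitted, so treat this as illustrative only.

```python
from itertools import product

def z_curve(seq):
    """Cumulative Z-curve components: purine/pyrimidine (x),
    amino/keto (y) and weak/strong hydrogen bonding (z), length-normalized."""
    x = y = z = 0
    for base in seq.upper():
        x += 1 if base in "AG" else -1   # R vs Y
        y += 1 if base in "AC" else -1   # M vs K
        z += 1 if base in "AT" else -1   # W vs S
    n = max(len(seq), 1)
    return [x / n, y / n, z / n]

def dinucleotide_composition(seq):
    """Frequencies of the 16 dinucleotides."""
    seq = seq.upper()
    pairs = ["".join(p) for p in product("ACGT", repeat=2)]
    counts = {p: 0 for p in pairs}
    for i in range(len(seq) - 1):
        di = seq[i:i + 2]
        if di in counts:
            counts[di] += 1
    total = max(sum(counts.values()), 1)
    return [counts[p] / total for p in pairs]

def features(seq):
    return dinucleotide_composition(seq) + z_curve(seq)

print(features("ATGCGCGTATAGCCGGAT"))
```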

  4. Combining pseudo dinucleotide composition with the Z curve method to improve the accuracy of predicting DNA elements: a case study in recombination spots.

    Science.gov (United States)

    Dong, Chuan; Yuan, Ya-Zhou; Zhang, Fa-Zhan; Hua, Hong-Li; Ye, Yuan-Nong; Labena, Abraham Alemayehu; Lin, Hao; Chen, Wei; Guo, Feng-Biao

    2016-08-16

    Pseudo dinucleotide composition (PseDNC) and Z curve showed excellent performance in the classification issues of nucleotide sequences in bioinformatics. Inspired by the principle of Z curve theory, we improved PseDNC to give the phase-specific PseDNC (psPseDNC). In this study, we used the prediction of recombination spots as a case to illustrate the capability of psPseDNC and also PseDNC fused with Z curve theory based on a novel machine learning method named large margin distribution machine (LDM). We verified that combining the two widely used approaches could generate better performance compared to only using PseDNC with a support vector machine-based (SVM-based) model. The best Matthews correlation coefficient (MCC) achieved by our LDM-based model was 0.7037 through the rigorous jackknife test and improved by ∼6.6%, ∼3.2%, and ∼2.4% compared with three previous studies. Similarly, the accuracy was improved by 3.2% compared with our previous iRSpot-PseDNC web server through an independent data test. These results demonstrate that the joint use of PseDNC and Z curve enhances performance and can extract more information from a biological sequence. To facilitate research in this area, we constructed a user-friendly web server for predicting hot/cold spots, HcsPredictor, which can be freely accessed from . In summary, we provided a unified algorithm by integrating Z curve with PseDNC. We hope this unified algorithm could be extended to other classification issues in DNA elements.

  5. Significant Improvement of Puncture Accuracy and Fluoroscopy Reduction in Percutaneous Transforaminal Endoscopic Discectomy With Novel Lumbar Location System: Preliminary Report of Prospective Hello Study.

    Science.gov (United States)

    Fan, Guoxin; Guan, Xiaofei; Zhang, Hailong; Wu, Xinbo; Gu, Xin; Gu, Guangfei; Fan, Yunshan; He, Shisheng

    2015-12-01

    Prospective nonrandomized control study. The study aimed to investigate the implication of the HE's Lumbar LOcation (HELLO) system in improving the puncture accuracy and reducing fluoroscopy in percutaneous transforaminal endoscopic discectomy (PTED). Percutaneous transforaminal endoscopic discectomy is one of the most popular minimally invasive spine surgeries that heavily depend on repeated fluoroscopy. Increased fluoroscopy will induce higher radiation exposure to surgeons and patients. Accurate puncture in PTED can be achieved by accurate preoperative location and a definite trajectory. The HELLO system mainly consists of a self-made surface locator and a puncture-assisted device. The surface locator was used to identify the exact puncture target and the puncture-assisted device was used to optimize the puncture trajectory. Patients who had single L4/5 or L5/S1 lumbar intervertebral disc herniation and underwent PTED were included in the study. Patients receiving the HELLO system were assigned to Group A, and those undergoing the conventional method were assigned to Group B. The primary endpoints were puncture times and fluoroscopy times, and the secondary endpoints were location time and operation time. A total of 62 patients who received PTED were included in this study. The average age was 45.35 ± 8.70 years in Group A and 46.61 ± 7.84 years in Group B (P = 0.552). There were no significant differences in gender, body mass index, conservative time, and surgical segment between the 2 groups (P > 0.05). The puncture times were 1.19 ± 0.48 in Group A and 6.03 ± 1.87 in Group B. The advantage of the HELLO system is accurate preoperative location and a definite trajectory. This preliminary report indicated that the HELLO system significantly improves the puncture accuracy of PTED and reduces fluoroscopy times, preoperative location time, as well as operation time. (ChiCTR-ICR-15006730).

  6. Significant Improvement of Puncture Accuracy and Fluoroscopy Reduction in Percutaneous Transforaminal Endoscopic Discectomy With Novel Lumbar Location System: Preliminary Report of Prospective Hello Study.

    Science.gov (United States)

    Fan, Guoxin; Guan, Xiaofei; Zhang, Hailong; Wu, Xinbo; Gu, Xin; Gu, Guangfei; Fan, Yunshan; He, Shisheng

    2015-12-01

    Prospective nonrandomized control study. The study aimed to investigate the implication of the HE's Lumbar LOcation (HELLO) system in improving the puncture accuracy and reducing fluoroscopy in percutaneous transforaminal endoscopic discectomy (PTED). Percutaneous transforaminal endoscopic discectomy is one of the most popular minimally invasive spine surgeries that heavily depend on repeated fluoroscopy. Increased fluoroscopy will induce higher radiation exposure to surgeons and patients. Accurate puncture in PTED can be achieved by accurate preoperative location and a definite trajectory. The HELLO system mainly consists of a self-made surface locator and a puncture-assisted device. The surface locator was used to identify the exact puncture target and the puncture-assisted device was used to optimize the puncture trajectory. Patients who had single L4/5 or L5/S1 lumbar intervertebral disc herniation and underwent PTED were included in the study. Patients receiving the HELLO system were assigned to Group A, and those undergoing the conventional method were assigned to Group B. The primary endpoints were puncture times and fluoroscopy times, and the secondary endpoints were location time and operation time. A total of 62 patients who received PTED were included in this study. The average age was 45.35 ± 8.70 years in Group A and 46.61 ± 7.84 years in Group B (P = 0.552). There were no significant differences in gender, body mass index, conservative time, and surgical segment between the 2 groups (P > 0.05). The puncture times were 1.19 ± 0.48 in Group A and 6.03 ± 1.87 in Group B. The advantage of the HELLO system is accurate preoperative location and a definite trajectory. This preliminary report indicated that the HELLO system significantly improves the puncture accuracy of PTED and reduces fluoroscopy times, preoperative location time, as well as operation time. (ChiCTR-ICR-15006730). PMID:26656348

  7. Improved diagnostic PCR assay for Actinobacillus pleuropneumoniae based on the nucleotide sequence of an outer membrane lipoprotein

    DEFF Research Database (Denmark)

    Gram, Trine; Ahrens, Peter

    1998-01-01

    Alignment of the sequences revealed conserved terminal and variable middle regions, which divided the reference strains into four distinct groups. Primers were selected from the conserved 5' and 3' termini of the gene. A 950-bp amplicon was obtained from each of 102 tested field isolates of A. pleuropneumoniae obtained from lungs. Their identity was verified by sequencing approximately 500 bp of the amplification product from 50 of the A. pleuropneumoniae isolates, which all showed the expected DNA sequence characteristic of the serotype. To test the specificity of the reaction, 23 other bacterial species related to A. pleuropneumoniae or isolated from pigs were assayed. They were all found negative in the PCR, as were tonsil cultures from 50 pigs of an A. pleuropneumoniae-negative herd. The sensitivity assessed by agarose gel analysis of the PCR product was 10² CFU/PCR test tube.

  8. Fast T2 Mapping with Improved Accuracy Using Undersampled Spin-echo MRI and Model-based Reconstructions with a Generating Function

    CERN Document Server

    Sumpf, Tilman J; Uecker, Martin; Knoll, Florian; Frahm, Jens

    2014-01-01

    A model-based reconstruction technique for accelerated T2 mapping with improved accuracy is proposed using undersampled Cartesian spin-echo MRI data. The technique employs an advanced signal model for T2 relaxation that accounts for contributions from indirect echoes in a train of multiple spin echoes. An iterative solution of the nonlinear inverse reconstruction problem directly estimates spin-density and T2 maps from undersampled raw data. The algorithm is validated for simulated data as well as phantom and human brain MRI at 3 T. The performance of the advanced model is compared to conventional pixel-based fitting of echo-time images from fully sampled data. The proposed method yields more accurate T2 values than the mono-exponential model and allows for undersampling factors of at least 6. Although limitations are observed for very long T2 relaxation times, respective reconstruction problems may be overcome by a gradient dampening approach. The analytical gradient of the utilized cost function is included...
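    The conventional baseline that the model-based method is compared against, pixel-wise mono-exponential fitting of echo-time images, can be illustrated with a small fit. Echo times, noise level and the initial guess below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2):
    # S(TE) = S0 * exp(-TE / T2), the mono-exponential decay model
    return s0 * np.exp(-te / t2)

def fit_t2(te_ms, signal):
    """Pixel-wise mono-exponential fit of a multi-echo signal decay."""
    p0 = (signal[0], 80.0)                      # rough initial guess for (S0, T2)
    (s0, t2), _ = curve_fit(mono_exp, te_ms, signal, p0=p0, maxfev=5000)
    return s0, t2

# Synthetic decay with T2 = 90 ms sampled at 16 echoes with 10 ms spacing
te = np.arange(10, 170, 10, dtype=float)
sig = 1000 * np.exp(-te / 90.0) + np.random.normal(0, 5, te.size)
print(fit_t2(te, sig))
```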

  9. Improvements in medical CT image reconstruction accuracy in the presence of metal objects by using x-rays up to 1 MeV

    Science.gov (United States)

    Clayton, James; Virshup, Gary; Yang, Ming; Mohan, Radhe; Dong, Lei

    2009-03-01

    The use of flat panels based on amorphous silicon technology (a-Si) for digital radiography has been accepted by the medical community as having advantages over film-based systems. Radiation treatment planning employs computed tomographic (CT) data sets and projection images to delineate tumor targets and normal structures that are to be spared from radiation treatment. The accuracy of CT numbers is crucial for radiotherapy dose calculations in general but is even more important for charged particle therapy. Conventional CT scanners operating at kilovoltage X-ray energies typically exhibit significant image reconstruction artifacts in the presence of metal implants in the human body. We demonstrate a significant improvement in metal artifact reduction and electron density measurements using an amorphous silicon (a-Si) imager combined with an X-ray source that can operate at energies up to 1 MeV. The data collected with the higher-energy system will be compared and contrasted to CT results obtained at standard kilovoltage energies.

  10. Using "DRASTIC" to improve the accuracy of a geographical information system used for solid waste disposal facility siting: A case study

    Energy Technology Data Exchange (ETDEWEB)

    Padgett, D.A. (Univ. of Florida, Gainesville, FL (United States). Dept. of Geography)

    1993-01-01

    Beginning in 1989, the citizens and commissioners of Alachua County, Florida began to develop a siting plan for a new solid waste disposal facility (SWDF). Through a cooperative effort with a private consulting firm, several evaluative criteria were selected and then translated into parameters for a geographical information system (GIS). Despite efforts to avoid vulnerable hydrogeology, the preferred site selected was in close proximity to the well field supplying Gainesville, Florida, home to approximately 75 percent of the county's population. The results brought forth a wave of protests from local residents claiming that leachate from the proposed SWDF would contaminate their drinking water. In this study, "DRASTIC" was applied in order to improve the accuracy and defensibility of the aquifer protection-based GIS parameters. "DRASTIC", a method for evaluating ground water contamination potential, is an acronym which stands for Depth to Water, Net Recharge, Aquifer Media, Soil Media, Topography, Impact of Vadose Zone Media, and Conductivity (Hydraulic).
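    The DRASTIC index itself is a weighted sum of the seven parameter ratings, which can be sketched as below; the weights shown are the commonly published ones and are an assumption here, so they should be verified against the original methodology before use.

```python
# Commonly cited DRASTIC weights (assumed here; verify against the source method)
DRASTIC_WEIGHTS = {
    "depth_to_water": 5,
    "net_recharge": 4,
    "aquifer_media": 3,
    "soil_media": 2,
    "topography": 1,
    "impact_of_vadose_zone": 5,
    "hydraulic_conductivity": 3,
}

def drastic_index(ratings):
    """Weighted sum of parameter ratings (each rating typically 1-10);
    a higher index indicates higher groundwater contamination potential."""
    return sum(DRASTIC_WEIGHTS[name] * rating for name, rating in ratings.items())

# Illustrative ratings for one GIS grid cell
cell = {"depth_to_water": 9, "net_recharge": 6, "aquifer_media": 8,
        "soil_media": 6, "topography": 10, "impact_of_vadose_zone": 8,
        "hydraulic_conductivity": 6}
print(drastic_index(cell))  # used per grid cell to flag vulnerable areas
```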

  11. Improvement of echo state network accuracy with Adaboost

    Institute of Scientific and Technical Information of China (English)

    韩敏; 穆大芸

    2011-01-01

    Modifying the prediction model of an individual echo state network (ESN) improves the overall prediction result only to a limited extent. To solve this problem, we consider an ensemble of ESNs. The generalization performance and prediction accuracy of each individual ESN are boosted by using the Adaboost algorithm. Based on the Adaboost algorithm results, we develop an ESN predictor (ABESN). In this predictor, the weights of the training samples are constantly adjusted according to the fitting error; the greater the fitting error, the heavier the weight of the training sample. Therefore, the predictor will focus on the hard-to-learn samples in the next iteration cycle. The prediction models of the individual ESNs are weighted and summed to form the final ESN prediction model. The presented model is tested on the benchmark prediction problem of the Mackey-Glass time series as well as the sunspot time series. Simulation results demonstrate its high prediction accuracy and effectiveness for real-world time series prediction.
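    The sample-reweighting idea described above is sketched below in an AdaBoost.R2-like loop around generic regressors standing in for the individual ESNs. The loss scaling and the combination rule (a weighted mean rather than the weighted median of AdaBoost.R2) are simplified, and the `fit` callable is an assumed interface, so this is illustrative rather than the paper's ABESN.

```python
import numpy as np

def boost_regressors(fit, X, y, n_rounds=10):
    """AdaBoost.R2-style reweighting: samples with larger fitting error get
    larger weights in the next round, so later models focus on hard samples.
    `fit(X, y, w)` must return a callable model trained with sample weights w."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    models, alphas = [], []
    for _ in range(n_rounds):
        model = fit(X, y, w)
        err = np.abs(model(X) - y)
        loss = err / (err.max() + 1e-12)        # linear loss scaled to [0, 1]
        eps = float(np.sum(w * loss))           # weighted average loss
        if eps >= 0.5:                          # stop if the learner is too weak
            if not models:
                models.append(model)
                alphas.append(1.0)
            break
        beta = eps / (1.0 - eps)
        w = w * beta ** (1.0 - loss)            # shrink weights of easy samples
        w = w / w.sum()
        models.append(model)
        alphas.append(np.log(1.0 / beta))

    def ensemble(Xq):
        preds = np.array([m(Xq) for m in models])
        return np.average(preds, axis=0, weights=alphas)  # simplified combination
    return ensemble
```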

  12. Stage-specific adhesion of Leishmania promastigotes to sand fly midguts assessed using an improved comparative binding assay.

    Directory of Open Access Journals (Sweden)

    Raymond Wilson

    Full Text Available BACKGROUND: The binding of Leishmania promastigotes to the midgut epithelium is regarded as an essential part of the life-cycle in the sand fly vector, enabling the parasites to persist beyond the initial blood meal phase and establish the infection. However, the precise nature of the promastigote stage(s that mediate binding is not fully understood. METHODOLOGY/PRINCIPAL FINDINGS: To address this issue we have developed an in vitro gut binding assay in which two promastigote populations are labelled with different fluorescent dyes and compete for binding to dissected sand fly midguts. Binding of procyclic, nectomonad, leptomonad and metacyclic promastigotes of Leishmania infantum and L. mexicana to the midguts of blood-fed, female Lutzomyia longipalpis was investigated. The results show that procyclic and metacyclic promastigotes do not bind to the midgut epithelium in significant numbers, whereas nectomonad and leptomonad promastigotes both bind strongly and in similar numbers. The assay was then used to compare the binding of a range of different parasite species (L. infantum, L. mexicana, L. braziliensis, L. major, L. tropica to guts dissected from various sand flies (Lu. longipalpis, Phlebotomus papatasi, P. sergenti. The results of these comparisons were in many cases in line with expectations, the natural parasite binding most effectively to its natural vector, and no examples were found where a parasite was unable to bind to its natural vector. However, there were interesting exceptions: L. major and L. tropica being able to bind to Lu. longipalpis better than L. infantum; L. braziliensis was able to bind to P. papatasi as well as L. major; and significant binding of L. major to P. sergenti and L. tropica to P. papatasi was observed. CONCLUSIONS/SIGNIFICANCE: The results demonstrate that Leishmania gut binding is strictly stage-dependent, is a property of those forms found in the middle phase of development (nectomonad and leptomonad

  13. The cytotoxicity of polycationic iron oxide nanoparticles: Common endpoint assays and alternative approaches for improved understanding of cellular response mechanism

    Directory of Open Access Journals (Sweden)

    Hoskins Clare

    2012-04-01

    Full Text Available Abstract Background Iron oxide magnetic nanoparticles (MNPs) have an increasing number of biomedical applications. As such, in vitro characterisation is essential to ensure the bio-safety of these particles. Little is known about the cellular interaction or the effect on membrane integrity upon exposure to these MNPs. Here we synthesised Fe3O4 and surface coated it with poly(ethylenimine) (PEI) and poly(ethylene glycol) (PEG) to achieve particles of varying surface positive charge, and used them as model MNPs to evaluate the relative utility and limitations of cellular assays commonly applied for nanotoxicity assessment. An alternative approach, atomic force microscopy (AFM), was explored for the analysis of membrane structure and cell morphology upon interaction with the MNPs. The particles were tested in vitro on human SH-SY5Y, MCF-7 and U937 cell lines for reactive oxygen species (ROS) production and lipid peroxidation (LPO), LDH leakage and their overall cytotoxic effect. These results were compared with AFM topography imaging carried out on fixed cell lines. Results Successful particle synthesis and coating were characterised using FTIR, PCS, TEM and ICP. The particle size from TEM was 30 nm (−16.9 mV), which increased to 40 nm (+55.6 mV) upon coating with PEI and subsequently 50 nm (+31.2 mV) with PEG coating. Both particles showed excellent stability not only at neutral pH but also in the acidic environment of pH 4.6 in the presence of sodium citrate. The more highly charged MNP-PEI resulted in increased cytotoxic effect and ROS production in all cell lines compared with MNP-PEI-PEG. In general, an effect on cell membrane integrity was observed only in SH-SY5Y and MCF-7 cells with MNP-PEI, as determined by LDH leakage and LPO production. AFM topography images showed consistently that both the highly charged MNP-PEI and the less charged MNP-PEI-PEG caused cell morphology changes, possibly due to membrane disruption and cytoskeleton remodelling. Conclusions

  14. Improved dose calculation accuracy for low energy brachytherapy by optimizing dual energy CT imaging protocols for noise reduction using sinogram affirmed iterative reconstruction.

    Science.gov (United States)

    Landry, Guillaume; Gaudreault, Mathieu; van Elmpt, Wouter; Wildberger, Joachim E; Verhaegen, Frank

    2016-03-01

    The goal of this study was to evaluate the noise reduction achievable from dual energy computed tomography (CT) imaging (DECT) using filtered backprojection (FBP) and iterative image reconstruction algorithms combined with increased imaging exposure. We evaluated the data in the context of imaging for brachytherapy dose calculation, where accurate quantification of electron density ρe and effective atomic number Zeff is beneficial. A dual source CT scanner was used to scan a phantom containing tissue mimicking inserts. DECT scans were acquired at 80 kVp/140Sn kVp (where Sn stands for tin filtration) and 100 kVp/140Sn kVp, using the same values of the CT dose index CTDIvol for both settings as a measure for the radiation imaging exposure. Four CTDIvol levels were investigated. Images were reconstructed using FBP and sinogram affirmed iterative reconstruction (SAFIRE) with strengths 1, 3 and 5. From DECT scans two material quantities were derived, Zeff and ρe. DECT images were used to assign material types and the amount of improperly assigned voxels was quantified for each protocol. The dosimetric impact of improperly assigned voxels was evaluated with Geant4 Monte Carlo (MC) dose calculations for a 125I source in numerical phantoms. Standard deviations for Zeff and ρe were reduced up to a factor ∼2 when using SAFIRE with strength 5 compared to FBP. Standard deviations on Zeff and ρe as low as 0.15 and 0.006 were achieved for the muscle insert representing typical soft tissue using a CTDIvol of 40 mGy and 3 mm slice thickness. Dose calculation accuracy was generally improved when using SAFIRE. Mean (maximum absolute) dose errors of up to 1.3% (21%) with FBP were reduced to less than 1% (6%) with SAFIRE at a CTDIvol of 10 mGy. Using a CTDIvol of 40 mGy and SAFIRE yielded mean dose calculation errors of the order of 0.6% which was the MC dose calculation precision in this study and no error was larger than ±2.5% as opposed to errors of up to -4% with FBP. This
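    The material-assignment step mentioned above can be sketched as a nearest-reference match in (Zeff, ρe) space; the reference tissue values, the axis scaling and the function name below are illustrative assumptions, not the protocol used in the study.

```python
import numpy as np

# Illustrative reference tissues: (Zeff, relative electron density) -- assumed values
REFERENCE = {
    "adipose": (6.2, 0.95),
    "muscle":  (7.6, 1.04),
    "bone":    (12.3, 1.48),
}

def assign_material(zeff, rho_e):
    """Nearest-neighbour assignment in (Zeff, rho_e) space; Zeff is rescaled
    so that neither axis dominates the distance."""
    best, best_d = None, np.inf
    for name, (z_ref, r_ref) in REFERENCE.items():
        d = ((zeff - z_ref) / 10.0) ** 2 + (rho_e - r_ref) ** 2
        if d < best_d:
            best, best_d = name, d
    return best

print(assign_material(7.5, 1.03))  # -> 'muscle'
```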

  15. Improved dose calculation accuracy for low energy brachytherapy by optimizing dual energy CT imaging protocols for noise reduction using sinogram affirmed iterative reconstruction

    International Nuclear Information System (INIS)

    The goal of this study was to evaluate the noise reduction achievable from dual energy computed tomography (CT) imaging (DECT) using filtered backprojection (FBP) and iterative image reconstruction algorithms combined with increased imaging exposure. We evaluated the data in the context of imaging for brachytherapy dose calculation, where accurate quantification of electron density ρe and effective atomic number Zeff is beneficial. A dual source CT scanner was used to scan a phantom containing tissue mimicking inserts. DECT scans were acquired at 80 kVp/140Sn kVp (where Sn stands for tin filtration) and 100 kVp/140Sn kVp, using the same values of the CT dose index CTDIvol for both settings as a measure for the radiation imaging exposure. Four CTDIvol levels were investigated. Images were reconstructed using FBP and sinogram affirmed iterative reconstruction (SAFIRE) with strengths 1, 3 and 5. From DECT scans two material quantities were derived, Zeff and ρe. DECT images were used to assign material types and the amount of improperly assigned voxels was quantified for each protocol. The dosimetric impact of improperly assigned voxels was evaluated with Geant4 Monte Carlo (MC) dose calculations for a 125I source in numerical phantoms. Standard deviations for Zeff and ρe were reduced up to a factor ∼2 when using SAFIRE with strength 5 compared to FBP. Standard deviations on Zeff and ρe as low as 0.15 and 0.006 were achieved for the muscle insert representing typical soft tissue using a CTDIvol of 40 mGy and 3 mm slice thickness. Dose calculation accuracy was generally improved when using SAFIRE. Mean (maximum absolute) dose errors of up to 1.3% (21%) with FBP were reduced to less than 1% (6%) with SAFIRE at a CTDIvol of 10 mGy. Using a CTDIvol of 40 mGy and SAFIRE yielded mean dose calculation errors of the order of 0.6% which was the MC dose calculation precision in this study and no error was larger than ±2.5% as opposed to errors of up to -4% with FBP. This

  16. Quantitation of minimal disease levels in chronic lymphocytic leukemia using a sensitive flow cytometric assay improves the prediction of outcome and can be used to optimize therapy.

    Science.gov (United States)

    Rawstron, A C; Kennedy, B; Evans, P A; Davies, F E; Richards, S J; Haynes, A P; Russell, N H; Hale, G; Morgan, G J; Jack, A S; Hillmen, P

    2001-07-01

    Previous studies have suggested that the level of residual disease at the end of therapy predicts outcome in chronic lymphocytic leukemia (CLL). However, available methods for detecting CLL cells are either insensitive or not routinely applicable. A flow cytometric assay was developed that can differentiate CLL cells from normal B cells on the basis of their CD19/CD5/CD20/CD79b expression. The assay is rapid and can detect one CLL cell in 10⁴ to 10⁵ leukocytes in all patients. We have compared this assay to conventional assessment in 104 patients treated with CAMPATH-1H and/or autologous transplant. During CAMPATH-1H therapy, circulating CLL cells were rapidly depleted in responding patients, but remained detectable in nonresponders. Patients with more than 0.01 × 10⁹/L circulating CLL cells always had significant (> 5%) marrow disease, and blood monitoring could be used to time marrow assessments. In 25 out of 104 patients achieving complete remission by National Cancer Institute (NCI) criteria, the detection of residual bone marrow disease at more than 0.05% of leukocytes in 6 out of 25 patients predicted significantly poorer event-free (P = .0001) and overall survival (P = .007). CLL cells are detectable at a median of 15.8 months (range, 5.5-41.8) posttreatment in 9 out of 18 evaluable patients with less than 0.05% CLL cells at end of treatment. All patients with detectable disease have progressively increasing disease levels on follow-up. The use of sensitive techniques, such as the flow assay described here, allow accurate quantitation of disease levels and provide an accurate method for guiding therapy and predicting outcome. These results suggest that the eradication of detectable disease may lead to improved survival and should be tested in future studies.

  17. Two Methods to Derive Ground-level Concentrations of PM2.5 with Improved Accuracy in the North China, Calibrating MODIS AOD and CMAQ Model Predictions

    Science.gov (United States)

    Lyu, Baolei; Hu, Yongtao; Chang, Howard; Russell, Armistead; Bai, Yuqi

    2016-04-01

    Reliable and accurate characterizations of ground-level PM2.5 concentrations are essential to understand pollution sources and evaluate human exposures. Monitoring networks can only provide direct point-level observations at limited locations. At locations without monitors, there are generally two ways to estimate PM2.5 pollution levels. One is observations of aerosol properties from satellite-based remote sensing, such as Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD). The other is from deterministic atmospheric chemistry models, such as the Community Multi-Scale Air Quality Model (CMAQ). In this study, we used a statistical spatio-temporal downscaler to calibrate the two datasets to monitor observations to derive fine-scale ground-level concentrations of PM2.5 with improved accuracy. We treated both MODIS AOD and CMAQ model predictions as biased proxy estimates of PM2.5 pollution levels. The downscaler uses a Bayesian framework to model spatially and temporally varying coefficients for the two types of estimates in a linear regression setting, in order to correct biases. Especially for calibrating MODIS AOD, a city-specific linear model was established to fill missing AOD values, and a novel interpolation-based variable, the PM2.5 Spatial Interpolator, was introduced to account for spatial dependence among grid cells. We selected the heavily polluted and populated North China as our study area, in a grid setting of 81×81 12-km cells. For the evaluation of calibration performance for retrieved MODIS AOD, the R2 was 0.61 for the full model with the PM2.5 Spatial Interpolator included, and 0.48 with the PM2.5 Spatial Interpolator excluded. The constructed AOD values effectively predicted PM2.5 concentrations under our model structure, with R2 = 0.78. For the evaluation of calibrated CMAQ predictions, the R2 was 0.51, a little less than that of calibrated AOD. Finally we
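    The calibration idea can be sketched as a regression of monitor PM2.5 on the proxy (MODIS AOD or CMAQ output) plus a spatial-interpolator covariate, then prediction at unmonitored grid cells. The full method uses spatially and temporally varying Bayesian coefficients, which are not reproduced here; all data and names below are synthetic and illustrative.

```python
import numpy as np

def fit_calibration(proxy, spatial_interp, pm25_obs):
    """Ordinary least squares for pm25 ~ b0 + b1*proxy + b2*spatial_interp,
    a simplified stand-in for the spatio-temporal downscaler."""
    X = np.column_stack([np.ones_like(proxy), proxy, spatial_interp])
    beta, *_ = np.linalg.lstsq(X, pm25_obs, rcond=None)
    return beta

def predict(beta, proxy, spatial_interp):
    X = np.column_stack([np.ones_like(proxy), proxy, spatial_interp])
    return X @ beta

# Toy data: AOD-like proxy at monitor locations with known PM2.5
rng = np.random.default_rng(0)
aod = rng.uniform(0.2, 1.5, 200)
interp = rng.uniform(20, 150, 200)            # PM2.5 interpolated from neighbouring monitors
pm25 = 15 + 60 * aod + 0.3 * interp + rng.normal(0, 5, 200)
beta = fit_calibration(aod, interp, pm25)
print(predict(beta, np.array([0.8]), np.array([90.0])))
```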

  18. Dynamic Performance Comparison of Two Kalman Filters for Rate Signal Direct Modeling and Differencing Modeling for Combining a MEMS Gyroscope Array to Improve Accuracy.

    Science.gov (United States)

    Yuan, Guangmin; Yuan, Weizheng; Xue, Liang; Xie, Jianbing; Chang, Honglong

    2015-10-30

    In this paper, the performance of two Kalman filter (KF) schemes based on the direct estimated model and differencing estimated model for input rate signal was thoroughly analyzed and compared for combining measurements of a sensor array to improve the accuracy of microelectromechanical system (MEMS) gyroscopes. The principles for noise reduction were presented and KF algorithms were designed to obtain the optimal rate signal estimates. The input rate signal in the direct estimated KF model was modeled with a random walk process and treated as the estimated system state. In the differencing estimated KF model, a differencing operation was established between outputs of the gyroscope array, and then the optimal estimation of input rate signal was achieved by compensating for the estimations of bias drifts for the component gyroscopes. Finally, dynamic simulations and experiments with a six-gyroscope array were implemented to compare the dynamic performance of the two KF models. The 1σ error of the gyroscopes was reduced from 1.4558°/s to 0.1203°/s by the direct estimated KF model in a constant rate test and to 0.5974°/s by the differencing estimated KF model. The estimated rate signal filtered by both models could reflect the amplitude variation of the input signal in the swing rate test and displayed a reduction factor of about three for the 1σ noise. Results illustrate that the performance of the direct estimated KF model is much higher than that of the differencing estimated KF model, with a constant input signal or lower dynamic variation. A similarity in the two KFs' performance is observed if the input signal has a high dynamic variation.
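    The direct-estimation variant can be sketched as a scalar Kalman filter in which the rate is modelled as a random walk and the N gyroscope outputs are fused as repeated measurements of that single state; the bias-drift states of the full formulation are omitted and the noise parameters below are illustrative.

```python
import numpy as np

def fuse_gyro_array(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter for the direct-estimation scheme: the true rate is a
    random-walk state (process variance q) and all N gyroscopes are treated as
    independent measurements of that single state (measurement variance r)."""
    n_steps, n_gyros = measurements.shape
    H = np.ones((n_gyros, 1))            # every gyro observes the same rate
    R = r * np.eye(n_gyros)
    x, P = 0.0, 1.0                      # state estimate and its variance
    estimates = np.empty(n_steps)
    for k in range(n_steps):
        P = P + q                        # predict: rate follows a random walk
        z = measurements[k][:, None]
        S = P * (H @ H.T) + R            # innovation covariance
        K = P * H.T @ np.linalg.inv(S)   # Kalman gain, shape (1, n_gyros)
        x = x + (K @ (z - H * x)).item()
        P = (1.0 - (K @ H).item()) * P
        estimates[k] = x
    return estimates

# Six noisy gyroscopes observing a constant 1 deg/s rate
rng = np.random.default_rng(1)
meas = 1.0 + rng.normal(0.0, 1.0, size=(500, 6))
print(fuse_gyro_array(meas)[-1])
```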

  19. Dynamic Performance Comparison of Two Kalman Filters for Rate Signal Direct Modeling and Differencing Modeling for Combining a MEMS Gyroscope Array to Improve Accuracy

    Directory of Open Access Journals (Sweden)

    Guangmin Yuan

    2015-10-01

    Full Text Available In this paper, the performance of two Kalman filter (KF) schemes based on the direct estimated model and differencing estimated model for input rate signal was thoroughly analyzed and compared for combining measurements of a sensor array to improve the accuracy of microelectromechanical system (MEMS) gyroscopes. The principles for noise reduction were presented and KF algorithms were designed to obtain the optimal rate signal estimates. The input rate signal in the direct estimated KF model was modeled with a random walk process and treated as the estimated system state. In the differencing estimated KF model, a differencing operation was established between outputs of the gyroscope array, and then the optimal estimation of input rate signal was achieved by compensating for the estimations of bias drifts for the component gyroscopes. Finally, dynamic simulations and experiments with a six-gyroscope array were implemented to compare the dynamic performance of the two KF models. The 1σ error of the gyroscopes was reduced from 1.4558°/s to 0.1203°/s by the direct estimated KF model in a constant rate test and to 0.5974°/s by the differencing estimated KF model. The estimated rate signal filtered by both models could reflect the amplitude variation of the input signal in the swing rate test and displayed a reduction factor of about three for the 1σ noise. Results illustrate that the performance of the direct estimated KF model is much higher than that of the differencing estimated KF model, with a constant input signal or lower dynamic variation. A similarity in the two KFs’ performance is observed if the input signal has a high dynamic variation.

  20. SU-D-19A-01: Can Farmer-Type Ionization Chambers Be Used to Improve the Accuracy of Low-Energy Electron Beam Reference Dosimetry?

    International Nuclear Information System (INIS)

    Purpose: To investigate the use of cylindrical Farmer-type ionization chambers to improve the accuracy of low-energy electron beam calibration. Historically, these chamber types have not been used in beams with incident energies less than 10 MeV (R50 < 4.3 cm) because early investigations suggested large (up to 5 %) fluence perturbation factors in these beams, implying that a significant component of uncertainty would be introduced if used for calibration. More recently, the assumptions used to determine perturbation corrections for cylindrical chambers have been questioned. Methods: Measurements are made with cylindrical chambers in Elekta Precise 4, 8 and 18 MeV electron beams. Several chamber types are investigated that employ graphite walls and aluminum electrodes with very similar specifications (NE2571, NE2505/3, FC65-G). Depth-ionization scans are measured in water in the 8 and 18 MeV beams. To reduce uncertainty from chamber positioning, measurements in the 4 MeV beam are made at the reference depth in Virtual Water™. The variability of perturbation factors is quantified by comparing normalized response of various chambers. Results: Normalized ion chamber response varies by less than 0.7 % for similar chambers at average electron energies corresponding to that at the reference depth from 4 or 6 MeV beams. Similarly, normalized measurements made with similar chambers at the reference depth in the 4 MeV beam vary by less than 0.4 %. Absorbed dose calibration coefficients derived from these results are stable within 0.1 % on average over a period of 6 years. Conclusion: These results indicate that the uncertainty associated with differences in fluence perturbations for cylindrical chambers with similar specifications is only 0.2 %. The excellent long-term stability of these chambers in both photon and electron beams suggests that these chambers might offer the best performance for all reference dosimetry applications

  1. Dynamic Performance Comparison of Two Kalman Filters for Rate Signal Direct Modeling and Differencing Modeling for Combining a MEMS Gyroscope Array to Improve Accuracy.

    Science.gov (United States)

    Yuan, Guangmin; Yuan, Weizheng; Xue, Liang; Xie, Jianbing; Chang, Honglong

    2015-01-01

    In this paper, the performance of two Kalman filter (KF) schemes based on the direct estimated model and differencing estimated model for input rate signal was thoroughly analyzed and compared for combining measurements of a sensor array to improve the accuracy of microelectromechanical system (MEMS) gyroscopes. The principles for noise reduction were presented and KF algorithms were designed to obtain the optimal rate signal estimates. The input rate signal in the direct estimated KF model was modeled with a random walk process and treated as the estimated system state. In the differencing estimated KF model, a differencing operation was established between outputs of the gyroscope array, and then the optimal estimation of input rate signal was achieved by compensating for the estimations of bias drifts for the component gyroscopes. Finally, dynamic simulations and experiments with a six-gyroscope array were implemented to compare the dynamic performance of the two KF models. The 1σ error of the gyroscopes was reduced from 1.4558°/s to 0.1203°/s by the direct estimated KF model in a constant rate test and to 0.5974°/s by the differencing estimated KF model. The estimated rate signal filtered by both models could reflect the amplitude variation of the input signal in the swing rate test and displayed a reduction factor of about three for the 1σ noise. Results illustrate that the performance of the direct estimated KF model is much higher than that of the differencing estimated KF model, with a constant input signal or lower dynamic variation. A similarity in the two KFs' performance is observed if the input signal has a high dynamic variation. PMID:26528980

  2. SU-D-19A-01: Can Farmer-Type Ionization Chambers Be Used to Improve the Accuracy of Low-Energy Electron Beam Reference Dosimetry?

    Energy Technology Data Exchange (ETDEWEB)

    Muir, B R; McEwen, M R [Measurement Science and Standards, National Research Council, Ottawa, ON (Canada)

    2014-06-01

    Purpose: To investigate the use of cylindrical Farmer-type ionization chambers to improve the accuracy of low-energy electron beam calibration. Historically, these chamber types have not been used in beams with incident energies less than 10 MeV (R50 < 4.3 cm) because early investigations suggested large (up to 5 %) fluence perturbation factors in these beams, implying that a significant component of uncertainty would be introduced if used for calibration. More recently, the assumptions used to determine perturbation corrections for cylindrical chambers have been questioned. Methods: Measurements are made with cylindrical chambers in Elekta Precise 4, 8 and 18 MeV electron beams. Several chamber types are investigated that employ graphite walls and aluminum electrodes with very similar specifications (NE2571, NE2505/3, FC65-G). Depth-ionization scans are measured in water in the 8 and 18 MeV beams. To reduce uncertainty from chamber positioning, measurements in the 4 MeV beam are made at the reference depth in Virtual Water™. The variability of perturbation factors is quantified by comparing normalized response of various chambers. Results: Normalized ion chamber response varies by less than 0.7 % for similar chambers at average electron energies corresponding to that at the reference depth from 4 or 6 MeV beams. Similarly, normalized measurements made with similar chambers at the reference depth in the 4 MeV beam vary by less than 0.4 %. Absorbed dose calibration coefficients derived from these results are stable within 0.1 % on average over a period of 6 years. Conclusion: These results indicate that the uncertainty associated with differences in fluence perturbations for cylindrical chambers with similar specifications is only 0.2 %. The excellent long-term stability of these chambers in both photon and electron beams suggests that these chambers might offer the best performance for all reference dosimetry applications.

  3. Non-Destructive Assay (NDA) Uncertainties Impact on Physical Inventory Difference (ID) and Material Balance Determination: Sources of Error, Precision/Accuracy, and ID/Propagation of Error (POV)

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-10

    These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
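    The inventory-difference bookkeeping and its variance by propagation in quadrature can be sketched as below; the four-term balance follows the usual material-balance convention, and the numbers are illustrative rather than taken from the presentation.

```python
from math import sqrt

def inventory_difference(begin, additions, removals, end):
    """ID = (beginning inventory + additions) - (removals + ending inventory)."""
    return (begin + additions) - (removals + end)

def id_sigma(*sigmas):
    """Propagation of variance for a sum/difference of independent terms:
    the standard deviations combine in quadrature."""
    return sqrt(sum(s * s for s in sigmas))

# Illustrative values in kg with per-term measurement uncertainties
id_value = inventory_difference(begin=120.0, additions=35.0, removals=30.0, end=124.0)
print(id_value, "+/-", id_sigma(0.8, 0.3, 0.3, 0.9))
```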

  4. Improved rapid molecular diagnosis of multidrug-resistant tuberculosis using a new reverse hybridization assay, REBA MTB-MDR

    OpenAIRE

    Bang, Hyeeun; Park, Sangjung; Hwang, Joohwan; Jin, Hyunwoo; Cho, Eunjin; Kim, Dae Yoon; Song, Taeksun; Shamputa, Isdore Chola; Via, Laura E.; Barry, Clifton E.; Cho, Sang-Nae; Lee, Hyeyoung

    2011-01-01

    Rapid diagnosis of multidrug-resistant tuberculosis (MDR-TB) is essential for the prompt initiation of effective second-line therapy to improve treatment outcome and limit transmission of this obstinate disease. A variety of molecular methods that enable the rapid detection of mutations implicated in MDR-TB have been developed. The sensitivity of the methods is dependent, in principle, on the repertoire of mutations being detected, which is typically limited to mutations in the genes rpoB, ka...

  5. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    Science.gov (United States)

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences that can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression to handle nonlinearity and improve the robustness of calibration models was evaluated in scenarios where the calibration set did not include all the variability present in the test set. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM-predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods. PMID:25358108
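    A minimal sketch contrasting PLS regression with support vector regression on spectra-like data using scikit-learn is shown below; the synthetic data, component count and SVR hyper-parameters are illustrative, and the preprocessing and tuning a real NIR calibration would need are omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))                                  # stand-in for NIR spectra
y = X[:, 10] * 2 + np.sin(X[:, 50]) + rng.normal(0, 0.1, 120)    # mildly nonlinear response

X_train, X_test = X[:90], X[90:]
y_train, y_test = y[:90], y[90:]

pls = PLSRegression(n_components=5).fit(X_train, y_train)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_train, y_train)

print("PLS R2:", pls.score(X_test, y_test))
print("SVR R2:", svr.score(X_test, y_test))
```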

  6. Improved dose calculation accuracy for low energy brachytherapy by optimizing dual energy CT imaging protocols for noise reduction using sinogram affirmed iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Landry, Guillaume [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); Munich Univ. (Germany). Dept. of Medical Physics; Gaudreault, Mathieu [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); Laval Univ., QC (Canada). Dept. de Radio-Oncologie et Centre de Recherche en Cancerologie; Laval Univ., QC (Canada). Dept. de Physique, de Genie Physique et d' Optique; Elmpt, Wouter van [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); Wildberger, Joachim E. [Maastricht University Medical Center (Netherlands). Dept. of Radiology; Verhaegen, Frank [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); McGill Univ. Montreal, QC (Canada). Dept. of Oncology

    2016-05-01

    The goal of this study was to evaluate the noise reduction achievable from dual energy computed tomography (CT) imaging (DECT) using filtered backprojection (FBP) and iterative image reconstruction algorithms combined with increased imaging exposure. We evaluated the data in the context of imaging for brachytherapy dose calculation, where accurate quantification of electron density ρe and effective atomic number Zeff is beneficial. A dual source CT scanner was used to scan a phantom containing tissue mimicking inserts. DECT scans were acquired at 80 kVp/140Sn kVp (where Sn stands for tin filtration) and 100 kVp/140Sn kVp, using the same values of the CT dose index CTDIvol for both settings as a measure for the radiation imaging exposure. Four CTDIvol levels were investigated. Images were reconstructed using FBP and sinogram affirmed iterative reconstruction (SAFIRE) with strengths 1, 3 and 5. From DECT scans two material quantities were derived, Zeff and ρe. DECT images were used to assign material types and the amount of improperly assigned voxels was quantified for each protocol. The dosimetric impact of improperly assigned voxels was evaluated with Geant4 Monte Carlo (MC) dose calculations for a 125I source in numerical phantoms. Standard deviations for Zeff and ρe were reduced up to a factor ∼2 when using SAFIRE with strength 5 compared to FBP. Standard deviations on Zeff and ρe as low as 0.15 and 0.006 were achieved for the muscle insert representing typical soft tissue using a CTDIvol of 40 mGy and 3 mm slice thickness. Dose calculation accuracy was generally improved when using SAFIRE. Mean (maximum absolute) dose errors of up to 1.3% (21%) with FBP were reduced to less than 1% (6%) with SAFIRE at a CTDIvol of 10 mGy. Using a CTDIvol of 40 mGy and SAFIRE yielded mean dose calculation errors of the order of 0.6% which was the MC dose calculation precision in this study and

  7. Improving accuracy of Tay Sachs carrier screening of the non-Jewish population: analysis of 34 carriers and six late-onset patients with HEXA enzyme and DNA sequence analysis.

    Science.gov (United States)

    Park, Noh Jin; Morgan, Craig; Sharma, Rajesh; Li, Yuanyin; Lobo, Raynah M; Redman, Joy B; Salazar, Denise; Sun, Weimin; Neidich, Julie A; Strom, Charles M

    2010-02-01

    The purpose of this study was to determine whether combining different testing modalities, namely beta-hexosaminidase A (HEXA) enzyme analysis, a HEXA DNA common mutation assay, and HEXA gene sequencing, could improve the sensitivity of carrier detection in non-Ashkenazi (AJ) individuals. We performed a HEXA gene sequencing assay, a HEXA DNA common mutation assay, and a HEXA enzyme assay on 34 self-reported Tay-Sachs disease (TSD) carriers, six late-onset patients with TSD, and one pseudodeficiency allele carrier. Sensitivity of TSD carrier detection was 91% for gene sequencing compared with 91% for the enzyme assay and 52% for the DNA mutation assay. Gene sequencing combined with enzyme testing had the highest sensitivity (100%) for carrier detection. Gene sequencing detected four novel mutations, three of which are predicted to be disease causing [118.delT, 965A→T (D322V), and 775A→G (T259A)]. Gene sequencing is useful in identifying rare mutations in patients with TSD and their families, in evaluating spouses of known carriers for TSD who have indeterminate enzyme results and are negative on common mutation analysis, and in resolving ambiguous enzyme testing results. PMID:19858779

  8. Improving immunogenicity, efficacy and safety of vaccines through innovation in clinical assay development and trial design: the Phacilitate Vaccine Forum, Washington D.C. 2011.

    Science.gov (United States)

    Moldovan, Ioana R; Tary-Lehmann, Magdalena

    2011-06-01

    The 9th Annual Vaccine Forum organized by Phacilitate in Washington D.C. 2011 brought together 50+ senior level speakers and over 400 participants representing all the key stakeholders concerned with vaccines. The main focus of the meeting was to define priorities in the global vaccines sector, from funding to manufacturing and evaluation of vaccine efficacy. A special session was devoted to improving immunogenicity, efficacy and safety of vaccines through innovation in clinical assay development and trial design. The current regulatory approach to clinical assay specification, validation and standardization that enables more direct comparisons of efficacy between trials was illustrated by the success in meningococcal vaccine development. The industry approach to validation strategies was exemplified by a new serologic test used in the diagnosis of pneumococcal pneumonia. The application of the Animal Rule to bridge clinical and non-clinical studies in botulism has allowed significant progress in developing one of the first vaccines to seek approval under the FDA Animal Efficacy Rule. An example of pushing the boundaries in the correlation of immunological responses and efficacy endpoints was represented by a recent cell-based influenza vaccine, for which the same correlates of protection apply as for the traditional, egg-based flu vaccine. In the field of HIV, phase 2b studies are underway, based on promising results obtained with some vaccine candidates. The conclusion of this session was that creativity in vaccine design and evaluation is beneficial and can lead to innovative new vaccine designs as well as to validated assays to assess vaccine efficacy.

  9. Is furosemide administration effective in improving the accuracy of determination of differential renal function by means of technetium-99m DMSA in patients with hydronephrosis

    Energy Technology Data Exchange (ETDEWEB)

    Kabasakal, Levent; Turkmen, Cuneyt; Ozmen, Ozlem; Alan, Nalan; Onsel, Cetin; Uslu, Ilhami [Department of Nuclear Medicine, Cerrahpasa Medical Faculty, Aksaray Istanbul, 34303 (Turkey)

    2002-11-01

    activity during the second phase of the study) (P>0.1). In conclusion, we did not observe interference from pelvicalyceal activity in patients with documented pelvic retention and infer that diuretic administration may be a useless intervention for improving the accuracy of determination of DRF. (orig.)

  10. Evaluation of PCR based assays for the improvement of proportion estimation of bacterial and viral pathogens in diarrheal surveillance

    Directory of Open Access Journals (Sweden)

    Hongxia Guan

    2016-03-01

    Full Text Available Abstract Diarrhea can be caused by a variety of bacterial, viral and parasitic organisms. Laboratory diagnosis is essential for pathogen-specific burden assessment. In pathogen-spectrum monitoring for diarrheal surveillance, culture methods are commonly used to detect bacterial pathogens, whereas nucleic acid-based amplification (non-culture) methods are used for viral pathogens. This difference in methodology can distort the observed pathogen spectrum, both among bacterial pathogens, whose culturability differs across media, and in comparisons of bacterial versus viral pathogens. Applying nucleic acid-based methods to the detection of both viral and bacterial pathogens will likely increase the number of confirmed positive diagnoses and make results comparable, since all pathogens are detected from the same nucleic acid extracts of the same sample. In this study, bacterial pathogens, including diarrheagenic Escherichia coli (DEC), Salmonella spp., Shigella spp., Vibrio parahaemolyticus and V. cholerae, were detected in 334 diarrheal samples by PCR-based methods using nucleic acid extracted from stool samples and associated enrichment cultures. A protocol was established to facilitate the consistent identification of bacterial pathogens in diarrheal patients. Five common enteric viruses were also detected by RT-PCR: rotavirus, sapovirus, norovirus (I and II), human astrovirus, and enteric adenovirus. Higher positive rates were found for the bacterial pathogens, indicating that culture methods alone would underestimate their proportion. This approach will improve the quality of surveys of diarrheagenic bacterial pathogens, providing more accurate information on the pathogen spectrum for identifying food safety problems and evaluating disease burden.

  11. The APTIMA HPV assay versus the Hybrid Capture 2 test in triage of women with ASC-US or LSIL cervical cytology: a meta-analysis of the diagnostic accuracy.

    Science.gov (United States)

    Arbyn, Marc; Roelens, Jolien; Cuschieri, Kate; Cuzick, Jack; Szarewski, Ann; Ratnam, Sam; Reuschenbach, Miriam; Belinson, Suzanne; Belinson, Jerome L; Monsonego, Joseph

    2013-01-01

    Testing for DNA of 13 high-risk HPV types with the Hybrid Capture 2 (HC2) test has consistently been shown to perform better in triage of women with cervical cytology results showing atypical squamous cells of undetermined significance (ASC-US) but often not in triage of low-grade squamous intraepithelial lesions (LSIL) detected in cervical cancer screening. In a meta-analysis, we compared the accuracy of the APTIMA HPV test, which identifies RNA of 14 high-risk HPV types, to HC2 for the triage of women with ASC-US or LSIL. The literature search targeted studies in which the accuracy of APTIMA HPV and HC2 for detection of underlying CIN2/3+ was assessed concomitantly, including verification of all cases of ASC-US and LSIL. HSROC (Hierarchical Summary ROC) curve regression was used to compute the pooled absolute and relative sensitivity and specificity. Eight studies, comprising 1,839 ASC-US and 1,887 LSIL cases, were retrieved. The pooled sensitivity and specificity of APTIMA to triage ASC-US to detect underlying CIN3 or worse was 96.2% (95% CI = 91.7-98.3%) and 54.9% (95% CI = 43.5-65.9%), respectively. APTIMA and HC2 showed similar pooled sensitivity; however, the specificity of the former was significantly higher (ratio: 1.19; 95% CI = 1.08-1.31 for CIN2+). The pooled sensitivity and specificity of APTIMA to triage LSIL were 96.7% (95% CI = 91.4-98.9%) and 38.7% (95% CI = 30.5-47.6%) for CIN3+. APTIMA was as sensitive as HC2 but more specific (ratio: 1.35; 95% CI = 1.11-1.66). Results were similar for detection of CIN2 or worse. In both triage of ASC-US and LSIL, APTIMA is as sensitive but more specific than HC2 for detecting cervical precancer.
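    The accuracy measures pooled in the meta-analysis reduce, per study, to sensitivity and specificity from a fully verified 2x2 table and their ratio between tests; the counts below are illustrative only and do not reproduce any study in the meta-analysis.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a fully verified 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: triage test result vs verified CIN2+ among ASC-US cases
aptima = sens_spec(tp=95, fn=5, tn=550, fp=350)
hc2 = sens_spec(tp=96, fn=4, tn=460, fp=440)
print("APTIMA sens/spec:", aptima)
print("HC2    sens/spec:", hc2)
print("relative specificity (APTIMA/HC2):", aptima[1] / hc2[1])
```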

  12. Improving the data accuracy of automatic weather station observations

    Institute of Scientific and Technical Information of China (English)

    吴非洋

    2012-01-01

    Starting from the causes of the differences between results obtained with two different observation methods and from actual operating conditions, this paper discusses observation accuracy under the two observation regimes of automatic observation by automatic weather stations and manual observation at stations, and then proposes measures to ensure that automatic observation