WorldWideScience

Sample records for assay improves accuracy

  1. Implementing a technique to improve the accuracy of shuffler assays of waste drums

    Energy Technology Data Exchange (ETDEWEB)

    Rinard, P.M.

    1996-07-01

    The accuracy of shuffler assays for fissile materials is generally limited by the accuracy of the calibration standards, but when the matrix in a large drum has a sufficiently high hydrogen density (as exists in paper, for example) the accuracy in the active mode can be adversely affected by a nonuniform distribution of the fissile material within the matrix. This paper reports on a technique to determine the distribution nondestructively using delayed neutron signals generated by the shuffler itself. In assays employing this technique, correction factors are applied to the result of the conventional assay according to the distribution. Maximum inaccuracies in assays with a drum of paper, for example, are reduced by a factor of two or three.

  2. Improved RT-PCR Assay to Quantitate the Pri-, Pre-, and Mature microRNAs with Higher Efficiency and Accuracy.

    Science.gov (United States)

    Tong, Li; Xue, Huihui; Xiong, Li; Xiao, Junhua; Zhou, Yuxun

    2015-10-01

Understanding the functional significance of microRNAs (miRNAs) requires an efficient and accurate detection method. In this study, we developed an improved miRNA quantification system based on quantitative real-time polymerase chain reaction (qRT-PCR). This method showed higher efficiency and accuracy in surveying the expression of primary miRNAs (pri-miRNAs), precursor miRNAs (pre-miRNAs), and mature miRNAs. Instead of a relative quantification method, we quantified the pri-miRNAs and pre-miRNAs with absolute qRT-PCR based on SYBR Green I fluorescence. This improvement corrected for the inaccuracy caused by differences in amplicon length and PCR efficiency. We also used the SYBR Green method to quantify mature miRNAs based on the stem-loop qRT-PCR method. We extended the pairing part of the stem-loop reverse-transcription (RT) primer from 6 to 11 bp, which greatly increased the efficiency of reverse-transcription PCR (RT-PCR). The performance of the improved RT primer was tested using synthetic mature miRNAs and tissue RNA samples. Results showed that the improved RT primer achieved a dynamic range of seven orders of magnitude and sensitivity down to hundreds of copies of miRNA molecules.

  3. Improving Speaking Accuracy through Awareness

    Science.gov (United States)

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  4. Improving accuracy of holes honing

    Directory of Open Access Journals (Sweden)

Ivan M. Buykli

    2015-03-01

Currently, in the precision engineering industry, tolerances for linear dimensions and for the shape of machined surfaces are steadily tightening. These requirements are especially relevant when machining holes. The aim of this research is to improve the accuracy and extend the technological capabilities of hole honing, particularly blind-hole honing. Using formal logic, the formation of machining errors is analyzed by considering schemes of non-uniform dimensional wear along the length of the cutting elements. The possibilities of compensating this non-uniformity, and thus of controlling machining accuracy, are specified for the honing of both through and blind holes. In addition, a new honing method was developed; it is protected by a patent of Ukraine for invention. The method can be implemented both on existing machine tools, with minor modernization of the processing-cycle control system, and on newly designed ones.

  5. Why do delayed summaries improve metacomprehension accuracy?

    Science.gov (United States)

    Anderson, Mary C M; Thiede, Keith W

    2008-05-01

We showed that metacomprehension accuracy improved when participants (N = 87 college students) wrote summaries of texts prior to judging their comprehension; however, accuracy only improved when summaries were written after a delay, not when written immediately after reading. We evaluated two hypotheses proposed to account for this delayed-summarization effect (the accessibility hypothesis and the situation model hypothesis). The data suggest that participants based metacomprehension judgments more on the gist of texts when they generated summaries after a delay, whereas they based judgments more on details when they generated summaries immediately after reading. Focusing on information relevant to the situation model of a text (the gist of a text) produced higher levels of metacomprehension accuracy, which is consistent with the situation model hypothesis.

  6. Algorithms for improving accuracy of spray simulation

    Institute of Scientific and Technical Information of China (English)

    ZHANG HuiYa; ZHANG YuSheng; XIAO HeLin; XU Bo

    2007-01-01

Fuel spray is the pivotal process in direct-injection engine combustion, and the accuracy of spray simulation determines the reliability of combustion calculations. However, the traditional spray simulation techniques in KIVA and commercial CFD codes are very susceptible to grid resolution; as a consequence, predicted engine performance and emissions can depend on the computational mesh. The two main causes of this problem are the droplet collision algorithm and the coupling between gas and liquid phases. To improve the accuracy of spray simulation, the original KIVA code is modified using the cross-mesh droplet collision (CMC) algorithm and a gas-phase velocity interpolation algorithm. In a constant-volume apparatus and a D.I. diesel engine, the improvements of the modified KIVA code are assessed in terms of spray structure, predicted average drop size, and spray tip penetration, respectively. The results show a dramatic decrease in grid dependency. With these changes, the distortion of the spray structure vanishes. The uncertainty in predicted average drop size is reduced from 30 to 5 μm in the constant-volume apparatus calculation, and further to 2 μm in an engine simulation. The predicted spray tip penetrations in the engine simulation also show better consistency between medium and fine meshes.

  7. Improving the accuracy of dynamic mass calculation

    Directory of Open Access Journals (Sweden)

    Oleksandr F. Dashchenko

    2015-06-01

With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control: unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular of a wagon in motion. In addition to time-series methods, preliminary filtration is used to improve the accuracy of the calculation. The results of the simulation are presented.
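The abstract does not specify the filter used for preliminary filtration; as a hedged illustration, a simple moving-average prefilter (a common first choice for suppressing the dynamic oscillations of a moving wagon around its static load) could look like this:

```python
def moving_average(signal, window=5):
    """Moving-average prefilter for a strain-gauge sample stream.

    Smooths dynamic oscillations around the static load before any
    time-series mass estimation is applied. The window shrinks at the
    start of the stream so every sample gets an output value.
    """
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)          # shrink window at the start
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

# A constant (static) load passes through unchanged:
# moving_average([10.0, 10.0, 10.0]) -> [10.0, 10.0, 10.0]
```

The window length and the choice of filter are assumptions for illustration, not taken from the paper.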

  8. Improving the accuracy of death certification

    Science.gov (United States)

    Myers, K A; Farquhar, D R

    1998-01-01

    BACKGROUND: Population-based mortality statistics are derived from the information recorded on death certificates. This information is used for many important purposes, such as the development of public health programs and the allocation of health care resources. Although most physicians are confronted with the task of completing death certificates, many do not receive adequate training in this skill. Resulting inaccuracies in information undermine the quality of the data derived from death certificates. METHODS: An educational intervention was designed and implemented to improve internal medicine residents' accuracy in death certificate completion. A total of 229 death certificates (146 completed before and 83 completed after the intervention) were audited for major and minor errors, and the rates of errors before and after the intervention were compared. RESULTS: Major errors were identified on 32.9% of the death certificates completed before the intervention, a rate comparable to previously reported rates for internal medicine services in teaching hospitals. Following the intervention the major error rate decreased to 15.7% (p = 0.01). The reduction in the major error rate was accounted for by significant reductions in the rate of listing of mechanism of death without a legitimate underlying cause of death (15.8% v. 4.8%) (p = 0.01) and the rate of improper sequencing of death certificate information (15.8% v. 6.0%) (p = 0.03). INTERPRETATION: Errors are common in the completion of death certificates in the inpatient teaching hospital setting. The accuracy of death certification can be improved with the implementation of a simple educational intervention. PMID:9614825
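The before/after comparison (32.9% of 146 certificates vs. 15.7% of 83) corresponds to a standard two-proportion z-test. The counts below are inferred from the reported percentages, and this is a sketch of the generic test, not necessarily the authors' exact analysis:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# ~48/146 (32.9%) certificates with major errors before the
# intervention vs. ~13/83 (15.7%) after:
z, p = two_proportion_z(48, 146, 13, 83)
```

With these inferred counts the test comes out significant at the 0.01 level, consistent with the reported p = 0.01.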

  9. IMPROVED ACCURACY AND ROUGHNESS MEASURES FOR ROUGH SETS

    Institute of Scientific and Technical Information of China (English)

    Zhou Yuming; Xu Baowen

    2002-01-01

Accuracy and roughness, proposed by Pawlak (1982), may draw conclusions inconsistent with our intuition in some cases. This letter analyzes the limitations of these measures and proposes improved accuracy and roughness measures based on information theory.
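For context, Pawlak's classical measures are defined from the lower and upper approximations of a target set under an equivalence relation: accuracy is |lower|/|upper| and roughness is 1 − accuracy. A minimal sketch of the classical definitions (the letter's improved information-theoretic measures are not reproduced here):

```python
def approximations(partition, target):
    """Lower/upper approximations of a target set under a partition
    (the equivalence classes) of the universe."""
    lower, upper = set(), set()
    for block in partition:
        if block <= target:
            lower |= block          # block lies entirely inside the target
        if block & target:
            upper |= block          # block intersects the target
    return lower, upper

def pawlak_accuracy(partition, target):
    """Pawlak accuracy = |lower| / |upper|; roughness = 1 - accuracy."""
    lower, upper = approximations(partition, target)
    return len(lower) / len(upper)

partition = [{1, 2}, {3}, {4, 5}]
target = {1, 2, 3, 4}
# lower = {1, 2, 3}, upper = {1, 2, 3, 4, 5}, so accuracy = 3/5
```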

  10. Improved internal control for molecular diagnosis assays.

    Science.gov (United States)

    Vinayagamoorthy, T; Maryanski, Danielle; Vinayagamoorthy, Dilanthi; Hay, Katie S L; Yo, Jacob; Carter, Mark; Wiegel, Joseph

    2015-01-01

The two principal steps in molecular diagnosis are amplification and identification. The accuracy of DNA amplification is primarily determined by the annealing of the PCR primer to the analyte DNA. The accuracy of identification is determined either by the annealing region of a labelled probe, for real-time PCR analysis, or by the annealing of a sequencing primer, for DNA sequencing analysis, to the respective analyte (amplicon). Presently, housekeeping genes (beta-globin, GAPDH) are used in molecular diagnosis to verify that the PCR conditions are optimal, and are thus known as amplification controls [1-4]. Although these genes have been useful as amplification controls, they do not meet the true definition of an internal control, because their primers and annealing conditions are not identical to those of the analyte being assayed. This may result in a false-negative report [5]. The IC-Code platform technology described here provides a true internal control, in which the internal control and analyte share identical PCR primer annealing sequences for the amplification step and an identical sequencing primer annealing sequence for the identification step. • The analyte and internal control have the same PCR and sequencing annealing sequences. • The method's design largely eliminates false negatives and false positives, by using identical annealing conditions for the internal control and analyte and by using DNA sequencing analysis for the identification step. • The method also allows a defined lower limit of detection to be set by varying the amount of internal control used in the assay.

  11. Improved accuracy in nano beam electron diffraction

    Energy Technology Data Exchange (ETDEWEB)

    Beche, A; Rouviere, J-L [CEA, INAC, SP2M, LEMMA, 17 rue des Martyrs, F-38054 Grenoble Cedex 9 (France); Clement, L, E-mail: armand.beche@cea.f, E-mail: jean-luc.rouviere@cea.f [ST Microelectronics, 850 rue Jean Monnet, F-38920 Crolles (France)

    2010-02-01

Nano beam electron diffraction (NBD or NBED) is applied to a well-controlled sample in order to evaluate the limits of the technique for measuring strain. Measurements are realised on a 27 nm thick Si0.7Ge0.3 layer embedded in a silicon matrix, with a TITAN microscope working at 300 kV. Using a standard condenser aperture of 50 µm, a probe diameter of 2.7 nm is obtained and a strain accuracy of 6×10^-4 (root mean square, rms) is achieved. NBED patterns are acquired along a [110] direction and the two-dimensional strain in the (110) plane is measured. Finite element simulations are carried out to check the experimental results and reveal that strain relaxation and probe averaging in a 170 nm thick TEM lamella reduce the measured strain by 15%.

  12. Improving Localization Accuracy: Successive Measurements Error Modeling

    Directory of Open Access Journals (Sweden)

    Najah Abu Ali

    2015-07-01

Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule–Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a Gauss–Markov model to predict the future position of a vehicle from its past positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can extend up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimate of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter.
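For the first-order case mentioned above, the Yule–Walker equations reduce to the lag-1 autocorrelation coefficient. A hypothetical one-dimensional sketch of that special case (the paper itself works with higher-order models and two-dimensional positions):

```python
def ar1_coefficient(xs):
    """Lag-1 autocorrelation: the Yule-Walker solution for a
    first-order (Gauss-Markov / AR(1)) model of the series."""
    m = sum(xs) / len(xs)
    d = [x - m for x in xs]
    num = sum(d[i] * d[i + 1] for i in range(len(d) - 1))
    den = sum(v * v for v in d)
    return num / den

def predict_next(xs):
    """One-step-ahead prediction from the current position alone,
    as in the first-order Gauss-Markov model."""
    m = sum(xs) / len(xs)
    rho = ar1_coefficient(xs)
    return m + rho * (xs[-1] - m)

# For the ramp [1, 2, 3, 4, 5]: rho = 0.4, predicted next = 3.8
```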

  13. Improving the accuracy of walking piezo motors.

    Science.gov (United States)

    den Heijer, M; Fokkema, V; Saedi, A; Schakel, P; Rost, M J

    2014-05-01

    Many application areas require ultraprecise, stiff, and compact actuator systems with a high positioning resolution in combination with a large range as well as a high holding and pushing force. One promising solution to meet these conflicting requirements is a walking piezo motor that works with two pairs of piezo elements such that the movement is taken over by one pair, once the other pair reaches its maximum travel distance. A resolution in the pm-range can be achieved, if operating the motor within the travel range of one piezo pair. However, applying the typical walking drive signals, we measure jumps in the displacement up to 2.4 μm, when the movement is given over from one piezo pair to the other. We analyze the reason for these large jumps and propose improved drive signals. The implementation of our new drive signals reduces the jumps to less than 42 nm and makes the motor ideally suitable to operate as a coarse approach motor in an ultra-high vacuum scanning tunneling microscope. The rigidity of the motor is reflected in its high pushing force of 6.4 N.

  14. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    Science.gov (United States)

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  15. Improvement of Electrochemical Machining Accuracy by Using Dual Pole Tool

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

Electrochemical machining (ECM) is one of the best alternatives for producing complex shapes in advanced materials used in the aircraft and aerospace industries. However, the reduction of stray material removal continues to be a major challenge for industries in addressing accuracy improvement. This study presents a method of improving machining accuracy in ECM by using a dual pole tool with a metallic bush outside the insulated coating of a cathode tool. The bush is connected with the anode and so the el...

  16. Improving metacomprehension accuracy in an undergraduate course context.

    Science.gov (United States)

    Wiley, Jennifer; Griffin, Thomas D; Jaeger, Allison J; Jarosz, Andrew F; Cushen, Patrick J; Thiede, Keith W

    2016-12-01

Students tend to have poor metacomprehension when learning from text, meaning they are not able to distinguish between what they have understood well and what they have not. Although there are a good number of studies that have explored comprehension monitoring accuracy in laboratory experiments, fewer studies have explored this in authentic course contexts. This study investigated the effect of an instructional condition that encouraged comprehension-test-expectancy and self-explanation during study on metacomprehension accuracy in the context of an undergraduate course in research methods. Results indicated that when students received this instructional condition, relative metacomprehension accuracy was better than in a comparison condition. In addition, differences were also seen in absolute metacomprehension accuracy measures, strategic study behaviors, and learning outcomes. The results of the current study demonstrate that a condition that has improved relative metacomprehension accuracy in laboratory contexts may have value in real classroom contexts as well.

  17. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    Directory of Open Access Journals (Sweden)

    Harada Hideo

    2015-01-01

Improved accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclides. To meet this requirement, the project entitled “Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)” was started as one of the “Innovative Nuclear Research and Development Program” projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating the forefront knowledge and techniques of these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  18. Improved benzodiazepine radioreceptor assay using the MultiScreen (R) Assay System

    NARCIS (Netherlands)

    Janssen, MJ; Ensing, K; de Zeeuw, RA

    1999-01-01

In this article, an improved benzodiazepine radioreceptor assay is described, which allows a substantial reduction in assay time. The filtration in this method was performed using the MultiScreen(R) Assay System. The latter consists of a 96-well plate with glass fibre filters sealed at the bottom.

  19. Using judgement to improve accuracy in decision-making.

    Science.gov (United States)

    Dowding, Dawn; Thompson, Carl

    Nursing judgements are complex, often involving the need to process a large number of information cues. Key issues include how accurate they are and how we can improve levels of accuracy. Traditional approaches to the study of nursing judgement, characterised by qualitative and descriptive research, have provided valuable insights into the nature of expert nursing practice and the complexity of practice. However, they have largely failed to provide the data needed to address judgement accuracy. Social judgement analysis approaches are one way of overcoming these limitations. This paper argues that as nurses take on more roles requiring accurate judgement, it is time to increase our knowledge of judgement and ways to improve it.

  20. Explanation Generation, Not Explanation Expectancy, Improves Metacomprehension Accuracy

    Science.gov (United States)

    Fukaya, Tatsushi

    2013-01-01

    The ability to monitor the status of one's own understanding is important to accomplish academic tasks proficiently. Previous studies have shown that comprehension monitoring (metacomprehension accuracy) is generally poor, but improves when readers engage in activities that access valid cues reflecting their situation model (activities such as…

  1. A study of the conditions and accuracy of the thrombin time assay of plasma fibrinogen

    DEFF Research Database (Denmark)

    Jespersen, J; Sidelmann, Johannes Jakobsen

    1982-01-01

The conditions, accuracy, precision and possible errors of the thrombin time assay of plasma fibrinogen are determined. Comparison with an estimation of clottable protein by absorbance at 280 nm gave a correlation coefficient of 0.96 and the regression line y = 1.00x + 0.56 (n = 34). Comparison with a radial immunodiffusion method yielded a correlation coefficient of 0.97 and the regression line y = 1.18x − 2.47 (n = 26). The presence of heparin in clinically applied concentrations produced a slight shortening of the clotting times. The resulting error in the estimated concentrations of fibrinogen…
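The comparison statistics quoted above (a correlation coefficient and a regression line per reference method) are ordinary least squares quantities; a minimal sketch of how such a method comparison is computed:

```python
def linreg(xs, ys):
    """Ordinary least-squares line y = a*x + b and Pearson r,
    for comparing two assays measured on the same samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx                      # slope
    b = my - a * mx                    # intercept
    r = sxy / (sxx * syy) ** 0.5       # correlation coefficient
    return a, b, r

# Toy data (illustrative, not the paper's measurements):
a, b, r = linreg([1, 2, 3, 4], [2.1, 3.9, 6.1, 7.9])
```

A slope near 1 and intercept near 0 indicate that the two assays agree across the measured range.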

  2. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    Science.gov (United States)

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up, which in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing industrial hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance, with a tendency for the hygienists to underestimate the true exposure. Exposure judgment accuracy improved significantly after training: qualitative judgments guided by the Checklist tool were categorically accurate or overestimated the true exposure by one category 70% of the time. The overall precision of exposure judgments also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing hygienists (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to the quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in developing informed priors and further…
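The κ values cited above measure inter-rater agreement beyond chance. Unweighted Cohen's κ for two raters can be sketched as follows (toy ratings, not the study's data):

```python
def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical judgments:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n                 # observed
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)

# Two raters agreeing on 3 of 4 binary exposure categories:
# cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]) -> 0.5
```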

  3. The rereading effect: metacomprehension accuracy improves across reading trials.

    Science.gov (United States)

    Rawson, K A; Dunlosky, J; Thiede, K W

    2000-09-01

    Guided by a hypothesis that integrates principles of monitoring from a cue-based framework of metacognitive judgments with assumptions about levels of text representation derived from theories of comprehension, we discovered that rereading improves metacomprehension accuracy. In Experiments 1 and 2, the participants read texts either once or twice, rated their comprehension for each text, and then were tested on the material. In both experiments, correlations between comprehension ratings and test scores were reliably greater for participants who reread texts than for participants who read texts only once. Furthermore, in contrast to the low levels of accuracy typically reported in the literature, rereading produced relatively high levels of accuracy, with the median gamma between ratings and test performance being +.60 across participants from both experiments. Our discussion focuses on two alternative hypotheses--that improved accuracy is an artifact of when judgments are collected or that it results from increased reliability of test performance--and on evidence that is inconsistent with these explanations for the rereading effect.
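The gamma statistic reported here (median +.60 across participants) is Goodman–Kruskal gamma: (C − D)/(C + D) over concordant (C) and discordant (D) pairs of (comprehension rating, test score). A minimal sketch:

```python
def gamma_correlation(ratings, scores):
    """Goodman-Kruskal gamma between comprehension ratings and test
    scores: (C - D) / (C + D) over concordant/discordant item pairs;
    tied pairs are ignored."""
    c = d = 0
    n = len(ratings)
    for i in range(n):
        for j in range(i + 1, n):
            s = (ratings[i] - ratings[j]) * (scores[i] - scores[j])
            if s > 0:
                c += 1          # pair ordered the same way -> concordant
            elif s < 0:
                d += 1          # pair ordered oppositely -> discordant
    return (c - d) / (c + d)

# Perfect monotone agreement gives gamma = +1.0:
# gamma_correlation([1, 2, 3], [10, 20, 30]) -> 1.0
```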

  4. High accuracy genotyping directly from genomic DNA using a rolling circle amplification based assay

    Directory of Open Access Journals (Sweden)

    Du Yuefen

    2003-05-01

Background: Rolling circle amplification of ligated probes is a simple and sensitive means for genotyping directly from genomic DNA. SNPs and mutations are interrogated with open circle probes (OCP) that can be circularized by DNA ligase when the probe matches the genotype. An amplified detection signal is generated by exponential rolling circle amplification (ERCA) of the circularized probe. The low cost and scalability of ligation/ERCA genotyping make it ideally suited for automated, high-throughput methods. Results: A retrospective study using human genomic DNA samples of known genotype was performed for four different clinically relevant mutations: Factor V Leiden, Factor II prothrombin, and two hemochromatosis mutations, C282Y and H63D. Greater than 99% accuracy was obtained when genotyping genomic DNA samples from hundreds of different individuals. The combined ligation/ERCA process was performed in a single tube and produced fluorescent signal directly from genomic DNA in less than an hour. In each assay, the probes for both normal and mutant alleles were combined in a single reaction. Multiple ERCA primers combined with a quenched-peptide nucleic acid (Q-PNA) fluorescent detection system greatly accelerated the appearance of signal. Probes designed with hairpin structures reduced misamplification. Genotyping accuracy was identical for either purified genomic DNA or genomic DNA generated using whole genome amplification (WGA). Fluorescent signal output was measured both in real time and as an end point. Conclusions: Combining the optimal elements for ligation/ERCA genotyping has resulted in a highly accurate single-tube assay for genotyping directly from genomic DNA samples. Accuracy exceeded 99% for four probe sets targeting clinically relevant mutations. No genotypes were called incorrectly using either genomic DNA or whole-genome-amplified sample.

  5. Method for Improving the Ranging Accuracy of Radio Fuze

    Institute of Scientific and Technical Information of China (English)

    HU Xiu-juan; DENG Jia-hao; SANG Hui-ping

    2006-01-01

A stepped-frequency radar waveform is put forward for improving the accuracy of radio fuze ranging. The IFFT is adopted to synthesize a one-dimensional high-resolution range profile. Furthermore, the same-range rejection method and the selection-maximum method are used to remove target redundancy, and simulation results are given. The characteristics of the two methods are analyzed; assuming a Weibull-distributed clutter envelope, the CFAR same-range selection-maximum method is adopted, achieving an accurate profile and accurate ranging.
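The IFFT synthesis works because a point target at range R contributes a round-trip phase 2πf·2R/c at each frequency step f, i.e. a linear phase ramp across steps whose inverse transform peaks at the bin corresponding to R. A toy sketch with a pure-Python inverse DFT (all parameter values are illustrative, not from the paper):

```python
import cmath

C = 3e8  # speed of light, m/s

def range_profile(echoes):
    """Inverse DFT of stepped-frequency echo samples -> range profile.

    For n steps of size df, bin m corresponds to range m * C / (2*n*df).
    """
    n = len(echoes)
    return [abs(sum(echoes[k] * cmath.exp(2j * cmath.pi * k * m / n)
                    for k in range(n)) / n)
            for m in range(n)]

# Simulated point target: step k at frequency f0 + k*df sees the
# round-trip phase 2*pi*f*2R/C.
n, f0, df, R = 32, 1e9, 5e6, 7.5
echoes = [cmath.exp(-2j * cmath.pi * (f0 + k * df) * 2 * R / C)
          for k in range(n)]
profile = range_profile(echoes)
bin_size = C / (2 * n * df)      # 0.9375 m per range bin here
# The peak lands at bin 8, i.e. 8 * 0.9375 = 7.5 m, the simulated range
```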

  6. One Advanced Indoor Localization Algorithm for Improving Localization Accuracy

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Chao

    2013-01-01

Indoor localization technology has important practical value for the real-time monitoring and management of indoor assets. In order to localize indoor objects, a distance-based indoor localization algorithm model is established; in addition, the DFP algorithm is introduced to further refine the positioning coordinates and improve localization accuracy. The main idea is to use the least-squares estimation method and cubic spline interpolation to…

  7. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Seoung-Hyeon Lee

    2016-01-01

Beacons using Bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages, such as low power consumption, miniaturization, wide signal range, and low cost. However, beacon performance is poor in terms of indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a Bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technologies that use Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, the beacon signal was assumed to be nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone's indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m in the average x- and y-coordinates, respectively, based solely on the beacon signal.
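The paper's planar EKF is not reproduced in the abstract, but its recursive predict/update structure can be illustrated with a one-dimensional linear Kalman filter over noisy position estimates; the motion model and the noise variances `q` and `r` below are assumptions for illustration:

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Minimal 1-D Kalman filter with a static motion model.

    q: process-noise variance, r: measurement-noise variance.
    A linear stand-in for the extended filter described in the paper.
    """
    x, p = measurements[0], 1.0        # state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                         # predict: uncertainty grows
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update toward the measurement
        p *= (1 - k)                   # uncertainty shrinks after update
        estimates.append(x)
    return estimates
```

Fed a stream of noisy beacon-derived positions, the filter converges toward the true position while damping single-measurement jumps.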

  8. Improving Accuracy for Image Fusion in Abdominal Ultrasonography

    Directory of Open Access Journals (Sweden)

    Caroline Ewertsen

    2012-08-01

Image fusion involving real-time ultrasound (US) is a technique in which previously recorded computed tomography (CT) or magnetic resonance imaging (MRI) images are reformatted in a projection to fit the real-time US images after an initial co-registration. The co-registration aligns the images by means of common planes or points. We evaluated the accuracy of the alignment while varying parameters such as patient position, respiratory phase, and distance from the co-registration points/planes. We performed a total of 80 co-registrations and obtained the highest accuracy when the respiratory phase for the co-registration procedure was the same as when the CT or MRI was obtained. Furthermore, choosing co-registration points/planes close to the area of interest also improved the accuracy. With all settings optimized, a mean error of 3.2 mm was obtained. We conclude that image fusion involving real-time US is an accurate method for abdominal examinations and that its accuracy is influenced by various adjustable factors that should be kept in mind.

  9. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    Directory of Open Access Journals (Sweden)

    Kenjiro Fujii

    2016-01-01

    Full Text Available Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, its signal propagation is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on radio propagation was experimentally evaluated. The result suggests that the influence of the human body can be modeled.
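
    Trilateration from ranges predicted by the propagation model reduces to a linear least-squares problem once one range equation is subtracted from the others, which cancels the quadratic terms. A 2D sketch under that standard formulation (the function name and anchor layout are illustrative, not from the paper):

    ```python
    import numpy as np

    def trilaterate(anchors, ranges):
        """Least-squares 2D position from ranges to known anchor points.

        Each anchor i gives (x - xi)^2 + (y - yi)^2 = ri^2.  Subtracting
        the first equation from the rest leaves the linear system
        A @ [x, y] = b, solved by least squares.
        """
        anchors = np.asarray(anchors, float)
        ranges = np.asarray(ranges, float)
        x0, y0 = anchors[0]
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (ranges[0] ** 2 - ranges[1:] ** 2
             + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol
    ```

    With three or more non-collinear anchors the system is overdetermined, and least squares averages out range noise.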

  10. APPROACH TO IMPROVEMENT OF ROBOT TRAJECTORY ACCURACY BY DYNAMIC COMPENSATION

    Institute of Scientific and Technical Information of China (English)

    Wang Gang; Ren Guoli; Yan Xiang'an; Wang Guodong

    2004-01-01

    Some dynamic factors, such as inertial forces and friction, may affect robot trajectory accuracy, but these effects are not taken into account in robot motion control schemes. Dynamic control methods, on the other hand, require a dynamic model of the robot and the implementation of a new type of controller. A method to improve robot trajectory accuracy by dynamic compensation in the robot motion control system is proposed. The dynamic compensation is applied as an additional velocity feedforward, and a multilayer neural network is employed to realize the robot inverse dynamics. The complicated dynamic parameter identification problem becomes a supervised learning process for the network connection weights. A finite Fourier series is used to actuate each robot joint to obtain training samples. A robot control system, consisting of an industrial computer and a digital motion controller, was implemented. The system is of open architecture with a velocity feedforward function. The proposed method is not model-based and combines the advantages of closed-loop position control and computed torque control. Experimental results have shown that the method is effective in improving robot trajectory accuracy.

  11. Accuracy improvement in digital holographic microtomography by multiple numerical reconstructions

    Science.gov (United States)

    Ma, Xichao; Xiao, Wen; Pan, Feng

    2016-11-01

    In this paper, we describe a method to improve the accuracy of digital holographic microtomography (DHMT) for the measurement of thick samples. Two key factors impairing the accuracy, the limited depth of focus and the rotational error, are considered and addressed simultaneously. The hologram is propagated to a series of distances by multiple numerical reconstructions so as to extend the depth of focus. The correction of the rotational error, implemented by numerical refocusing and image realigning, is merged into the computational process. The method is validated by tomographic results for a four-core optical fiber and a large-mode optical crystal fiber. A sample as thick as 258 μm is accurately reconstructed and the quantitative three-dimensional distribution of refractive index is demonstrated.
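
    The multiple numerical reconstructions mentioned above amount to propagating the recorded complex field to a series of distances. A standard way to do this (not necessarily the authors' choice) is the angular spectrum method; the sketch below assumes a square sampling grid and scalar diffraction, with evanescent components suppressed.

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a complex 2D field by distance z (angular spectrum method).

        field: n x n complex array; wavelength and dx in the same units as z.
        """
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        # Longitudinal wavenumber; clip evanescent components (arg < 0) to zero.
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z)               # free-space transfer function
        return np.fft.ifft2(np.fft.fft2(field) * H)
    ```

    Because the transfer function satisfies H(z) * H(-z) = 1 for every retained frequency, propagation is numerically reversible, which is what makes refocusing to arbitrary planes possible.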

  12. Accuracy Improvement for Stiffness Modeling of Parallel Manipulators

    CERN Document Server

    Pashkevich, Anatoly; Chablat, Damien; Wenger, Philippe

    2009-01-01

    The paper focuses on the accuracy improvement of stiffness models for parallel manipulators, which are employed in high-speed precision machining. It is based on an integrated methodology that combines analytical and numerical techniques and deals with multidimensional lumped-parameter models of the links. The latter replace the link flexibility by localized 6-dof virtual springs describing both translational/rotational compliance and the coupling between them. A detailed accuracy analysis is presented of the stiffness identification procedures employed in commercial CAD systems (including statistical analysis of round-off errors and evaluation of the confidence intervals for stiffness matrices). The efficiency of the developed technique is confirmed by application examples dealing with stiffness analysis of translational parallel manipulators.

  13. Simultaneously improving the sensitivity and absolute accuracy of CPT magnetometer.

    Science.gov (United States)

    Liang, Shang-Qing; Yang, Guo-Qing; Xu, Yun-Fei; Lin, Qiang; Liu, Zhi-Heng; Chen, Zheng-Xiang

    2014-03-24

    A new method to simultaneously improve the sensitivity and absolute accuracy of a coherent population trapping (CPT) magnetometer, based on differential detection, is presented. Two modulated optical beams with orthogonal circular polarizations are applied; in one of them, two magnetic resonances are excited simultaneously by modulating a 3.4 GHz microwave at the Larmor frequency. When a microwave frequency shift is introduced, the difference in the power transmitted through the cell in each beam shows a low-noise resonance. A sensitivity of 2 pT/Hz @ 10 Hz is achieved. Meanwhile, an absolute accuracy of ±0.5 nT within the magnetic field range from 20000 nT to 100000 nT is realized.

  14. Accuracy of the Fluorescence-Activated Cell Sorting Assay for the Aquaporin-4 Antibody (AQP4-Ab): Comparison with the Commercial AQP4-Ab Assay Kit

    Science.gov (United States)

    Kim, Yoo-Jin; Cheon, So Young; Kim, Boram; Jung, Kyeong Cheon; Park, Kyung Seok

    2016-01-01

    Background The aquaporin-4 antibody (AQP4-Ab) is a disease-specific autoantibody for neuromyelitis optica (NMO). We aimed to evaluate the accuracy of the FACS assay in detecting the AQP4-Ab compared with the commercial cell-based assay (C-CBA) kit. Methods Human embryonic kidney-293 cells were transfected with human aquaporin-4 (M23) cDNA. The optimal cut-off values of the FACS assay were determined using 1123 serum samples from patients with clinically definite NMO, those at high risk for NMO, patients with multiple sclerosis, patients with other idiopathic inflammatory demyelinating diseases, and negative controls. The accuracy of the FACS assay and C-CBA were compared in 225 consecutive samples collected between January 2014 and June 2014. Results With cut-off values of MFIi of 3.5 and MFIr of 2.0, the receiver operating characteristic curve for the FACS assay showed an area under the curve of 0.876. Among the 225 consecutive sera, the FACS assay and C-CBA had sensitivities of 77.3% and 69.7%, respectively, in differentiating the sera of definite NMO patients from the sera of controls without IDD or with MS. Both assays had a specificity of 100%. The overall positivity of the C-CBA among FACS-positive sera was 81.5%; moreover, its positivity was as low as 50% among FACS-positive sera with relatively low MFIis. Conclusions Both the FACS assay and the C-CBA are sensitive and highly specific assays for detecting the AQP4-Ab. However, for some sera with relatively low antibody titers, the FACS assay can be the more sensitive option. In practice, complementary use of the FACS assay and C-CBA will benefit the diagnosis of NMO patients, because the former can be more sensitive for low-titer sera and the latter is easier to use and can therefore be widely adopted. PMID:27658059
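
    As a toy illustration of the cut-off rule quoted above (MFIi 3.5, MFIr 2.0), a screening predicate might look like the following. Treating the two cut-offs as a joint AND condition is our assumption only; the abstract does not state how the two metrics are combined, and the function name is hypothetical.

    ```python
    def facs_positive(mfi_index, mfi_ratio, cut_index=3.5, cut_ratio=2.0):
        """Call a serum AQP4-Ab positive when both FACS metrics reach
        the cut-off values reported in the abstract (assumed AND rule)."""
        return mfi_index >= cut_index and mfi_ratio >= cut_ratio
    ```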

  15. Improved Fast Fourier Transform Based Method for Code Accuracy Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Tae Wook; Jeong, Jae Jun [Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Among various code accuracy quantification tools, the fast Fourier transform based method (FFTBM) introduced in 1990 has been widely used to evaluate code uncertainty or accuracy. Prosek et al. (2008) identified its drawbacks, the so-called 'edge effect'. To overcome the problems, an improved FFTBM by signal mirroring (FFTBM-SM) was proposed and has been used up to now. In spite of the improvement, the FFTBM-SM yields different accuracy depending on the frequency components of a parameter, such as pressure, temperature and mass flow rate. Therefore, it is necessary to reduce the frequency dependence of the FFTBMs. In this study, the limitations of the present FFTBMs were analyzed: the FFTBM produces quantitatively different results due to its frequency dependence, and the problem is intensified when many high-frequency components are included. A new method using a reduced cut-off frequency is therefore proposed. The results show that the shortcomings of the FFTBM are considerably relieved, and the capability of the proposed method is discussed.
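
    The FFTBM's core figure of merit is an average-amplitude ratio of the error spectrum to the experimental-data spectrum, and the fix described above restricts the sums to frequencies below a reduced cut-off. A minimal sketch under that reading (the exact weighting and normalization used in the paper may differ):

    ```python
    import numpy as np

    def fftbm_aa(calc, exp, dt, f_cut):
        """Average-amplitude accuracy figure of the FFT-based method.

        AA = sum|FFT(calc - exp)| / sum|FFT(exp)| over frequencies up to
        f_cut; smaller AA means the calculation tracks the data better.
        """
        n = len(exp)
        freqs = np.fft.rfftfreq(n, d=dt)
        keep = freqs <= f_cut                          # reduced cut-off
        err_spec = np.abs(np.fft.rfft(np.asarray(calc) - np.asarray(exp)))
        exp_spec = np.abs(np.fft.rfft(np.asarray(exp)))
        return err_spec[keep].sum() / exp_spec[keep].sum()
    ```

    Lowering `f_cut` discards the high-frequency components that the study identifies as the source of the frequency dependence.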

  16. Improving the accuracy of livestock distribution estimates through spatial interpolation

    Directory of Open Access Journals (Sweden)

    Ward Bryssinckx

    2012-11-01

    Full Text Available Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of averaging under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level, P <0.009 based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by a spatial interpolation to fill in missing values in non
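
    Spatial interpolation of sparse parish-level counts can be sketched with inverse-distance weighting; the abstract does not name the interpolator actually used, so IDW here is purely illustrative.

    ```python
    import numpy as np

    def idw(known_xy, known_val, query_xy, power=2.0):
        """Inverse-distance-weighted estimate at each query point.

        known_xy: (n, 2) coordinates with observed values known_val;
        query_xy: (m, 2) points to fill in; power controls distance decay.
        """
        known_xy = np.asarray(known_xy, float)
        known_val = np.asarray(known_val, float)
        out = []
        for q in np.atleast_2d(np.asarray(query_xy, float)):
            d = np.linalg.norm(known_xy - q, axis=1)
            if np.any(d == 0):                 # exact hit: return known value
                out.append(known_val[d == 0][0])
                continue
            w = 1.0 / d ** power
            out.append(np.sum(w * known_val) / np.sum(w))
        return np.array(out)
    ```

    Nearby observed parishes dominate each estimate, so filling gaps this way before aggregation preserves local spatial structure better than a global mean.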

  17. An analytically linearized helicopter model with improved modeling accuracy

    Science.gov (United States)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    A recently developed analytically linearized model for helicopter flight response, including rotor blade dynamics and dynamic inflow, was studied with the objective of increasing the understanding, ease of use, and accuracy of the model. The mathematical model is described, along with a description of the UH-60A Black Hawk helicopter and the flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected, along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  18. Accuracy improvement of geometric correction for CHRIS data

    Institute of Scientific and Technical Information of China (English)

    WANG Dian-zhong; PANG Yong; GUO Zhi-feng

    2008-01-01

    This paper deals with a new type of multi-angle remotely sensed data, CHRIS (the Compact High Resolution Imaging Spectrometer), using rational function models (RFM) and rigorous sensor models (RSM). For ortho-rectifying an image set, a rigorous sensor model (Toutin's model) was employed: a set of reported parameters, including across-track angle, along-track angle, IFOV, altitude, period, eccentricity and orbit inclination, was input; the orbit calculation was then started and the track information was assigned to the raw data. The images were ortho-rectified with geocoded ASTER images and digital elevation model (DEM) data. Results showed that with 16 ground control points (GCPs), the correction accuracy decreased with view zenith angle, and the RMSE value grew to over one pixel at 36 degrees off-nadir. When the GCPs were chosen approximately as in Toutin's model, an RFM with three coefficients produced the same accuracy trend versus view zenith angle, while the RMSEs for all angles were improved to within about one pixel.

  19. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is parcel-based information which is specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial-based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in the new cadastral database no longer being based on single and static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected queries that remain essential to be addressed. The main focus of this study is to review the issues that have been generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. This review will be applied as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  20. CADASTRAL POSITIONING ACCURACY IMPROVEMENT: A CASE STUDY IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    N. M. Hashim

    2016-09-01

    Full Text Available A cadastral map is parcel-based information which is specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial-based technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. The cadastral modernization will result in the new cadastral database no longer being based on single and static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected queries that remain essential to be addressed. The main focus of this study is to review the issues that have been generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. This review will be applied as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  1. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.

    Science.gov (United States)

    Mastanduno, Michael A; Gambhir, Sanjiv S

    2016-10-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth- and spectrally-derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets, and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back-projection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained, especially from deep-tissue photoacoustic imaging studies.

  2. Selecting fillers on emotional appearance improves lineup identification accuracy.

    Science.gov (United States)

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  3. Improvement of SLR accuracy, a possible new step

    Science.gov (United States)

    Kasser, Michel

    1993-01-01

    The satellite laser ranging (SLR) technology has experienced a large number of technical improvements since the early 1970s, now reaching millimetric instrumental accuracy. At present, it appears useless to increase instrumental performance further as long as the atmospheric propagation delay retains its current imprecision. Working in multiwavelength mode has been proposed for many years, but up to now the considerable technological difficulties of subpicosecond timing have seriously delayed such an approach. A new possibility is therefore proposed, using a device that is not yet optimized for SLR but has already given good results in the lower troposphere for wind measurement: the association of a radar and a sodar. While waiting for the 2-lambda methodology, this one could provide the atmospheric propagation delay at the millimeter level within a few years with only a small technological investment.

  4. Euler Deconvolution with Improved Accuracy and Multiple Different Structural Indices

    Institute of Scientific and Technical Information of China (English)

    G R J Cooper

    2008-01-01

    Euler deconvolution is a semi-automatic interpretation method that is frequently used with magnetic and gravity data. For a given source type, which is specified by its structural index (SI), it provides an estimate of the source location. It is demonstrated here that by computing the solution space of individual data points and selecting common source locations, the accuracy of the result can be improved. Furthermore, only a slight modification of the method is necessary to allow solutions for any number of different SIs to be obtained simultaneously. The method is applicable to both evenly and unevenly sampled geophysical data and is demonstrated on gravity and magnetic data. Source code (in Matlab format) is available from www.iamg.org.
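
    For each data window, Euler deconvolution reduces to a small linear least-squares problem derived from Euler's homogeneity equation, (x - x0)∂T/∂x + (z - z0)∂T/∂z = -N(T - B). The 2D-profile sketch below is a generic formulation of that step, not the paper's Matlab code; the function name and array layout are ours.

    ```python
    import numpy as np

    def euler_2d(x, z, T, Tx, Tz, N):
        """Solve Euler's homogeneity equation over a 2D profile window.

        x, z: observation coordinates; T: field values; Tx, Tz: field
        gradients; N: structural index.  Rearranging the equation gives
        x0*Tx + z0*Tz + N*B = x*Tx + z*Tz + N*T, solved by least squares
        for the source position (x0, z0) and regional background B.
        """
        A = np.column_stack([Tx, Tz, N * np.ones_like(T)])
        b = x * Tx + z * Tz + N * T
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        x0, z0, B = sol
        return x0, z0, B
    ```

    In practice this solve is repeated over sliding windows, and the paper's improvement corresponds to intersecting the resulting per-point solution spaces to keep only common source locations.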

  5. Accuracy Improvement for Predicting Parkinson’s Disease Progression

    Science.gov (United States)

    Nilashi, Mehrbakhsh; Ibrahim, Othman; Ahani, Ali

    2016-01-01

    Parkinson’s disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools for Parkinson’s disease built on datasets of medical information are very desirable, since they can help people discover their risk of the disease at an early stage. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are employed, respectively, to address the multi-collinearity problems in the experimental datasets and to cluster the data. We then apply the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for prediction of PD progression. Experimental results on public Parkinson’s datasets show that the proposed method remarkably improves the accuracy of predicting PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice in the early detection of Parkinson’s disease. PMID:27686748
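
    The PCA → EM-clustering → per-cluster regression pipeline described above can be sketched with scikit-learn. Note the paper pairs ANFIS with SVR, whereas this sketch fits a plain SVR per cluster; the function names and parameter choices are illustrative assumptions, not the authors' system.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVR

    def fit_cluster_svr(X, y, n_components=2, n_clusters=2, seed=0):
        """PCA (de-collinearize) -> EM clustering -> one SVR per cluster."""
        pca = PCA(n_components=n_components).fit(X)
        Z = pca.transform(X)
        gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(Z)
        labels = gmm.predict(Z)
        models = {c: SVR().fit(Z[labels == c], y[labels == c])
                  for c in np.unique(labels)}
        return pca, gmm, models

    def predict_progression(pca, gmm, models, X):
        """Route each sample to its cluster's regressor."""
        Z = pca.transform(X)
        labels = gmm.predict(Z)
        return np.array([models[c].predict(z[None, :])[0]
                         for c, z in zip(labels, Z)])
    ```

    Clustering first lets each regressor specialize on a more homogeneous subset of patients, which is the motivation given for the hybrid design.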

  6. Navigation improves accuracy of rotational alignment in total knee arthroplasty.

    Science.gov (United States)

    Stöckl, Bernd; Nogler, Michael; Rosiek, Rafal; Fischer, Martin; Krismer, Martin; Kessler, Oliver

    2004-09-01

    Successful total knee arthroplasty is dependent on the correct alignment of implanted prostheses. Major clinical problems can be related to poor femoral component positioning, including sagittal plane and rotational malalignment. A prospective randomized study was designed to test whether an optical navigation system for total knee arthroplasty achieved greater implantation precision than a nonnavigated technique. The primary variable was rotation of the femoral component in the transverse plane, measured from postoperative radiographs and computed tomography images. Sixty-four patients were included in the study. All patients received the Duracon total knee prosthesis. The patients were randomly divided into two groups: Group C patients had conventional total knee arthroplasty without navigation; Group N patients had total knee arthroplasty using a computer-assisted knee navigation system. Analysis showed that patients in Group N had significantly better rotational alignment and flexion angle of the femoral component than patients in Group C. In addition, superior postoperative alignment of the mechanical axis, posterior tibial slope, and rotational alignment was achieved for patients in Group N. The use of a navigation system provides improved alignment accuracy, and can help to avoid femoral malrotation and errors in axial alignment.

  7. Improving the accuracy of fingerprinting system using multibiometric approach

    Directory of Open Access Journals (Sweden)

    Safa M. AL-Taie

    2016-07-01

    Full Text Available Biometric technology is a science used to verify or identify an individual based on physical and/or behavioral traits. Although biometric systems are considered more secure than traditional methods such as passwords or keys, they also have many limitations, such as noisy images or spoof attacks. One solution to overcome these limitations is to apply a multibiometric system. A multibiometric system has a significant effect in improving both the security and the accuracy of the system. It can also alleviate spoof attacks and reduce the failure-to-enroll error. A multi-sample system is one implementation of multibiometric systems. In this study, a new algorithm is suggested that provides a second chance for a genuine user who is rejected, by comparing his/her provided finger with the other samples of the same finger. Multi-sample fingerprinting is used to implement this new algorithm. The algorithm is activated when the match score of the user does not reach the threshold but is close to it; the system then provides another chance to compare the finger with another sample of the same trait. Using a multi-sample biometric system improved the performance of the system by reducing the False Reject Rate (FRR). Applying the original matching algorithm on the presented database produced 3 genuine users and 5 impostors for the same fingerprint, while after implementing the suggested condition, the system performance was enhanced, producing 6 genuine users and 2 impostors for the same fingerprint. This work was built and executed based on previous Matlab code presented by Zhi Li Wu. Thresholds and Receiver Operating Characteristic (ROC) curves were computed before and after implementing the suggested multibiometric algorithm, and both ROC curves were compared. A final decision and recommendations are provided based on the results obtained from this project.
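
    The second-chance rule described above — retry against the other enrolled samples of the same finger when the first score falls just short of the threshold — can be sketched as follows. The `margin` parameter and the names are our assumptions, since the abstract does not quantify "close to" the threshold.

    ```python
    def verify_multisample(match, probe, templates, threshold, margin):
        """Accept if the first enrolled sample's score passes the threshold;
        on a near miss (within `margin` below it), retry the probe against
        the remaining enrolled samples of the same finger."""
        score = match(probe, templates[0])
        if score >= threshold:
            return True
        if score >= threshold - margin:        # near miss: second chance
            return any(match(probe, t) >= threshold for t in templates[1:])
        return False
    ```

    Because the retry is only triggered by near misses, impostors whose scores fall well below the threshold are still rejected on the first comparison, which is how the FRR drops without a matching rise in false accepts.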

  8. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU)

    2012-12-01

    Full Text Available In this study, some alternative forecasts for the unemployment rate of the USA made by four institutions (International Monetary Fund (IMF), Organization for Economic Co-operation and Development (OECD), Congressional Budget Office (CBO) and Blue Chips (BC)) are evaluated regarding accuracy and bias. The most accurate predictions on the forecasting horizon 201-2011 were provided by the IMF, followed by OECD, CBO and BC. These results were obtained using Theil's U1 statistic and a new method that has not been used before in the literature in this context. Multi-criteria ranking was applied to build a hierarchy of the institutions regarding accuracy, taking five important accuracy measures into account at the same time: mean error, mean squared error, root mean squared error, and the U1 and U2 statistics of Theil. The IMF, OECD and CBO predictions are unbiased. Combined forecasts of the institutions' predictions are a suitable strategy for improving the accuracy of the IMF and OECD forecasts when all combination schemes are used, but the INV one is the best. The filtered and smoothed original predictions, based on the Hodrick-Prescott filter and the Holt-Winters technique respectively, are a good strategy for improving only the BC expectations. The proposed strategies to improve accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to improving the quality of the decision-making process.
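
    Theil's U1 and U2 statistics used in the evaluation have standard definitions, sketched below: U1 normalizes the RMSE by the magnitudes of both series (0 means a perfect forecast), while U2 compares forecast errors against a naive no-change forecast (1 means no better than naive).

    ```python
    import numpy as np

    def theil_u1(actual, forecast):
        """Theil's U1 inequality coefficient, in [0, 1]; 0 = perfect."""
        a = np.asarray(actual, float)
        f = np.asarray(forecast, float)
        rmse = np.sqrt(np.mean((a - f) ** 2))
        return rmse / (np.sqrt(np.mean(a ** 2)) + np.sqrt(np.mean(f ** 2)))

    def theil_u2(actual, forecast):
        """Theil's U2: relative errors vs. the naive no-change forecast."""
        a = np.asarray(actual, float)
        f = np.asarray(forecast, float)
        num = np.sqrt(np.mean(((f[1:] - a[1:]) / a[:-1]) ** 2))
        den = np.sqrt(np.mean(((a[1:] - a[:-1]) / a[:-1]) ** 2))
        return num / den
    ```

    A U2 below 1 indicates the institution's forecasts beat the naive "next value equals the last observed value" rule.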

  9. Research on techniques to improve accuracy in SBL locating systems

    Institute of Scientific and Technical Information of China (English)

    SHI Jie; LIU Bo-sheng; SONG Hai-yan

    2008-01-01

    The short baseline system (SBL), a kind of underwater acoustic locating technology, has wide applicability. In order to examine the capability of a ship model design, ship model experimentation requires high accuracy. This paper focuses on the key techniques of a high-accuracy locating system, including high-accuracy sub-array position emendation, divisional locating, and anti-multipath-interference measures. Experiments show that the SBL locating system achieved satisfactory results owing to the key techniques proposed in this paper.

  10. A Novel Navigation Robustness and Accuracy Improvement System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To address NASA's need for L1 C/A-based navigation with better anti-spoofing ability and higher accuracy, Broadata Communications, Inc. (BCI) proposes to develop a...

  11. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    Science.gov (United States)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed of magnetic field computations for an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of the various methods are described when the speed of computation is an important consideration.

  12. OPERATIONAL ACCURACY IMPROVEMENT OF DIGITAL SERVO SYSTEM CONTAINING UNBALANCED LOAD

    Directory of Open Access Journals (Sweden)

    A. Stryzhniou

    2013-01-01

    Full Text Available The paper considers a structural flowchart of a typical digital servo system and its operational principle. A method for determining the effect of an unbalanced load on the accuracy of system operation is described. The paper then proposes a method for compensating the influence of the unbalanced load on system operation. Experimental verification of the unbalanced load effect on the operational accuracy of the servo system, and of the compensation of this influence, has been carried out.

  13. Multimodal Biometric Systems - Study to Improve Accuracy and Performance

    CERN Document Server

    Sasidhar, K; Ramakrishna, Kolikipogu; KailasaRao, K

    2010-01-01

    Biometrics is the science and technology of measuring and analyzing biological data of the human body, extracting a feature set from the acquired data, and comparing this set against the template set in the database. Experimental studies show that unimodal biometric systems have many disadvantages regarding performance and accuracy. Multimodal biometric systems perform better than unimodal systems and are gaining popularity despite their greater complexity. We examine the accuracy and performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-The-Shelf (COTS) products. Here we discuss fingerprint and face biometric systems, and the decision and fusion techniques used in these systems. We also discuss their advantages over unimodal biometric systems.

  14. Improving Accuracy of Sleep Self-Reports through Correspondence Training

    Science.gov (United States)

    St. Peter, Claire C.; Montgomery-Downs, Hawley E.; Massullo, Joel P.

    2012-01-01

    Sleep insufficiency is a major public health concern, yet the accuracy of self-reported sleep measures is often poor. Self-report may be useful when direct measurement of nonverbal behavior is impossible, infeasible, or undesirable, as it may be with sleep measurement. We used feedback and positive reinforcement within a small-n multiple-baseline…

  15. On combining reference data to improve imputation accuracy.

    Directory of Open Access Journals (Sweden)

    Jun Chen

    Full Text Available Genotype imputation is an important tool in human genetics studies, which uses reference sets with known genotypes and prior knowledge on linkage disequilibrium and recombination rates to infer un-typed alleles for human genetic variations at a low cost. The reference sets used by current imputation approaches are based on HapMap data and/or on recently available next-generation sequencing (NGS) data such as data generated by the 1000 Genomes Project. However, with different coverage and call rates for different NGS data sets, how to integrate NGS data sets of different accuracy, as well as previously available reference data, as references in imputation is not an easy task and has not been systematically investigated. In this study, we performed a comprehensive assessment of three strategies for using NGS data and previously available reference data in genotype imputation, for both simulated and empirical data, in order to obtain guidelines for optimal reference set construction. Briefly, we considered three strategies: strategy 1 uses one NGS data set as a reference; strategy 2 imputes samples by using multiple individual data sets of different accuracy as independent references and then combines the imputed samples, with the sample based on the high-accuracy reference selected when overlaps occur; and strategy 3 combines multiple available data sets as a single reference after imputing each other. We used three software packages (MACH, IMPUTE2 and BEAGLE) to assess the performance of these three strategies. Our results show that strategy 2 and strategy 3 have higher imputation accuracy than strategy 1. In particular, strategy 2 is the best strategy across all the conditions we investigated, producing the best imputation accuracy for rare variants. Our study is helpful in guiding the application of imputation methods in next-generation association analyses.

  16. Assessment of neuropsychiatric symptoms in dementia: Toward improving accuracy

    Directory of Open Access Journals (Sweden)

    Florindo Stella

    Full Text Available ABSTRACT This article discusses tools frequently used for assessing the neuropsychiatric symptoms of patients with dementia, particularly Alzheimer's disease. The aims were to discuss the main tools for evaluating behavioral disturbances, and particularly the accuracy of the Neuropsychiatric Inventory - Clinician Rating Scale (NPI-C). The clinical approach to, and diagnosis of, neuropsychiatric syndromes in dementia require suitable accuracy. Advances in the recognition and early, accurate diagnosis of psychopathological symptoms help guide appropriate pharmacological and non-pharmacological interventions. In addition, recommended standardized and validated measurements contribute to both scientific research and clinical practice. Emotional distress, caregiver burden, and cognitive impairment, often experienced by elderly caregivers, may affect the quality of caregiver reports. The clinician rating approach helps attenuate these misinterpretations. In this scenario, the NPI-C is a promising and versatile tool for assessing neuropsychiatric syndromes in dementia, offering good accuracy and high reliability, based mainly on the diagnostic impression of the clinician. This tool supports both strategies: a comprehensive assessment of neuropsychiatric symptoms in dementia, or the investigation of specific psychopathological syndromes such as agitation, depression, anxiety, apathy, sleep disorders, and aberrant motor disorders, among others.

  17. The trade-off between accuracy and accessibility of syphilis screening assays.

    Directory of Open Access Journals (Sweden)

    Pieter W Smit

    Full Text Available The availability of rapid and sensitive methods to diagnose syphilis facilitates screening of pregnant women, which is one of the most cost-effective health interventions available. We have evaluated two screening methods in Tanzania: an enzyme immunoassay (EIA) and a point-of-care test (POCT). We evaluated the performance of each test against the Treponema pallidum particle agglutination assay (TPPA) as the reference method, and the accessibility of testing in a rural district of Tanzania. The POCT was performed in the clinic on whole blood, while the other assays were performed on plasma in the laboratory. Samples were also tested by the rapid plasma reagin (RPR) test. With TPPA as reference assay, the sensitivity and specificity of the EIA were 95.3% and 97.8%, and of the POCT were 59.6% and 99.4%, respectively. The sensitivities of the POCT and EIA for active syphilis cases (TPPA positive and RPR titer ≥ 1/8) were 82% and 100%, respectively. Only 15% of antenatal clinic attenders in this district visited a health facility with a laboratory capable of performing the EIA. Although it is less sensitive than the EIA, its greater accessibility, and the fact that treatment can be given on the same day, means that the use of the POCT would result in a higher proportion of women with syphilis receiving treatment than with the EIA in this district of Tanzania.

  18. The Trade-Off between Accuracy and Accessibility of Syphilis Screening Assays

    Science.gov (United States)

    Smit, Pieter W.; Mabey, David; Changalucha, John; Mngara, Julius; Clark, Benjamin; Andreasen, Aura; Todd, Jim; Urassa, Mark; Zaba, Basia; Peeling, Rosanna W.

    2013-01-01

    The availability of rapid and sensitive methods to diagnose syphilis facilitates screening of pregnant women, which is one of the most cost-effective health interventions available. We have evaluated two screening methods in Tanzania: an enzyme immunoassay (EIA), and a point-of-care test (POCT). We evaluated the performance of each test against the Treponema pallidum particle agglutination assay (TPPA) as the reference method, and the accessibility of testing in a rural district of Tanzania. The POCT was performed in the clinic on whole blood, while the other assays were performed on plasma in the laboratory. Samples were also tested by the rapid plasma Reagin (RPR) test. With TPPA as reference assay, the sensitivity and specificity of EIA were 95.3% and 97.8%, and of the POCT were 59.6% and 99.4% respectively. The sensitivity of the POCT and EIA for active syphilis cases (TPPA positive and RPR titer ≥1/8) were 82% and 100% respectively. Only 15% of antenatal clinic attenders in this district visited a health facility with a laboratory capable of performing the EIA. Although it is less sensitive than EIA, its greater accessibility, and the fact that treatment can be given on the same day, means that the use of POCT would result in a higher proportion of women with syphilis receiving treatment than with the EIA in this district of Tanzania. PMID:24066175
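    Sensitivity and specificity figures like those above come from a standard confusion-matrix calculation against the reference assay. A minimal sketch (the cell counts below are hypothetical, chosen only to reproduce the POCT percentages; the abstract does not report the actual counts):

    ```python
    def diagnostic_accuracy(tp, fp, fn, tn):
        """Sensitivity and specificity of a test scored against a
        reference ('gold standard') assay such as TPPA."""
        sensitivity = tp / (tp + fn)  # share of reference-positives the test catches
        specificity = tn / (tn + fp)  # share of reference-negatives the test clears
        return sensitivity, specificity

    # Hypothetical counts for illustration only (not the study's data)
    sens, spec = diagnostic_accuracy(tp=59, fp=3, fn=40, tn=497)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
    # → sensitivity=59.6%, specificity=99.4%
    ```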

  19. Improvement of CD-SEM mark position measurement accuracy

    Science.gov (United States)

    Kasa, Kentaro; Fukuhara, Kazuya

    2014-04-01

    CD-SEM is now attracting attention as a tool that can accurately measure positional error of device patterns. However, the measurement accuracy can degrade due to pattern asymmetry, as in the case of image-based overlay (IBO) and diffraction-based overlay (DBO). For IBO and DBO, ways of correcting the inaccuracy arising from measurement patterns have been suggested. For CD-SEM, although a way of correcting CD bias was proposed, how to correct the inaccuracy arising from pattern asymmetry has not been addressed. In this study we propose how to quantify and correct the measurement inaccuracy caused by pattern asymmetry.

  20. Method for improving accuracy in full evaporation headspace analysis.

    Science.gov (United States)

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-03-21

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique is effective in minimizing this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis.

  1. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimating a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we address is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimate possible. We propose an algorithmic approach, based on the principles of information entropy, for determining the steps to select or compute the source databases from multiple summary databases. We show that the source databases with the largest number of cells in common provide the more accurate estimates, and we prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  2. Diagnostic accuracy of a loop-mediated isothermal PCR assay for detection of Orientia tsutsugamushi during acute Scrub Typhus infection.

    Directory of Open Access Journals (Sweden)

    Daniel H Paris

    2011-09-01

    Full Text Available BACKGROUND: There is an urgent need to develop rapid and accurate point-of-care (POC) technologies for acute scrub typhus diagnosis in low-resource, primary health care settings to guide clinical therapy. METHODOLOGY/PRINCIPAL FINDINGS: In this study we present the clinical evaluation of a loop-mediated isothermal PCR assay (LAMP) in the context of a prospective fever study, including 161 patients from scrub typhus-endemic Chiang Rai, northern Thailand. A robust reference comparator set comprising the following 'scrub typhus infection criteria' (STIC) was used: (a) a positive cell culture isolate and/or (b) an admission IgM titer ≥1:12,800 using the 'gold standard' indirect immunofluorescence assay (IFA) and/or (c) a 4-fold rising IFA IgM titer and/or (d) a positive result in at least two out of three PCR assays. Compared to the STIC criteria, all PCR assays (including LAMP) demonstrated high specificity, ranging from 96-99%, with sensitivities varying from 40% to 56%, similar to the antibody-based rapid test, which had a sensitivity of 47% and a specificity of 95%. CONCLUSIONS/SIGNIFICANCE: The diagnostic accuracy of the LAMP assay was similar to that of real-time and nested conventional PCR assays, but superior to the antibody-based rapid test in the early disease course. The combination of DNA- and antibody-based detection methods increased sensitivity with minimal reduction of specificity, and expanded the timeframe of adequate diagnostic coverage throughout the acute phase of scrub typhus.

  3. AN EVALUATION OF USA UNEMPLOYMENT RATE FORECASTS IN TERMS OF ACCURACY AND BIAS. EMPIRICAL METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    BRATU (SIMIONESCU) MIHAELA

    2013-02-01

    Full Text Available The most accurate forecasts for the USA unemployment rate on the horizon 2001-2012, according to the U1 Theil's coefficient and to multi-criteria ranking methods, were provided by the International Monetary Fund (IMF), followed by other institutions such as the Organization for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO) and Blue Chips (BC). The multi-criteria ranking methods were applied to resolve the divergence in assessing the accuracy, differences observed by computing five chosen measures of accuracy: the U1 and U2 statistics of Theil, mean error, mean squared error, and root mean squared error. Some strategies for improving the accuracy of the predictions provided by the four institutions, which are biased in all cases except BC, were proposed. However, these methods did not generate unbiased forecasts. The predictions made by the IMF and OECD for 2001-2012 can be improved by constructing combined forecasts, with the INV approach and the scheme proposed by the author providing the most accurate expectations. The BC forecasts can be improved by smoothing the predictions using the Holt-Winters method and the Hodrick-Prescott filter.

  4. A Cascaded Fingerprint Quality Assessment Scheme for Improved System Accuracy

    Directory of Open Access Journals (Sweden)

    Zia Saquib

    2011-03-01

    Full Text Available Poor-quality images mostly result in spurious or missing features, which further degrade the overall performance of fingerprint recognition systems. This paper proposes a reconfigurable scheme of quality checks at two different levels: (i) at the raw image level and (ii) at the feature level. At the first level, ellipse properties are calculated through analysis of statistical attributes of the captured raw image. At the second level, the singularity points (core and delta) are identified and extracted (if any). This information, in the form of quality measures, is used in a cascaded manner to block or pass the image. This model was tested on both the publicly available (Cross Match Verifier 300 sensor) and the proprietary (Lumidigm Venus V100 OEM Module sensor) fingerprint databases, scanned at 500 dpi. The experimental results show that this cascaded arrangement of quality barricades correctly blocked poor-quality images and hence elevated overall system accuracy: with quality checks, both FNMR and FMR dropped significantly, to 9.52% and 0.26% respectively for the Cross Match dataset, and 2.17% and 2.16% respectively for the Lumidigm dataset.
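    The cascaded pass/block arrangement described in this record can be sketched generically (a hypothetical structure, not the paper's implementation; the check names and thresholds below are invented for illustration):

    ```python
    def cascaded_quality_gate(image, checks):
        """Run quality checks in order; block at the first failure, so
        later (costlier) stages only see images that passed earlier ones."""
        for name, check in checks:
            if not check(image):
                return False, name  # blocked at this stage
        return True, None           # passed every stage

    # Hypothetical two-level cascade mirroring the paper's scheme:
    # raw-image statistics first, singularity-point extraction second.
    checks = [
        ("raw-image statistics", lambda img: img["contrast"] > 0.3),
        ("singularity points",   lambda img: img["cores"] >= 1),
    ]
    print(cascaded_quality_gate({"contrast": 0.5, "cores": 0}, checks))
    # → (False, 'singularity points')
    ```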

  5. Singing Video Games May Help Improve Pitch-Matching Accuracy

    Science.gov (United States)

    Paney, Andrew S.

    2015-01-01

    The purpose of this study was to investigate the effect of singing video games on the pitch-matching skills of undergraduate students. Popular games like "Rock Band" and "Karaoke Revolutions" rate players' singing based on the correctness of the frequency of their sung response. Players are motivated to improve their…

  6. Image processing for improved eye-tracking accuracy

    Science.gov (United States)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.

  7. Does naming accuracy improve through self-monitoring of errors?

    Science.gov (United States)

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia. Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings.

  8. Improving the Accuracy of Estimation of Climate Extremes

    Science.gov (United States)

    Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.

    2010-12-01

    Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.

  9. SPHGal: Smoothed Particle Hydrodynamics with improved accuracy for Galaxy simulations

    CERN Document Server

    Hu, Chia-Yu; Walch, Stefanie; Moster, Benjamin P; Oser, Ludwig

    2014-01-01

    We present the smoothed-particle hydrodynamics implementation SPHGal which incorporates several recent developments into the GADGET code. This includes a pressure-entropy formulation of SPH with a Wendland kernel, a higher order estimate of velocity gradients, a modified artificial viscosity switch with a strong limiter, and artificial conduction of thermal energy. We conduct a series of idealized hydrodynamic tests and show that while the pressure-entropy formulation is ideal for resolving fluid mixing at contact discontinuities, it performs conspicuously worse when strong shocks are involved due to the large entropy discontinuities. Including artificial conduction at shocks greatly improves the results. The Kelvin-Helmholtz instability can be resolved properly and dense clouds in the blob test dissolve qualitatively in agreement with other improved SPH implementations. We further perform simulations of an isolated Milky Way like disk galaxy and find a feedback-induced instability developing if too much arti...

  10. Electrochemical gas sensors: extending the range, improving the accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Saffell, J.R. [D.H. Dawson Alphasense Ltd., Gt. Dunmow ESSEX (United Kingdom); Hitchman, M.L. [Strathclyde Univ., Glasgow (United Kingdom). Dept. of Pure and Applied Chemistry

    2001-07-01

    Electrochemistry has been used for decades to measure gas concentrations. Over time, the wet amperometric cell has come to dominate the industrial gas detection market, measuring oxygen, CO and H{sub 2}S inexpensively and accurately. Other gases such as SO{sub 2}, Cl{sub 2}, NO{sub x} and NH{sub 3} can be monitored with these cells as well, but the first three gases are the most commonly measured. Incremental improvement is the name of the game, and in this paper we present two new improvements in amperometric gas cells: (1) mass flow oxygen cells with output range extended from 10%-30% oxygen to 0.5%-95% oxygen, and (2) CO gas cells with much reduced hydrogen error. (orig.)

  11. Thermal dynamics on the lattice with exponentially improved accuracy

    CERN Document Server

    Pawlowski, Jan

    2016-01-01

    We present a novel simulation prescription for thermal quantum fields on a lattice that operates directly in imaginary frequency space. By distinguishing initial conditions from quantum dynamics it provides access to correlation functions also outside of the conventional Matsubara frequencies $\\omega_n=2\\pi n T$. In particular it resolves their frequency dependence between $\\omega=0$ and $\\omega_1=2\\pi T$, where the thermal physics $\\omega\\sim T$ of e.g.~transport phenomena is dominantly encoded. Real-time spectral functions are related to these correlators via an integral transform with rational kernel, so their unfolding is exponentially improved compared to Euclidean simulations. We demonstrate this improvement within a $0+1$-dimensional scalar field theory and show that spectral features inaccessible in standard Euclidean simulations are quantitatively captured.

  12. Improving the accuracy of maternal mortality and pregnancy related death.

    Science.gov (United States)

    Schaible, Burk

    2014-01-01

    Comparing abortion-related death and pregnancy-related death remains difficult due to the limitations within the Abortion Mortality Surveillance System and the International Statistical Classification of Diseases and Related Health Problems (ICD). These methods lack a systematic and comprehensive method of collecting complete records regarding abortion outcomes in each state and fail to properly identify longitudinal cause of death related to induced abortion. This article seeks to analyze the current method of comparing abortion-related death with pregnancy-related death and provide solutions to improve data collection regarding these subjects.

  13. Diagnostic accuracy of the Microscopic Observation Drug Susceptibility (MODS) assay for pediatric tuberculosis in Hanoi, Vietnam.

    Directory of Open Access Journals (Sweden)

    Sinh Thi Tran

    Full Text Available INTRODUCTION: Microscopic Observation Drug Susceptibility (MODS) has been shown to be an effective and rapid technique for early diagnosis of tuberculosis (TB). Thus far, only a limited number of studies evaluating MODS have been performed in children and in extra-pulmonary tuberculosis. This study aims to assess the relative accuracy and time to positive culture of MODS for TB diagnosis in children admitted to a general pediatric hospital in Vietnam. METHODS/PRINCIPAL FINDINGS: Specimens from children with suspected TB were tested by smear, MODS and Lowenstein-Jensen agar (LJ). 1129 samples from 705 children were analyzed, including sputum (n=59), gastric aspirate (n=775), CSF (n=148), pleural fluid (n=33), BAL (n=41), tracheal fluid (n=45) and other (n=28). 113 TB cases were defined based on the "clinical diagnosis" (confirmed and probable groups) as the reference standard, of which 26% (n=30) were diagnosed as extra-pulmonary TB. Analysis by patient shows that the overall sensitivity and specificity of smear, LJ and MODS against "clinical diagnosis" were 8.8% and 100%, 38.9% and 100%, and 46% and 99.5% respectively, with MODS significantly more sensitive than LJ culture (P=0.02). When analyzed by sample type, the sensitivity of MODS was significantly higher than LJ for gastric aspirates (P=0.004). The time to detection was also significantly shorter for MODS than LJ (7 days versus 32 days, P<0.001). CONCLUSION: MODS is a sensitive and rapid culture technique for detecting TB in children. As MODS culture can be performed in a BSL2 facility and is inexpensive, it can be recommended as a routine test for children with symptoms suggestive of TB in resource-limited settings.

  14. Antibiotic Resistome: Improving Detection and Quantification Accuracy for Comparative Metagenomics.

    Science.gov (United States)

    Elbehery, Ali H A; Aziz, Ramy K; Siam, Rania

    2016-04-01

    The unprecedented rise of life-threatening antibiotic resistance (AR), combined with the unparalleled advances in DNA sequencing of genomes and metagenomes, has pushed the need for in silico detection of the resistance potential of clinical and environmental metagenomic samples through the quantification of AR genes (i.e., genes conferring antibiotic resistance). Therefore, determining an optimal methodology to quantitatively and accurately assess AR genes in a given environment is pivotal. Here, we optimized and improved existing AR detection methodologies from metagenomic datasets to properly consider AR-generating mutations in antibiotic target genes. Through comparative metagenomic analysis of previously published AR gene abundance in three publicly available metagenomes, we illustrate how mutation-generated resistance genes are either falsely assigned or neglected, which alters the detection and quantitation of the antibiotic resistome. In addition, we inspected factors influencing the outcome of AR gene quantification using metagenome simulation experiments, and identified that genome size, AR gene length, total number of metagenomics reads and selected sequencing platforms had pronounced effects on the level of detected AR. In conclusion, our proposed improvements in the current methodologies for accurate AR detection and resistome assessment show reliable results when tested on real and simulated metagenomic datasets.

  15. Fundamentals of modern statistical methods substantially improving power and accuracy

    CERN Document Server

    Wilcox, Rand R

    2001-01-01

    Conventional statistical methods have a very serious flaw: they routinely miss differences among groups or associations among variables that are detected by more modern techniques, even under very small departures from normality. Hundreds of journal articles have described the reasons standard techniques can be unsatisfactory, but simple, intuitive explanations are generally unavailable. Improved methods have been derived, but they are far from obvious or intuitive based on the training most researchers receive. Situations arise where even highly nonsignificant results become significant when analyzed with more modern methods. Without assuming any prior training in statistics, Part I of this book describes basic statistical principles from a point of view that makes their shortcomings intuitive and easy to understand. The emphasis is on verbal and graphical descriptions of concepts. Part II describes modern methods that address the problems covered in Part I. Using data from actual studies, many examples are include...

  16. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  17. A priori estimation of accuracy and of the number of wells to be employed in limiting dilution assays

    Directory of Open Access Journals (Sweden)

    J.G. Chaui-Berlinck

    2000-08-01

    Full Text Available The use of limiting dilution assay (LDA) for assessing the frequency of responders in a cell population is a method extensively used by immunologists. A series of studies addressing the statistical method of choice in an LDA have been published. However, none of these studies has addressed the point of how many wells should be employed in a given assay. The objective of this study was to demonstrate how a researcher can predict the number of wells that should be employed in order to obtain results with a given accuracy, and, therefore, to help in choosing a better experimental design to fulfill one's expectations. We present the rationale underlying the expected relative error computation based on simple binomial distributions. A series of simulated in machina experiments were performed to test the validity of the a priori computation of expected errors, thus confirming the predictions. The step-by-step procedure of the relative error estimation is given. We also discuss the constraints under which an LDA must be performed.
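    The a priori error estimate this record describes rests on the binomial sampling variance of a proportion estimated from n wells. A minimal sketch of that idea (a simple single-dilution binomial approximation, not necessarily the article's exact procedure):

    ```python
    import math

    def relative_error(p, n):
        # Expected relative standard error when a per-well positive
        # probability p is estimated from n independent wells:
        # sqrt(p(1-p)/n) / p
        return math.sqrt(p * (1 - p) / n) / p

    def wells_needed(p, target_rel_err):
        # Smallest n achieving the requested relative error,
        # by inverting the formula above.
        return math.ceil((1 - p) / (p * target_rel_err ** 2))

    # e.g. a 20% responder frequency measured to 10% relative error
    n = wells_needed(0.2, 0.10)
    print(n, round(relative_error(0.2, n), 3))  # → 400 0.1
    ```

    The quadratic dependence on the target error is the practical point: halving the desired relative error quadruples the number of wells, which is why an a priori calculation pays off before committing plates.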

  18. FROM ENERGY IMPROVEMENT TO ACCURACY ENHANCEMENT: IMPROVEMENT OF PLATE BENDING ELEMENTS BY THE COMBINED HYBRID METHOD

    Institute of Scientific and Technical Information of China (English)

    Xiao-ping Xie

    2004-01-01

    By following the geometric point of view in mechanics, a novel expression of the combined hybrid method for plate bending problems is introduced to clarify its intrinsic mechanism of enhancing coarse-mesh accuracy of conforming or nonconforming plate elements. By adjusting the combination parameter α ∈ (0, 1) and adopting appropriate bending moment modes, reduction of energy error for the discretized displacement model leads to enhanced numerical accuracy. As an application, improvement of Adini's rectangle is discussed. Numerical experiments show that the combined hybrid counterpart of Adini's element is capable of attaining high accuracy at coarse meshes.

  19. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.

  20. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers.

    Science.gov (United States)

    Thompson, Clarissa A; Opfer, John E

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy.

  1. Click-iT assay with improved DNA distribution histograms.

    Science.gov (United States)

    Hamelik, Ronald M; Krishan, Awtar

    2009-10-01

    The Click-iT Assay developed and commercialized by Invitrogen is based on incorporation of a new 5-bromo-2'-deoxyuridine analog, 5-ethynyl-2'-deoxyuridine (EdU), into newly synthesized DNA and its recognition by azide dyes via a copper-mediated "click" reaction. This relatively convenient and useful procedure depends on fixation of cells with paraformaldehyde and staining of the DNA with 7-aminoactinomycin-D (7-AAD). Both of these procedures result in DNA histograms with broad coefficients of variation (CVs). In this report, we have shown that after EdU incorporation, nuclei isolated by lysis can be processed with the Click-iT Assay and stained with propidium iodide for generation of DNA histograms with low CVs. This modified procedure results in better DNA histograms by replacing 7-AAD with propidium iodide and also saves processing time by eliminating the fixation and permeabilization steps.

  2. Training readers to improve their accuracy in grading Crohn's disease activity on MRI

    Energy Technology Data Exchange (ETDEWEB)

    Tielbeek, Jeroen A.W.; Bipat, Shandra; Boellaard, Thierry N.; Nio, C.Y.; Stoker, Jaap [University of Amsterdam, Department of Radiology, Academic Medical Center, Amsterdam (Netherlands)

    2014-05-15

    To prospectively evaluate if training with direct feedback improves grading accuracy of inexperienced readers for Crohn's disease activity on magnetic resonance imaging (MRI). Thirty-one inexperienced readers assessed 25 cases as a baseline set. Subsequently, all readers received training and assessed 100 cases with direct feedback per case, randomly assigned to four sets of 25 cases. The cases in set 4 were identical to the baseline set. Grading accuracy, understaging, overstaging, mean reading times and confidence scores (scale 0-10) were compared between baseline and set 4, and between the four consecutive sets with feedback. Proportions of grading accuracy, understaging and overstaging per set were compared using logistic regression analyses. Mean reading times and confidence scores were compared by t-tests. Grading accuracy increased from 66 % (95 % CI, 56-74 %) at baseline to 75 % (95 % CI, 66-81 %) in set 4 (P = 0.003). Understaging decreased from 15 % (95 % CI, 9-23 %) to 7 % (95 % CI, 3-14 %) (P < 0.001). Overstaging did not change significantly (20 % vs 19 %). Mean reading time decreased from 6 min 37 s to 4 min 35 s (P < 0.001). Mean confidence increased from 6.90 to 7.65 (P < 0.001). During training, overall grading accuracy, understaging, mean reading times and confidence scores improved gradually. Inexperienced readers need training with at least 100 cases to achieve the literature reported grading accuracy of 75 %.

  3. An improved Bradford protein assay for collagen proteins.

    Science.gov (United States)

    López, J M; Imperial, S; Valderrama, R; Navarro, S

    1993-10-29

    A modification of the protein determination method of Bradford adapted for collagen-rich samples is described. The use of Coomassie-based protein determination methods is limited by the great variation in colour yield obtained for different proteins. This is especially important in samples containing significant amounts of collagen where direct application of the methods of Lowry and Bradford results in underestimated values. Addition of small amounts of sodium dodecyl sulphate (SDS) (0.0035%) to the diluted solutions of Coomassie Brilliant Blue G used as dye reagent in the Bradford colorimetric assay caused a 4-fold increase in the colour response of three collagen proteins (Col I, III and IV) and a decrease in absorbance for various non-collagen proteins. The presence of SDS in the reagent did not result in a significant metachromatic shift of the collagen-dye complexes. This simple modification in the preparation of the reagent for the Bradford assay allows similar response curves to be obtained for collagen and non-collagen proteins, making the modified assay of potential use for protein determination in collagen-rich samples such as pancreatic extracts.

  4. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    Science.gov (United States)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher accuracy in routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can effectively enhance the measurement accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The results of experiments show that the novel method can significantly improve the measurement accuracy of resistance without increasing the system cost and circuit complexity, which is valuable for application in electronic instruments.
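The accuracy gain from oversampling can be sketched numerically. In the toy model below, Gaussian noise plays the role the sawtooth sweep plays in the paper, namely a dither that decorrelates the ADC quantization error so averaging can cancel it; the ADC step, noise level, voltage, and trial counts are all hypothetical, not the paper's circuit.

```python
import math
import random

def measure(v_true, n_samples, adc_step=0.01, noise_sd=0.02, seed=0):
    """Average n_samples quantized readings of one voltage; the added noise
    acts as a dither so the quantization error averages toward zero."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        dithered = v_true + rng.gauss(0.0, noise_sd)
        total += round(dithered / adc_step) * adc_step  # ideal ADC quantizer
    return total / n_samples

def rms_error(v_true, n_samples, trials=400):
    """RMS measurement error over repeated independent trials."""
    sq = [(measure(v_true, n_samples, seed=t) - v_true) ** 2
          for t in range(trials)]
    return math.sqrt(sum(sq) / len(sq))

# Averaging 16 samples cuts the RMS error by roughly sqrt(16) = 4
print(rms_error(1.2345, 1), rms_error(1.2345, 16))
```

The roughly square-root-of-N error reduction shown here is the generic oversampling effect; the paper's reported factor of 3.75 is specific to its circuit and sawtooth excitation.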

  5. Qualification of standard membrane-feeding assay with Plasmodium falciparum malaria and potential improvements for future assays.

    Directory of Open Access Journals (Sweden)

    Kazutoyo Miura

    Vaccines that interrupt malaria transmission are of increasing interest and a robust functional assay to measure this activity would promote their development by providing a biologically relevant means of evaluating potential vaccine candidates. Therefore, we aimed to qualify the standard membrane-feeding assay (SMFA). The assay measures the transmission-blocking activity of antibodies by feeding cultured P. falciparum gametocytes to Anopheles mosquitoes in the presence of the test antibodies and measuring subsequent mosquito infection. The International Conference on Harmonisation (ICH) Harmonised Tripartite Guideline Q2(R1) details characteristics considered in assay validation. Of these characteristics, we decided to qualify the SMFA for Precision, Linearity, Range and Specificity. The transmission-blocking 4B7 monoclonal antibody was tested over 6 feeding experiments at several concentrations to determine four suitable concentrations that were tested in triplicate in the qualification experiments (3 additional feeds) to evaluate Precision, Linearity and Range. For Specificity, 4B7 was tested in the presence of normal mouse IgG. We determined intra- and inter-assay variability of % inhibition of mean oocyst intensity at each concentration of 4B7 (lower concentrations showed higher variability). We also showed that % inhibition was dependent on 4B7 concentration and that the activity is specific to 4B7. Since obtaining empirical data is time-consuming, we generated a model using data from all 9 feeds and simulated the effects of different parameters on final readouts to improve the assay procedure and analytical methods for future studies. For example, we estimated the effect of the number of mosquitoes dissected on variability of % inhibition, and simulated the relationship between % inhibition in oocyst intensity and % inhibition of prevalence of infected mosquitoes at different mean oocyst counts in the control.
SMFA is one of the few biological assays used in

  6. Improvement of Accuracy in Damage Localization Using Frequency Slice Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Xinglong Liu

    2012-01-01

    Damage localization is a primary objective of damage identification. This paper presents damage localization in a beam structure using impact-induced Lamb waves and the Frequency Slice Wavelet Transform (FSWT). FSWT is a new time-frequency analysis method with an adaptive resolution feature. The time-frequency resolution is a vital factor affecting the accuracy of damage localization. In FSWT there is a unique parameter controlling the time-frequency resolution. To improve the accuracy of damage localization, a generalized criterion is proposed to determine the parameter value for achieving a suitable time-frequency resolution. For damage localization, the group velocity dispersion curve (GVDC) of A0 Lamb waves in the beam is first accurately estimated using FSWT, and then the arrival times of the reflection wave from the crack are determined for some individual frequency components. An average operation on the calculated propagation distances is then performed to further improve the accuracy of damage localization.
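The final localization step reduces to a time-of-flight calculation once the group velocity is known: the reflected packet travels to the crack and back, so the crack's distance is half the extra path, and the per-frequency estimates are averaged. A minimal sketch with invented numbers (the velocities and delays are not from the paper):

```python
def crack_distance(group_velocity_mps, dt_seconds):
    """The extra path to the crack is travelled twice: d = v_g * dt / 2."""
    return group_velocity_mps * dt_seconds / 2.0

# Hypothetical per-frequency estimates: A0-mode group velocities read off a
# dispersion curve, paired with the measured reflection delays, then
# averaged as in the paper's final step
estimates = [crack_distance(v, dt) for v, dt in
             [(1480.0, 81e-6), (1500.0, 80e-6), (1520.0, 79e-6)]]
print(sum(estimates) / len(estimates))
```

With these made-up inputs the averaged crack distance comes out near 0.06 m; averaging across frequency components damps the error any single dispersive component contributes.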

  7. The contribution of educational class in improving accuracy of cardiovascular risk prediction across European regions

    DEFF Research Database (Denmark)

    Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E

    2014-01-01

    OBJECTIVE: To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. METHODS: In a pooled analysis of 68 455 40-64-year-old men and women, free from coronary heart disease at baseline, from 47...

  8. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by cons

  9. Effects of Improved-floor Function on the Accuracy of Bilinear Interpolation Algorithm

    NARCIS (Netherlands)

    Rukundo, Olivier

    2015-01-01

    In this study, the standard IEEE 754–2008 and modulo-based floor functions for rounding non-integers have been presented. Their effects on the accuracy of the bilinear interpolation algorithm have been demonstrated. The improved-floor uses the modulo operator in an effort to make each non-integer ad

  10. New polymorphic tetranucleotide microsatellites improve scoring accuracy in the bottlenose dolphin Tursiops aduncus

    NARCIS (Netherlands)

    Nater, Alexander; Kopps, Anna M.; Kruetzen, Michael

    2009-01-01

    We isolated and characterized 19 novel tetranucleotide microsatellite markers in the Indo-Pacific bottlenose dolphin (Tursiops aduncus) in order to improve genotyping accuracy in applications like large-scale population-wide paternity and relatedness assessments. One hundred T. aduncus from Shark Ba

  11. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    Science.gov (United States)

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  12. Operational amplifier speed and accuracy improvement: analog circuit design with structural methodology

    CERN Document Server

    Ivanov, Vadim V

    2004-01-01

    Operational Amplifier Speed and Accuracy Improvement proposes a new methodology for the design of analog integrated circuits. The usefulness of this methodology is demonstrated through the design of an operational amplifier. This methodology consists of the following iterative steps: description of the circuit functionality at a high level of abstraction using signal flow graphs; equivalent transformations and modifications of the graph to the form where all important parameters are controlled by dedicated feedback loops; and implementation of the structure using a library of elementary cells. Operational Amplifier Speed and Accuracy Improvement shows how to choose structures and design circuits which improve an operational amplifier's important parameters such as speed to power ratio, open loop gain, common-mode voltage rejection ratio, and power supply rejection ratio. The same approach is used to design clamps and limiting circuits which improve the performance of the amplifier outside of its linear operat...

  13. Incorporating the effect of DEM resolution and accuracy for improved flood inundation mapping

    Science.gov (United States)

    Saksena, Siddharth; Merwade, Venkatesh

    2015-11-01

    Topography plays a major role in determining the accuracy of flood inundation areas. However, many areas in the United States and around the world do not have access to high quality topographic data in the form of Digital Elevation Models (DEMs). For such areas, an improved understanding of the effects of DEM properties such as horizontal resolution and vertical accuracy on flood inundation maps may eventually lead to improved flood inundation modeling and mapping. This study attempts to relate the errors arising from DEM properties such as spatial resolution and vertical accuracy to flood inundation maps, and then uses this relationship to create improved flood inundation maps from coarser resolution DEMs with low accuracy. The results from the five stream reaches used in this study show that water surface elevations (WSE) along the stream and the flood inundation area have a linear relationship with both DEM resolution and accuracy. This linear relationship is then used to extrapolate the water surface elevations from coarser resolution DEMs to get water surface elevations corresponding to a finer resolution DEM. Application of this approach shows that improved results can be obtained from flood modeling by using coarser and less accurate DEMs, including public domain datasets such as the National Elevation Dataset and Shuttle Radar Topography Mission (SRTM) DEMs. The improvement in the WSE and its application to obtain better flood inundation maps is dependent on study reach characteristics such as land use, valley shape, reach length and width. Application of the approach presented in this study to more reaches may lead to the development of guidelines for flood inundation mapping using coarser resolution and less accurate topographic datasets.
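The linear WSE-versus-DEM-property relationship the authors exploit can be sketched as an ordinary least-squares fit over coarse DEMs, extrapolated toward a finer grid. The resolutions and elevations below are invented for illustration and are not the study's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (tiny helper, no numpy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical water-surface elevations (m) modeled from progressively
# coarser DEMs; resolutions in metres are illustrative only
resolutions = [30.0, 60.0, 90.0]
wse = [101.8, 102.5, 103.2]

a, b = fit_line(resolutions, wse)
wse_at_10m = a + b * 10.0  # extrapolate toward a finer (10 m) DEM
print(round(wse_at_10m, 2))
```

Here the fitted line extrapolates to roughly 101.33 m at the 10 m resolution, i.e. below every coarse-DEM value, mimicking how the study corrects WSEs from coarse public-domain DEMs toward fine-DEM behaviour.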

  14. How to Improve Reading Accuracy by Strategic Leading-in and Guidance

    Institute of Scientific and Technical Information of China (English)

    邝艳平

    2014-01-01

    The present study presents a detailed report of a project implemented to solve the problem that most of my students' reading comprehension accuracy is low. It is hypothesized that learners' reading comprehension accuracy can be improved by strategic leading-in and guidance. This hypothesis is verified by a three-week classroom teaching of strategic leading-in and guidance in the pre-reading stage. Among the methods of scientific investigation used are analytic method, cause analysis, questionnaire and brainstorming activation.

  15. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    Science.gov (United States)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm’s burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm’s accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
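The outlier-screening idea can be sketched as a univariate Z-score filter applied repeatedly until the kept set stabilizes. This is a simplified stand-in for the paper's multistage Z-test/univariate procedure, and the reflectance values and threshold are invented:

```python
import statistics

def remove_outliers(values, z_thresh=2.0):
    """Repeated Z-score screening: drop points more than z_thresh standard
    deviations from the mean, then re-test, until nothing changes."""
    vals = list(values)
    while True:
        mean, sd = statistics.mean(vals), statistics.stdev(vals)
        kept = [v for v in vals if abs(v - mean) <= z_thresh * sd]
        if len(kept) == len(vals):
            return kept
        vals = kept

# Hypothetical single-band reflectances for one tissue class, with two
# mislabeled samples (0.95 and 0.02) far from the rest
readings = [0.41, 0.43, 0.40, 0.42, 0.44, 0.43, 0.41, 0.95, 0.02]
cleaned = remove_outliers(readings)
print(cleaned, statistics.variance(cleaned) < statistics.variance(readings))
```

Note the masking effect that motivates a multistage procedure: on the first pass the 0.95 outlier inflates the standard deviation enough that 0.02 survives the screen; the second pass then removes it, and the variance of the kept data drops sharply, mirroring the variance reduction the abstract reports.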

  16. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    Science.gov (United States)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.

  17. Evaluation of the diagnostic accuracy of a new dengue IgA capture assay (Platelia Dengue IgA Capture, Bio-Rad) for dengue infection detection.

    Directory of Open Access Journals (Sweden)

    Sophie De Decker

    2015-03-01

    Considering the short lifetime of IgA antibodies in serum and the key advantages of antibody detection ELISAs in terms of sensitivity and specificity, Bio-Rad has just developed a new ELISA test based on the detection of specific anti-dengue IgA. This study has been carried out to assess the performance of this Platelia Dengue IgA Capture assay for dengue infection detection. A total of 184 well-characterized samples provided by the French Guiana NRC sera collection (Laboratory of Virology, Institut Pasteur in French Guiana) were selected among samples collected between 2002 and 2013 from patients exhibiting a dengue-like syndrome. A first group included 134 sera from confirmed dengue-infected patients, and a second included 50 sera from non-dengue infected patients, all collected between day 3 and day 15 after the onset of fever. Dengue infection diagnoses were all confirmed using reference assays by direct virological identification using RT-PCR or virus culture on acute sera samples or on paired acute-phase sera samples of selected convalescent sera. This study revealed: i) a good overall sensitivity and specificity of the IgA index test, i.e., 93% and 88% respectively, indicating its good correlation to acute dengue diagnosis; and ii) a good concordance with the Panbio IgM capture ELISA. Because of the shorter persistence of dengue virus-specific IgA than IgM, these results underlined the relevance of this new test, which could significantly improve dengue diagnosis accuracy, especially in countries where dengue virus is (hyper-)endemic. It would allow for additional refinement of dengue diagnostic strategy.

  18. Evaluation of the diagnostic accuracy of a new dengue IgA capture assay (Platelia Dengue IgA Capture, Bio-Rad) for dengue infection detection.

    Science.gov (United States)

    De Decker, Sophie; Vray, Muriel; Sistek, Viridiana; Labeau, Bhety; Enfissi, Antoine; Rousset, Dominique; Matheus, Séverine

    2015-03-01

    Considering the short lifetime of IgA antibodies in serum and the key advantages of antibody detection ELISAs in terms of sensitivity and specificity, Bio-Rad has just developed a new ELISA test based on the detection of specific anti-dengue IgA. This study has been carried out to assess the performance of this Platelia Dengue IgA Capture assay for dengue infection detection. A total of 184 well-characterized samples provided by the French Guiana NRC sera collection (Laboratory of Virology, Institut Pasteur in French Guiana) were selected among samples collected between 2002 and 2013 from patients exhibiting a dengue-like syndrome. A first group included 134 sera from confirmed dengue-infected patients, and a second included 50 sera from non-dengue infected patients, all collected between day 3 and day 15 after the onset of fever. Dengue infection diagnoses were all confirmed using reference assays by direct virological identification using RT-PCR or virus culture on acute sera samples or on paired acute-phase sera samples of selected convalescent sera. This study revealed: i) a good overall sensitivity and specificity of the IgA index test, i.e., 93% and 88% respectively, indicating its good correlation to acute dengue diagnosis; and ii) a good concordance with the Panbio IgM capture ELISA. Because of the shorter persistence of dengue virus-specific IgA than IgM, these results underlined the relevance of this new test, which could significantly improve dengue diagnosis accuracy, especially in countries where dengue virus is (hyper-) endemic. It would allow for additional refinement of dengue diagnostic strategy.

  19. Developing an efficient technique for satellite image denoising and resolution enhancement for improving classification accuracy

    Science.gov (United States)

    Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja

    2013-01-01

    Satellite images are corrupted by noise during image acquisition and transmission. The removal of noise from the image by attenuating the high-frequency image components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps, image denoising, resolution enhancement, and classification, for improving accuracy in a noisy image. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve visual appearance. A discrete wavelet transform based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which is superior to other neural network classifiers. Quantitative performance measures such as peak signal-to-noise ratio and classification accuracy show the significance of the proposed techniques.

  20. IMPROVE THE ZY-3 HEIGHT ACCURACY USING ICESAT/GLAS LASER ALTIMETER DATA

    Directory of Open Access Journals (Sweden)

    G. Li

    2016-06-01

    ZY-3 is the first civilian high resolution stereo mapping satellite, which was launched on 9 January 2012. The aim of the ZY-3 satellite is to obtain high resolution stereo images and support the 1:50000 scale national surveying and mapping. Although ZY-3 has very high accuracy for direct geo-location without GCPs (Ground Control Points), use of some GCPs is still indispensable for high-precision stereo mapping. The GLAS (Geo-science Laser Altimetry System) is loaded on ICESat (Ice, Cloud and land Elevation Satellite), the first laser altimetry satellite for earth observation. GLAS has played an important role in the monitoring of polar ice sheets and the measuring of land topography and vegetation canopy heights since its launch in 2003. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by some criteria. In this paper, the ICESat/GLAS laser altimeter data is used as height reference data to improve the ZY-3 height accuracy. A selection method is proposed to obtain high precision GLAS elevation data. Two strategies to improve the ZY-3 height accuracy are introduced. One is conventional bundle adjustment based on the RFM and a bias-compensated model, in which the GLAS footprint data is viewed as height control. The second is to correct the DSM (Digital Surface Model) directly by simple block adjustment, where the DSM is derived from the ZY-3 stereo imagery after free-network adjustment and dense image matching. The experimental result demonstrates that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 meters after adding GLAS elevation data. Moreover, a comparison of the accuracy and efficiency of the two strategies is implemented for application.

  1. Improve the ZY-3 Height Accuracy Using ICESat/GLAS Laser Altimeter Data

    Science.gov (United States)

    Li, Guoyuan; Tang, Xinming; Gao, Xiaoming; Zhang, Chongyang; Li, Tao

    2016-06-01

    ZY-3 is the first civilian high resolution stereo mapping satellite, which was launched on 9 January 2012. The aim of the ZY-3 satellite is to obtain high resolution stereo images and support the 1:50000 scale national surveying and mapping. Although ZY-3 has very high accuracy for direct geo-location without GCPs (Ground Control Points), use of some GCPs is still indispensable for high-precision stereo mapping. The GLAS (Geo-science Laser Altimetry System) is loaded on ICESat (Ice, Cloud and land Elevation Satellite), the first laser altimetry satellite for earth observation. GLAS has played an important role in the monitoring of polar ice sheets and the measuring of land topography and vegetation canopy heights since its launch in 2003. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by some criteria. In this paper, the ICESat/GLAS laser altimeter data is used as height reference data to improve the ZY-3 height accuracy. A selection method is proposed to obtain high precision GLAS elevation data. Two strategies to improve the ZY-3 height accuracy are introduced. One is conventional bundle adjustment based on the RFM and a bias-compensated model, in which the GLAS footprint data is viewed as height control. The second is to correct the DSM (Digital Surface Model) directly by simple block adjustment, where the DSM is derived from the ZY-3 stereo imagery after free-network adjustment and dense image matching. The experimental result demonstrates that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 meters after adding GLAS elevation data. Moreover, a comparison of the accuracy and efficiency of the two strategies is implemented for application.

  2. Contour accuracy improvement of a flexure-based micro-motion stage for tracking repetitive trajectory

    Science.gov (United States)

    Jia, Shi; Jiang, Yao; Li, Tiemin; Du, Yunsong

    2017-01-01

    Flexure-based micro-motion mechanisms have been widely utilized in modern precision industry due to their inherent merits, while model uncertainty, uncertain nonlinearity, and cross-coupling effects will obviously deteriorate their contour accuracy, especially in high-speed applications. This paper aims at improving the contouring performance of a flexure-based micro-motion stage utilized for tracking repetitive trajectories. The dynamic characteristic of the micro-motion stage is first studied and modeled as a second-order system, which is identified through an open-loop sinusoidal sweeping test. Then the iterative learning control (ILC) scheme is utilized to improve the tracking performance of each individual axis of the stage. A nonlinear cross-coupled iterative learning control (CCILC) scheme is proposed to reduce the coupling effect among the axes and thus improve the contour accuracy of the stage. The nonlinear gain function incorporated into the CCILC controller can effectively avoid amplifying non-recurring disturbances and noises across iterations, which further improves the stage's contour accuracy in high-speed motion. Comparative experiments between traditional PID, ILC, ILC & CCILC, and the proposed ILC & nonlinear CCILC are carried out on the micro-motion stage to track circular and square trajectories. The results demonstrate that the proposed control scheme substantially outperforms the other control schemes in improving the stage's contour accuracy in high-speed motion. The study in this paper provides a practically effective technique for flexure-based micro-motion stages in high-speed contouring motion.
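The core ILC idea above, reusing the tracking error from one repetition to correct the input for the next, can be sketched on a toy plant. The plant, gain, and reference below are invented, and the update is the textbook P-type law u_{k+1}[t] = u_k[t] + gain * e_k[t+1], not the authors' nonlinear CCILC controller:

```python
def run_trial(u, b=0.5):
    """One repetition of a toy plant with a one-step delay: y[t] = b*u[t-1]."""
    return [0.0] + [b * ut for ut in u[:-1]]

def ilc_max_error(reference, n_iters, gain=0.9):
    """P-type ILC: after each repetition, feed the (delay-shifted) tracking
    error back into the stored input, u_{k+1}[t] = u_k[t] + gain * e_k[t+1]."""
    u = [0.0] * len(reference)
    for _ in range(n_iters):
        y = run_trial(u)
        e = [r - yi for r, yi in zip(reference, y)]
        u = [ui + gain * e[t + 1] for t, ui in enumerate(u[:-1])] + [u[-1]]
    y = run_trial(u)
    return max(abs(r - yi) for r, yi in zip(reference, y))

ref = [0.0] + [1.0] * 19  # the same trajectory is tracked every repetition
# Error contracts by |1 - gain*b| = 0.55 per repetition
print(ilc_max_error(ref, 5), ilc_max_error(ref, 30))
```

Because the reference repeats exactly, the stored input converges trial by trial and the tracking error shrinks geometrically, which is the property that makes ILC attractive for the repetitive trajectories this stage tracks.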

  3. Improving the accuracy and precision of cognitive testing in mild dementia.

    Science.gov (United States)

    Wouters, Hans; Appels, Bregje; van der Flier, Wiesje M; van Campen, Jos; Klein, Martin; Zwinderman, Aeilko H; Schmand, Ben; van Gool, Willem A; Scheltens, Philip; Lindeboom, Robert

    2012-03-01

    The CAMCOG, ADAS-cog, and MMSE, designed to grade global cognitive ability in dementia, have inadequate precision and accuracy in distinguishing mild dementia from normal ageing. Adding neuropsychological tests to their scales might improve precision and accuracy in mild dementia. We therefore pooled neuropsychological test batteries from two memory clinics (ns = 135 and 186) with CAMCOG data from a population study and two memory clinics (n = 829) and ADAS-cog data from three randomized controlled trials (n = 713) to estimate a common dimension of global cognitive ability using Rasch analysis. Item difficulties and individuals' global cognitive ability levels were estimated. Difficulties of 57 (of 64) items could be validly estimated. Neuropsychological tests were more difficult than the CAMCOG, ADAS-cog, and MMSE items, and most had difficulties in the ability range from normal ageing to mild dementia. Higher than average ability levels were measured more precisely when neuropsychological tests were added to the MMSE than with the MMSE alone. Diagnostic accuracy in mild dementia was consistently better after adding neuropsychological tests to the MMSE. We conclude that extending dementia-specific instruments with neuropsychological tests improves measurement precision and accuracy of cognitive impairment in mild dementia.

  4. Improved microbial screening assay for the detection of quinolone residues in poultry and eggs

    NARCIS (Netherlands)

    Pikkemaat, M.G.; Mulder, P.P.J.; Elferink, J.W.A.; Cocq, A.; Nielen, M.W.F.; Egmond, van H.J.

    2007-01-01

    An improved microbiological screening assay is reported for the detection of quinolone residues in poultry muscle and eggs. The method was validated using fortified tissue samples and is the first microbial assay to effectively detect enrofloxacin, difloxacin, danofloxacin, as well as flumequine and

  5. Improving the accuracy of heart disease diagnosis with an augmented back propagation algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A multilayer perceptron neural network system is established to support the diagnosis of the five most common heart diseases (coronary heart disease, rheumatic valvular heart disease, hypertension, chronic cor pulmonale and congenital heart disease). A momentum term, an adaptive learning rate, a forgetting mechanism and the conjugate gradient method are introduced to improve the basic BP algorithm, aiming to speed up its convergence and enhance diagnostic accuracy. A heart disease database consisting of 352 samples is applied to the training and testing of the system, and performance is assessed by cross-validation. It is found that as the basic BP algorithm is improved step by step, the convergence speed and classification accuracy of the network are enhanced, and the system has great application prospects in supporting heart disease diagnosis.
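
The momentum term and adaptive learning rate can be sketched roughly as follows; the network size, the "bold driver" rate schedule, the constants, and the XOR training data are all illustrative assumptions, not the paper's heart-disease setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adapt_lr(lr, loss, prev_loss, up=1.05, down=0.5):
    """'Bold driver' adaptive learning rate: grow the rate slightly
    after an improving epoch, halve it after a worsening one."""
    return lr * (down if loss > prev_loss else up)

def train_xor(epochs=500, lr=0.1, momentum=0.8, seed=0):
    """Tiny 2-4-1 MLP trained on XOR with a momentum term and the
    adaptive learning rate above."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0.0, 1.0, (2, 4))
    W2 = rng.normal(0.0, 1.0, (4, 1))
    V1, V2 = np.zeros_like(W1), np.zeros_like(W2)
    prev_loss, losses = np.inf, []
    for _ in range(epochs):
        h = sigmoid(X @ W1)                      # forward pass
        out = sigmoid(h @ W2)
        loss = float(np.mean((out - y) ** 2))
        losses.append(loss)
        lr = adapt_lr(lr, loss, prev_loss)
        prev_loss = loss
        d_out = (out - y) * out * (1.0 - out)    # backpropagated deltas
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        V2 = momentum * V2 - lr * (h.T @ d_out)  # momentum smooths the
        V1 = momentum * V1 - lr * (X.T @ d_h)    # successive updates
        W2 += V2
        W1 += V1
    return losses

losses = train_xor()
```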

  6. A method for improving the accuracy of automatic indexing of Chinese-English mixed documents

    Institute of Scientific and Technical Information of China (English)

    Yan; ZHAO; Hui; SHI

    2012-01-01

    Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents. Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and cybernetics theory, we proposed an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of the proposed method. Findings: This method distinguishes Chinese and English documents in grammatical structures and word formation rules. Through the implementation of this method in the three phases of automatic indexing for Chinese-English mixed documents, the results were encouraging: precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%. Research limitations: The indexing method is relatively complicated and the whole indexing process requires substantial human intervention. Because pattern matching is based on a brute-force (BF) approach, indexing efficiency is reduced to some extent. Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit not only the indexing of life science documents but also the indexing of documents in other subject areas. Originality/value: So far, few studies have been published on methods for increasing the accuracy of multilingual automatic indexing. This study will provide insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.

  7. Improving the accuracy of image-based forest fire recognition and spatial positioning

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Forest fires are frequent natural disasters. It is necessary to explore advanced means to monitor, recognize and locate forest fires so as to establish a scientific system for the early detection, real-time positioning and quick fighting of forest fires. This paper mainly expounds methods and algorithms for improving accuracy and removing uncertainty in image-based forest fire recognition and spatial positioning. Firstly, we discuss a method of forest fire recognition in visible-light imagery. There are four ways to improve accuracy and remove uncertainty in fire recognition: (1) eliminating sources of interference such as roads and sky with high brightness, red leaves, other colored objects and objects that are lit up at night; (2) excluding imaging for specific periods and azimuth angles for which interference phenomena repeatedly occur; (3) improving the thresholding method for determining the flame border in image processing by adjusting the threshold to the season, weather and region; and (4) integrating the visible-light image method with infrared image technology. Secondly, we examine infrared-image-based methods and approaches for improving the accuracy of forest fire recognition by combining the spectrum threshold with an object feature value such as the normalized difference vegetation index and excluding sources of disturbance such as interference signals, extreme weather and high-temperature animals. Thirdly, a method of visible analysis to enhance the accuracy of forest fire positioning is examined and realized; the method includes decreasing the visual angle, selecting central points, selecting the largest spots, and judging the selection of fire spots according to the central distance. Case studies are examined and the results are found to be satisfactory.

  8. Classification of features selected through Optimum Index Factor (OIF) for improving classification accuracy

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kaushal

    2011-01-01

    The present investigation was performed to determine whether the features selected through the Optimum Index Factor (OIF) could provide improved classification accuracy for the various categories on the satellite images of the individual years, as well as on stacked images of two different years, compared to all the features considered together. Further, to determine whether the classification accuracy of the different categories increases with the OIF values of the features extracted from both the individual years' and stacked images, we performed linear regression between the producer's accuracy (PA) of the various categories and the OIF values of the different feature combinations. The investigation demonstrated a significant improvement in the PA of two impervious categories, viz. moderate built-up and low-density built-up, when classification used the bands and principal components associated with the highest OIF value rather than all the bands and principal components, for both the individual years' and stacked images. Regression analyses exhibited positive trends between the PA and the OIF values for the various categories determined for the individual years' and stacked images respectively, signifying a direct relationship between information content and OIF value. The research proved that features extracted through OIF from both the individual years' and stacked images are capable of providing significantly improved PA compared to all the features pooled together.
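
The OIF is conventionally computed as the sum of the band standard deviations divided by the sum of the absolute pairwise correlation coefficients; a minimal sketch (the helper names and the toy bands are assumptions, not the authors' code):

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimum Index Factor for a set of bands (2-D arrays): sum of band
    standard deviations over the sum of absolute pairwise correlations."""
    stds = sum(np.std(b) for b in bands)
    corr_sum = 0.0
    for i, j in combinations(range(len(bands)), 2):
        r = np.corrcoef(bands[i].ravel(), bands[j].ravel())[0, 1]
        corr_sum += abs(r)
    return stds / corr_sum

def best_combination(bands, k=3):
    """Return the k-band subset with the highest OIF."""
    return max(combinations(range(len(bands)), k),
               key=lambda idx: oif([bands[i] for i in idx]))

# Toy example: bands 0 and 1 are near-duplicates, so the best triple
# should not include both of them.
rng = np.random.default_rng(1)
b0 = rng.normal(0, 10, (32, 32))
b1 = b0 + rng.normal(0, 0.1, (32, 32))   # highly correlated with b0
b2 = rng.normal(0, 10, (32, 32))
b3 = rng.normal(0, 10, (32, 32))
best = best_combination([b0, b1, b2, b3])
```

High variance with low inter-band correlation maximizes the OIF, which is why redundant band pairs are avoided.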

  9. Improved localization accuracy in stochastic super-resolution fluorescence microscopy by K-factor image deshadowing.

    Science.gov (United States)

    Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G; Menon, Rajesh; Gerton, Jordan M; Jorgensen, Erik M; Zalevsky, Zeev

    2013-12-16

    Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key merit of localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies for single emitters that can be an order of magnitude finer than the conventional resolving capability of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, which lengthens the time needed to construct the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when applied to raw data prior to localization, can improve the localization accuracy of standard existing methods and also enables the localization of overlapping particles, allowing increased fluorophore activation density and thereby increased data-collection speed. Numerical simulations of fluorescence data with random probe positions, especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in localization precision compared to single-fitting techniques. Applying the proposed concept to experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a 42% decrease in the collection time of super-resolution data at the same resolution.

  10. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (a drag ring brake) and the other to drift correction (a canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable of range correction: deploying the drag brake in an early stage of the trajectory results in a large range correction, and the correction occasion time can be predefined depending on the required range correction. The canard-based correction fuze, on the other hand, has a stronger effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  11. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Science.gov (United States)

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.

  12. Improving decision speed, accuracy and group cohesion through early information gathering in house-hunting ants.

    Directory of Open Access Journals (Sweden)

    Nathalie Stroeymeyt

    BACKGROUND: Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. METHODOLOGY/PRINCIPAL FINDINGS: Colonies allowed to explore a high-quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than naïve emigrating colonies. This resulted in increased speed in single-choice emigrations and higher colony cohesion in binary-choice emigrations. Additionally, colonies allowed to explore both high- and low-quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than naïve emigrating colonies. CONCLUSIONS/SIGNIFICANCE: These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to simultaneously improve all aspects of the decision-making process (speed, accuracy and cohesion) and partly circumvent the speed-accuracy trade-off.

  13. Interactive dedicated training curriculum improves accuracy in the interpretation of MR imaging of prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Akin, Oguz; Zhang, Jingbo; Hricak, Hedvig [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Riedl, Christopher C. [Memorial Sloan-Kettering Cancer Center, Department of Radiology, New York, NY (United States); Medical University of Vienna, Department of Radiology, Vienna (Austria); Ishill, Nicole M.; Moskowitz, Chaya S. [Memorial Sloan-Kettering Cancer Center, Epidemiology and Biostatistics, New York, NY (United States)

    2010-04-15

    To assess the effect of interactive dedicated training on radiology fellows' accuracy in assessing prostate cancer on MRI. Eleven radiology fellows, blinded to clinical and pathological data, independently interpreted preoperative prostate MRI studies, scoring the likelihood of tumour in the peripheral and transition zones and extracapsular extension. Each fellow interpreted 15 studies before dedicated training (to supply baseline interpretation accuracy) and 200 studies (10/week) after attending didactic lectures. Expert radiologists led weekly interactive tutorials comparing fellows' interpretations to pathological tumour maps. To assess interpretation accuracy, receiver operating characteristic (ROC) analysis was conducted, using pathological findings as the reference standard. In identifying peripheral zone tumour, fellows' average area under the ROC curve (AUC) increased from 0.52 to 0.66 (after didactic lectures; p < 0.0001) and remained at 0.66 (end of training; p < 0.0001); in the transition zone, their average AUC increased from 0.49 to 0.64 (after didactic lectures; p = 0.01) and to 0.68 (end of training; p = 0.001). In detecting extracapsular extension, their average AUC increased from 0.50 to 0.67 (after didactic lectures; p = 0.003) and to 0.81 (end of training; p < 0.0001). Interactive dedicated training significantly improved accuracy in tumour localization and especially in detecting extracapsular extension on prostate MRI. (orig.)
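
The interpretation accuracy above is summarized by the area under the ROC curve; a minimal rank-based (Mann-Whitney) sketch of computing AUC from reader scores and pathology labels, with invented numbers:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case receives a higher
    score than a randomly chosen negative case (ties count 1/2)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# An AUC of 0.5 is chance-level reading; 1.0 is perfect separation.
chance = auc([0.3, 0.7, 0.3, 0.7], [1, 0, 0, 1])
perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```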

  14. Improving time to optimal Staphylococcus aureus treatment using a penicillin-binding protein 2a assay.

    Science.gov (United States)

    Rao, Sonia N; Wang, Sheila K; Gonzalez Zamora, Jose; Hanson, Amy P; Polisetty, Radhika S; Singh, Kamaljit

    2016-12-01

    The penicillin-binding protein 2a (PBP2a) assay is a quick, accurate and inexpensive test for determining methicillin susceptibility in Staphylococcus aureus. A pre-post study design was used to evaluate the PBP2a assay with and without an antimicrobial stewardship intervention to improve time to optimal therapy for methicillin-susceptible and methicillin-resistant S. aureus isolates. Our results demonstrate significantly improved time to optimal therapy and support the use of a PBP2a assay as part of an antimicrobial stewardship programme for all healthcare facilities, especially those with limited resources.

  15. Improvement in the accuracy of respiratory-gated radiation therapy using a respiratory guiding system

    Science.gov (United States)

    Kang, Seong-Hee; Kim, Dong-Su; Kim, Tae-Ho; Suh, Tae-Suk; Yoon, Jai-Woong

    2013-01-01

    The accuracy of respiratory-gated radiation therapy (RGRT) depends on respiratory regularity, because external respiratory signals are used for gating the radiation beam at particular phases. Many studies have applied a respiratory guiding system to improve respiratory regularity. This study aims to evaluate the effect of an in-house-developed respiratory guiding system on respiratory regularity for RGRT. To verify the effectiveness of this system, we acquired respiratory signals from five volunteers. The improvement in respiratory regularity was analyzed by comparing the standard deviations of the amplitudes and the periods between free and guided breathing. The reduction in residual motion at each phase was analyzed by comparing the standard deviations of sorted data within each corresponding phase bin as obtained from free and guided breathing. The results indicate that the respiratory guiding system improves respiratory regularity, and most of the volunteers showed significantly less average residual motion at each phase. The average residual motion measured at the 40, 50, and 60% phases, which showed lower variation than the other phases, was reduced by 41, 45, and 44%, respectively, during guided breathing. The results show that the accuracy of RGRT can be improved by using the in-house-developed respiratory guiding system. Furthermore, this system should reduce artifacts caused by respiratory motion in 4D CT imaging.
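
The per-phase residual-motion reduction reported above can be expressed as the relative decrease in the standard deviation of the displacement samples sorted into a phase bin; a sketch with invented numbers:

```python
import numpy as np

def residual_motion_reduction(free, guided):
    """Percentage reduction in residual motion within one phase bin,
    measured as the standard deviation of the displacement samples in
    that bin under free vs. guided breathing."""
    s_free, s_guided = np.std(free), np.std(guided)
    return 100.0 * (1.0 - s_guided / s_free)

# Illustrative displacement samples for one phase bin (mm): guided
# breathing clusters the samples much more tightly.
free_bin = np.array([1.0, 3.0, 5.0, 7.0])
guided_bin = np.array([3.5, 4.0, 4.5, 5.0])
reduction = residual_motion_reduction(free_bin, guided_bin)
```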

  16. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow for robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and for more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time needed to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open-source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction in convergence time.

  17. A Feasible Approach for Improving Accuracy of Ground Deformation Measured by D-InSAR

    Institute of Scientific and Technical Information of China (English)

    CHANG Zhan-qiang; GONG Hui-li; ZHANG Jing-fa; GONG Li-xia

    2007-01-01

    D-InSAR is currently one of the most popular research tools in the field of Microwave Remote Sensing. It is unrivaled in its aspect of measuring ground deformation due to its advantages such as high resolution, continuous spatial-coverage and dynamics. However, there are still a few major problems to be solved urgently as a result of the intrinsic complexity of this technique. One of the problems deals with improving the accuracy of measured ground deformation. In this paper, various factors affecting the accuracy of ground deformation measured by D-InSAR are systematically analyzed and investigated by means of the law of measurement error propagation. At the same time, we prove that the ground deformation error not only depends on the errors of perpendicular baselines as well as the errors of the interferometric phase for topographic pair and differential pair, but also on the combination of the relationship of perpendicular baselines for topographic pairs and differential pairs. Furthermore, a feasible approach for improving the accuracy of measured ground deformation is proposed, which is of positive significance in the practical application of D-InSAR.
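
The error-propagation analysis mentioned above can be sketched at first order for the phase term alone: line-of-sight deformation is d = (lambda / 4*pi) * delta-phi, so phase noise maps linearly into deformation error. This sketch ignores the baseline and topographic error terms the paper also treats, and the wavelength and noise values are illustrative:

```python
import math

def deformation_std(wavelength_m, phase_std_rad):
    """Propagate differential-phase noise into line-of-sight deformation
    error: d = (lambda / (4*pi)) * phi, hence
    sigma_d = (lambda / (4*pi)) * sigma_phi."""
    return wavelength_m / (4.0 * math.pi) * phase_std_rad

# C-band example (lambda = 5.6 cm): 0.5 rad of differential-phase noise
# maps to roughly 2.2 mm of line-of-sight deformation error.
sigma_d = deformation_std(0.056, 0.5)
```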

  18. Error analysis to improve the speech recognition accuracy on Telugu language

    Indian Academy of Sciences (India)

    N Usha Rani; P N Girija

    2012-12-01

    Speech is one of the most important communication channels among people, and speech recognition occupies a prominent place in communication between humans and machines. Several factors affect the accuracy of a speech recognition system; much effort has been devoted to increasing accuracy, yet current systems still generate erroneous output. Telugu is one of the most widely spoken south Indian languages. In the proposed Telugu speech recognition system, errors obtained from the decoder are analysed to improve the performance of the system. The static pronunciation dictionary plays a key role in speech recognition accuracy, and modifications are performed on the dictionary used by the decoder. These modifications reduce the number of confusion pairs, which improves the performance of the speech recognition system; language model scores also vary with the modification. The hit rate increased considerably and false alarms changed during the modification of the pronunciation dictionary. Variations are observed in different error measures such as the F-measure, error rate and Word Error Rate (WER) on application of the proposed method.

  19. Improving the Prediction Accuracy of Multicriteria Collaborative Filtering by Combination Algorithms

    Directory of Open Access Journals (Sweden)

    Wiranto

    2014-05-01

    This study focuses on developing a multicriteria collaborative filtering algorithm for improving prediction accuracy. The approaches applied were user-item multirating matrix decomposition, measurement of user similarity using the cosine formula and multidimensional distance, individual criteria weight calculation, and rating prediction for the overall criteria by a combination approach. Results of the study show variations of the multicriteria collaborative filtering algorithm, which were used for improving a document recommender system with the two following characteristics. First, the rating prediction for four individual criteria uses a collaborative filtering algorithm with cosine-based user similarity and multidimensional distance-based user similarity. Second, the rating prediction for the overall criteria uses a combination algorithm. Based on the results of testing, it can be concluded that the models developed for the multicriteria collaborative filtering systems had much better prediction accuracy than classic collaborative filtering, as characterized by increasingly smaller values of Mean Absolute Error. The best accuracy was achieved by the multicriteria collaborative filtering system with multidimensional distance-based similarity.
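
The two similarity measures and the Mean Absolute Error metric named above can be sketched as follows; the ratings, the criteria count, and the distance-to-similarity mapping are invented for illustration:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two users' rating vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def multidim_similarity(u, v):
    """Similarity derived from the multidimensional (Euclidean)
    distance between per-criterion rating vectors, mapped into (0, 1]."""
    return 1.0 / (1.0 + float(np.linalg.norm(u - v)))

def mean_absolute_error(pred, actual):
    """MAE between predicted and actual ratings."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(actual))))

# Two users rating one document on four criteria (1-5 scale).
u = np.array([4.0, 5.0, 3.0, 4.0])
v = np.array([4.0, 4.0, 3.0, 5.0])
cos_sim = cosine_similarity(u, v)
dist_sim = multidim_similarity(u, v)
mae = mean_absolute_error([3.5, 4.0], [4.0, 4.0])
```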

  20. Lens distortion elimination for improving measurement accuracy of fringe projection profilometry

    Science.gov (United States)

    Li, Kai; Bu, Jingjie; Zhang, Dongsheng

    2016-10-01

    Fringe projection profilometry (FPP) is a powerful method for three-dimensional (3D) shape measurement. However, the measurement accuracy of the existing FPP is often hindered by the distortion of the lens used in FPP. In this paper, a simple and efficient method is presented to overcome this problem. First, the FPP system is calibrated as a stereovision system. Then, the camera lens distortion is eliminated by correcting the captured images. For the projector lens distortion, distorted fringe patterns are generated according to the lens distortion model. With these distorted fringe patterns, the projector can project undistorted fringe patterns, which means that the projector lens distortion is eliminated. Experimental results show that the proposed method can successfully eliminate the lens distortions of FPP and therefore improves its measurement accuracy.
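
Generating the pre-distorted fringe patterns amounts to applying the inverse of the projector's distortion model to the pattern coordinates, so that the lens's own distortion cancels it out at projection time. A sketch using a simple two-coefficient radial model in normalized coordinates (the coefficients and the fixed-point inversion are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def distort(xy, k1, k2):
    """Forward radial distortion in normalized coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = np.sum(xy ** 2)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iterations=20):
    """Invert the radial model by fixed-point iteration: start from the
    distorted point and repeatedly divide out the distortion factor."""
    xy = xy_d.copy()
    for _ in range(iterations):
        r2 = np.sum(xy ** 2)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy

# Rendering the pattern at the pre-distorted coordinate means the lens
# emits the intended (undistorted) coordinate.
k1, k2 = -0.12, 0.03                 # illustrative coefficients
p = np.array([0.4, 0.3])             # intended pattern coordinate
pre = undistort(p, k1, k2)           # coordinate to render
projected = distort(pre, k1, k2)     # what the lens actually emits
```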

  1. A New Approach to Improve Accuracy of Grey Model GMC(1,n) in Time Series Prediction

    Directory of Open Access Journals (Sweden)

    Sompop Moonchai

    2015-01-01

    This paper presents a modified grey model, GMC(1,n), for use in systems that involve one dependent system behavior and n-1 relative factors. The proposed model was developed from the conventional GMC(1,n) model in order to improve its prediction accuracy by modifying the formula for calculating the background value, the system of parameter estimation, and the model prediction equation. The modified GMC(1,n) model was verified by two case studies: forecasting CO2 emission in Thailand and forecasting electricity consumption in Thailand. The results demonstrated that the modified GMC(1,n) model achieved higher fitting and prediction accuracy than the conventional GMC(1,n) and D-GMC(1,n) models.

  2. An Improvement of Positional Accuracy for View-Based Navigation Using SURF

    Science.gov (United States)

    Hagiwara, Yoshinobu; Imamura, Hiroki; Choi, Yongwoon; Watanabe, Kazuhiro

    In this paper, we propose a reliable method for view-based navigation of mobile robots with improved positional accuracy using feature points extracted by SURF, and verify it through navigation experiments. View-based navigation using the block matching method does not provide sufficient positional accuracy for robots to avoid obstacles and pass through narrow doorways. By applying SURF, which is robust to illumination and scale changes in an image, the view-based navigation becomes more robust to variable indoor conditions. In experiments conducted in an indoor corridor with a robot, comparing the proposed method to the conventional one, centimeter-order positional precision within 10.0 cm was obtained. This suggests that the proposed method is applicable to view-based navigation in narrow settings such as obstacle avoidance and corridor passage.

  3. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    Science.gov (United States)

    Leimkuhler, Benedict; Shang, Xiaocheng

    2016-11-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.

  4. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    CERN Document Server

    Leimkuhler, Benedict

    2016-01-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings; using Lees-Edwards boundary conditions to...

  5. Diagnostic accuracy and comparison of two assays for Borrelia-specific IgG and IgM antibodies

    DEFF Research Database (Denmark)

    Dessau, Ram

    2013-01-01

    Two assays (Liaison, Diasorin; IDEIA, Oxoid) for detection of Borrelia-specific antibodies were compared. A case-control design using patients with neuroborreliosis (n = 48), laboratory defined by a positive Borrelia-specific antibody index in the spinal fluid, was available and was intended...

  6. Improved precision and accuracy for microarrays using updated probe set definitions

    Directory of Open Access Journals (Sweden)

    Larsson Ola

    2007-02-01

    Full Text Available Abstract Background Microarrays enable high throughput detection of transcript expression levels. Different investigators have recently introduced updated probe set definitions to more accurately map probes to our current knowledge of genes and transcripts. Results We demonstrate that updated probe set definitions provide both better precision and accuracy in probe set estimates compared to the original Affymetrix definitions. We show that the improved precision mainly depends on the increased number of probes that are integrated into each probe set, but we also demonstrate an improvement when the same number of probes is used. Conclusion Updated probe set definitions not only offer expression levels that are more accurately associated with genes and transcripts but also improve the estimated transcript expression levels. These results support the use of updated probe set definitions for analysis and meta-analysis of microarray data.
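
    The precision gain from integrating more probes into a probe set follows the familiar 1/sqrt(n) scaling of a mean. A minimal simulation sketch of that effect (the probe noise level and array counts below are invented for illustration, not taken from the paper's data):

```python
import random
import statistics

def probeset_sd(n_probes, n_arrays=2000, probe_sd=0.5, seed=7):
    """Simulate arrays where each probe-set estimate averages n_probes probes,
    and return the between-array standard deviation of the estimates."""
    rng = random.Random(seed)
    estimates = [
        statistics.fmean(rng.gauss(10.0, probe_sd) for _ in range(n_probes))
        for _ in range(n_arrays)
    ]
    return statistics.stdev(estimates)

# Quadrupling the probe count should roughly halve the spread of the estimate.
sd4, sd16 = probeset_sd(4), probeset_sd(16)
```

Under this toy model the ratio sd16/sd4 comes out near 0.5, matching the claim that precision improves mainly with the number of probes per set.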

  7. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembly and repair tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because the error model is highly nonlinear and many error parameters need to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper therefore employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
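
    The parameter-identification step above relies on Differential Evolution as a global optimizer. A generic DE/rand/1/bin sketch on a toy residual gives the flavor; the target offsets and bounds below are made up for illustration, and the paper's actual objective is the hybrid robot's kinematic error model, not this simple sum of squares:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct members other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)  # guarantee at least one mutated component
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jr) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            f_trial = f(trial)
            if f_trial <= cost[i]:      # greedy selection
                pop[i], cost[i] = trial, f_trial
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Hypothetical "parameter identification": recover offsets minimizing a residual.
target = [0.3, -1.2, 0.8]
def residual(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))

sol, val = differential_evolution(residual, [(-2.0, 2.0)] * 3)
```

DE needs only objective evaluations, which is why it suits the highly nonlinear error model where linear least-squares identification breaks down.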

  8. 3D multicolor super-resolution imaging offers improved accuracy in neuron tracing.

    Directory of Open Access Journals (Sweden)

    Melike Lakadamyali

    Full Text Available The connectivity among neurons holds the key to understanding brain function. Mapping neural connectivity in brain circuits requires imaging techniques with high spatial resolution to facilitate neuron tracing and high molecular specificity to mark different cellular and molecular populations. Here, we tested a three-dimensional (3D, multicolor super-resolution imaging method, stochastic optical reconstruction microscopy (STORM, for tracing neural connectivity using cultured hippocampal neurons obtained from wild-type neonatal rat embryos as a model system. Using a membrane specific labeling approach that improves labeling density compared to cytoplasmic labeling, we imaged neural processes at 44 nm 2D and 116 nm 3D resolution as determined by considering both the localization precision of the fluorescent probes and the Nyquist criterion based on label density. Comparison with confocal images showed that, with the currently achieved resolution, we could distinguish and trace substantially more neuronal processes in the super-resolution images. The accuracy of tracing was further improved by using multicolor super-resolution imaging. The resolution obtained here was largely limited by the label density and not by the localization precision of the fluorescent probes. Therefore, higher image resolution, and thus higher tracing accuracy, can in principle be achieved by further improving the label density.

  9. Improving the accuracy of GRACE Earth's gravitational field using the combination of different inclinations

    Institute of Scientific and Technical Information of China (English)

    Wei Zheng; Chenggang Shao; Jun Luo; Houze Xu

    2008-01-01

    In this paper, the GRACE Earth's gravitational field complete up to degree and order 120 is recovered based on the combination of different inclinations using the energy conservation principle. The results show that because different satellite inclinations are sensitive to geopotential coefficients of different degrees l and orders m, the GRACE design with an 89° inclination can effectively improve the accuracy of the geopotential zonal harmonic coefficients. However, it is less sensitive to the geopotential tesseral harmonic coefficients. Accordingly, a second group of GRACE satellites at a lower inclination is required to determine the tesseral harmonic coefficients with high accuracy and to cover the shortage of a single GRACE group at 89° inclination. Two GRACE groups individually flying at 89° and 82°-84° inclinations are the optimal combination for recovering the Earth's gravitational field complete up to degree and order 120. At degree 120, the joint accuracy of cumulative geoid height based on two GRACE groups at 89° and 83° inclinations is on average two times higher than the accuracy of a single GRACE group at 89° inclination.

  10. Integration of INS, GPS, Magnetometer and Barometer for Improving Accuracy Navigation of the Vehicle

    Directory of Open Access Journals (Sweden)

    Vlada Sokol Sokolovic

    2013-09-01

    Full Text Available This paper describes an integrated navigation system based on a low-cost inertial sensor, a global positioning system (GPS) receiver, a magnetometer and a barometer, intended to improve the accuracy of the complete attitude and navigation solution. The main advantage of the integration is the availability of reliable navigation parameters during intervals when GPS data are absent. The magnetometer and the barometer are applied for attitude calibration and vertical channel stabilization, respectively. Acceptable accuracy of the inertial navigation system (INS) is achieved by proper damping of INS errors. The integration is implemented with an extended Kalman filter (EKF) whose control signal is designed for the noise characteristics of low-accuracy sensors. The performance of the integrated navigation system was analyzed experimentally, and the results show that the integrated system provides continuous and reliable navigation solutions. Defence Science Journal, 2013, 63(5), pp. 451-455, DOI: http://dx.doi.org/10.14429/dsj.63.4534
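
    The vertical-channel damping idea can be illustrated with a one-dimensional Kalman filter that propagates altitude with (biased) inertial increments and corrects it with noisy barometric fixes. All noise figures and rates below are invented for the sketch; the paper's filter is a full EKF over the complete navigation state, not this scalar toy:

```python
import random

def fuse_altitude(ins_increments, baro_measurements, q=0.05, r=1.0):
    """Scalar Kalman filter: predict altitude with INS increments (process
    noise q), then correct with a barometric altitude (measurement noise r)."""
    x, p = baro_measurements[0], r        # initialize from the first baro fix
    estimates = [x]
    for dz, z in zip(ins_increments, baro_measurements[1:]):
        x, p = x + dz, p + q              # predict with the INS increment
        k = p / (p + r)                   # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # barometer update
        estimates.append(x)
    return estimates

random.seed(1)
true_alt = [0.1 * t for t in range(200)]                          # steady climb
ins = [0.1 + 0.02 + random.gauss(0, 0.01) for _ in range(199)]    # biased INS steps
baro = [a + random.gauss(0, 1.0) for a in true_alt]               # noisy barometer
est = fuse_altitude(ins, baro)
```

Pure dead reckoning drifts without bound because of the increment bias, while the fused estimate stays bounded near the truth, which is the point of damping the INS vertical channel with the barometer.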

  11. Accounting for filter bandwidth improves the quantitative accuracy of bioluminescence tomography

    Science.gov (United States)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie L.; Cobbold, Mark; Dehghani, Hamid

    2015-09-01

    Bioluminescence imaging is a noninvasive technique whereby surface weighted images of luminescent probes within animals are used to characterize cell count and function. Traditionally, data are collected over the entire emission spectrum of the source using no filters and are used to evaluate cell count/function over the entire spectrum. Alternatively, multispectral data over several wavelengths can be incorporated to perform tomographic reconstruction of source location and intensity. However, bandpass filters used for multispectral data acquisition have a specific bandwidth, which is ignored in the reconstruction. In this work, ignoring the bandwidth is shown to introduce a dependence of the recovered source intensity on the bandwidth of the filters. A method of accounting for the bandwidth of filters used during multispectral data acquisition is presented and its efficacy in increasing the quantitative accuracy of bioluminescence tomography is demonstrated through simulation and experiment. It is demonstrated that while using filters with a large bandwidth can dramatically decrease the data acquisition time, if not accounted for, errors of up to 200% in quantitative accuracy are introduced in two-dimensional planar imaging, even after normalization. For tomographic imaging, the use of this method to account for filter bandwidth dramatically improves the quantitative accuracy.
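
    The core point, that treating a wide bandpass filter as monochromatic biases the recovered source intensity, can be reproduced with a toy spectral model. The Gaussian emission shape, top-hat filter, and all numbers below are assumptions for illustration only:

```python
import math

def shape(lam):
    """Toy normalized emission spectrum (Gaussian, arbitrary units)."""
    return math.exp(-((lam - 600.0) / 40.0) ** 2)

def band_integral(center, bandwidth, n=400):
    """Integral of the emission shape over an ideal top-hat bandpass filter."""
    h = bandwidth / n
    return sum(shape(center - bandwidth / 2 + (i + 0.5) * h) for i in range(n)) * h

A_true = 3.0                                # source intensity to recover
bw = 40.0                                   # filter bandwidth
m = A_true * band_integral(640.0, bw)       # signal detected through the filter

naive = m / (shape(640.0) * bw)             # pretend the filter is monochromatic
corrected = m / band_integral(640.0, bw)    # account for the full bandwidth
```

Because the spectrum varies across the passband, the monochromatic assumption over- or under-estimates the intensity by an amount that grows with bandwidth, while dividing by the model-integrated response recovers it exactly.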

  12. Improving the accuracy of multiple integral evaluation by applying Romberg's method

    Science.gov (United States)

    Zhidkov, E. P.; Lobanov, Yu. Yu.; Rushai, V. D.

    2009-02-01

    Romberg’s method, which is used to improve the accuracy of one-dimensional integral evaluation, is extended to multiple integrals if they are evaluated using the product of composite quadrature formulas. Under certain conditions, the coefficients of the Romberg formula are independent of the integral’s multiplicity, which makes it possible to use a simple evaluation algorithm developed for one-dimensional integrals. As examples, integrals of multiplicity two to six are evaluated by Romberg’s method and the results are compared with other methods.
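
    Since the product of composite trapezoidal rules admits an error expansion in even powers of the step, the familiar one-dimensional Romberg coefficients carry over unchanged, which is the paper's key observation. A sketch for a double integral (illustrative, not the authors' code):

```python
import math

def trap2d(f, ax, bx, ay, by, n):
    """Composite 2-D trapezoidal rule on an n x n grid (product rule)."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n + 1):
        wi = 0.5 if i in (0, n) else 1.0
        for j in range(n + 1):
            wj = 0.5 if j in (0, n) else 1.0
            total += wi * wj * f(ax + i * hx, ay + j * hy)
    return total * hx * hy

def romberg2d(f, ax, bx, ay, by, levels=5):
    """Richardson extrapolation with the 1-D Romberg coefficients applied
    to the 2-D product rule with halved steps."""
    R = [[trap2d(f, ax, bx, ay, by, 2 ** k)] for k in range(levels)]
    for k in range(1, levels):
        for m in range(1, k + 1):
            R[k].append(R[k][m - 1] + (R[k][m - 1] - R[k - 1][m - 1]) / (4 ** m - 1))
    return R[-1][-1]

# Separable test integrand with a known exact value: (1 - cos 1)^2
approx = romberg2d(lambda x, y: math.sin(x) * math.sin(y), 0.0, 1.0, 0.0, 1.0)
```

With only five refinement levels (a 16 x 16 grid at the finest), the extrapolated value is far more accurate than the underlying trapezoidal sums.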

  13. Method of Improving the Navigation Accuracy of SINS by Continuous Rotation

    Institute of Scientific and Technical Information of China (English)

    YANG Yong; MIAO Ling-juan; SHEN Jun

    2005-01-01

    A method of improving the navigation accuracy of a strapdown inertial navigation system (SINS) is studied. The particular technique discussed involves the continuous rotation of the gyro and accelerometer cluster about the vertical axis of the vehicle. The errors of these sensors then vary periodically in their components along the body frame. Under this condition, the modulated sensor errors produce reduced system errors. Theoretical analysis based on a new coordinate system, defined as the sensing frame, and test results are presented, and they indicate that, compared with the conventional method, this technique attenuates the navigation errors caused by the gyros' random constant drift, the accelerometers' biases, and their white noise.
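
    Why continuous rotation helps can be seen by integrating a constant body-frame bias into the navigation frame: without rotation the bias accumulates linearly, while under steady rotation about the vertical axis it averages out over whole revolutions. A toy numeric check (the bias values and rotation period are arbitrary):

```python
import math

def integrated_nav_error(bias_xy, omega, t_total, dt=0.01):
    """Accumulate a constant horizontal body-frame bias into the navigation
    frame while the sensor cluster rotates at rate omega about the vertical."""
    ex = ey = 0.0
    steps = int(t_total / dt)
    for i in range(steps):
        a = omega * i * dt
        # rotate the body-frame bias into the navigation frame and integrate
        ex += (bias_xy[0] * math.cos(a) - bias_xy[1] * math.sin(a)) * dt
        ey += (bias_xy[0] * math.sin(a) + bias_xy[1] * math.cos(a)) * dt
    return math.hypot(ex, ey)

static = integrated_nav_error((0.01, 0.02), 0.0, 100.0)               # no rotation
rotated = integrated_nav_error((0.01, 0.02), 2 * math.pi / 10, 100.0) # 10 s period
```

Over an integer number of revolutions the modulated bias integrates to essentially zero, whereas the static cluster accumulates the full bias-times-time error.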

  14. Improving accuracy and capabilities of X-ray fluorescence method using intensity ratios

    Science.gov (United States)

    Garmay, Andrey V.; Oskolok, Kirill V.

    2017-04-01

    An X-ray fluorescence analysis algorithm is proposed that is based on the ratios of X-ray fluorescence line intensities. Such an analytical signal is more stable and leads to improved accuracy. Novel calibration equations are proposed that are suitable for analysis over a broad range of matrix compositions. To apply the algorithm to samples containing a significant amount of undetectable elements, the dependence of the Rayleigh-to-Compton intensity ratio on the total content of these elements is used. The technique's validity is shown by the analysis of standard steel samples, a model metal-oxide mixture and iron ore samples.

  15. Improving the accuracy of density-functional theory calculation: the genetic algorithm and neural network approach.

    Science.gov (United States)

    Li, Hui; Shi, LiLi; Zhang, Min; Su, Zhongmin; Wang, XiuJun; Hu, LiHong; Chen, GuanHua

    2007-04-14

    The combination of the genetic algorithm and neural network approach (GANN) has been developed to improve the calculation accuracy of density functional theory. As a demonstration, this combined quantum mechanical calculation and GANN correction approach has been applied to evaluate the optical absorption energies of 150 organic molecules. The neural network approach reduces the root-mean-square (rms) deviation of the calculated absorption energies of the 150 organic molecules from 0.47 to 0.22 eV for the TDDFT/B3LYP/6-31G(d) calculation, and the newly developed GANN correction approach reduces the rms deviation to 0.16 eV.

  16. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    Science.gov (United States)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers (CTISs) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  17. Improving the accuracy of admitted subacute clinical costing: an action research approach.

    Science.gov (United States)

    Hakkennes, Sharon; Arblaster, Ross; Lim, Kim

    2016-08-29

    Objective The aim of the present study was to determine whether action research could be used to improve the breadth and accuracy of clinical costing data in an admitted subacute setting. Methods The setting was a 100-bed in-patient rehabilitation centre. Using a pre-post study design, all admitted subacute separations during the 2011-12 financial year were eligible for inclusion. An action research framework aimed at improving clinical costing methodology was developed and implemented. Results In all, 1499 separations were included in the study. A medical record audit of a random selection of 80 separations demonstrated that the use of an action research framework was effective in improving the breadth and accuracy of the costing data. This was evidenced by a significant increase in the average number of activities costed, a reduction in the average number of activities incorrectly costed and a reduction in the average number of activities missing from the costing, per episode of care. Conclusions Engaging clinicians and cost centre managers was effective in facilitating the development of robust clinical costing data in an admitted subacute setting. Further investigation into the value of this approach across other care types and healthcare services is warranted. What is known about this topic? Accurate clinical costing data is essential for informing price models used in activity-based funding. In Australia, there is currently a lack of robust admitted subacute cost data to inform the price model for this care type. What does this paper add? The action research framework presented in this study was effective in improving the breadth and accuracy of clinical costing data in an admitted subacute setting. What are the implications for practitioners? To improve clinical costing practices, health services should consider engaging key stakeholders, including clinicians and cost centre managers, in reviewing clinical costing methodology. Robust clinical costing data has the

  18. Improving accuracy for cancer classification with a new algorithm for genes selection

    Directory of Open Access Journals (Sweden)

    Zhang Hongyan

    2012-11-01

    Full Text Available Abstract Background Even though the classification of cancer tissue samples based on gene expression data has advanced considerably in recent years, it faces great challenges to improve accuracy. One of the challenges is to establish an effective method that can select a parsimonious set of relevant genes. So far, most methods for gene selection in the literature focus on screening individual or pairs of genes without considering the possible interactions among genes. Here we introduce a new computational method named the Binary Matrix Shuffling Filter (BMSF). It not only overcomes the difficulty associated with the search schemes of traditional wrapper methods and the overfitting problem in a large-dimensional search space but also takes potential gene interactions into account during gene selection. This method, coupled with a Support Vector Machine (SVM) for implementation, often selects a very small number of genes for easy model interpretability. Results We applied our method to 9 two-class gene expression datasets involving human cancers. During the gene selection process, the set of genes to be kept in the model was recursively refined and repeatedly updated according to the effect of a given gene on the contributions of other genes in reference to their usefulness in cancer classification. The small number of informative genes selected from each dataset led to significantly improved leave-one-out cross-validation (LOOCV) classification accuracy across all 9 datasets for multiple classifiers. Our method also exhibits broad generalization in the genes selected, since multiple commonly used classifiers achieved either equivalent or much higher LOOCV accuracy than those reported in the literature. Conclusions Evaluation of a gene's contribution to binary cancer classification is best considered after adjusting for the joint effect of a large number of other genes. A computationally efficient search scheme was provided to perform effective search in the extensive

  19. Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging

    Science.gov (United States)

    Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.

    2016-03-01

    Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.

  20. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  1. Improving Intensity-Based Lung CT Registration Accuracy Utilizing Vascular Information

    Directory of Open Access Journals (Sweden)

    Kunlin Cao

    2012-01-01

    Full Text Available Accurate pulmonary image registration is a challenging problem when the lung undergoes large deformations. In this work, we present a nonrigid volumetric registration algorithm to track lung motion between a pair of intrasubject CT images acquired at different inflation levels and introduce a new vesselness similarity cost that improves intensity-only registration. Volumetric CT datasets from six human subjects were used in this study. The performance of four intensity-only registration algorithms was compared with and without adding the vesselness similarity cost function. Matching accuracy was evaluated using landmarks, the vessel tree, and fissure planes. The Jacobian determinant of the transformation was used to reveal the deformation pattern of local parenchymal tissue. The average matching error for the intensity-only registration methods was on the order of 1 mm at landmarks and 1.5 mm on fissure planes. After adding the vesselness-preserving cost function, the landmark and fissure positioning errors decreased by approximately 25% and 30%, respectively. The vesselness cost function effectively helped improve the registration accuracy in regions near the thoracic cage and near the diaphragm for all the intensity-only registration algorithms tested and also helped produce more consistent and more reliable patterns of regional tissue deformation.

  2. Improvement of Measurement Accuracy of Coolant Flow in a Test Loop

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Jintae; Kim, Jong-Bum; Joung, Chang-Young; Ahn, Sung-Ho; Heo, Sung-Ho; Jang, Seoyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this study, to improve the measurement accuracy of coolant flow in a coolant flow simulator, the elimination of external noise was enhanced by adding a ground pattern in the control panel and earthing the signal cables. In addition, a heating unit was added to strengthen the fluctuation signal by heating the coolant, because the source of the signal is heat energy. Experimental results using the improved system show good agreement with the reference flow rate. The measurement error is reduced dramatically compared with the previous measurement accuracy, which will help in analyzing the performance of nuclear fuels. For further work, an out-of-pile test will be carried out by fabricating a test rig mockup to inspect the feasibility of the developed system. To verify the performance of a newly developed nuclear fuel, an irradiation test needs to be carried out in a research reactor to measure irradiation behavior such as fuel temperature, fission gas release, neutron dose, coolant temperature, and coolant flow rate. In particular, the heat generation rate of nuclear fuels can be measured indirectly by measuring the temperature variation of the coolant that passes the fuel rod, together with its flow rate. However, it is very difficult to measure the coolant flow rate at the fuel rod owing to the narrow gap between the components of the test rig. In nuclear fields, noise analysis using thermocouples in the test rig has been applied to measure the flow velocity of the coolant circulating through the test loop.

  3. Evaluating an educational intervention to improve the accuracy of death certification among trainees from various specialties

    Directory of Open Access Journals (Sweden)

    Villar Jesús

    2007-11-01

    Full Text Available Abstract Background The inaccuracy of death certification can lead to the misallocation of resources in health care programs and research. We evaluated the rate of errors in the completion of death certificates among medical residents from various specialties, before and after an educational intervention designed to improve accuracy in the certification of the cause of death. Methods A 90-min seminar was delivered to seven mixed groups of medical trainees (n = 166) from several health care institutions in Spain. Physicians were asked to read and anonymously complete the same case scenario of death certification before and after the seminar. We compared the rates of errors and the impact of the educational intervention before and after the seminar. Results A total of 332 death certificates (166 completed before and 166 completed after the intervention) were audited. Death certificates were completed with errors by 71.1% of the physicians before the educational intervention. Following the seminar, the proportion of death certificates with errors decreased to 9%. Conclusion Major errors in the completion of the correct cause of death on death certificates are common among medical residents. A simple educational intervention can dramatically improve the accuracy in the completion of death certificates by physicians.

  4. Development of a method of ICP algorithm accuracy improvement during shaped profiles and surfaces control

    Directory of Open Access Journals (Sweden)

    V.A. Pechenin

    2014-10-01

    Full Text Available In this paper we propose a method for improving the accuracy of the iterative closest point (ICP) algorithm used in solving metrology problems when determining a location deviation. Compressor blade profiles of a gas turbine engine (GTE) were used as the object for applying the deviation-determination method. In the developed method, the best-alignment problem is formulated as a multiobjective problem including criteria of minimum squared distances, normal-vector differences and depth-of-camber differences at corresponding points of the aligned profiles. Variants of solving the task using an integral criterion that includes the above-mentioned criteria were considered. The optimization problems were solved using a quasi-Newton method of sequential quadratic programming. The proposed improvement of the registration algorithm based on geometric features showed greater accuracy in comparison with the discussed methods that optimize the distance between fitting points, especially when a small number of measurement points on the profiles was used.
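
    For reference, the baseline that the authors improve upon, plain point-to-point ICP with nearest-neighbour matching and a closed-form 2-D rigid update, can be sketched as follows. The blade-profile data and the paper's multiobjective criteria are not reproduced here; the profile points and transform below are synthetic:

```python
import math

def best_rigid_2d(P, Q):
    """Closed-form 2-D rigid alignment (Procrustes) of paired points P -> Q."""
    n = len(P)
    cpx, cpy = sum(p[0] for p in P) / n, sum(p[1] for p in P) / n
    cqx, cqy = sum(q[0] for q in Q) / n, sum(q[1] for q in Q) / n
    s = c = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay, bx, by = px - cpx, py - cpy, qx - cqx, qy - cqy
        c += ax * bx + ay * by
        s += ax * by - ay * bx
    th = math.atan2(s, c)
    tx = cqx - (cpx * math.cos(th) - cpy * math.sin(th))
    ty = cqy - (cpx * math.sin(th) + cpy * math.cos(th))
    return th, tx, ty

def icp(P, Q, iters=10):
    """Point-to-point ICP: nearest-neighbour matching plus rigid update."""
    cur = [tuple(p) for p in P]
    for _ in range(iters):
        matched = [min(Q, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
                   for p in cur]
        th, tx, ty = best_rigid_2d(cur, matched)
        cur = [(x * math.cos(th) - y * math.sin(th) + tx,
                x * math.sin(th) + y * math.cos(th) + ty) for x, y in cur]
    return cur

# Synthetic "measured" profile: a known small rigid motion of the nominal one.
nominal = [(0.5 * i, math.sin(0.5 * i)) for i in range(10)]
th0, tx0, ty0 = 0.02, 0.05, 0.03
measured = [(x * math.cos(th0) - y * math.sin(th0) + tx0,
             x * math.sin(th0) + y * math.cos(th0) + ty0) for x, y in nominal]
aligned = icp(nominal, measured)
```

For this small, noise-free displacement the nearest-neighbour pairing is correct and ICP converges essentially exactly; the paper's contribution is precisely the cases where distance-only matching like this is unreliable.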

  5. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    Directory of Open Access Journals (Sweden)

    Donghwan Yoon

    2016-06-01

    Full Text Available The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings.
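
    Projecting range-domain corrections into the position domain amounts to a small least-squares solve over the line-of-sight geometry. A generic sketch of that step follows; the satellite geometry and correction values are made up, and the paper's implementation additionally handles multiple constellations and the Android specifics:

```python
import math

def solve4(A, b):
    """Gaussian elimination with partial pivoting for the small normal system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def position_domain_correction(los, prc):
    """Least-squares position/clock correction from pseudorange corrections.
    Rows of the geometry matrix G are [-e_x, -e_y, -e_z, 1] per satellite."""
    G = [[-e[0], -e[1], -e[2], 1.0] for e in los]
    GtG = [[sum(g[i] * g[j] for g in G) for j in range(4)] for i in range(4)]
    Gtb = [sum(g[i] * c for g, c in zip(G, prc)) for i in range(4)]
    return solve4(GtG, Gtb)   # [dx, dy, dz, dt]

# Synthetic check: corrections generated from a known position/clock offset.
truth = (1.2, -0.8, 2.0, 0.5)
s3 = 1 / math.sqrt(3.0)
los = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (s3, s3, s3)]
prc = [-(e[0] * truth[0] + e[1] * truth[1] + e[2] * truth[2]) + truth[3]
       for e in los]
corr = position_domain_correction(los, prc)
```

The recovered correction can then be added to the smartphone's reported position in software, which is the appeal of the position-domain approach when raw pseudoranges are unavailable.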

  6. Sharp Chandra View of ROSAT All-Sky Survey Bright Sources: I. Improvement of Positional Accuracy

    CERN Document Server

    Gao, Shuang; Liu, Jifeng

    2016-01-01

    The ROSAT All-Sky Survey (RASS) represents one of the most complete and sensitive soft X-ray all-sky surveys to date. However, the deficient positional accuracy of the RASS Bright Source Catalog (BSC) and subsequent lack of firm optical identifications affect the multi-wavelength studies of X-ray sources. The widely used positional errors $\sigma_{pos}$ based on the Tycho Stars Catalog (Tycho-1) have previously been applied for identifying objects in the optical band. The considerably sharper Chandra view covers a fraction of RASS sources, whose $\sigma_{pos}$ could be improved by utilizing the sub-arcsec positional accuracy of Chandra observations. We cross-match X-ray objects between the BSC and Chandra sources extracted from the Advanced CCD Imaging Spectrometer (ACIS) archival observations. A combined counterparts list (BSCxACIS) with Chandra spatial positions weighted by the X-ray flux of multi-counterparts is employed to evaluate and improve the former identifications of BSC with the other...
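
    The flux-weighted counterpart position mentioned above can be sketched in a few lines. The coordinates and fluxes are invented, and a flat (tangent-plane) approximation is assumed, which is adequate at error-circle scales.

```python
def flux_weighted_position(counterparts):
    """Flux-weighted mean position of multiple counterparts.

    counterparts: list of (ra_deg, dec_deg, flux).  Small-field
    approximation: spherical geometry is ignored, which is fine at the
    arcminute scale of a BSC error circle.
    """
    total = sum(f for _, _, f in counterparts)
    ra = sum(r * f for r, _, f in counterparts) / total
    dec = sum(d * f for _, d, f in counterparts) / total
    return ra, dec

# Two hypothetical counterparts inside one error circle, the brighter dominating.
pos = flux_weighted_position([(150.001, 2.300, 9.0), (150.005, 2.304, 1.0)])
```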

  7. Hyperspectral image preprocessing with bilateral filter for improving the classification accuracy of support vector machines

    Science.gov (United States)

    Sahadevan, Anand S.; Routray, Aurobinda; Das, Bhabani S.; Ahmad, Saquib

    2016-04-01

    Bilateral filter (BF) theory is applied to integrate spatial contextual information into the spectral domain for improving the accuracy of the support vector machine (SVM) classifier. The proposed classification framework is a two-stage process. First, an edge-preserved smoothing is carried out on a hyperspectral image (HSI). Then, the SVM multiclass classifier is applied on the smoothed HSI. One of the advantages of the BF-based implementation is that it considers the spatial as well as spectral closeness for smoothing the HSI. Therefore, the proposed method provides better smoothing in the homogeneous region and preserves the image details, which in turn improves the separability between the classes. The performance of the proposed method is tested using benchmark HSIs obtained from the airborne-visible-infrared-imaging-spectrometer (AVIRIS) and the reflective-optics-system-imaging-spectrometer (ROSIS) sensors. Experimental results demonstrate the effectiveness of the edge-preserved filtering in the classification of the HSI. Average accuracies (with 10% training samples) of the proposed classification framework are 99.04%, 98.11%, and 96.42% for AVIRIS-Salinas, ROSIS-Pavia University, and AVIRIS-Indian Pines images, respectively. Since the proposed method follows a combination of BF and the SVM formulations, it will be quite simple and practical to implement in real applications.
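
    A one-dimensional sketch of the edge-preserving smoothing is shown below; the kernel widths are illustrative assumptions, and a real HSI pipeline would apply the filter over spatial neighborhoods of each spectral band.

```python
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=3):
    """Edge-preserving smoothing: weights combine spatial closeness
    (sigma_s) and range/intensity closeness (sigma_r)."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge between two homogeneous regions survives the smoothing,
# because cross-edge neighbors get a negligible range weight.
edge = [0.0] * 8 + [1.0] * 8
smoothed = bilateral_1d(edge)
```

A plain box filter would blur the step; the bilateral weights are what preserve class boundaries ahead of the SVM.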

  8. Case studies on forecasting for innovative technologies: frequent revisions improve accuracy.

    Science.gov (United States)

    Lerner, Jeffrey C; Robertson, Diane C; Goldstein, Sara M

    2015-02-01

    Health technology forecasting is designed to provide reliable predictions about costs, utilization, diffusion, and other market realities before the technologies enter routine clinical use. In this article we address three questions central to forecasting's usefulness: Are early forecasts sufficiently accurate to help providers acquire the most promising technology and payers to set effective coverage policies? What variables contribute to inaccurate forecasts? How can forecasters manage the variables to improve accuracy? We analyzed forecasts published between 2007 and 2010 by the ECRI Institute on four technologies: single-room proton beam radiation therapy for various cancers; digital breast tomosynthesis imaging technology for breast cancer screening; transcatheter aortic valve replacement for serious heart valve disease; and minimally invasive robot-assisted surgery for various cancers. We then examined revised ECRI forecasts published in 2013 (digital breast tomosynthesis) and 2014 (the other three topics) to identify inaccuracies in the earlier forecasts and explore why they occurred. We found that five of twenty early predictions were inaccurate when compared with the updated forecasts. The inaccuracies pertained to two technologies that had more time-sensitive variables to consider. The case studies suggest that frequent revision of forecasts could improve accuracy, especially for complex technologies whose eventual use is governed by multiple interactive factors.

  9. A statistical methodology to improve accuracy in differentiating schizophrenia patients from healthy controls.

    Science.gov (United States)

    Peters, Rosalind M; Gjini, Klevest; Templin, Thomas N; Boutros, Nash N

    2014-05-30

    We present a methodology to statistically discriminate among univariate and multivariate indices to improve accuracy in differentiating schizophrenia patients from healthy controls. Electroencephalogram data from 71 subjects (37 controls/34 patients) were analyzed. Data included P300 event-related response amplitudes and latencies as well as amplitudes and sensory gating indices derived from the P50, N100, and P200 auditory-evoked responses, resulting in 20 analyzed indices. Receiver operating characteristic (ROC) curve analyses identified significant univariate indices; these underwent principal component analysis (PCA). Logistic regression of PCA components created a multivariate composite used in the final ROC. Eleven univariate ROCs were significant with area under the curve (AUC) >0.50. PCA of these indices resulted in a three-factor solution accounting for 76.96% of the variance. The first factor was defined primarily by P200 and P300 amplitudes, the second by P50 ratio and difference scores, and the third by P300 latency. ROC analysis using the logistic regression composite resulted in an AUC of 0.793 (0.06), p<0.001 (CI=0.685-0.901). A composite score of 0.456 had a sensitivity of 0.829 (correctly identifying schizophrenia patients) and a specificity of 0.703 (correctly identifying healthy controls). Results demonstrated the usefulness of combined statistical techniques in creating a multivariate composite that improves diagnostic accuracy.
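
    The AUC reported above can be computed directly from scores via the Mann-Whitney interpretation of the ROC curve: the probability that a randomly chosen patient scores above a randomly chosen control. The scores below are invented for illustration.

```python
def auc(patient_scores, control_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic
    (ties count one half)."""
    wins = 0.0
    for p in patient_scores:
        for c in control_scores:
            wins += 1.0 if p > c else (0.5 if p == c else 0.0)
    return wins / (len(patient_scores) * len(control_scores))

# Hypothetical composite scores from a logistic-regression model.
patients = [0.9, 0.8, 0.4]
controls = [0.5, 0.3, 0.2]
area = auc(patients, controls)
```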

  10. Improving the accuracy of mirror measurements by removing noise and lens distortion

    Science.gov (United States)

    Wang, Zhenzhou

    2016-11-01

    Telescope mirrors determine the imaging quality and observation ability of telescopes. Unfortunately, manufacturing highly accurate mirrors remains a bottleneck problem in space optics. One main factor is the lack of a technique for measuring the 3D shapes of mirrors accurately for reverse engineering. Researchers have studied and developed techniques for testing the quality of telescope mirrors and methods for measuring the 3D shapes of mirrors for centuries. Among these, interferometers have become popular in evaluating the surface errors of manufactured mirrors. However, interferometers are unable to measure some important mirror parameters directly and accurately, e.g. the paraxial radius, geometric dimensions and eccentricity errors, and these parameters are essential for mirror manufacturing. In this paper, we aim to remove the noise and lens distortion inherent in the system to improve the accuracy of a previously proposed one-shot projection mirror measurement method. To this end, we propose a ray modeling and a pattern modeling method. The experimental results show that the proposed ray modeling and pattern modeling methods can improve the accuracy of the one-shot projection method significantly, making it feasible as a commercial device to measure the shapes of mirrors quantitatively and accurately.

  11. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    Science.gov (United States)

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronically integrating 30 Cl− electrodes, 10 F− electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited to all types of potentiometric sensors and may pave the way for monitoring various ions with high accuracy and precision because of its high sensitivity.
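
    The slope multiplication reported above (57.2 mV to 1711.3 mV for 30 Cl− electrodes, i.e. roughly 30 × 57.2 mV per decade) can be illustrated with an idealized near-Nernstian model; the standard potential and slope values are assumptions for the sketch.

```python
import math

def electrode_mV(activity, E0_mV=200.0, slope_mV=57.2):
    """Single ISE potential under idealized (near-Nernstian) response."""
    return E0_mV + slope_mV * math.log10(activity)

def series_mV(activity, n_electrodes):
    """Electronically integrating n identical electrodes in series
    multiplies the response slope by n."""
    return n_electrodes * electrode_mV(activity)

# Effective slope per concentration decade for 30 integrated electrodes.
slope_30 = series_mV(1e-3, 30) - series_mV(1e-4, 30)
```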

  12. Improvement of three-dimensional microstructure contour accuracy using maskless lithography technique based on DMD

    Science.gov (United States)

    Huang, Shengzhou; Li, Mujun; Shen, Lianguan; Qiu, Jinfeng; Zhou, Youquan

    2016-10-01

    A novel method is proposed to improve the contour accuracy of three-dimensional (3D) microstructures in real-time maskless lithography based on a digital micro-mirror device (DMD). First, from a theoretical and experimental study of the relation between exposure dose and exposure thickness, the spatial distribution of photoresist exposure doses was derived, which can predict the resulting 3D contour. Second, an equal-arc slicing strategy was adopted, in which the arc lengths between adjacent slicing points are kept constant while the layer heights become variable. An equal-arc-mean slicing strategy, which takes the average of adjacent layer heights, was also proposed to further optimize contour quality and reduce contour error on the basis of equal-arc slicing. Finally, to assess the validity of the method, aspheric micro-lens arrays were fabricated with the proposed method as a case study. Our results showed that the proposed method is feasible for improving the 3D microstructure contour accuracy and smoothness.
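
    The equal-arc slicing strategy can be sketched numerically: accumulate arc length along the profile on a fine grid, then place slice points at equal arc increments, letting the layer heights vary. The parabolic profile and grid resolution are illustrative choices, not the paper's lens geometry.

```python
import math

def equal_arc_slices(f, x0, x1, n_slices, n_fine=10000):
    """Slice the curve y=f(x) on [x0, x1] into points of equal arc spacing."""
    xs = [x0 + (x1 - x0) * i / n_fine for i in range(n_fine + 1)]
    s = [0.0]                       # cumulative arc length on the fine grid
    for a, b in zip(xs, xs[1:]):
        s.append(s[-1] + math.hypot(b - a, f(b) - f(a)))
    total = s[-1]
    targets = [total * k / n_slices for k in range(n_slices + 1)]
    out, j = [], 0
    for t in targets:
        while j < n_fine and s[j + 1] < t:
            j += 1
        # linear interpolation within the fine segment
        u = 0.0 if s[j + 1] == s[j] else (t - s[j]) / (s[j + 1] - s[j])
        x = xs[j] + u * (xs[j + 1] - xs[j])
        out.append((x, f(x)))
    return out

pts = equal_arc_slices(lambda x: x * x, 0.0, 1.0, 10)
chords = [math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:])]
heights = [b[1] - a[1] for a, b in zip(pts, pts[1:])]   # variable layer heights
```

The chords come out nearly equal while the layer heights grow where the profile steepens, which is exactly the trade the strategy makes.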

  13. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    Science.gov (United States)

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-06-18

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings.

  14. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations.

    Science.gov (United States)

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-05-26

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices.
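
    A minimal linear view of the crosstalk problem, assuming a hypothetical 2x2 crosstalk matrix: each bending sensor responds mostly to its own DOF but partly to the other, and a calibrated decoupling recovers the joint angles. The paper's actual contribution, optimizing sensor placement so this crosstalk is minimized in the first place, is beyond the sketch.

```python
def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Hypothetical crosstalk: each sensor also responds to the other DOF.
crosstalk = [[1.0, 0.3],
             [0.2, 1.0]]
true_angles = [40.0, 15.0]            # CMC flexion, abduction (degrees)
readings = matvec(crosstalk, true_angles)
# Calibrated decoupling recovers the joint angles from the raw readings.
estimated = matvec(inv2(crosstalk), readings)
```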

  15. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations

    Directory of Open Access Journals (Sweden)

    Dong Hyun Kim

    2016-05-01

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices.

  16. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics.

    Science.gov (United States)

    Yin, Jian; Fenley, Andrew T; Henriksen, Niel M; Gilson, Michael K

    2015-08-13

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by nonoptimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery.

  17. Multi-Sensor Fusion with Interacting Multiple Model Filter for Improved Aircraft Position Accuracy

    Directory of Open Access Journals (Sweden)

    Changho Lee

    2013-03-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with an Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
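
    As a stand-in for the full IMM filter, the static part of the fusion idea, weighting each sensor's estimate by its inverse variance, can be sketched as follows; the sensor values and variances are invented. The IMM filter additionally mixes multiple motion models over time, which this sketch does not attempt.

```python
def fuse(estimates):
    """Inverse-variance (covariance-weighted) fusion of independent
    estimates given as (value, variance) pairs; returns the fused
    value and its variance, which is below every input variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(x * w for (x, _), w in zip(estimates, weights)) / total
    return value, 1.0 / total

# One coordinate of an aircraft position as seen by ADS-B, MLAT, and WAM.
fused, fused_var = fuse([(1502.0, 4.0), (1498.0, 1.0), (1500.0, 2.0)])
```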

  18. Improved Accuracy of PSO and DE using Normalization: an Application to Stock Price Prediction

    Directory of Open Access Journals (Sweden)

    Savinderjit Kaur

    2012-09-01

    Data mining has been actively applied to the stock market since the 1980s. It has been used to predict stock prices and stock indexes, for portfolio management, for trend detection and for developing recommender systems. The algorithms used for these purposes include ANN, SVM, ARIMA and GARCH. Various hybrid models have been developed by combining these algorithms with others, such as rough sets, fuzzy logic, GA, PSO, DE and ACO, to improve efficiency. This paper proposes a DE-SVM model (Differential Evolution-Support Vector Machine) for stock price prediction. DE is used to select the best free-parameter combination for the SVM to improve results. The paper also compares the results of prediction with the outputs of SVM alone and of a PSO-SVM model (Particle Swarm Optimization-SVM). The effect of normalization of data on the accuracy of prediction has also been studied.
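
    The normalization studied in the paper can be sketched with a simple min-max scaler; the price series is invented, and in practice the scaler must be fitted on training data only to avoid look-ahead bias.

```python
def minmax_fit(series):
    """Learn scaling bounds (on training data only)."""
    return min(series), max(series)

def minmax_apply(series, lo, hi):
    """Map values into [0, 1] - the form fed to the SVM."""
    return [(v - lo) / (hi - lo) for v in series]

def minmax_invert(scaled, lo, hi):
    """Map predictions back to the original price scale."""
    return [v * (hi - lo) + lo for v in scaled]

prices = [310.0, 295.5, 342.0, 301.2, 330.8]   # hypothetical closing prices
lo, hi = minmax_fit(prices)
scaled = minmax_apply(prices, lo, hi)
restored = minmax_invert(scaled, lo, hi)
```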

  19. Effectiveness of Practices for Improving the Diagnostic Accuracy of Non-ST Elevation Myocardial Infarction in the Emergency Department: A Laboratory Medicine Best Practices Systematic Review

    Science.gov (United States)

    Layfield, Christopher; Rose, John; Alford, Aaron; Snyder, Susan R.; Apple, Fred S.; Chowdhury, Farah M.; Kontos, Michael C.; Newby, L. Kristin; Storrow, Alan B.; Tanasijevic, Milenko; Leibach, Elizabeth; Liebow, Edward B.; Christenson, Robert H.

    2016-01-01

    Objectives This article presents evidence from a systematic review of the effectiveness of four practices (assay selection, decision point cardiac troponin (cTn) threshold selection, serial testing, and point of care testing) for improving the diagnostic accuracy for Non-ST-Segment Elevation Myocardial Infarction (NSTEMI) in the Emergency Department. Design and Methods The CDC-funded Laboratory Medicine Best Practices (LMBP™) Initiative systematic review A6 Method for Laboratory Best Practices was used. Results The current guidelines (e.g., ACC/AHA) recommend using cardiac troponin assays with a 99th percentile upper reference limit (URL) diagnostic threshold to diagnose NSTEMI. The evidence in this systematic review indicates that contemporary sensitive cTn assays meet the assay profile requirements (sensitivity, specificity, PPV, and NPV) to more accurately diagnose NSTEMI than alternate tests. Additional biomarkers did not increase diagnostic effectiveness of cTn assays. Sensitivity, specificity, and negative predictive value (NPV) were consistently high and low positive predictive value (PPV) improved with serial sampling. Evidence for use of cTn point of care testing (POCT) was insufficient to make recommendations, though some evidence suggests cTn POCT may result in reduction to patient length of stay and costs. Conclusions Two best practice recommendations emerged from the systematic review and meta-analysis of literature conducted using the LMBP™ A6 Method criteria: Testing with cardiac troponin assays, using the 99th percentile URL as the clinical diagnostic threshold for the diagnosis of NSTEMI and without additional biomarkers, is recommended. Also recommended is serial cardiac troponin sampling with one sample at presentation and at least one additional sample taken a minimum of 6 hours later to identify a rise or fall in the troponin level. Testing with high-sensitivity cardiac troponin assays, at presentation and again within 6 hours, is the

  20. Iterative metal artifact reduction improves dose calculation accuracy. Phantom study with dental implants

    Energy Technology Data Exchange (ETDEWEB)

    Maerz, Manuel; Mittermair, Pia; Koelbl, Oliver; Dobler, Barbara [Regensburg University Medical Center, Department of Radiotherapy, Regensburg (Germany); Krauss, Andreas [Siemens Healthcare GmbH, Forchheim (Germany)

    2016-06-15

    Metallic dental implants cause severe streaking artifacts in computed tomography (CT) data, which affect the accuracy of dose calculations in radiation therapy. The aim of this study was to investigate the benefit of the iterative metal artifact reduction (iMAR) algorithm in terms of correct representation of Hounsfield units (HU) and dose calculation accuracy. Heterogeneous phantoms consisting of different types of tissue-equivalent material surrounding metallic dental implants were designed. Artifact-containing CT data of the phantoms were corrected using iMAR. Corrected and uncorrected CT data were compared to synthetic CT data to evaluate the accuracy of HU reproduction. Intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) plans were calculated in Oncentra v4.3 on corrected and uncorrected CT data and compared to Gafchromic™ EBT3 films to assess the accuracy of dose calculation. The use of iMAR increased the accuracy of HU reproduction. The average deviation of HU decreased from 1006 HU to 408 HU in areas including metal and from 283 HU to 33 HU in tissue areas excluding metal. Dose calculation accuracy could be significantly improved for all phantoms and plans: the mean passing rate for gamma evaluation with 3% dose tolerance and 3 mm distance to agreement increased from 90.6% to 96.2% if artifacts were corrected by iMAR. The application of iMAR allows metal artifacts to be removed to a great extent, which leads to a significant increase in dose calculation accuracy. (orig.) [German abstract, translated] Metallic implants cause streak artifacts in CT images, which influence dose calculation. This study investigates the benefit of the iterative metal artifact reduction algorithm iMAR with regard to the fidelity of Hounsfield units (HU) and the accuracy of dose calculations. Heterogeneous phantoms made of different types of tissue-equivalent material with
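
    The gamma evaluation used above (3% dose tolerance, 3 mm distance to agreement) can be sketched in one dimension; the dose profiles are invented, and a clinical implementation works on 2-D film planes with global or local normalization options.

```python
import math

def gamma_1d(ref, meas, spacing_mm=1.0, dose_tol=0.03, dta_mm=3.0):
    """Per-point 1-D gamma index, globally normalized to the reference maximum."""
    dmax = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, dm in enumerate(meas):
            dose_term = (dm - dr) / (dose_tol * dmax)
            dist_term = (j - i) * spacing_mm / dta_mm
            best = min(best, math.hypot(dose_term, dist_term))
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Fraction of points with gamma <= 1 (the quoted 'passing rate')."""
    return sum(1 for g in gammas if g <= 1.0) / len(gammas)

ref = [0.0, 5.0, 50.0, 95.0, 100.0, 100.0]    # hypothetical dose profiles (%)
meas = [0.0, 5.5, 51.0, 94.0, 99.0, 101.0]    # within 3%/3 mm everywhere
rate = passing_rate(gamma_1d(ref, meas))
```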

  1. Increasing cutaneous afferent feedback improves proprioceptive accuracy at the knee in patients with sensory ataxia.

    Science.gov (United States)

    Macefield, Vaughan G; Norcliffe-Kaufmann, Lucy; Goulding, Niamh; Palma, Jose-Alberto; Fuente Mora, Cristina; Kaufmann, Horacio

    2016-02-01

    Hereditary sensory and autonomic neuropathy type III (HSAN III) features disturbed proprioception and a marked ataxic gait. We recently showed that joint angle matching error at the knee is positively correlated with the degree of ataxia. Using intraneural microelectrodes, we also documented that these patients lack functional muscle spindle afferents but have preserved large-diameter cutaneous afferents, suggesting that patients with better proprioception may be relying more on proprioceptive cues provided by tactile afferents. We tested the hypothesis that enhancing cutaneous sensory feedback by stretching the skin at the knee joint using unidirectional elasticity tape could improve proprioceptive accuracy in patients with a congenital absence of functional muscle spindles. Passive joint angle matching at the knee was used to assess proprioceptive accuracy in 25 patients with HSAN III and 9 age-matched control subjects, with and without taping. Angles of the reference and indicator knees were recorded with digital inclinometers and the absolute error, gradient, and correlation coefficient between the two sides calculated. Patients with HSAN III performed poorly on the joint angle matching test [mean matching error 8.0 ± 0.8° (±SE); controls 3.0 ± 0.3°]. Following application of tape bilaterally to the knee in an X-shaped pattern, proprioceptive performance improved significantly in the patients (mean error 5.4 ± 0.7°) but not in the controls (3.0 ± 0.2°). Across patients, but not controls, significant increases in gradient and correlation coefficient were also apparent following taping. We conclude that taping improves proprioception at the knee in HSAN III, presumably via enhanced sensory feedback from the skin.

  2. The Effect of Written Corrective Feedback on Grammatical Accuracy of EFL Students: An Improvement over Previous Unfocused Designs

    Science.gov (United States)

    Khanlarzadeh, Mobin; Nemati, Majid

    2016-01-01

    The effectiveness of written corrective feedback (WCF) in the improvement of language learners' grammatical accuracy has been a topic of interest in SLA studies for the past couple of decades. The present study reports the findings of a three-month study investigating the effect of direct unfocused WCF on the grammatical accuracy of elementary…

  3. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
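
    The sub-model idea can be sketched with ordinary least squares standing in for PLS: train one model per composition range, then route (or blend) through a full-range model's first guess. The synthetic quadratic relation and the routing cut are illustrative assumptions, not ChemCam's calibration.

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression (stand-in for PLS)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# A nonlinear feature-vs-composition relation that one linear model
# fits poorly over the full range.
xs = list(range(11))
ys = [x * x for x in xs]

full = fit_line(xs, ys)          # one model over the full range
low = fit_line(xs[:6], ys[:6])   # sub-model for low compositions
high = fit_line(xs[5:], ys[5:])  # sub-model for high compositions

def submodel_predict(x, cut=25.0):
    """Route by the full-range model's first guess, as in the blended scheme."""
    first = predict(full, x)
    return predict(low if first < cut else high, x)
```

At both ends of the range the routed sub-model lands much closer to the true value than the single full-range fit.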

  4. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    Science.gov (United States)

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  5. Applicability and accuracy improvement of transient elastography using the M and XL probes by experienced operators.

    Science.gov (United States)

    Carrión, J A; Puigvehí, M; Coll, S; Garcia-Retortillo, M; Cañete, N; Fernández, R; Márquez, C; Giménez, M D; Garcia, M; Bory, F; Solà, R

    2015-03-01

    Transient elastography (TE) is the reference method to obtain liver stiffness measurements (LSM), but no results are obtained in 3.1% of cases and unreliable results in 15.8%. We assessed the applicability and diagnostic accuracy of TE re-evaluation using the M and XL probes. From March 2011 to April 2012, 868 LSM were performed with the M probe by trained operators (50-500 studies) (LSM1). Measurements were categorized as inadequate (no values, ratio <60%, or IQR/LSM >30%) or adequate. Inadequate LSM1 were re-evaluated by experienced operators (>500 explorations) (LSM2) and inadequate LSM2 using the XL probe (LSMXL). Inadequate LSM1 were obtained in 187 (21.5%) patients: IQR/LSM >30% in 97 (51%), ratio <60% in 24 (13%), and TE failed to obtain a measurement in 67 (36%). LSM2 achieved adequate registers in 123 (70%) of 175 registers previously considered inadequate. Independent variables (OR, 95%CI) related to inadequate LSM1 were body mass index (1.11, 1.04-1.18), abdominal circumference (1.03, 1.01-1.06) and age (1.03, 1.01-1.04), and those related to inadequate LSM2 were skin-capsule distance (1.21, 1.09-1.34) and abdominal circumference (1.05, 1.01-1.10). The diagnostic accuracy (AUROC) for identifying significant fibrosis improved from 0.89 (LSM1) to 0.91 (LSM2) (P = 0.046) in 334 patients with liver biopsy or clinically significant portal hypertension. A third evaluation (LSMXL) obtained adequate registers in 41 (93%) of 44 patients with inadequate LSM2. Operator experience increases the applicability and diagnostic accuracy of TE. The XL probe may be recommended for patients with inadequate values obtained by experienced operators using the M probe. http://clinicaltrials.gov (NCT01900808).

  6. Knee joint secondary motion accuracy improved by quaternion-based optimizer with bony landmark constraints.

    Science.gov (United States)

    Wang, Hongsheng; Zheng, Naiqaun Nigel

    2010-12-01

    Skin marker-based motion analysis has been widely used in biomechanical studies and clinical applications. Unfortunately, the accuracy of knee joint secondary motions is largely limited by the nonrigid nature of human body segments. Numerous studies have investigated the characteristics of soft tissue movement; utilizing these characteristics, we may improve the accuracy of knee joint motion measurement. An optimizer was developed by incorporating the soft tissue movement patterns at special bony landmarks into constraint functions. Bony landmark constraints were assigned to the skin markers at the femur epicondyles, tibial plateau edges, and tibial tuberosity in a motion analysis algorithm by limiting their allowed position space relative to the underlying bone. The rotation matrix was represented by a quaternion, and the constrained optimization problem was solved by Fletcher's version of the Levenberg-Marquardt optimization technique. The algorithm was validated by using motion data from both skin-based markers and bone-mounted markers attached to fresh cadavers. By comparing the results with the ground truth bone motion generated from the bone-mounted markers, the new algorithm had a significantly higher accuracy (root-mean-square (RMS) error: 0.7 ± 0.1 deg in axial rotation and 0.4 ± 0.1 deg in varus-valgus) in estimating the knee joint secondary rotations than algorithms without bony landmark constraints (RMS error: 1.7 ± 0.4 deg in axial rotation and 0.7 ± 0.1 deg in varus-valgus). It also predicts a more accurate medial-lateral translation (RMS error: 0.4 ± 0.1 mm) than the conventional techniques (RMS error: 1.2 ± 0.2 mm). The new algorithm, using bony landmark constraints, estimates more accurate secondary rotations and medial-lateral translation of the underlying bone.
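
    The quaternion representation used by the optimizer can be sketched as follows; the constrained Levenberg-Marquardt solver itself is beyond a short example, so this shows only the unit-quaternion-to-rotation step it builds on, applied to a 90-degree axial rotation.

```python
import math

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [[1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
            [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
            [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)]]

def rotate(q, v):
    """Rotate vector v by unit quaternion q."""
    R = quat_to_matrix(q)
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

# 90-degree rotation about the z axis (axial rotation).
half = math.radians(90.0) / 2.0
q = (math.cos(half), 0.0, 0.0, math.sin(half))
p = rotate(q, [1.0, 0.0, 0.0])   # a marker position on the x axis
```

Parameterizing rotation this way keeps the optimizer's rotation estimate orthonormal by construction, at the cost of a unit-norm constraint on q.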

  7. Limits of diagnostic accuracy of anti-hepatitis C virus antibodies detection by ELISA and immunoblot assay.

    Science.gov (United States)

    Suslov, Anatoly P; Kuzin, Stanislav N; Golosova, Tatiana V; Shalunova, Nina V; Malyshev, Nikolai A; Sadikova, Natalia V; Vavilova, Lubov M; Somova, Anna V; Musina, Elena E; Ivanova, Maria V; Kipor, Tatiana T; Timonin, Igor M; Kuzina, Lubov E; Godkov, Mihail A; Bajenov, Alexei I; Nesterenko, Vladimir G

    2002-07-01

    When human serum samples are tested for anti-hepatitis C virus (HCV) antibodies using different ELISA kits as well as immunoblot assay kits, discrepant results often occur. As a result, the HCV infection status of such sera remains unclear. The purpose of this investigation is to define the limits of HCV serodiagnostics. Overall, 7 different test kits from domestic and foreign manufacturers were used to test the sampled sera. A preliminary comparative study using seroconversion panels PHV905, PHV907, and PHV908 was performed, and a reference kit (Murex anti-HCV version 4) was chosen as the most sensitive on the basis of its results. Overall, 1640 serum samples were screened using different anti-HCV ELISA kits, and 667 of them gave discrepant results in at least two kits. These sera were then tested using three anti-HCV ELISA kits (first set of 377 samples) or four anti-HCV ELISA kits (second set of 290 samples) under reference-laboratory conditions. In the first set 17.2% of samples remained discrepant, and in the second set 13.4%. "Discrepant" sera were further tested with the RIBA 3.0 and INNO-LIA immunoblot confirmatory assays, but approximately 5-7% of them remained undetermined after all the tests. For samples with a signal-to-cutoff ratio higher than 3.0, a high rate of consistency among the reference kit, routine ELISA, and INNO-LIA immunoblot results was observed. In contrast, the results of testing 27 "problematic" sera with RIBA 3.0 and INNO-LIA were consistent in only 55.5% of cases. Analysis of the antigen spectrum reactive with antibodies in "problematic" sera demonstrated a predominance of Core, NS3, and NS4 antigens for sera positive in RIBA 3.0, and of Core and NS3 antigens for sera positive in INNO-LIA. To overcome the problem of undetermined sera, methods based on other principles, as well as alternative diagnostic criteria for HCV infection, are discussed.

  8. Considerations for improving assay sensitivity in chronic pain clinical trials: IMMPACT recommendations.

    Science.gov (United States)

    Dworkin, Robert H; Turk, Dennis C; Peirce-Sandner, Sarah; Burke, Laurie B; Farrar, John T; Gilron, Ian; Jensen, Mark P; Katz, Nathaniel P; Raja, Srinivasa N; Rappaport, Bob A; Rowbotham, Michael C; Backonja, Misha-Miroslav; Baron, Ralf; Bellamy, Nicholas; Bhagwagar, Zubin; Costello, Ann; Cowan, Penney; Fang, Weikai Christopher; Hertz, Sharon; Jay, Gary W; Junor, Roderick; Kerns, Robert D; Kerwin, Rosemary; Kopecky, Ernest A; Lissin, Dmitri; Malamut, Richard; Markman, John D; McDermott, Michael P; Munera, Catherine; Porter, Linda; Rauschkolb, Christine; Rice, Andrew S C; Sampaio, Cristina; Skljarevski, Vladimir; Sommerville, Kenneth; Stacey, Brett R; Steigerwald, Ilona; Tobias, Jeffrey; Trentacosti, Ann Marie; Wasan, Ajay D; Wells, George A; Williams, Jim; Witter, James; Ziegler, Dan

    2012-06-01

    A number of pharmacologic treatments examined in recent randomized clinical trials (RCTs) have failed to show statistically significant superiority to placebo in conditions in which their efficacy had previously been demonstrated. Assuming the validity of previous evidence of efficacy and the comparability of the patients and outcome measures in these studies, such results may be a consequence of limitations in the ability of these RCTs to demonstrate the benefits of efficacious analgesic treatments vs placebo ("assay sensitivity"). Efforts to improve the assay sensitivity of analgesic trials could reduce the rate of false-negative trials of efficacious medications and improve the efficiency of analgesic drug development. Therefore, an Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials consensus meeting was convened in which the assay sensitivity of chronic pain trials was reviewed and discussed. On the basis of this meeting and subsequent discussions, the authors recommend consideration of a number of patient, study design, study site, and outcome measurement factors that have the potential to affect the assay sensitivity of RCTs of chronic pain treatments. Increased attention to and research on methodological aspects of clinical trials and their relationships with assay sensitivity have the potential to provide the foundation for an evidence-based approach to the design of analgesic clinical trials and expedite the identification of analgesic treatments with improved efficacy and safety.

  9. Evaluation of an improved bioluminescence assay for the detection of bacteria in soy milk.

    Science.gov (United States)

    Shinozaki, Yohei; Sato, Jun; Igarashi, Toshinori; Suzuki, Shigeya; Nishimoto, Kazunori; Harada, Yasuhiro

    2013-01-01

    Because soy milk is nutrient rich and nearly neutral in pH, it favors the growth of microbial contaminants. To ensure that soy milk meets food-safety standards, it must be pasteurized and its sterility confirmed. The ATP bioluminescence assay has become a widely accepted means of detecting food microorganisms. However, the high background bioluminescence intensity of soy milk has rendered it unsuitable for ATP analysis. Here, we tested the efficacy on soy milk of a bioluminescence assay with an improved pretreatment step. By comparing background bioluminescence intensities obtained by the conventional and improved methods, we demonstrated that our method significantly reduces the soy milk background bioluminescence. The dose-response curve of the assay was tested with serial dilutions of a Bacillus sp. culture. An extremely strong log-linear relation between bioluminescence intensity (relative light units) and colony-forming units (CFU/ml) emerged for the tested strain. The detection limit of the assay was estimated as 5.2×10(3) CFU/ml from the dose-response curve, with the signal threshold set at three times the background level. The results showed that contaminated samples could be easily detected within 24 h using our improved bioluminescence assay.
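The detection-limit estimate from a log-linear dose-response curve can be reproduced generically. This is a hedged sketch under the stated assumptions (the variable names and the 3x-background criterion follow the abstract; the data are illustrative, not the paper's):

```python
import numpy as np

def detection_limit(cfu, rlu, background_rlu):
    """Fit log10(RLU) = a * log10(CFU) + b, then return the CFU/ml at which
    the signal reaches three times the background level."""
    a, b = np.polyfit(np.log10(cfu), np.log10(rlu), 1)
    threshold = 3.0 * background_rlu           # imposed signal limit
    lod = 10 ** ((np.log10(threshold) - b) / a)
    return a, b, lod
```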

  10. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium.

    Science.gov (United States)

    Ramstein, Guillaume P; Evans, Joseph; Kaeppler, Shawn M; Mitchell, Robert B; Vogel, Kenneth P; Buell, C Robin; Casler, Michael D

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass and to meet the goal of substantially displacing petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families' parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations, and more generally accounting for linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
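One plausible reading of "transforming the marker data through a marker correlation matrix" is a decorrelation (whitening) of markers through their correlation matrix. The sketch below is an assumption-laden illustration of that idea, not necessarily the exact transformation used by the authors:

```python
import numpy as np

def transform_markers(X):
    """Decorrelate marker columns through the marker correlation (LD) matrix.

    X: (n_individuals, n_markers) genotype matrix. Markers are standardized,
    then projected through C^(-1/2) so that blocks of redundant, high-LD
    markers no longer dominate downstream prediction.
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    C = np.corrcoef(Xs, rowvar=False)        # marker-marker correlation
    w, V = np.linalg.eigh(C)
    w = np.clip(w, 1e-8, None)               # guard near-singular LD blocks
    W = V @ np.diag(w ** -0.5) @ V.T         # symmetric C^(-1/2)
    return Xs @ W
```

After this transformation the empirical marker correlations are near-identity, so a ridge or GBLUP-type model sees each marker on a more equal footing.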

  11. Improving accuracy of protein-protein interaction prediction by considering the converse problem for sequence representation

    Directory of Open Access Journals (Sweden)

    Wang Yong

    2011-10-01

    Full Text Available Abstract Background With the development of genome-sequencing technologies, protein sequences are readily obtained by translating the measured mRNAs. Predicting protein-protein interactions from sequences is therefore in great demand, because identifying protein-protein interactions is becoming a bottleneck for eventually understanding the functions of proteins, especially for organisms that are barely characterized. Although a few methods have been proposed, the converse problem, namely whether the features used extract sufficient and unbiased information from protein sequences, is almost untouched. Results In this study, we interrogate this problem theoretically by an optimization scheme. Motivated by the theoretical investigation, we find novel encoding methods for both protein sequences and protein pairs. Our new methods exploit the information of protein sequences sufficiently and reduce artificial bias and computational cost. Thus, our approach significantly outperforms the available methods regarding sensitivity, specificity, precision, and recall under cross-validation evaluation, and reaches ~80% and ~90% accuracy in Escherichia coli and Saccharomyces cerevisiae, respectively. Our findings have important implications for other sequence-based prediction tasks, because representing biological sequences is always the first step in computational biology. Conclusions By considering the converse problem, we propose new representation methods for both protein sequences and protein pairs. The results show that our method significantly improves the accuracy of protein-protein interaction predictions.
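As a minimal illustration of sequence representation for protein pairs (emphatically not the encoding proposed in this paper), a composition-based, order-invariant pair encoding might look like the following; all names here are assumptions for the sketch:

```python
AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def composition(seq):
    """20-dimensional amino-acid composition (fraction of each residue)."""
    total = max(len(seq), 1)
    return [seq.count(a) / total for a in AA]

def encode_pair(seq_a, seq_b):
    """Order-invariant pair encoding: (A, B) and (B, A) map to the same
    vector, as required when predicting undirected interactions."""
    fa, fb = composition(seq_a), composition(seq_b)
    summed = [x + y for x, y in zip(fa, fb)]
    absdiff = [abs(x - y) for x, y in zip(fa, fb)]
    return summed + absdiff
```

The symmetric sum/absolute-difference combination is one common way to avoid the artificial bias that a simple concatenation of the two sequence vectors would introduce.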

  12. Error-corrected AFM: a simple and broadly applicable approach for substantially improving AFM image accuracy.

    Science.gov (United States)

    Bosse, James L; Huey, Bryan D

    2014-04-18

    Atomic force microscopy (AFM) has become an indispensable tool for imaging the topography and properties of surfaces at the nanoscale. A ubiquitous problem, however, is that optimal accuracy demands smooth surfaces, slow scanning, and expert users, contrary to many AFM applications and practical use patterns. Accordingly, a simple correction to AFM topographic images is implemented, incorporating error signals such as deflection and/or amplitude data that have long been available but quantitatively underexploited. This is demonstrated to substantially improve both height and lateral accuracy for expert users, with a corresponding 3-5 fold decrease in image error. Common image artifacts due to inexperienced AFM use, generally poorly scanned surfaces, or high-speed images acquired in as little as 7 s, are also shown to be effectively rectified, returning results equivalent to standard 'expert-user' images. This concept is demonstrated for contact-mode AFM, AC-mode, and high-speed imaging, as well as property mapping such as phase contrast, with obvious extensions to many specialized AFM variations. Conveniently, as this correction procedure is based on either real-time or post-processing data, it is easily employed for future as well as legacy AFM systems and data. Such error-corrected AFM therefore offers a simple, broadly applicable approach for more accurate, more efficient, and more user-friendly implementation of AFM for nanoscale topography and property mapping.
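For contact-mode data, the correction is conceptually a per-pixel sum of the height channel and the scaled deflection error signal. A minimal sketch, assuming a known optical-lever sensitivity (the function and parameter names are illustrative, not the authors' API):

```python
import numpy as np

def correct_topography(height_nm, deflection_error_v, sensitivity_nm_per_v):
    """Add the scaled deflection error signal back onto the height channel.

    The deflection error records topography that the feedback loop failed
    to track; adding it back, scaled by the optical-lever sensitivity,
    restores features attenuated by the finite feedback speed.
    """
    return height_nm + deflection_error_v * sensitivity_nm_per_v
```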

  13. A metrological approach to improve accuracy and reliability of ammonia measurements in ambient air

    Science.gov (United States)

    Pogány, Andrea; Balslev-Harder, David; Braban, Christine F.; Cassidy, Nathan; Ebert, Volker; Ferracci, Valerio; Hieta, Tuomas; Leuenberger, Daiana; Martin, Nicholas A.; Pascale, Céline; Peltola, Jari; Persijn, Stefan; Tiebe, Carlo; Twigg, Marsailidh M.; Vaittinen, Olavi; van Wijk, Janneke; Wirtz, Klaus; Niederhauser, Bernhard

    2016-11-01

    The environmental impacts of ammonia (NH3) in ambient air have become more evident in recent decades, leading to intensifying research in this field. A number of novel analytical techniques and monitoring instruments have been developed, and the quality and availability of reference gas mixtures used for the calibration of measuring instruments have also increased significantly. However, recent inter-comparison measurements show significant discrepancies, indicating that the majority of the newly developed devices and reference materials require further thorough validation. There is a clear need for more intensive metrological research focusing on quality assurance, intercomparability, and validation. MetNH3 (Metrology for ammonia in ambient air) is a three-year project within the framework of the European Metrology Research Programme (EMRP), which aims to bring metrological traceability to ambient ammonia measurements in the 0.5-500 nmol mol-1 amount fraction range. This is addressed by working in three areas: (1) improving accuracy and stability of static and dynamic reference gas mixtures, (2) developing an optical transfer standard and (3) establishing the link between high-accuracy metrological standards and field measurements. In this article we describe the concept, aims and first results of the project.

  14. How could the replica method improve accuracy of performance assessment of channel coding?

    Science.gov (United States)

    Kabashima, Yoshiyuki

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.

  15. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    Science.gov (United States)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) has been performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18 % were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments that were delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. 
This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorous experimentally and Monte-Carlo-determined correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a

  16. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    Science.gov (United States)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models suffer from displacement errors. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to propose a post-processing ensemble flood forecasting method for the real-time updating and accuracy improvement of flood forecasts that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information generated through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then passed through the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. The additional ensemble information is then applied to a hydrologic model for post-processing flood forecasting at a 6-h interval. The newly proposed method clearly improves the accuracy of the ensemble-mean flood forecast. Our study is carried out and verified using the largest flood event, caused by Typhoon Talas in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located in the Kii peninsula, Japan.
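The separate-shift-recombine step can be sketched as follows, assuming gridded rain fields and integer grid shifts; this is a simplification of the transposition scheme described above, with all names chosen for the sketch:

```python
import numpy as np

def transposed_ensemble(total_rain, orographic_rain, shifts):
    """Generate additional ensemble members by transposing the
    non-orographic rain component while keeping the orographic
    component anchored to the topography."""
    non_oro = total_rain - orographic_rain
    members = []
    for dy, dx in shifts:
        shifted = np.roll(non_oro, (dy, dx), axis=(0, 1))
        members.append(orographic_rain + shifted)
    return members
```

Each shifted member would then be fed to the hydrologic model, enlarging the ensemble with place-corrected rainfall scenarios.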

  17. The use of imprecise processing to improve accuracy in weather & climate prediction

    Science.gov (United States)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and

  18. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    Directory of Open Access Journals (Sweden)

    Marta Torralba

    2016-01-01

    Full Text Available Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications (e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is verified by a finite element analysis. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axes, respectively.
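At its core, an error budget combines uncorrelated source contributions in quadrature. A minimal sketch of that combination step; the example contributions are hypothetical, not the NanoPla values:

```python
import math

def error_budget(contributions):
    """Combine uncorrelated error contributions in quadrature (root-sum-square)."""
    return math.sqrt(sum(e * e for e in contributions))

# hypothetical per-axis contributions in nm: thermal drift, mirror
# flatness, Abbe offset, assembly tolerance
x_axis_error_nm = error_budget([25.0, 20.0, 15.0, 18.0])
```

Correlated sources would instead add linearly before squaring, which is one of the modelling choices an error budget must justify for each term.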

  19. Performance of a Taqman Assay for Improved Detection and Quantification of Human Rhinovirus Viral Load

    Science.gov (United States)

    Ng, Kim Tien; Chook, Jack Bee; Oong, Xiang Yong; Chan, Yoke Fun; Chan, Kok Gan; Hanafi, Nik Sherina; Pang, Yong Kek; Kamarulzaman, Adeeba; Tee, Kok Keng

    2016-01-01

    Human rhinovirus (HRV) is the major aetiology of respiratory tract infections. HRV viral load assays are available, but they have limitations that affect accurate quantification. We developed a one-step Taqman assay using oligonucleotides designed from a comprehensive list of global HRV sequences. The new oligonucleotides targeting the 5′-UTR region showed high PCR efficiency (E = 99.6%, R2 = 0.996), with quantifiable viral load as low as 2 viral copies/μl. Assay evaluation using an External Quality Assessment (EQA) panel yielded a detection rate of 90%. When tested on 315 human enterovirus-positive specimens comprising at least 84 genetically distinct HRV types/serotypes (determined by VP4/VP2 gene phylogenetic analysis), the assay detected all HRV species and types, as well as other non-polio enteroviruses. A commercial quantification kit, which failed to detect any of the EQA specimens, produced a detection rate of 13.3% (42/315) among the clinical specimens. Using the improved assay, we showed that HRV is shed in the upper respiratory tract for more than a week following acute infection. We also showed that HRV-C had a significantly higher viral load at 2–7 days after the onset of symptoms (p = 0.001). The availability of such an assay is important to facilitate disease management, antiviral development, and infection control. PMID:27721388
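The reported PCR efficiency (E = 99.6%) is conventionally derived from the slope of a standard curve of Ct against log10 template copies. A generic sketch of that calculation (variable names are illustrative; the data below are synthetic, not the paper's):

```python
import numpy as np

def pcr_efficiency(log10_copies, ct_values):
    """Amplification efficiency from a qPCR standard curve.

    Fits Ct = slope * log10(copies) + intercept; perfect doubling per
    cycle gives slope ~= -3.32 and E = 10^(-1/slope) - 1 = 1.0 (100%).
    """
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return efficiency, slope, intercept
```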

  20. Qmerit-calibrated overlay to improve overlay accuracy and device performance

    Science.gov (United States)

    Ullah, Md Zakir; Jazim, Mohamed Fazly Mohamed; Sim, Stella; Lim, Alan; Hiem, Biow; Chuen, Lieu Chia; Ang, Jesline; Lim, Ek Chow; Klein, Dana; Amit, Eran; Volkovitch, Roie; Tien, David; Choi, DongSub

    2015-03-01

    In advanced semiconductor manufacturing, the overlay error budget is getting tighter due to technology shrinkage. To fulfill the tighter overlay requirements, gaining every nanometer of improved overlay is very important in order to accelerate yield in high-volume manufacturing (HVM) fabs. To meet the stringent overlay requirements and to overcome other unforeseen situations, it is becoming critical to eliminate even the smallest imperfections in the targets used for overlay metrology. For standard cases, the overlay metrology recipe is selected based on total measurement uncertainty (TMU). However, under certain circumstances, inaccuracy due to target imperfections can become the dominant contributor to the metrology uncertainty and cannot be detected or quantified by the standard TMU. For optical-based overlay (OBO) metrology targets, mark asymmetry is a common issue that can cause measurement inaccuracy, and it is not captured by standard TMU. In this paper, a new calibration method, Archer Self-Calibration (ASC), has been established successfully in HVM fabs to improve overlay accuracy on image-based overlay (IBO) metrology targets. Additionally, a new color selection methodology has been developed for the overlay metrology recipe as part of this calibration method. In this study, Qmerit-calibrated data have been used in the run-to-run control loop for multiple devices. This study shows that the color filter can be chosen more precisely with the help of Qmerit data. Overlay stability improved by 10-20% with the best color selection, without any negative impact on the products. Residual error, as well as overlay mean plus 3-sigma, showed an improvement of up to 20% when Qmerit-calibrated data were used. A 30% improvement was seen in certain electrical data associated with the tested process layers.

  1. Avoiding the ventricle : a simple step to improve accuracy of anatomical targeting during deep brain stimulation

    NARCIS (Netherlands)

    Zrinzo, Ludvic; van Hulzen, Arjen L. J.; Gorgulho, Alessandra A.; Limousin, Patricia; Staal, Michiel J.; De Salles, Antonio A. F.; Hariz, Marwan I.

    2009-01-01

    Object. The authors examined the accuracy of anatomical targeting during electrode implantation for deep brain stimulation in functional neurosurgical procedures. Special attention was focused on the impact that ventricular involvement of the electrode trajectory had on targeting accuracy. Methods.

  2. An improved Bathocuproine assay for accurate valence identification and quantification of copper bound by biomolecules.

    Science.gov (United States)

    Chen, Dinglong; Darabedian, Narek; Li, Zhiqiang; Kai, Tianhan; Jiang, Dianlu; Zhou, Feimeng

    2016-03-15

    Copper is an essential metal in all organisms. Reliably quantifying copper content and identifying its oxidation state is crucial, since this information is essential to understanding protein structure and function. Chromophoric ligands, such as Bathocuproine (BC) and its water-soluble analog, Bathocuproinedisulfonic acid (BCS), preferentially bind Cu(I) over Cu(II) and have therefore been widely used as optical probes to determine the oxidation state of copper bound by biomolecules. However, the BCS assay is commonly misused, leading to erroneous conclusions regarding the role of copper in biological processes. By measuring the redox potential of Cu(II)-BCS2 and conducting UV-vis absorption measurements in the presence of oxidizable amino acids, the thermodynamic origin of the potential artifacts becomes evident. The BCS assay was improved by introducing a strong Cu(II) chelator, EDTA, prior to the addition of BCS to prevent interference that might arise from Cu(II) present in the sample. The strong Cu(II) chelator eliminates the potential errors inherent in the conventional BCS assay. Applications of the improved assay to peptides and proteins containing oxidizable amino acid residues confirm that free Cu(II) no longer leads to artifacts, thereby resolving issues related to this persistently misused colorimetric assay of Cu(I) in biological systems.
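Quantification of the Cu(I)-BCS2 complex ultimately rests on the Beer-Lambert law, c = A / (epsilon * l). A sketch of that step; the default molar absorptivity here is a placeholder assumption and should be replaced by the value from the calibration in use:

```python
def cu1_concentration_uM(absorbance, epsilon_M_cm=12250.0, path_cm=1.0):
    """Cu(I)-BCS2 concentration (micromolar) via the Beer-Lambert law.

    epsilon_M_cm is a placeholder molar absorptivity for the Cu(I)-BCS2
    absorption band; substitute the value from your own calibration.
    """
    return absorbance / (epsilon_M_cm * path_cm) * 1e6
```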

  3. An improved haemolytic plaque assay for the detection of cells secreting antibody to bacterial antigens

    DEFF Research Database (Denmark)

    Barington, T; Heilmann, C

    1992-01-01

    Recent advances in the development of conjugate polysaccharide vaccines for human use have stimulated interest in the use of assays detecting antibody-secreting cells (AbSC) with specificity for bacterial antigens. Here we present improved haemolytic plaque-forming cell (PFC) assays detecting AbSC with specificity for tetanus and diphtheria toxoid as well as for Haemophilus influenzae type b and pneumococcal capsular polysaccharides. These assays were found to be less time consuming, more economical and yielded 1.9-3.4-fold higher plaque numbers than traditional Jerne-type PFC assays. In the case of anti-polysaccharide AbSC of the IgG isotype, the increase was as high as 7.4-11.8 times. Evidence is presented that the pronounced improvement in the detection of the latter is due to the presence of aggregating anti-IgG antibody from the beginning of the assay. It is proposed that in the case of low affinity of anti...

  4. Diagnostic accuracy of an IgM enzyme-linked immunosorbent assay and comparison with 2 polymerase chain reactions for early diagnosis of human leptospirosis.

    Science.gov (United States)

    Vanasco, N B; Jacob, P; Landolt, N; Chiani, Y; Schmeling, M F; Cudos, C; Tarabla, H; Lottersberger, J

    2016-04-01

    Enzyme-linked immunosorbent assay (ELISA) tests and polymerase chain reaction (PCR) may play a key role in the early detection and treatment of human leptospirosis in developing countries. The aims of this study were to develop and validate an IgM ELISA under field conditions and to compare the diagnostic accuracy of IgG and IgM ELISAs, conventional PCR (cPCR), and real-time PCR (rtPCR) for early detection of human leptospirosis. Overall accuracy of the IgM ELISA was: sensitivity, 87.9%; specificity, 97.0%; and area under the curve, 0.940. When the 4 methods were compared, the IgM ELISA showed the greatest diagnostic accuracy (J=0.6), followed by rtPCR (J=0.4), cPCR (J=0.2), and IgG ELISA (J=0.1). Our results support the use of the IgM ELISA and rtPCR for early diagnosis of the disease. Moreover, due to their high specificity, they could also be useful to replace or supplement the microscopic agglutination test as a confirmatory test, allowing more confirmations.
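The J values quoted above are Youden's J statistic, a standard single-number summary of diagnostic accuracy. The formula is simple to state; the confusion-table counts in the usage example are hypothetical:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

def youden_j(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

# hypothetical counts: 90 true positives, 10 false negatives,
# 95 true negatives, 5 false positives
se, sp = sens_spec(tp=90, fn=10, tn=95, fp=5)
j = youden_j(se, sp)
```

Note that the J values reported in the abstract come from the authors' own paired comparisons of the four methods, so they need not equal the J implied by the overall sensitivity and specificity figures.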

  5. Optimal Cutoff and Accuracy of an IgM Enzyme-Linked Immunosorbent Assay for Diagnosis of Acute Scrub Typhus in Northern Thailand: an Alternative Reference Method to the IgM Immunofluorescence Assay.

    Science.gov (United States)

    Blacksell, Stuart D; Lim, Cherry; Tanganuchitcharnchai, Ampai; Jintaworn, Suthatip; Kantipong, Pacharee; Richards, Allen L; Paris, Daniel H; Limmathurotsakul, Direk; Day, Nicholas P J

    2016-06-01

    The enzyme-linked immunosorbent assay (ELISA) has been proposed as an alternative serologic diagnostic test to the indirect immunofluorescence assay (IFA) for scrub typhus. Here, we systematically determine the optimal sample dilution and cutoff optical density (OD) and estimate the accuracy of IgM ELISA using Bayesian latent class models (LCMs). Data from 135 patients with undifferentiated fever were reevaluated using Bayesian LCMs. Every patient was evaluated for the presence of an eschar and tested with a blood culture for Orientia tsutsugamushi, three different PCR assays, and an IgM IFA. The IgM ELISA was performed for every sample at sample dilutions from 1:100 to 1:102,400 using crude whole-cell antigens of the Karp, Kato, and Gilliam strains of O. tsutsugamushi developed by the Naval Medical Research Center. We used Bayesian LCMs to generate unbiased receiver operating characteristic curves and found that the sample dilution of 1:400 was optimal for the IgM ELISA. With the optimal cutoff OD of 1.474 at a sample dilution of 1:400, the IgM ELISA had a sensitivity of 85.7% (95% credible interval [CrI], 77.4% to 86.7%) and a specificity of 98.1% (95% CrI, 97.2% to 100%) using paired samples. For the ELISA, the OD could be determined objectively and quickly, in contrast to the reading of IFA slides, which was both subjective and labor-intensive. The IgM ELISA for scrub typhus has high diagnostic accuracy and is less subjective than the IgM IFA. We suggest that the IgM ELISA may be used as an alternative reference test to the IgM IFA for the serological diagnosis of scrub typhus.
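    The cutoff-selection step can be illustrated as a scan over candidate OD thresholds that maximizes Youden's J against a reference classification. The study did this inside Bayesian latent class models to avoid relying on an imperfect gold standard; the direct scan below, with made-up ODs and labels, is a simplified stand-in:

```python
def best_cutoff(ods, labels):
    """Return (J, cutoff) maximizing sensitivity + specificity - 1.

    ods: optical densities; labels: 1 = scrub typhus, 0 = not.
    """
    pos = [od for od, y in zip(ods, labels) if y == 1]
    neg = [od for od, y in zip(ods, labels) if y == 0]
    best = None
    for cut in sorted(set(ods)):
        sens = sum(od >= cut for od in pos) / len(pos)
        spec = sum(od < cut for od in neg) / len(neg)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, cut)
    return best

# Hypothetical ODs at a 1:400 sample dilution:
ods    = [0.2, 0.4, 0.5, 1.6, 1.8, 2.1, 0.3, 1.5]
labels = [0,   0,   0,   1,   1,   1,   0,   1]
print(best_cutoff(ods, labels))  # (1.0, 1.5)
```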

  7. Coval: Improving Alignment Quality and Variant Calling Accuracy for Next-Generation Sequencing Data

    Science.gov (United States)

    Kosugi, Shunichi; Natsume, Satoshi; Yoshida, Kentaro; MacLean, Daniel; Cano, Liliana; Kamoun, Sophien; Terauchi, Ryohei

    2013-01-01

    Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short read alignments. Coval is designed to minimize the incidence of spurious alignment of short reads, by filtering mismatched reads that remained in alignments after local realignment and error correction of mismatched reads. The error correction is executed based on the base quality and allele frequency at the non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data of rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in ‘targeted’ alignments, where the whole genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/. PMID:24116042
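    The core filtering idea (discard alignments that still carry too many mismatches after local realignment and error correction) can be sketched as below. This is an illustrative toy with hypothetical helper names, not Coval's actual implementation:

```python
def count_mismatches(read, reference, offset):
    """Mismatches between a read and the reference at its mapped offset."""
    segment = reference[offset:offset + len(read)]
    return sum(a != b for a, b in zip(read, segment))

def filter_alignments(alignments, reference, max_mismatches=2):
    """Keep only (read, offset) pairs with at most max_mismatches."""
    return [(r, o) for r, o in alignments
            if count_mismatches(r, reference, o) <= max_mismatches]

ref = "ACGTACGTACGT"
alns = [("ACGT", 0), ("AGGT", 0), ("TTTT", 4)]  # 0, 1 and 3 mismatches
print(filter_alignments(alns, ref))  # [('ACGT', 0), ('AGGT', 0)]
```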

  9. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    Science.gov (United States)

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481
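    The guidance conditions compared above reduce to one formula: a force proportional to the distance to the target, with its direction rotated by 0, 90 or 180 degrees relative to the target direction. A sketch under that reading (the gain k is a made-up value):

```python
import math

def guidance_force(pos, target, k=2.0, rotation_deg=0.0):
    """Force that scales with distance to the target, optionally rotated."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    th = math.radians(rotation_deg)
    fx = k * (dx * math.cos(th) - dy * math.sin(th))
    fy = k * (dx * math.sin(th) + dy * math.cos(th))
    return fx, fy

print(guidance_force((0, 0), (1, 0)))                    # (2.0, 0.0) assistive
print(guidance_force((0, 0), (1, 0), rotation_deg=180))  # ≈ (-2.0, 0.0) opposing
```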

  10. Improving diagnostic accuracy using agent-based distributed data mining system.

    Science.gov (United States)

    Sridhar, S

    2013-09-01

    The use of data mining techniques to improve diagnostic system accuracy is investigated in this paper. Data mining algorithms aim to discover patterns and extract useful knowledge from facts recorded in databases. Generally, expert systems are constructed for automating diagnostic procedures. The learning component uses data mining algorithms to extract the expert system rules from the database automatically. Learning algorithms can assist the clinicians in extracting knowledge automatically. As the number and variety of data sources are dramatically increasing, another way to acquire knowledge from databases is to apply various data mining algorithms that extract knowledge from data. As data sets are inherently distributed, the distributed system uses agents to transport the trained classifiers and uses meta learning to combine the knowledge. Commonsense reasoning is also used in association with distributed data mining to obtain better results. Combining human expert knowledge and data mining knowledge improves the performance of the diagnostic system. This work suggests a framework for combining human knowledge and the knowledge gained by better data mining algorithms on a renal and gallstone data set.

  11. Automatic digital filtering for the accuracy improving of a digital holographic measurement system

    Science.gov (United States)

    Matrecano, Marcella; Miccio, Lisa; Persano, Anna; Quaranta, Fabio; Siciliano, Pietro; Ferraro, Pietro

    2014-05-01

    Digital holography (DH) is a well-established interferometric tool in optical metrology allowing the investigation of engineered surface shapes with microscale lateral resolution and nanoscale axial precision. With the advent of charge-coupled devices (CCDs) with smaller pixel sizes, high-speed computers and greater pixel numbers, DH became a very feasible technology which offers new possibilities for a large variety of applications. DH presents numerous advantages such as direct access to the phase information, numerical correction of optical aberrations and the ability of numerical refocusing from a single hologram. Furthermore, as an interferometric method, DH offers a nondestructive and non-contact approach to very fragile objects combined with flexibility and a high sensitivity to geometric quantities such as thicknesses and displacements. These features recommend it for the solution of many imaging and measurement problems, such as micro-electro-optomechanical systems (MEMS/MOEMS) inspection and characterization. In this work, we propose to improve the performance of a DH measurement on MEMS devices through digital filters. We have developed an automatic procedure, inserted in the hologram reconstruction process, to selectively filter the hologram spectrum. The purpose is to provide reconstructed images with very little noise, thus increasing the accuracy of the information conveyed and of the measurements performed on the images. Furthermore, by improving the image quality, we aim to make the application of this technique as simple and as accurate as possible.
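    The essence of such spectral filtering is: transform the hologram to the frequency domain, keep a window around the carrier (object) peak, and suppress everything else. A 1-D toy version with a direct DFT is below; the actual procedure is 2-D with automated peak selection:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def bandpass(signal, center_bin, half_width):
    """Zero every spectral bin outside a window around the carrier peak."""
    X = dft(signal)
    kept = [v if abs(k - center_bin) <= half_width else 0
            for k, v in enumerate(X)]
    return [abs(v) for v in idft(kept)]

# A carrier at bin 8 plus a low-frequency disturbance at bin 1:
n = 64
sig = [math.cos(2 * math.pi * 8 * t / n) + 0.5 * math.cos(2 * math.pi * t / n)
       for t in range(n)]
clean = bandpass(sig, 8, 2)  # keeps only the bin-8 sideband, amplitude 0.5
```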

  12. A method for improved accuracy in three dimensions for determining wheel/rail contact points

    Science.gov (United States)

    Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang

    2015-11-01

    Searching for the contact points between wheels and rails is important because these points represent the points of exerted contact forces. In order to obtain an accurate contact point and an in-depth description of wheel/rail contact behaviours on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail, considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, with no need for curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve the computation efficiency and reduce the calculation required. The method is applied to the analysis of the contact between CHINA (CHN) 75 kg/m rails and wheel sets with wear-type treads of China's freight cars. The results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, with a maximum deviation in the wheel/rail contact patch area between the two methods of approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, including both non-worn and worn wheel and rail profiles.
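    In its simplest form, a contact point search reduces to locating the minimum separation between the two profiles; a hedged 2-D toy of that idea is below (the paper's method works in 3-D and accounts for yaw and roll, which this sketch does not):

```python
def contact_point(wheel_z, rail_z):
    """Index and value of the minimum vertical gap between two profiles.

    wheel_z[i], rail_z[i]: heights sampled at a shared lateral coordinate i.
    """
    gaps = [w - r for w, r in zip(wheel_z, rail_z)]
    i = min(range(len(gaps)), key=gaps.__getitem__)
    return i, gaps[i]

wheel = [3.0, 2.25, 1.75, 2.0, 3.0]   # made-up wheel tread heights
rail  = [1.0, 1.5, 1.5, 1.25, 1.0]    # made-up rail head heights
print(contact_point(wheel, rail))     # (2, 0.25)
```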

  13. Improving the assessment of ICESat water altimetry accuracy accounting for autocorrelation

    Science.gov (United States)

    Abdallah, Hani; Bailly, Jean-Stéphane; Baghdadi, Nicolas; Lemarquand, Nicolas

    2011-11-01

    Given that water resources are scarce and are strained by competing demands, it has become crucial to develop and improve techniques to observe the temporal and spatial variations in inland water volume. Due to the lack of data and the heterogeneity of water level stations, remote sensing, and especially altimetry from space, appear as complementary techniques for water level monitoring. In addition to spatial resolution and sampling rates in space or time, one of the most relevant criteria for satellite altimetry on inland water is the accuracy of the elevation data. Here, the accuracy of the ICESat LIDAR altimetry product is assessed over the Great Lakes in North America. The accuracy assessment method used in this paper emphasizes autocorrelation in high temporal frequency ICESat measurements. It also considers uncertainties arising from the in situ lake level reference data. A probabilistic upscaling process was developed, based on several successive ICESat shots averaged in a spatial transect while accounting for autocorrelation between successive shots. The method also applies pre-processing of the ICESat data: saturation correction of ICESat waveforms, spatial filtering to avoid measurement disturbance from land-water transition effects on waveform saturation, and data selection to avoid trends in water elevations across space. Initially this paper analyzes 237 collected ICESat transects, consistent with the available hydrometric ground stations for four of the Great Lakes. By adopting a geostatistical framework, a high frequency autocorrelation between successive shot elevation values was observed and then modeled for 45% of the 237 transects. The modeled autocorrelation was then used to estimate water elevations at the transect scale and the resulting uncertainty for the 117 transects without trend. This uncertainty was 8 times greater than the uncertainty usually computed when no temporal correlation is taken into account.
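    The uncertainty inflation they report follows from the variance of a mean of autocorrelated observations. Under an AR(1)-style model with lag-one correlation rho (an illustrative assumption, not the exact geostatistical model of the paper), the formula is:

```python
def var_of_mean(sigma2, n, rho):
    """Variance of the mean of n equally spaced AR(1)-correlated shots.

    Reduces to sigma2 / n when rho = 0 (independent shots).
    """
    s = sum((1 - k / n) * rho ** k for k in range(1, n))
    return (sigma2 / n) * (1 + 2 * s)

iid = var_of_mean(1.0, 100, 0.0)
corr = var_of_mean(1.0, 100, 0.9)
print(corr / iid)  # ≈ 17: strong autocorrelation sharply inflates uncertainty
```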

  14. Improved diagnostic accuracy of lung perfusion imaging using Tc-99m MAA SPECT

    Energy Technology Data Exchange (ETDEWEB)

    O'Donnell, J.K.; Golish, J.A.; Go, R.T.; Risius, B.; Graor, R.A.; MacIntyre, W.J.; Feiglin, D.H.

    1984-01-01

    The addition of single photon emission computed tomography (SPECT) to pulmonary perfusion imaging should improve diagnostic accuracy by detecting perfusion defects otherwise masked by superimposition of normal lung activity and by reducing problems with interpretation of defects that result from overlying soft tissue or pleural effusions. In order to examine the contribution of SPECT to the scintigraphic evaluation for pulmonary embolus (PE), the authors obtained both planar and SPECT studies in 94 cases of suspected PE. All studies employed 3-4 mCi of Tc-99m MAA and standard six-view planar image acquisition. SPECT raw data of 64 images were then acquired over a 360 degree transaxial rotation with subsequent computer reconstruction. Xe-133 ventilation studies were performed when clinically indicated and tolerated by the patient. For 19 studies, angiographic (AN) correlation was obtained within 24 hours. In 16 of these 19, planar and SPECT both gave a high probability of PE, but SPECT gave better segmental localization and showed better agreement with the number of defects seen at AN. Of 3 indeterminate planar scans, 2 were low probability with SPECT and had negative AN; the third, a patient with Wegener's vasculitis, remained indeterminate with SPECT and had negative AN. Five patients with PE had repeat planar/SPECT/AN studies to evaluate response to treatment. SPECT correlated better with the AN findings in each case. The authors conclude that SPECT perfusion imaging provides better anatomic accuracy for defects representing PE and is the non-invasive technique of choice for documenting response to therapy.

  15. Study on Improvement of Accuracy in Inertial Photogrammetry by Combining Images with Inertial Measurement Unit

    Science.gov (United States)

    Kawasaki, Hideaki; Anzai, Shojiro; Koizumi, Toshio

    2016-06-01

    Inertial photogrammetry is defined as photogrammetry that involves using a camera on which an inertial measurement unit (IMU) is mounted. In inertial photogrammetry, the position and inclination of a shooting camera are calculated using the IMU. An IMU is characterized by error growth caused by time accumulation because acceleration is integrated with respect to time. This study examines the procedure to estimate the position of the camera accurately while shooting using the IMU and the structure from motion (SfM) technology, which is applied in many fields, such as computer vision. When neither the coordinates of the position of the camera nor those of feature points are known, SfM provides a similar positional relationship between the position of the camera and feature points. Therefore, the actual length of positional coordinates is not determined. If the actual length of the position of the camera is unknown, the camera acceleration is obtained by calculating the second order differential of the position of the camera, with respect to the shooting time. The authors had determined the actual length by assigning the position of IMU to the SfM-calculated position. Hence, accuracy decreased because of the error growth, which was the characteristic feature of IMU. In order to solve this problem, a new calculation method was proposed. Using this method, the difference between the IMU-calculated acceleration and the camera-calculated acceleration can be obtained using the method of least squares, and the magnification required for calculating the actual dimension from the position of the camera can be obtained. The actual length can be calculated by multiplying all the SfM point groups by the obtained magnification factor. This calculation method suppresses the error growth, which is due to the time accumulation in IMU, and improves the accuracy of inertial photogrammetry.
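    The proposed magnification step can be written compactly: differentiate the SfM camera track twice by finite differences, then take the scalar s minimizing the squared difference to the IMU accelerations, which has the closed form s = sum(a_sfm * a_imu) / sum(a_sfm ** 2). A noise-free toy under those assumptions:

```python
def second_diff(p, dt):
    """Acceleration from positions by central second differences."""
    return [(p[i - 1] - 2 * p[i] + p[i + 1]) / dt ** 2
            for i in range(1, len(p) - 1)]

def scale_factor(a_sfm, a_imu):
    """Least-squares s such that s * a_sfm ≈ a_imu."""
    return sum(x * y for x, y in zip(a_sfm, a_imu)) / sum(x * x for x in a_sfm)

dt = 0.1
true_scale = 4.0
sfm = [0.01 * i ** 2 for i in range(6)]               # unitless SfM positions
imu = second_diff([true_scale * p for p in sfm], dt)  # metric accelerations
s = scale_factor(second_diff(sfm, dt), imu)
print(s)  # ≈ 4.0: recovers the true scale in this noise-free toy
```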

  16. Improved Performance of Loop-Mediated Isothermal Amplification Assays via Swarm Priming.

    Science.gov (United States)

    Martineau, Rhett L; Murray, Sarah A; Ci, Shufang; Gao, Weimin; Chao, Shih-Hui; Meldrum, Deirdre R

    2017-01-03

    This work describes an enhancement to the loop-mediated isothermal amplification (LAMP) reaction which results in improved performance. Enhancement is achieved by adding a new set of primers to conventional LAMP reactions. These primers are termed "swarm primers" based on their relatively high concentration and their ability to create new amplicons despite the theoretical lack of single-stranded annealing sites. The primers target a region upstream of the FIP/BIP primer recognition sequences on opposite strands, substantially overlapping F1/B1 sites. Thus, despite the addition of a new primer set to an already complex assay, no significant increase in assay complexity is incurred. Swarm priming is presented for three DNA templates: Lambda phage, Synechocystis sp. PCC 6803 rbcL gene, and human HFE. The results of adding swarm primers to conventional LAMP reactions include increased amplification speed, increased indicator contrast, and increased reaction products. For at least one template, minor improvements in assay repeatability are also shown. In addition, swarm priming is shown to be effective at increasing the reaction speed for RNA amplification via RT-LAMP. Collectively, these results suggest that the addition of swarm primers will likely benefit most if not all existing LAMP assays based on state-of-the-art, six-primer reactions.

  17. A multi breed reference improves genotype imputation accuracy in Nordic Red cattle

    DEFF Research Database (Denmark)

    Brøndum, Rasmus Froberg; Ma, Peipei; Lund, Mogens Sandø;

    2012-01-01

    The objective of this study was to investigate if a multi breed reference would improve genotype imputation accuracy from 50K to high density (HD) single nucleotide polymorphism (SNP) marker data in Nordic Red Dairy Cattle, compared to using only a single breed reference. 612,615 SNPs on chromosomes 1-29 remained for analysis. Validation was done by masking markers in true HD data and imputing them using Beagle v. 3.3 and a reference group of either national Red, combined Red, or combined Red and Holstein bulls. Results show a decrease in allele error rate from 2.64, 1.39 and 0.87 percent to 1.75, 0.59 and 0.54 percent for Danish, Swedish and Finnish Red, respectively, when going from a single national reference to a combined Red reference. The larger error rate in the Danish population was caused by a subgroup of 10 animals showing a large proportion of Holstein genetics...
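    The allele error rates quoted compare imputed genotypes against the masked true ones, counting discordant alleles out of all alleles. A sketch under the usual 0/1/2 allele-count coding (an assumption; the abstract does not spell out the exact counting rules):

```python
def allele_error_rate(true_geno, imputed_geno):
    """Fraction of wrongly imputed alleles; each genotype carries 2 alleles."""
    errors = sum(abs(t - i) for t, i in zip(true_geno, imputed_geno))
    return errors / (2 * len(true_geno))

true_g    = [0, 1, 2, 2, 1, 0]
imputed_g = [0, 1, 2, 1, 1, 0]   # one allele wrong at the fourth marker
print(allele_error_rate(true_g, imputed_g))  # ≈ 0.083 (1 of 12 alleles)
```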

  19. Improving Estimation Accuracy of Quasars’ Photometric Redshifts by Integration of KNN and SVM

    Science.gov (United States)

    Han, Bo; Ding, Hongpeng; Zhang, Yanxia; Zhao, Yongheng

    2015-08-01

    The massive photometric data collected from multiple large-scale sky surveys offer significant opportunities for measuring distances of many celestial objects by photometric redshifts zphot over a wide coverage of the sky. However, catastrophic failure, a long-unsolved problem, exists in current photometric redshift estimation approaches (such as k-nearest-neighbor). In this paper, we propose a novel two-stage approach that integrates the k-nearest-neighbor (KNN) and support vector machine (SVM) methods. In the first stage, we apply the KNN algorithm to photometric data and estimate the corresponding zphot. By analysis, we observe two dense regions with catastrophic failure, one in the range zphot ∈ [0.1, 1.1], the other in the range zphot ∈ [1.5, 2.5]. In the second stage, the photometric multiband input patterns of points falling into these two ranges are mapped from the original attribute space into a high dimensional feature space by a Gaussian kernel function in SVM. In the high dimensional feature space, many badly estimated points resulting from catastrophic failure under simple Euclidean distance computation in KNN can be identified by the SVM classification hyperplane and then corrected. Experimental results based on SDSS data for quasars show that the two-stage fusion approach can significantly mitigate catastrophic failure and improve the estimation accuracy of photometric redshifts.
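    The two-stage routing can be sketched directly: estimate zphot with plain KNN, and only when the estimate falls inside one of the two catastrophic-failure ranges hand it to a second-stage corrector (a stub here; the paper uses an SVM with a Gaussian kernel). The ranges are from the abstract, everything else is illustrative:

```python
def knn_redshift(query, train_x, train_z, k=3):
    """Plain Euclidean KNN: mean redshift of the k nearest neighbours."""
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(query, train_x[i])))
    return sum(train_z[i] for i in order[:k]) / k

FAILURE_RANGES = [(0.1, 1.1), (1.5, 2.5)]  # dense catastrophic-failure zones

def needs_second_stage(z):
    return any(lo <= z <= hi for lo, hi in FAILURE_RANGES)

train_x = [(1.0, 0.2), (1.1, 0.3), (0.9, 0.1), (3.0, 2.0)]  # fake colours
train_z = [0.4, 0.6, 0.5, 2.8]
z = knn_redshift((1.0, 0.2), train_x, train_z)
print(z, needs_second_stage(z))  # 0.5 True (would be routed to the SVM stage)
```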

  20. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    Science.gov (United States)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  1. Regularised Model Identification Improves Accuracy of Multisensor Systems for Noninvasive Continuous Glucose Monitoring in Diabetes Management

    Directory of Open Access Journals (Sweden)

    Mattia Zanon

    2013-01-01

    Full Text Available Continuous glucose monitoring (CGM) by suitable portable sensors plays a central role in the treatment of diabetes, a disease currently affecting more than 350 million people worldwide. Noninvasive CGM (NI-CGM), in particular, is appealing for reasons related to patient comfort (no needles are used) but challenging. NI-CGM prototypes exploiting multisensor approaches have been recently proposed to deal with physiological and environmental disturbances. In these prototypes, signals measured noninvasively (e.g., skin impedance, temperature, optical skin properties, etc.) are combined through a static multivariate linear model for estimating glucose levels. In this work, by exploiting a dataset of 45 experimental sessions acquired in diabetic subjects, we show that regularisation-based techniques for the identification of the model, such as the least absolute shrinkage and selection operator (better known as LASSO), Ridge regression, and Elastic-Net regression, improve the accuracy of glucose estimates with respect to techniques, such as partial least squares regression, previously used in the literature. More specifically, the Elastic-Net model (i.e., the model identified using a combination of the L1 and L2 norms) has the best results, according to the metrics widely accepted in the diabetes community. This model represents an important incremental step toward the development of NI-CGM devices effectively usable by patients.
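    Of the regularisers compared, Ridge has a closed form that makes the shrinkage idea concrete: add lambda to the diagonal of the normal equations. A hand-solved two-feature sketch with fabricated sensor data (not the paper's model or data):

```python
def ridge_fit(X, y, lam):
    """Solve (X'X + lam*I) w = X'y for a 2-feature design, by hand."""
    a = sum(r[0] * r[0] for r in X) + lam
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X) + lam
    c1 = sum(r[0] * t for r, t in zip(X, y))
    c2 = sum(r[1] * t for r, t in zip(X, y))
    det = a * d - b * b
    return ((d * c1 - b * c2) / det, (a * c2 - b * c1) / det)

# Fake two-channel multisensor readings and glucose targets (true w = (2, 1)):
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
y = [2.0, 1.0, 3.0, 5.0]
print(ridge_fit(X, y, 0.0))  # (2.0, 1.0): ordinary least squares
print(ridge_fit(X, y, 1.0))  # both weights shrunk toward zero
```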

  2. Accuracy Improvement of ASTER Stereo Satellite Generated DEM Using Texture Filter

    Institute of Scientific and Technical Information of China (English)

    Mandla V. Ravibabu; Kamal Jain; Surendra Pal Singh; Naga Jyothi Meeniga

    2010-01-01

    A grid DEM (digital elevation model) can be generated from any of a number of sources: for instance, analogue-to-digital conversion of contour maps followed by application of the TIN model, or direct elevation point modelling via digital photogrammetry applied to airborne or satellite images. Currently, apart from the deployment of point clouds from LiDAR data acquisition, the generally favoured approach is digital photogrammetry. One of the most important steps in such deployment is the stereo matching process for establishing conjugate points (pixels): very difficult when modelling homogeneous areas like water cover or forest-canopied areas due to the lack of distinct spatial features. As a result, application of automated procedures is sure to generate erroneous elevation values. In this paper, we present and apply a method for improving the quality of stereo DEMs through the use of an entropy texture filter. The filter was applied to extract homogeneous areas before stereo matching, so that a statistical texture filter could then be applied to remove anomalous elevation values prior to interpolation, with accuracy assessment via a spatial correlation technique. For exemplification, we used a stereo pair of ASTER 1B images.
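    An entropy texture filter scores each pixel by the Shannon entropy of the grey levels in its neighbourhood; homogeneous areas such as water score near zero and can be masked out. A 1-D windowed toy under that standard definition:

```python
import math
from collections import Counter

def window_entropy(values):
    """Shannon entropy (bits) of the grey-level histogram in a window."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def entropy_filter(row, half=1):
    """Entropy of a sliding window along one image row."""
    return [window_entropy(row[max(0, i - half):i + half + 1])
            for i in range(len(row))]

flat   = [5, 5, 5, 5, 5]   # homogeneous (e.g. water): zero entropy everywhere
varied = [1, 7, 3, 9, 2]   # textured terrain: positive entropy everywhere
print(entropy_filter(varied))
```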

  3. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≈ 0.26 ± 0.02. We discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞.
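    Single-histogram reweighting, the building block of the Ferrenberg-Swendsen multihistogram method, re-estimates an observable at a new inverse temperature from samples drawn at another. A minimal sketch with toy data:

```python
import math

def reweight(energies, obs, beta, beta_new):
    """<O> at beta_new from samples at beta: weight by exp(-(b' - b) * E)."""
    e0 = min(energies)  # energy shift for numerical stability
    w = [math.exp(-(beta_new - beta) * (e - e0)) for e in energies]
    return sum(o * wi for o, wi in zip(obs, w)) / sum(w)

# Toy (energy, magnetisation) samples from a run at beta = 1.0:
energies = [-2.0, -1.0, -1.5, -2.5]
mags     = [0.9, 0.4, 0.6, 1.0]
print(reweight(energies, mags, 1.0, 1.0))  # plain sample average
print(reweight(energies, mags, 1.0, 1.2))  # colder: low-energy states dominate
```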

  4. Next Generation Network Real-Time Kinematic Interpolation Segment to Improve the User Accuracy

    Directory of Open Access Journals (Sweden)

    Mohammed Ouassou

    2015-01-01

    Full Text Available This paper demonstrates that automatic selection of the right interpolation/smoothing method in a GNSS-based network real-time kinematic (NRTK) interpolation segment can improve the accuracy of the rover position estimates and also the processing time in the NRTK processing center. The methods discussed and investigated are inverse distance weighting (IDW); bilinear and bicubic spline interpolation; kriging interpolation; thin-plate splines; and numerical approximation methods for spatial processes. The methods are implemented and tested using GNSS data from reference stations in the Norwegian network RTK service called CPOS. Data sets with an average baseline between reference stations of 60-70 km were selected. 12 prediction locations were used to analyze the performance of the interpolation methods by computing and comparing different measures of goodness of fit, such as the root mean square error (RMSE), mean square error, and mean absolute error; computation time was also compared. Results of the tests show that ordinary kriging with the Matérn covariance function clearly provides the best results. The thin-plate spline provides the second best results among the methods selected with the test data used.
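    Inverse distance weighting, the simplest of the methods compared, predicts the value at the rover location as a weighted mean of reference-station values with weights 1/d^p. A sketch with p = 2 and made-up correction values:

```python
def idw(stations, query, power=2):
    """stations: list of ((x, y), value); returns the IDW estimate at query."""
    num = den = 0.0
    for (x, y), v in stations:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0:
            return v  # query coincides with a station
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# Correction values (made up) at three reference stations:
stations = [((0.0, 0.0), 1.0), ((10.0, 0.0), 3.0), ((0.0, 10.0), 5.0)]
print(idw(stations, (5.0, 0.0)))  # ≈ 2.27, dominated by the two nearest
```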

  5. Multispectral image compression methods for improvement of both colorimetric and spectral accuracy

    Science.gov (United States)

    Liang, Wei; Zeng, Ping; Xiao, Zhaolin; Xie, Kun

    2016-07-01

    We propose that both colorimetric and spectral distortion in compressed multispectral images can be reduced by a composite model, named OLCP(W)-X (OptimalLeaders_Color clustering-PCA-W weighted-X coding). In the model, spectral-colorimetric clustering is first designed for a sparse equivalent representation by generating a spatial basis. Principal component analysis (PCA) is subsequently applied to the spatial basis to remove spectral redundancy. An error compensation mechanism is then used to produce a predicted difference image, which is combined with the visual characteristic matrix W, and the created image is compressed by traditional multispectral image coding schemes. We introduce four model-based algorithms to demonstrate the model's validity. The first two algorithms are OLCPWKWS (OLC-PCA-W-KLT-WT-SPIHT) and OLCPKWS, in which the Karhunen-Loeve transform, wavelet transform, and set partitioning in hierarchical trees coding are applied to compress the created image. The latter two methods are OLCPW-JPEG2000-MCT and OLCP-JPEG2000-MCT. Experimental results show that, compared with the corresponding traditional coding, the proposed OLCPW-X schemes can significantly improve the colorimetric accuracy of the rebuilt images under various illumination conditions and generally achieve a satisfactory peak signal-to-noise ratio at the same compression ratio. The OLCP-X methods could always ensure superior spectrum reconstruction. Furthermore, our model has excellent performance on user interaction.

  6. Accuracy improvement of T-history method for measuring heat of fusion of various materials

    Energy Technology Data Exchange (ETDEWEB)

    Hiki Hong [KyungHee University (Korea). School of Mechanical and Industrial Systems Engineering; Sun Kuk Kim [KyungHee University (Korea). School of Architecture and Civil Engineering; Yong-Shik Kim [University of Incheon (Korea). Dept. of Architectural Engineering

    2004-06-01

    The T-history method, developed for measuring the heat of fusion of phase change materials (PCMs) in sealed tubes, has the advantages of a simple experimental device and convenience, with no sampling process. However, some improper assumptions in the original method, such as using the degree of supercooling as the end of the latent heat period and neglecting sensible heat during phase change, can cause significant errors in determining the heat of fusion. We have addressed these problems in order to obtain better predictions. The present study shows that the modified T-history method is successfully applied to a variety of PCMs, such as paraffin and lauric acid, having no or a low degree of supercooling. It also turned out that the periods selected for sensible and latent heat do not significantly affect the accuracy of the heat of fusion. As a result, the method can provide an appropriate means to assess a newly developed PCM by a cycle test even if a very accurate value cannot be obtained. (author)

  7. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study.

    Science.gov (United States)

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-09-19

    Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, objective endoscopic criteria are still lacking. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. We therefore investigated the clinical application of LCI and further analyzed pixel brightness in the RGB color model. All lesions were observed by white light endoscopy (WLE), LCI and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for red (R), green (G) and blue (B) color. Of the endoscopic images for lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05); these differences in color could predict certain kinds of lesions. The ROC curve demonstrated that at the cutoff of R/(G+B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773, respectively. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit target biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images.
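    The proposed criterion is a simple pixel-brightness ratio with a threshold. A minimal sketch of how such a cutoff yields sensitivity and specificity (the RGB values and labels below are hypothetical, not the study's data):

```python
def rgb_ratio(r, g, b):
    """Pixel-brightness ratio R/(G+B) used as a candidate diagnostic index."""
    return r / (g + b)

def sens_spec(scores, labels, cutoff):
    """Sensitivity/specificity when score >= cutoff is called positive."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical mean RGB values per image and lesion labels (1 = lesion).
images = [(170, 120, 110), (150, 130, 120), (180, 115, 105), (140, 135, 125)]
labels = [1, 0, 1, 0]
scores = [rgb_ratio(*im) for im in images]
sensitivity, specificity = sens_spec(scores, labels, cutoff=0.646)
```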

  8. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    Science.gov (United States)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols are released into the atmosphere and cause scattering and absorption of incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are therefore important in climate change research. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD contributed by particulates with diameters smaller than 1 μm relative to total particulates, can be calculated by an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant truncation radius method: we find a good correlation for the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method can also effectively correct the AMF underestimation in winter. It is suggested that variations of the Angstrom indexes in the coarse mode have significant impacts on AMF inversions.

  9. Neural network incorporating meal information improves accuracy of short-time prediction of glucose concentration.

    Science.gov (United States)

    Zecchin, Chiara; Facchinetti, Andrea; Sparacino, Giovanni; De Nicolao, Giuseppe; Cobelli, Claudio

    2012-06-01

    Diabetes mellitus is one of the most common chronic diseases, and a clinically important task in its management is the prevention of hypo/hyperglycemic events. This can be achieved by exploiting continuous glucose monitoring (CGM) devices and suitable short-term prediction algorithms able to infer future glycemia in real time. In the literature, several methods for short-time glucose prediction have been proposed, most of which do not exploit information on meals, and use past CGM readings only. In this paper, we propose an algorithm for short-time glucose prediction using past CGM sensor readings and information on carbohydrate intake. The predictor combines a neural network (NN) model and a first-order polynomial extrapolation algorithm, used in parallel to describe, respectively, the nonlinear and the linear components of glucose dynamics. Information on the glucose rate of appearance after a meal is described by a previously published physiological model. The method is assessed on 20 simulated datasets and on 9 real Abbott FreeStyle Navigator datasets, and its performance is successfully compared with that of a recently proposed NN glucose predictor. Results suggest that exploiting meal information improves the accuracy of short-time glucose prediction.
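    The linear component of the proposed predictor is first-order polynomial extrapolation of recent CGM readings. A minimal sketch of that term alone (the sampling period and glucose trace are illustrative; the NN component and the meal model are omitted):

```python
def linear_extrapolation(cgm, horizon, ts=5):
    """First-order polynomial (straight-line) extrapolation of CGM readings.

    Fits a line to the most recent samples by least squares and projects it
    `horizon` minutes ahead; `ts` is the sensor sampling period in minutes.
    In the paper's predictor this linear term runs in parallel with a neural
    network that models the nonlinear part of the glucose dynamics.
    """
    n = len(cgm)
    t = [i * ts for i in range(n)]
    t_mean = sum(t) / n
    y_mean = sum(cgm) / n
    slope = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, cgm)) \
        / sum((ti - t_mean) ** 2 for ti in t)
    intercept = y_mean - slope * t_mean
    return intercept + slope * (t[-1] + horizon)

# Hypothetical glucose trace (mg/dl) rising 2 mg/dl per 5-min sample.
trace = [100, 102, 104, 106, 108]
pred = linear_extrapolation(trace, horizon=30)
```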

  10. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    Science.gov (United States)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the accuracy of calibration of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, which shows satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to fit the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  11. Implementation of a preoperative briefing protocol improves accuracy of teamwork assessment in the operating room.

    Science.gov (United States)

    Paige, John T; Aaron, Deborah L; Yang, Tong; Howell, D Shannon; Hilton, Charles W; Cohn, Isidore; Chauvin, Sheila W

    2008-09-01

    This study examined the effect of implementing a new preoperative briefing protocol on self- and peer-assessments of individual operating room (OR) teamwork behaviors. From July 2006 to February 2007, OR teamwork performance at a rural community hospital was evaluated before and after training and implementation of the protocol. After each case, every member of the team completed a 360-degree-type teamwork behavior evaluation containing both self- and peer-assessments using a six-point Likert-type scale (1 = definitely no to 6 = definitely yes). Individual behavior change was measured using the mean scale score of pre- and postprotocol assessments. Statistical analysis included t tests for both pre/post and self/peer differences. Data were available for one general surgeon and nine OR staff (pre = 20 cases, post = 16 cases). The preprotocol self-assessment mean score was significantly higher than the peer-assessment score (5.63 vs 5.29, P < 0.05) for teamwork behaviors. No difference was observed in postassessment mean scores for self- and peer-assessments. Individuals overestimated their teamwork behaviors before protocol implementation. Using a preoperative protocol seems to improve OR staff teamwork behaviors and self-assessment accuracy. The use of a 360-degree assessment method targeting specific, observable behaviors may be useful in evaluating team-based interventions and enhancing teamwork effectiveness.

  12. Multimodal nonlinear optical microscopy improves the accuracy of early diagnosis of squamous intraepithelial neoplasia

    Science.gov (United States)

    Teh, Seng Khoon; Zheng, Wei; Li, Shuxia; Li, Dong; Zeng, Yan; Yang, Yanqi; Qu, Jianan Y.

    2013-03-01

    We explore the diagnostic utility of multicolor excitation multimodal nonlinear optical (NLO) microscopy for noninvasive detection of squamous epithelial precancer in vivo. The 7,12-dimethylbenz(a)anthracene-treated hamster cheek pouch was used as an animal model of carcinogenesis. The NLO microscope system employed was able to collect multiple tissue endogenous NLO signals, such as two-photon excited fluorescence of keratin, nicotinamide adenine dinucleotide, collagen, and tryptophan, and second harmonic generation of collagen, in the spectral and time domains simultaneously. A total of 34 (11 control and 23 treated) Golden Syrian hamsters with 62 in vivo spatially distinct measurement sites were assessed in this study. High-resolution label-free NLO images were acquired from the stratum corneum, stratum granulosum-stratum basale, and stroma for all tissue measurement sites. A total of nine and eight features from the 745 and 600 nm excitation wavelengths, respectively, involving tissue structural and intrinsic biochemical properties, were found to contain significant diagnostic information for precancer detection (p<0.05). In particular, the 600 nm excited tryptophan fluorescence signal emanating from the stratum corneum was revealed to provide remarkable diagnostic utility. Multivariate statistical techniques confirmed that integrating diagnostically significant features from the multicolor excitation wavelengths yielded improved diagnostic accuracy compared to using each individual wavelength alone.

  13. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    Science.gov (United States)

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using a steady-state covariance analysis. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance in static conditions. In dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors 20% more than the random walk model.
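    The core idea, fusing an array of noisy rate measurements through a Kalman filter whose state follows a first-order Markov process rather than a random walk, can be sketched with a scalar filter. All parameter values below are illustrative, not the paper's:

```python
import random

def kalman_fuse(measurement_sets, a=0.999, q=1e-4, r=0.01):
    """Scalar Kalman filter that fuses an array of gyroscope outputs.

    The true rate is modelled as a first-order Markov process
    x_k = a * x_{k-1} + w_k (process noise variance q) instead of a random
    walk; at each epoch the N gyro readings (each with noise variance r) are
    averaged into one measurement with variance r / N.
    """
    x, p = 0.0, 1.0
    estimates = []
    for zs in measurement_sets:
        n = len(zs)
        z, r_eff = sum(zs) / n, r / n
        # Predict with the Markov model.
        x, p = a * x, a * a * p + q
        # Update with the fused measurement.
        k = p / (p + r_eff)
        x, p = x + k * (z - x), (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(1)
true_rate = 0.5  # deg/s, held constant to mimic a static test
sets = [[true_rate + random.gauss(0, 0.1) for _ in range(6)]
        for _ in range(2000)]
est = kalman_fuse(sets)
```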

  14. Improving the accuracy of vehicle emissions profiles for urban transportation greenhouse gas and air pollution inventories.

    Science.gov (United States)

    Reyna, Janet L; Chester, Mikhail V; Ahn, Soyoung; Fraser, Andrew M

    2015-01-01

    Metropolitan greenhouse gas and air emissions inventories can better account for the variability in vehicle movement, fleet composition, and infrastructure that exists within and between regions to develop more accurate information for environmental goals. With emerging access to high-quality data, new methods are needed to inform transportation emissions assessment practitioners of the relevant vehicle and infrastructure characteristics that should be prioritized in modeling to improve the accuracy of inventories. The sensitivity of light- and heavy-duty vehicle greenhouse gas (GHG) and conventional air pollutant (CAP) emissions to speed, weight, age, and roadway gradient is examined with second-by-second velocity profiles on freeway and arterial roads under free-flow and congestion scenarios. By creating upper and lower bounds for each factor, the potential variability which could exist in transportation emissions assessments is estimated. When comparing the effects of changes in these characteristics across U.S. cities against average characteristics of the U.S. fleet and infrastructure, significant variability in emissions is found. GHGs from light-duty vehicles could vary by -2%-11% and CAPs by -47%-228% when compared to the baseline. For heavy-duty vehicles, the variability is -21%-55% and -32%-174%, respectively. The results show that cities should more aggressively pursue the integration of emerging big data into regional transportation emissions modeling; the integration of these data is likely to affect GHG and CAP inventories and how aggressively policies should be implemented to meet reductions. A web tool is developed to aid cities in reducing emissions uncertainty.

  15. Evaluation of accuracy and uncertainty of ELISA assays for the determination of interleukin-4, interleukin-5, interferon-gamma and tumor necrosis factor-alpha

    DEFF Research Database (Denmark)

    Borg, Lone; Kristiansen, Jesper; Christensen, Jytte M

    2002-01-01

    However, models for establishing the traceability and uncertainty of immunoassay results are lacking. Sandwich enzyme-linked immunosorbent assays (ELISAs) were developed for determination of the human cytokines interleukin-4 (IL-4), interleukin-5 (IL-5), interferon-gamma (IFN-gamma) and tumor necrosis factor-alpha (TNF-alpha). The accuracy of each of the assays was evaluated in the ranges of 1-15 microg/l (IL-4), 0.001-1 microg/l (IL-5), 0.5-2.5 microg/l (IFN-gamma) and 0.14-2.2 microg/l (TNF-alpha). Other evaluated performance characteristics were the limit of detection (LOD) and immunological specificity. The analytical relative standard deviation (RSDA) of the assessed ELISAs was found to be in the range of 11-18%, except for IL-5, where RSDA increased at decreasing concentrations. The LOD was 0.12 microg/l, 0.0077 microg/l, 0.0069 microg/l and 0.0063 microg/l for IL-4, IL-5, IFN-gamma and TNF-alpha, respectively. Traceability to the WHO IS was established...

  16. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    Science.gov (United States)

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skill development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  17. Improving protein-protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model.

    Science.gov (United States)

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu

    2016-10-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential for constructing protein interaction networks, which are important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to detect PPIs, they have unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPI detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57 and 90.57%, respectively. These results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future
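    Step (2) of the pipeline, PCA-based noise reduction of the feature vectors, can be sketched via the SVD of centred data (the synthetic features below stand in for BiGP vectors):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their leading principal components.

    This is the role PCA plays in the RVM-BiGP pipeline: reducing the
    Bi-gram probability vectors (built from PSSMs) before classification
    to suppress noise. X has shape (n_samples, n_features).
    """
    Xc = X - X.mean(axis=0)              # centre the features
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T      # scores on the top components

rng = np.random.default_rng(0)
# Hypothetical 2-D latent signal embedded in 20 noisy feature dimensions.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 20)) + 0.01 * rng.normal(size=(200, 20))
Z = pca_reduce(X, n_components=2)
```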

  18. Diagnostic accuracy of two multiplex real-time polymerase chain reaction assays for the diagnosis of meningitis in children in a resource-limited setting

    Science.gov (United States)

    Khumalo, Jermaine; Nicol, Mark; Hardie, Diana; Muloiwa, Rudzani; Mteshana, Phindile

    2017-01-01

    Introduction Accurate etiological diagnosis of meningitis is important, but difficult in resource-limited settings due to prior administration of antibiotics and lack of viral diagnostics. We aimed to develop and validate 2 real-time multiplex PCR (RT-PCR) assays for the detection of common causes of community-acquired bacterial and viral meningitis in South African children. Methods We developed 2 multiplex RT- PCRs for detection of S. pneumoniae, N. meningitidis, H. influenzae, enteroviruses, mumps virus and herpes simplex virus. We tested residual CSF samples from children presenting to a local paediatric hospital over a one-year period, whose CSF showed an abnormal cell count. Results were compared with routine diagnostic tests and the final discharge diagnosis. We calculated accuracy of the bacterial RT-PCR assay compared to CSF culture and using World Health Organisation definitions of laboratory-confirmed bacterial meningitis. Results From 292 samples, bacterial DNA was detected in 12 (4.1%) and viral nucleic acids in 94 (32%). Compared to CSF culture, the sensitivity and specificity of the bacterial RT-PCR was 100% and 97.2% with complete agreement in organism identification. None of the cases positive by viral RT-PCR had a bacterial cause confirmed on CSF culture. Only 9/90 (10%) of patients diagnosed clinically as bacterial meningitis or partially treated bacterial meningitis tested positive with the bacterial RT-PCR. Discussion In this population the use of 2 multiplex RT-PCRs targeting 6 common pathogens gave promising results. If introduced into routine diagnostic testing, these multiplex RT-PCR assays would supplement other diagnostic tests, and have the potential to limit unnecessary antibiotic therapy and hospitalisation. PMID:28346504

  19. Hybrid Indoor-Based WLAN-WSN Localization Scheme for Improving Accuracy Based on Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Zahid Farid

    2016-01-01

    Full Text Available In indoor environments, WiFi received-signal-strength (RSS) based localization is sensitive to various indoor fading effects and noise during transmission, which are the main causes of localization errors that affect its accuracy. In view of those fading effects, positioning systems based on a single technology are ineffective in performing accurate localization. For this reason, the trend is toward the use of hybrid positioning systems (combinations of two or more wireless technologies) in indoor/outdoor localization scenarios for getting better position accuracy. This paper presents a hybrid technique to implement indoor localization that adopts fingerprinting approaches in both WiFi and Wireless Sensor Networks (WSNs). This model exploits machine learning, in particular Artificial Neural Network (ANN) techniques, for position calculation. The experimental results show that the proposed hybrid system improved the accuracy, reducing the average distance error to 1.05 m by using the ANN. Applying a Genetic Algorithm (GA) based optimization technique did not yield any further improvement in accuracy. Compared to the performance of GA optimization, the nonoptimized ANN performed better in terms of accuracy, precision, stability, and computational time. These results show that the proposed hybrid technique is promising for achieving better accuracy in real-world positioning applications.
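    Fingerprinting localization of this kind can be illustrated with a deliberately simpler position estimator than the paper's ANN: weighted k-nearest neighbours in RSS space. The survey database and query below are hypothetical:

```python
import math

def wknn_position(fingerprints, rss, k=3):
    """Weighted k-nearest-neighbour fingerprinting position estimate.

    fingerprints: list of ((x, y), [rss vector]) pairs from a site survey
    (here standing in for a combined WiFi/WSN survey). The query RSS is
    matched to the k closest fingerprints in signal space and their
    positions are averaged with inverse-distance weights.
    """
    dists = []
    for pos, ref in fingerprints:
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, rss)))
        dists.append((d, pos))
    dists.sort(key=lambda t: t[0])
    num_x = num_y = den = 0.0
    for d, (x, y) in dists[:k]:
        w = 1.0 / (d + 1e-9)
        num_x, num_y, den = num_x + w * x, num_y + w * y, den + w
    return num_x / den, num_y / den

# Hypothetical survey: RSS (dBm) from three anchors at four grid points.
db = [((0, 0), [-40, -70, -70]), ((5, 0), [-70, -40, -70]),
      ((0, 5), [-70, -70, -40]), ((5, 5), [-60, -60, -50])]
x, y = wknn_position(db, rss=[-42, -69, -71], k=3)
```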

  20. A knowledge-guided strategy for improving the accuracy of scoring functions in binding affinity prediction

    Directory of Open Access Journals (Sweden)

    Wang Renxiao

    2010-04-01

    Full Text Available Abstract Background Current scoring functions are not very successful in protein-ligand binding affinity prediction despite their popularity in structure-based drug design. Here, we propose a general knowledge-guided scoring (KGS) strategy to tackle this problem. Our KGS strategy computes the binding constant of a given protein-ligand complex based on the known binding constant of an appropriate reference complex. A good training set that includes a sufficient number of protein-ligand complexes with known binding data needs to be supplied for finding the reference complex. The reference complex is required to share a similar pattern of key protein-ligand interactions with the complex of interest. Thus, some uncertain factors in protein-ligand binding may cancel out, resulting in a more accurate prediction of absolute binding constants. Results In our study, an automatic algorithm was developed for summarizing key protein-ligand interactions as a pharmacophore model and identifying the reference complex with maximal similarity to the query complex. Our KGS strategy was evaluated in combination with two scoring functions (X-Score and PLP) on three test sets, containing 112 HIV protease complexes, 44 carbonic anhydrase complexes, and 73 trypsin complexes, respectively. Our results, obtained on crystal structures as well as computer-generated docking poses, indicated that application of the KGS strategy produced more accurate predictions, especially when X-Score or PLP alone did not perform well. Conclusions Compared to other targeted scoring functions, our KGS strategy does not require any re-parameterization or modification of current scoring methods, and its application is not tied to certain systems. The effectiveness of our KGS strategy is in theory proportional to the ever-increasing knowledge of experimental protein-ligand binding data. Our KGS strategy may serve as a more practical remedy for current scoring functions to improve their
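    The KGS idea, anchoring a prediction to the known binding constant of the most similar reference complex so that systematic scoring errors cancel, can be sketched as follows. The fingerprint sets, scores, and log K values are invented, and Jaccard similarity stands in for the paper's automatic pharmacophore matching:

```python
def kgs_predict(query, training_set, similarity):
    """Knowledge-guided scoring: predict log K of a query complex from the
    known log K of the most similar reference complex, corrected by the
    scoring-function difference.

    query: (interaction_fingerprint, score);
    training_set: list of (interaction_fingerprint, score, log_k);
    similarity: callable comparing two fingerprints (higher = more similar).
    """
    query_fp, score = query
    ref_fp, ref_score, ref_logk = max(
        training_set, key=lambda t: similarity(query_fp, t[0]))
    # Shared interaction pattern -> systematic scoring errors largely cancel.
    return ref_logk + (score - ref_score)

def jaccard(a, b):
    """Set-overlap similarity between two interaction fingerprints."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical interaction-feature sets, scores, and known log K values.
train = [({"hb1", "hb2", "hyd1"}, 6.0, 7.2),
         ({"hb1", "ionic1"}, 5.0, 5.1)]
logk = kgs_predict(({"hb1", "hb2"}, 6.5), train, jaccard)
```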

  1. Improving accuracy in the MPM method using a null space filter

    Science.gov (United States)

    Gritton, Chris; Berzins, Martin

    2017-01-01

    The material point method (MPM) has been very successful in providing solutions to many challenging problems involving large deformations. Nevertheless, there are some important issues that remain to be resolved with regard to its analysis. One key challenge applies to both MPM and particle-in-cell (PIC) methods and arises from the difference between the number of particles and the number of nodal grid points to which the particles are mapped. This difference gives rise to a non-trivial null space of the linear operator that maps particle values onto nodal grid point values. In other words, there are non-zero particle values that, when mapped to the grid point nodes, result in a zero value there. Moreover, when the nodal values at the grid points are mapped back to particles, part of those particle values may be in that same null space. Given positive mapping weights from particles to nodes, such null space values are oscillatory in nature. While this problem has been observed almost since the beginning of PIC methods, elements of it remain problematic today, as do methods that transcend it. The null space may be viewed as being connected to the ringing instability identified by Brackbill for PIC methods. It will be shown that it is possible to remove these null space values from the solution using a null space filter. This filter improves the accuracy of the MPM method using an approach based upon a local singular value decomposition (SVD) calculation. This local SVD approach is compared against the global SVD approach previously considered by the authors and against a recent MPM method by Zhang and colleagues.
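    The filtering step can be illustrated with a global projection: removing from the particle vector the component lying in the null space of the particle-to-grid mapping leaves the nodal values unchanged. This uses a global pseudo-inverse as a stand-in for the paper's local SVD construction, with a random mapping matrix for illustration:

```python
import numpy as np

def null_space_filter(M, p):
    """Remove the null-space component of particle values p for mapping M.

    M is the (n_nodes x n_particles) particle-to-grid mapping; with more
    particles than nodes, M has a non-trivial null space. Projecting p onto
    the row space of M (via the SVD-based pseudo-inverse) leaves the mapped
    nodal values M @ p unchanged while discarding the oscillatory
    null-space part of p.
    """
    return np.linalg.pinv(M) @ (M @ p)

rng = np.random.default_rng(2)
M = rng.random((4, 10))          # 10 particles mapped to 4 grid nodes
p = rng.normal(size=10)
p_f = null_space_filter(M, p)
```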

  2. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    Directory of Open Access Journals (Sweden)

    Kyuman Lee

    2016-08-01

    Full Text Available The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because, firstly, the user position is estimated based on airborne relays that are located in one direction, and, secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process, based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and the known reference positions. To prevent performance degradation, the re-estimation is performed only after its necessity has been determined by comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference

  3. IMPROVING THE POSITIONING ACCURACY OF TRAIN ON THE APPROACH SECTION TO THE RAILWAY CROSSING

    Directory of Open Access Journals (Sweden)

    V. I. Havryliuk

    2016-02-01

    Full Text Available Purpose. The paper analyzes the possibility of improving the positioning accuracy of a train on the approach section to a crossing for traffic safety control at railway crossings. Methodology. The research was performed using a developed mathematical model describing the dependence of the input impedance of coded and audio-frequency track circuits on the train coordinate at various values of ballast isolation resistance and for all usable frequencies. Findings. The paper presents the developed mathematical model, which describes the dependence of the input impedance of coded and audio-frequency track circuits on the train coordinate at various values of ballast isolation resistance and for all frequencies used in track circuits. The relative error in determining the train coordinate from the input impedance, caused by variation of the ballast isolation resistance, was investigated for coded track circuits. This relative error can reach 40-50%, which does not allow using the method directly for coded track circuits. For short audio-frequency track circuits at the frequencies of continuous cab signaling (25, 50 Hz), the relative error does not exceed acceptable values, which allows using the examined method for determining the train location on the approach section to a railway crossing. Originality. The developed mathematical model allowed determination of the error in the train coordinate obtained from the input impedance of the track circuit, for coded and audio-frequency track circuits at various signal current frequencies and ballast isolation resistances. Practical value. The authors propose a method for determining the train location on the approach section to a crossing equipped with audio-frequency track circuits, which combines discrete and continuous monitoring of the train location.
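    The method's premise, that the input impedance of a track circuit shunted by the train's axles varies monotonically with the train coordinate, can be sketched with a real-valued low-frequency transmission-line model; the line parameters below are illustrative, not taken from the paper:

```python
import math

def input_impedance(x, z0=0.8, gamma=1.0, z_shunt=0.05):
    """Input impedance (ohm) of a rail line shunted by a train axle at
    distance x (km) from the feed end, using a real-valued low-frequency
    transmission-line approximation; z0 (characteristic impedance), gamma
    (propagation constant) and z_shunt (axle shunt) are illustrative."""
    t = math.tanh(gamma * x)
    return z0 * (z_shunt + z0 * t) / (z0 + z_shunt * t)

def locate_train(z_measured, lo=0.01, hi=3.0, iters=60):
    """Invert the monotonic impedance-coordinate relation by bisection to
    recover the train coordinate on the approach section."""
    f = lambda x: input_impedance(x) - z_measured
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_true = 1.2  # km from the feed end
x_est = locate_train(input_impedance(x_true))
```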

  4. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA

    Energy Technology Data Exchange (ETDEWEB)

    Mareuil, Fabien [Institut Pasteur, Cellule d' Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)

    2015-08-15

    In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for the distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting; (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied to the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.

  5. Improvement of brain segmentation accuracy by optimizing non-uniformity correction using N3.

    Science.gov (United States)

    Zheng, Weili; Chee, Michael W L; Zagorodnov, Vitali

    2009-10-15

    Smoothly varying, multiplicative intensity variations within MR images are artifactual and can reduce the accuracy of automated brain segmentation. Fortunately, they can be corrected. Among existing correction approaches, the nonparametric non-uniformity intensity normalization method N3 (Sled, J.G., Zijdenbos, A.P., Evans, A.C., 1998. Nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans. Med. Imag. 17, 87-97.) is one of the most frequently used. However, at least one recent study (Boyes, R.G., Gunter, J.L., Frost, C., Janke, A.L., Yeatman, T., Hill, D.L.G., Bernstein, M.A., Thompson, P.M., Weiner, M.W., Schuff, N., Alexander, G.E., Killiany, R.J., DeCarli, C., Jack, C.R., Fox, N.C., 2008. Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils. NeuroImage 39, 1752-1762.) suggests that its performance on 3 T scanners with multichannel phased-array receiver coils can be improved by optimizing a parameter that controls the smoothness of the estimated bias field. The present study not only confirms this finding, but additionally demonstrates the benefit of reducing the relevant parameter value to 30-50 mm (default value: 200 mm) for white matter surface estimation as well as for the measurement of cortical and subcortical structures using FreeSurfer (Martinos Imaging Centre, Boston, MA). This finding can help enhance precision in studies where estimation of cerebral cortex thickness is critical for making inferences.

  6. Alaska Case Study: Scientists Venturing Into Field with Journalists Improves Accuracy

    Science.gov (United States)

    Ekwurzel, B.; Detjen, J.; Hayes, R.; Nurnberger, L.; Pavangadkar, A.; Poulson, D.

    2008-12-01

    Issues such as climate change, stem cell research, public health vaccination, etc., can be fraught with public misunderstanding, myths, as well as deliberate distortions of the fundamental science. Journalists are adept at creating print, radio, and video content that can be both compelling and informative to the public. Yet most scientists have little time or training to devote to developing media content for the public and spend little time with journalists who cover science stories. We conducted a case study to examine whether the time and funding invested in exposing journalists to scientists in the field over several days would improve accuracy of media stories about complex scientific topics. Twelve journalists were selected from the 70 who applied for a four-day environmental journalism fellowship in Alaska. The final group achieved the goal of a broad geographic spectrum of the media outlets (small regional to large national organizations), medium (print, radio, online), and experience (early career to senior producers). Reporters met with a diverse group of scientists. The lessons learned and successful techniques will be presented. Initial results demonstrate that stories were highly accurate and rich with audio or visual content for lay audiences. The journalists have also maintained contact with the scientists, asking for leads on emerging stories and seeking new experts that can assist in their reporting. Science-based institutions should devote more funding to foster direct journalist-scientist interactions in the lab and field. 
These positive goals can be achieved: (1) more accurate dissemination of science information to the public; (2) a broader portion of the scientific community will become a resource to journalists instead of the same eloquent few in the community; (3) scientists will appreciate the skill and pressures of those who survive the media downsizing and provide media savvy content; and (4) the public may incorporate science evidence

  7. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    Science.gov (United States)

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error-prone, time-consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by succeeding the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations. 
For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that
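MI-based affine registration of the kind mentioned above searches transform parameters that maximize the mutual information between the two modalities. The scoring function itself can be sketched from a joint intensity histogram; the images and binning below are invented toy data, not the study's.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Estimate mutual information (bits) between two equally sized
    intensity sequences in [0, 1) from their joint histogram.
    Maximizing this over transform parameters is the core of MI-based
    registration; the binning here is illustrative."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    qa = [min(int(v * bins), bins - 1) for v in img_a]   # quantize
    qb = [min(int(v * bins), bins - 1) for v in img_b]
    pab = Counter(zip(qa, qb))
    pa, pb = Counter(qa), Counter(qb)
    mi = 0.0
    for (a, b), c in pab.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), with counts folded in
        mi += (c / n) * math.log2(c * n / (pa[a] * pb[b]))
    return mi

# Identical images share maximal information; a scrambled copy shares
# little, so MI peaks when the two modalities are aligned.
x = [i / 100 for i in range(100)]
aligned = mutual_information(x, x)
shuffled = mutual_information(x, sorted(x, key=lambda v: (v * 37) % 1))
```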

  8. Diagnostic Accuracy of a New d-Dimer Assay (Sclavo Auto d-Dimer) for Exclusion of Deep Vein Thrombosis in Symptomatic Outpatients.

    Science.gov (United States)

    Legnani, Cristina; Cini, Michela; Frascaro, Mirella; Rodorigo, Giuseppina; Sartori, Michelangelo; Cosmi, Benilde

    2017-04-01

    In patients presenting non-high clinical pretest probability (PTP), a negative d-dimer can exclude venous thromboembolism without imaging tests. However, each d-dimer assay should be validated in prospective studies. We evaluated an automated d-dimer immunoassay using the Sclavo Auto d-dimer (Sclavo Diagnostics Int, Sovicille, Italy) provided by Dasit Diagnostica (Cornaredo, Milan, Italy). Three hundred two consecutive outpatients suspected of leg deep vein thrombosis (DVT) with non-high PTP were included. The Sclavo Auto d-dimer assay was evaluated on 2 analyzers (Sysmex CA-7000 and Sysmex CS-2100; Sysmex Corporation, Kobe, Japan, provided by Dasit). The cutoff value (200 ng/mL) was established a priori. Prevalence of DVT was 11.9%. Since no false-negative patients were detected, the sensitivity and negative predictive values (NPVs) were 100% (sensitivity = CA-7000: 100% [95% confidence interval, CI: 93.3-100], CS-2100: 100% [95% CI: 93.3-100]; NPV = CA-7000: 100% [95% CI: 97.9-100], CS-2100: 100% [95% CI: 98.0-100]). Specificity was 65.4% (95% CI: 59.4-71.1) and 69.2% (95% CI: 63.3-74.7) for CA-7000 and CS-2100, respectively. Specificity increased when a higher cutoff value (234 ng/mL) was used for patients aged ≥60 years without compromising safety. Assay reproducibility (total coefficient of variation) was satisfactory at concentrations near the cutoff value. The Sclavo Auto d-dimer assay was accurate when used for the DVT diagnostic workup of outpatients with non-high PTP. Based on its high sensitivity and NPV, it can be used as a stand-alone test in outpatients with non-high PTP. Given its high specificity, the number of patients in whom further imaging techniques can be avoided increased, improving the yield of the test.
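Headline figures of this kind come straight from a 2x2 table. The cell counts below are hypothetical values consistent with the abstract's summary numbers (36 DVT cases among 302 patients, specificity near 65%); the interval method here is the Wilson score interval, which need not match the one used in the paper.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical 2x2 counts: TP/FN among 36 DVT cases, FP/TN among the
# 266 patients without DVT. Exact cell counts were not reported.
tp, fn, fp, tn = 36, 0, 92, 174

sensitivity = tp / (tp + fn)        # 1.0, reported as 100%
specificity = tn / (tn + fp)        # about 0.654
npv = tn / (tn + fn)                # 1.0: no false negatives
sens_ci = wilson_ci(tp, tp + fn)    # lower bound reflects only 36 cases
```

Note how the small number of cases (36) keeps the lower confidence bound of a "100%" sensitivity around 90%, which is why validation cohorts matter.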

  9. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran A Monaghan

    Full Text Available Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body-size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices applying the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes are a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g., mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e., DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. 
At any particular site, the net bias is a probabilistic function of the sample data
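As context for the weighted-average indices discussed above, here is a minimal BMWP-style calculation: each scoring family contributes a fixed tolerance score, the site score is their sum, and ASPT is the average score per scoring taxon. The family scores are standard published BMWP values, but the site samples are invented.

```python
# Real BMWP tolerance scores for a few families; higher = more
# pollution-sensitive. The sampled family lists below are invented.
BMWP_SCORES = {
    "Heptageniidae": 10,   # pollution-sensitive mayflies
    "Gammaridae": 6,
    "Baetidae": 4,
    "Chironomidae": 2,
    "Oligochaeta": 1,      # pollution-tolerant worms
}

def bmwp_index(families):
    """Return (BMWP, ASPT) for a sample. Presence-absence only:
    abundance is ignored, which is one reason selective sampling of
    small tolerant taxa can bias collection without moving the score."""
    scoring = [BMWP_SCORES[f] for f in set(families) if f in BMWP_SCORES]
    bmwp = sum(scoring)
    aspt = bmwp / len(scoring) if scoring else 0.0
    return bmwp, aspt

clean_site = ["Heptageniidae", "Gammaridae", "Baetidae"]
polluted_site = ["Chironomidae", "Oligochaeta"]
```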

  10. Diagnostic accuracy of IgA anti-tissue transglutaminase antibody assays in celiac disease patients with selective IgA deficiency.

    Science.gov (United States)

    Villalta, D; Alessio, M G; Tampoia, M; Tonutti, E; Brusca, I; Bagnasco, M; Pesce, G; Bizzaro, N

    2007-08-01

    Clinical studies have estimated a 10- to 20-fold increased risk for celiac disease (CD) in patients with selective IgA deficiency (SIgAD). For this reason, screening for CD is mandatory in SIgAD patients, but it represents a special challenge since the specific IgA class antibodies against gliadin (AGA), endomysium (EMA), and tissue-transglutaminase (tTG) are not produced in patients with CD. IgG class counterparts of these antibodies may be informative; in particular IgG EMA has been demonstrated to be a valid marker for diagnosing CD in SIgAD cases, but it is not used much in clinical laboratories, because it is cumbersome and involves some technical difficulties. Even if it was widely used in clinical laboratories, the measuring of IgG AGA has shown a less-than-optimum diagnostic accuracy, so that now it tends to be substituted by tests for anti-tTG IgG, for which the few available studies have shown diagnostic performances superior to AGA. Since it is not known whether various available methods for measuring IgG anti-tTG antibodies offer similar diagnostic performances, we have compared the results obtained from nine second-generation commercial methods (D-tek, Phadia, Immco, Orgentec, Radim, Euroimmun, Inova, Aesku, Generic Assays), measuring IgG anti-tTG antibodies in 20 patients with CD and SIgAD and in 113 controls (9 patients with SIgAD without CD, 54 patients with chronic liver disease, and 50 healthy individuals). Diagnostic sensitivity, calculated by means of ROC plot analysis, ranged between 75% and 95%, and specificity ranged from 94% to 100%. In the same population, the diagnostic sensitivity and specificity of AGA IgG were 40% and 87%, respectively. Even though they perform differently, all IgG anti-tTG methods evaluated are reliable serological assays for the diagnosis of CD in SIgAD patients, with diagnostic accuracy superior to the AGA IgG method. The methods that use a mix of tTG and gliadin peptides as the antigenic preparation have a
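The ROC plot analysis used above to compare kits can be sketched as follows: sweep the decision threshold over assay values and trace sensitivity against 1 - specificity. The assay readings below are invented stand-ins for anti-tTG IgG results from cases and controls.

```python
def roc_points(cases, controls):
    """(FPR, TPR) pairs for every threshold between observed values."""
    thresholds = sorted(set(cases + controls), reverse=True)
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(v >= t for v in cases) / len(cases)      # sensitivity
        fpr = sum(v >= t for v in controls) / len(controls)  # 1 - spec.
        pts.append((fpr, tpr))
    return pts

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

cases = [12, 30, 45, 80, 95, 150]       # hypothetical anti-tTG IgG units
controls = [1, 2, 3, 5, 8, 10, 14, 20]
roc_auc = auc(roc_points(cases, controls))
```

Each kit's sensitivity/specificity pair at its chosen cutoff is one point on such a curve, which is why ROC analysis puts the nine methods on a common footing.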

  11. Why Does Rereading Improve Metacomprehension Accuracy? Evaluating the Levels-of-Disruption Hypothesis for the Rereading Effect

    Science.gov (United States)

    Dunlosky, John; Rawson, Katherine A.

    2005-01-01

    Rereading can improve the accuracy of people's predictions of future test performance for text material. This research investigated this rereading effect by evaluating 2 predictions from the levels-of-disruption hypothesis: (a) The rereading effect will occur when the criterion test measures comprehension of the text, and (b) the rereading effect…

  12. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    Science.gov (United States)

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  13. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events...... in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method...... of estimating travel time using multiple detection events has been conducted. In this study different aspects of BT detection zone, including size and its impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied; namely, Enter-Enter, Leave-Leave, Peak...
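The Enter-Enter and Leave-Leave pairings compared above can be expressed directly over raw detection logs. Each record is a (device ID, timestamp) pair at a station; with multiple detections per device, Enter-Enter uses first-to-first timestamps and Leave-Leave last-to-last. The station records below are invented.

```python
def travel_time(upstream, downstream, device, method="enter-enter"):
    """Travel time between two Bluetooth detection zones for one device.
    Multiple detections per zone reflect the zone's spatial extent."""
    up = sorted(t for d, t in upstream if d == device)
    down = sorted(t for d, t in downstream if d == device)
    if method == "enter-enter":       # first detection at each station
        return down[0] - up[0]
    if method == "leave-leave":       # last detection at each station
        return down[-1] - up[-1]
    raise ValueError(method)

station_a = [("aa:bb", 100.0), ("aa:bb", 112.0)]   # two detections each:
station_b = [("aa:bb", 460.0), ("aa:bb", 475.0)]   # the BT zone has extent

ee = travel_time(station_a, station_b, "aa:bb", "enter-enter")  # 360.0 s
ll = travel_time(station_a, station_b, "aa:bb", "leave-leave")  # 363.0 s
```

The spread between the two estimates is exactly the location ambiguity the study analyses; which pairing is most accurate depends on the detection-zone geometry.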

  14. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence the supervised and unsupervised classification accuracy are the presence of atmospheric effect and mixed pixel information. A linear mixture simulated model experiment is generated to simulate real world data with known end member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0 using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error is computed for each pixel value for three generated images with SPOT data. Each pixel is either correctly classified or misclassified. Results showed great improvement in classification accuracy; for example, in image 1, 41 % of pixels were misclassified due to atmospheric noise. Subsequent to the degradation of the atmospheric effect, the misclassified pixels were reduced to 4 %. We can conclude that classification accuracy can be improved by degradation of atmospheric noise.
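The simulation design described here (random normalized class proportions, linear mixing of known end-member spectra, additive noise, then per-pixel classification) can be sketched as follows. The end-member spectra, noise level, and minimum-distance classifier are illustrative assumptions, not the authors' exact setup.

```python
import random

random.seed(42)
# Invented two-band end-member spectra for three cover classes.
ENDMEMBERS = {"water": [0.05, 0.03], "soil": [0.30, 0.45], "veg": [0.08, 0.60]}

def random_proportions(k):
    """Random class cover proportions normalized so they sum to 1.0."""
    raw = [random.random() for _ in range(k)]
    s = sum(raw)
    return [r / s for r in raw]

def mixed_pixel(props, noise=0.02):
    """Linear mixture of end-member spectra plus additive noise."""
    bands = len(next(iter(ENDMEMBERS.values())))
    spec = [sum(p * e[b] for p, e in zip(props, ENDMEMBERS.values()))
            for b in range(bands)]
    return [v + random.gauss(0.0, noise) for v in spec]

def classify(pixel):
    """Assign the end member with the smallest Euclidean distance."""
    return min(ENDMEMBERS, key=lambda n: sum(
        (p - e) ** 2 for p, e in zip(pixel, ENDMEMBERS[n])))

props = random_proportions(3)
pixel = mixed_pixel(props)
label = classify(pixel)
```

Raising `noise` (the stand-in for atmospheric scattering) increases the misclassification rate, which is the effect the experiment quantifies.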

  15. Two-orders of magnitude improvement detection limit of lateral flow assays using isotachophoresis

    CERN Document Server

    Moghadam, Babak Y; Posner, Jonathan D

    2014-01-01

    Lateral flow (LF) immunoassays are one of the most prevalent point-of-care (POC) diagnostics due to their simplicity, low cost, and robust operation. A common criticism of LF tests is that they have poor detection limits compared to analytical techniques, like ELISA, which confines their application as a diagnostic tool. The poor detection limit of LF assays and the associated long equilibration times are due to kinetically limited surface reactions that result from low target concentrations. Here we use isotachophoresis (ITP), a powerful electrokinetic preconcentration and separation technique, to focus target analytes into a thin band and transport them to the LF capture line, resulting in a dramatic increase in the surface reaction rate and equilibrium binding. We show that ITP is able to improve the limit of detection (LOD) of LF assays by 400-fold for a 90 second assay time and by 160-fold for a longer 5 minute time scale. ITP-enhanced LF (ITP-LF) also shows up to 30% target extraction from 100 uL of the sample, whi...

  16. Signal Processing of MEMS Gyroscope Arrays to Improve Accuracy Using a 1st Order Markov for Rate Signal Modeling

    Directory of Open Access Journals (Sweden)

    Weizheng Yuan

    2012-02-01

    Full Text Available This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using the steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated by the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors by 20% more than the random walk model.

  17. Signal Processing of MEMS Gyroscope Arrays to Improve Accuracy Using a 1st Order Markov for Rate Signal Modeling

    Science.gov (United States)

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using the steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated by the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors by 20% more than the random walk model. PMID:22438734
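The fusion scheme described above can be sketched as a scalar Kalman filter over the averaged array output, with the rate modeled as the first-order Markov (AR(1)) process the paper advocates. The noise values, the simple averaging step, and the simulation itself are illustrative simplifications of the actual filter.

```python
import random

random.seed(7)
N_GYROS, A, Q, R = 6, 0.995, 0.0004, 0.25   # Markov coeff., process & sensor noise (toy)

def simulate(steps=2000):
    """Fuse an array of noisy rate readings with a scalar Kalman filter
    whose process model is the first-order Markov rate x' = A*x + w."""
    x_true, x_est, p = 0.0, 0.0, 1.0
    err_single = err_fused = 0.0
    for _ in range(steps):
        x_true = A * x_true + random.gauss(0.0, Q ** 0.5)
        meas = [x_true + random.gauss(0.0, R ** 0.5) for _ in range(N_GYROS)]
        z = sum(meas) / N_GYROS          # average the array outputs
        r_eff = R / N_GYROS              # variance of the averaged reading
        x_pred, p_pred = A * x_est, A * A * p + Q     # predict
        k = p_pred / (p_pred + r_eff)                 # Kalman gain
        x_est = x_pred + k * (z - x_pred)             # update
        p = (1 - k) * p_pred
        err_single += abs(meas[0] - x_true) / steps
        err_fused += abs(x_est - x_true) / steps
    return err_single, err_fused

single, fused = simulate()   # mean |error|: one gyro vs. fused estimate
```

Averaging alone buys a factor of roughly √N; the Markov process model additionally exploits the temporal correlation of the rate, which is where the filter's extra gain comes from.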

  18. Information transmission via movement behaviour improves decision accuracy in human groups

    NARCIS (Netherlands)

    Clément, R.J.G.; Wolf, Max; Snijders, Lysanne; Krause, Jens; Kurvers, R.H.J.M.

    2015-01-01

    A major advantage of group living is increased decision accuracy. In animal groups information is often transmitted via movement. For example, an individual quickly moving away from its group may indicate approaching predators. However, individuals also make mistakes which can initiate informatio

  19. Information transmission via movement behaviour improves decision accuracy in human groups

    NARCIS (Netherlands)

    Clément, Romain J.G.; Wolf, Max; Snijders, Lysanne; Krause, Jens; Kurvers, Ralf H.J.M.

    2015-01-01

    A major advantage of group living is increased decision accuracy. In animal groups information is often transmitted via movement. For example, an individual quickly moving away from its group may indicate approaching predators. However, individuals also make mistakes which can initiate information c

  20. Knowing What You Know: Improving Metacomprehension and Calibration Accuracy in Digital Text

    Science.gov (United States)

    Reid, Alan J.; Morrison, Gary R.; Bol, Linda

    2017-01-01

    This paper presents results from an experimental study that examined embedded strategy prompts in digital text and their effects on calibration and metacomprehension accuracies. A sample population of 80 college undergraduates read a digital expository text on the basics of photography. The most robust treatment (mixed) read the text, generated a…

  1. Application of Mensuration Technology to Improve the Accuracy of Field Artillery Firing Unit Location

    Science.gov (United States)

    2013-12-13

    ...the Geoid” by Witold Fraczek suggests errors in GPS accuracy are due to the sea level measurements used as a constant in GPS time calculations. ... “GEOID96 Geoid Height Model.” National Geodetic Survey, National Oceanic and Atmospheric Administration. (http://www.ngs.noaa.gov/PUBS_LIB/gislis96

  2. Improving Accuracy of River Flow Forecasting Using LSSVR with Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Rana Muhammad Adnan

    2017-01-01

    Full Text Available River flow prediction is essential in many applications of water resources planning and management. In this paper, the accuracy of multivariate adaptive regression splines (MARS), M5 regression tree (M5RT), and conventional multiple linear regression (CMLR) is compared with a hybrid least-square support vector regression-gravitational search algorithm (HLGSA) in predicting monthly river flows. In the first part of the study, all three regression methods were compared with each other in predicting river flows of each basin. It was found that the HLGSA method performed better than MARS, M5RT, and CMLR in river flow prediction. The effect of log transformation on the prediction accuracy of the regression methods was also examined in the second part of the study. Log transformation of the river flow data significantly increased the prediction accuracy of all regression methods. It was also found that log HLGSA (LHLGSA) performed better than the other regression methods. In the third part of the study, the accuracy of the LHLGSA and HLGSA methods was examined in river flow estimation using nearby river flow data. On the basis of the results of all applications, it was found that LHLGSA and HLGSA could be successfully used in the prediction and estimation of river flow.
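The log-transformation effect noted above can be illustrated with ordinary least squares on a synthetic flow series that is AR(1) in log space, a common assumption for right-skewed monthly flows; fitting in log space then recovers the multiplicative structure directly. This sketch is unrelated to the LSSVR-GSA machinery itself, and all numbers are invented.

```python
import math
import random

random.seed(3)

def ols_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthetic monthly "flows": log(flow) follows an AR(1) process with
# coefficient 0.9, so the series is positive and right-skewed.
flows = [100.0]
for _ in range(240):
    flows.append(math.exp(0.9 * math.log(flows[-1]) + 0.5
                          + random.gauss(0.0, 0.4)))

logs = [math.log(v) for v in flows]
a_log, b_log = ols_fit(logs[:-1], logs[1:])   # slope should be near 0.9
```

A raw-space linear fit on the same data would chase the handful of extreme flows; the log-space fit recovers the generating coefficient, which is the intuition behind the LHLGSA result.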

  3. An Algorithm for Improving the Accuracy of Systems Measuring Parameters of Moving Objects

    Directory of Open Access Journals (Sweden)

    Dichev Dimitar

    2016-12-01

    Full Text Available The paper considers an algorithm for increasing the accuracy of measuring systems operating on moving objects. The algorithm is based on the Kalman filter. It aims to provide high measurement accuracy over the whole range of change of the measured quantity and the interference effects, as well as to eliminate the influence of a number of interference sources, each of which is of secondary importance but whose total impact can cause considerable distortion of the measuring signal. The algorithm is intended for gyro-free measuring systems. It is based on a model of the moving object's dynamics. The mathematical model is developed in such a way that it automatically adjusts the algorithm parameters depending on the current state of the measurement conditions. This makes it possible to develop low-cost measuring systems with high dynamic accuracy. The presented experimental results prove the effectiveness of the proposed algorithm in terms of the dynamic accuracy of measuring systems of this type.

  4. Colony color assay coupled with 5FOA negative selection greatly improves yeast three-hybrid library screening efficiency

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The recently developed yeast three-hybrid system is a powerful tool for analyzing RNA-protein interactions in vivo. However, large numbers of false positives are frequently encountered due to bait RNA-independent activation of the reporter gene in library screening using this system. In this report, we coupled the colony color assay with 5FOA (5-fluoroorotic acid) negative selection in library screening, and found that this coupled method effectively eliminated bait RNA-independent false positives and hence greatly improved library screening efficiency. We used this method successfully in the isolation of the cDNA of an RNA-binding protein that might play important roles in certain cellular processes. This improvement will facilitate the use of the yeast three-hybrid system in analyzing RNA-protein interactions.

  5. COARSE-MESH-ACCURACY IMPROVEMENT OF BILINEAR Q4-PLANE ELEMENT BY THE COMBINED HYBRID FINITE ELEMENT METHOD

    Institute of Scientific and Technical Information of China (English)

    谢小平; 周天孝

    2003-01-01

    The combined hybrid finite element method has an intrinsic mechanism for enhancing the coarse-mesh accuracy of lower-order displacement schemes. It was confirmed that the combined hybrid scheme without energy error leads to enhanced accuracy at coarse meshes, and that the combination parameter plays an important role in the enhancement. As an improvement of the conforming bilinear Q4 plane element, the combined hybrid method adopted the most convenient quadrilateral displacement-stress mode, i.e., the mode of compatible isoparametric bilinear displacements and pure constant stresses. By adjusting the combination parameter, an optimized version of the combined hybrid element was obtained, and numerical tests indicated that this parameter-adjusted version behaves much better than the Q4 element and achieves high accuracy at coarse meshes. Due to the elimination of stress parameters at the element level, this combined hybrid version has the same computational cost as the Q4 element.

  6. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges, or territories. However, poor-quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar-looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problem of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of Bayesian-modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
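The Bayesian step itself is a one-line update: photo-match likelihoods for each candidate individual are reweighted by prior sighting frequencies at the site. The individual names and all numbers below are invented for illustration.

```python
def posterior_id(match_likelihood, site_priors):
    """P(individual | photo, site) proportional to
    P(photo | individual) * P(individual | site)."""
    joint = {ind: match_likelihood.get(ind, 0.0) * p
             for ind, p in site_priors.items()}
    total = sum(joint.values())
    return {ind: v / total for ind, v in joint.items()}

# The photo alone cannot separate A from B (equal likelihoods), but A
# is sighted at this site four times as often, so the posterior
# favours A. If the priors are wrong, they pull the answer the wrong
# way, which is how posterior updating can also *reduce* accuracy.
likelihood = {"A": 0.45, "B": 0.45, "C": 0.10}
priors = {"A": 0.60, "B": 0.15, "C": 0.25}
post = posterior_id(likelihood, priors)
```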

  7. Power outage estimation for tropical cyclones: improved accuracy with simpler models.

    Science.gov (United States)

    Nateghi, Roshanak; Guikema, Seth; Quiring, Steven M

    2014-06-01

    In this article, we discuss an outage-forecasting model that we have developed. This model uses very few input variables to estimate hurricane-induced outages prior to landfall with great predictive accuracy. We also show the results for a series of simpler models that use only publicly available data and can still estimate outages with reasonable accuracy. The intended users of these models are emergency response planners within power utilities and related government agencies. We developed our models based on the method of random forest, using data from a power distribution system serving two states in the Gulf Coast region of the United States. We also show that estimates of system reliability based on wind speed alone are not sufficient for adequately capturing the reliability of system components. We demonstrate that a multivariate approach can produce more accurate power outage predictions.
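Random forests average many decision trees grown on bootstrap resamples of the data. As a toy stand-in for the paper's model, here is a bagging of depth-1 regression stumps on invented (gust speed, wet-soil flag) → outage-count data; the real models use far richer utility data, more features, and full trees.

```python
import random

random.seed(11)

def fit_stump(data):
    """Best single split on feature 0 (gust speed), predicting the
    mean outage count on each side of the split."""
    best = None
    for split in sorted({x[0] for x, _ in data}):
        left = [y for x, y in data if x[0] <= split]
        right = [y for x, y in data if x[0] > split]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, split, ml, mr)
    if best is None:                       # degenerate bootstrap sample
        m = sum(y for _, y in data) / len(data)
        return lambda x: m
    _, split, ml, mr = best
    return lambda x: ml if x[0] <= split else mr

def bagged_predict(data, x, n_trees=25):
    """Average stump predictions over bootstrap resamples (bagging)."""
    preds = []
    for _ in range(n_trees):
        boot = [random.choice(data) for _ in data]
        preds.append(fit_stump(boot)(x))
    return sum(preds) / n_trees

# Invented storm history: ((max gust mph, wet-soil flag), outages).
storms = [((30, 0), 120), ((45, 0), 300), ((50, 1), 700),
          ((60, 1), 1500), ((70, 1), 2600), ((40, 1), 420)]
forecast = bagged_predict(storms, (65, 1))
```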

  8. Ways to help Chinese Students in Senior High School improve language accuracy in writing

    Institute of Scientific and Technical Information of China (English)

    潘惠红

    2015-01-01

    Introduction In Chinese ELT (English language teaching), as in other countries, both fluency and accuracy are considered important in both the teaching and the assessment of writing. In this respect, the last decade has seen reforms in the College Entrance Examination in Guangdong Province. With two writing tasks being set as assessment, task one requires students to summarise Chinese language information into five English sentences while the

  9. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections

    Science.gov (United States)

    Strymish, Judith; Gupta, Kalpana

    2015-01-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. PMID:26416859

  10. The effect of written corrective feedback on grammatical accuracy of EFL students: An improvement over previous unfocused designs

    Directory of Open Access Journals (Sweden)

    Mobin Khanlarzadeh

    2016-07-01

    Full Text Available The effectiveness of written corrective feedback (WCF) in the improvement of language learners' grammatical accuracy has been a topic of interest in SLA studies for the past couple of decades. The present study reports the findings of a three-month study investigating the effect of direct unfocused WCF on the grammatical accuracy of elementary students in an EFL context. The researchers selected two intact classes totaling 33 students, and assigned each to a direct feedback group (n = 16) and a control group (n = 17). The students produced eight pieces of writing (a pretest, three writing tasks along with their revisions, and a posttest) from which their grammatical accuracy was obtained. The results indicated that while the experimental group significantly outperformed the control group in the revision of the three writing tasks, no significant difference was found when the two groups produced a new piece of writing after a one-month interval. The study concludes that accuracy improvement caused by unfocused WCF during the revision process does not extend to EFL learners' future writing when no feedback is available, at least at the elementary level.

  11. An improved assay for the determination of Huntington's disease allele size

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, C.; Klinger, K.; Miller, G. [Integrated Genetics, Framingham, MA (United States)

    1994-09-01

    The hallmark of Huntington's disease (HD) is the expansion of a polymorphic (CAG)n repeat. Several methods have been published describing PCR amplification of this region. Most of these assays require a complex PCR reaction mixture to amplify this GC-rich region. A consistent problem with trinucleotide repeat PCR amplification is the presence of a number of "stutter bands" which may be caused by primer or amplicon slippage during amplification or insufficient polymerase processivity. Most assays for HD arbitrarily select a particular band for diagnostic purposes. Without a clear choice for band selection, such an arbitrary selection may result in inconsistent intra- or inter-laboratory findings. We present an improved protocol for the amplification of the HD trinucleotide repeat region. This method simplifies the PCR reaction buffer and results in a set of easily identifiable bands from which to determine allele size. HD alleles were identified by selecting bands of clearly greater signal intensity. Stutter banding was much reduced, thus permitting easy identification of the most relevant PCR product. A second set of primers internal to the CCG polymorphism was used in selected samples to confirm allele size. The mechanism of action of N,N,N-trimethylglycine in the PCR reaction is not clear. It may be possible that the minimal isostabilizing effect of N,N,N-trimethylglycine at 2.5 M is significant enough to affect primer specificity. The use of N,N,N-trimethylglycine in the PCR reaction facilitated identification of HD alleles and may be appropriate for use in other assays of this type.

  12. USE SATELLITE IMAGES AND IMPROVE THE ACCURACY OF HYPERSPECTRAL IMAGE WITH THE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Javadi

    2015-12-01

    Full Text Available Classification is the principal technique for extracting information from remotely sensed images. The problem with traditional classification methods is that each pixel is assigned to a single class, on the assumption that all pixels within the image are pure. Mixed-pixel classification, or spectral unmixing, is a process that extracts the proportions of the pure components of each mixed pixel. Hyperspectral images have higher spectral resolution than multispectral images. In this paper, pixel-based classification methods, namely the spectral angle mapper and maximum likelihood classification, and a subpixel classification method (linear spectral unmixing) were implemented on AVIRIS hyperspectral images. Pixel-based and subpixel-based classification algorithms were then compared, and the capabilities and advantages of the linear spectral unmixing method were investigated. The spectral unmixing method implemented here is an effective technique for classifying a hyperspectral image, giving a classification accuracy of about 89%. The results of classification applied to the original images are poor because some of the hyperspectral bands are subject to absorption and contain only little signal, so the data must be prepared at the beginning of the process. The bands can be sorted according to their variance; in bands with high variance the features can be distinguished from each other more clearly, which increases the classification accuracy. Applying the MNF transformation to the hyperspectral images also increased the individual class accuracies of the pixel-based classification methods and of the unmixing method by about 20 and 9 percent, respectively.
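The linear unmixing model described above solves E·a ≈ pixel for the abundance vector a of each pixel. A minimal sketch, using hypothetical endmember spectra rather than AVIRIS data:

```python
# Minimal linear spectral unmixing sketch with hypothetical endmember
# spectra: solve E @ a ≈ pixel for the abundance vector a.
import numpy as np

# Rows: spectral bands; columns: pure materials (endmembers).
E = np.array([[0.9, 0.1, 0.3],
              [0.8, 0.2, 0.5],
              [0.2, 0.9, 0.4],
              [0.1, 0.8, 0.6]])
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund               # noise-free mixed pixel

# Least-squares unmixing, then clip to non-negative and renormalize so
# the abundances sum to one.
a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
a = np.clip(a, 0.0, None)
a /= a.sum()
```

For a noise-free pixel the least-squares solution recovers the true abundances exactly; with real sensor noise, constrained solvers (non-negative least squares with a sum-to-one constraint) are typically used instead.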

  13. Simulated single-cycle kinetics improves the design of surface plasmon resonance assays.

    Science.gov (United States)

    Palau, William; Di Primo, Carmelo

    2013-09-30

    Instruments based on the surface plasmon resonance (SPR) principle are widely used to monitor in real time molecular interactions between a partner immobilized on a sensor chip surface and another injected in a continuous flow of buffer. In a classical SPR experiment, several cycles of binding and regeneration of the surface are performed in order to determine the rate and equilibrium constants of the reaction. In 2006, Karlsson and co-workers introduced a new method named single-cycle kinetics (SCK) to perform SPR assays. The method consists of injecting sequentially increasing concentrations of the partner in solution, with only one regeneration step performed at the end of the complete binding cycle. A 10 base-pair DNA duplex was characterized kinetically to show how simulated sensorgrams generated by the BiaEvaluation software provided by Biacore™ can improve the design of SPR assays performed with the SCK method. The DNA duplex was investigated at three temperatures, 10, 20 and 30 °C, to analyze fast and slow rate constants. The results show that after a short obligatory preliminary experiment, simulations provide users with the best experimental conditions to be used, in particular the maximum concentration needed to reach saturation, the dilution factor for the serial dilutions of the injected sample, and the duration of the dissociation and association phases. The use of simulated single-cycle kinetics saves time and reduces sample consumption. Simulations can also be used to design SPR experiments with ternary complexes.
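A single-cycle kinetics sensorgram for a simple 1:1 interaction can be simulated from the standard rate equation dR/dt = ka·C·(Rmax − R) − kd·R. The sketch below uses forward Euler integration with illustrative rate constants and concentrations, not values from the paper or the BiaEvaluation software:

```python
# Sketch of a simulated single-cycle kinetics sensorgram for a 1:1 model,
# dR/dt = ka*C*(Rmax - R) - kd*R, integrated by forward Euler. All rate
# constants and concentrations are illustrative.

def sck_sensorgram(ka, kd, rmax, concs, t_inj, dt=0.1):
    """Response trace for sequential injections with no regeneration."""
    r, trace = 0.0, []
    for c in concs:                      # increasing analyte concentrations
        for _ in range(int(t_inj / dt)):
            r += dt * (ka * c * (rmax - r) - kd * r)
            trace.append(r)
    return trace

# Five 3-fold concentration steps injected low-to-high for 120 s each.
trace = sck_sensorgram(ka=1e5, kd=1e-3, rmax=100.0,
                       concs=[1e-8, 3e-8, 9e-8, 27e-8, 81e-8], t_inj=120)
```

With KD = kd/ka = 10 nM here, the top concentration (810 nM) drives the surface close to saturation; choosing that top concentration and the step durations is exactly what the simulated-SCK approach helps decide before consuming sample.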

  14. An improved method for utilization of peptide substrates for antibody characterization and enzymatic assays.

    Science.gov (United States)

    Ghosh, Inca; Sun, Luo; Evans, Thomas C; Xu, Ming-Qun

    2004-10-01

    Synthetic peptides have become an important tool in antibody production and enzyme characterization. The small size of peptides, however, has hindered their use in assay systems, such as Western blots, and as immunogens. Here, we present a facile method to improve the properties of peptides for multiple applications by ligating the peptides to intein-generated carrier proteins. The stoichiometric ligation of peptide and carrier achieved by intein-mediated protein ligation (IPL) results in the ligation product migrating as a single band on an SDS-PAGE gel. The carrier proteins, HhaI methylase (M.HhaI) and maltose-binding protein (MBP), were ligated to various peptides; the ligated carrier-peptide products gave sharp, reproducible bands when used as positive controls for antibodies raised against the same peptides during Western blot analysis. We further show that ligation of the peptide antigens to a different thioester-tagged carrier protein, paramyosin, produced immunogens for the production of antisera in rabbits or mice. Furthermore, we demonstrate the generation of a substrate for enzymatic assays by ligating a peptide containing the phosphorylation site for Abl protein tyrosine kinase to a carrier protein. This carrier-peptide protein was used as a kinase substrate that could easily be tested for phosphorylation using a phosphotyrosine antibody in Western blot analysis. These techniques do not require sophisticated equipment, reagents, or skills, thereby providing a simple method for research and development.

  15. Using expected sequence features to improve basecalling accuracy of amplicon pyrosequencing data

    DEFF Research Database (Denmark)

    Rask, Thomas Salhøj; Petersen, Bent; Chen, Donald S.

    2016-01-01

    insertions and deletions, are on the other hand likely to disrupt open reading frames. Such an inverse relationship between errors and expectation based on prior knowledge can be used advantageously to guide the process known as basecalling, i.e. the inference of nucleotide sequence from raw sequencing data...... family, where Multipass generates 20 % more error-free sequences than current state of the art methods, and provides sequence characteristics that allow generation of a set of high confidence error-free sequences. This novel method can be used to increase accuracy of existing and future amplicon...

  16. Evaluation of an improved orthognathic articulator system: 1. Accuracy of cast orientation.

    Science.gov (United States)

    Paul, P E; Barbenel, J C; Walker, F S; Khambay, B S; Moos, K F; Ayoub, A F

    2012-02-01

    A systematic study was carried out using plastic model skulls to quantify the accuracy of the transfer of face-bow registration to the articulator. A standard Dentatus semi-adjustable articulator system was compared to a purpose-built orthognathic articulator system by measuring the maxillary occlusal plane angles of plastic model skulls and of dental casts mounted on the two different types of articulators. There was a statistically significant difference between the two systems; the orthognathic system showed small random errors, but the standard system showed systematic errors of up to 28°.

  17. Use of the Isabel Decision Support System to Improve Diagnostic Accuracy of Pediatric Nurse Practitioner and Family Nurse Practitioner Students

    OpenAIRE

    John, Rita Marie; Hall, Elizabeth; Bakken, Suzanne

    2012-01-01

    Patient safety is a priority for healthcare today. Despite a large proportion of malpractice claims the result of diagnostic error, the use of diagnostic decision support to improve diagnostic accuracy has not been widely used among healthcare professionals. Moreover, while the use of diagnostic decision support has been studied in attending physicians, residents, medical students and advanced practice nurses, the use of decision support among Advanced Practice Nurse (APN) students has not be...

  18. Improvements are needed in reporting of accuracy studies for diagnostic tests used for detection of finfish pathogens.

    Science.gov (United States)

    Gardner, Ian A; Burnley, Timothy; Caraguel, Charles

    2014-12-01

    Indices of test accuracy, such as diagnostic sensitivity and specificity, are important considerations in test selection for a defined purpose (e.g., screening or confirmation) and affect the interpretation of test results. Many biomedical journals recommend that authors clearly and transparently report test accuracy studies following the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines ( www.stard-statement.org ). This allows readers to evaluate overall study validity and assess potential bias in diagnostic sensitivity and specificity estimates. The purpose of the present study was to evaluate the reporting quality of studies evaluating test accuracy for finfish diseases using the 25 items in the STARD checklist. Based on a database search, 11 studies that included estimates of diagnostic accuracy were identified for independent evaluation by three reviewers. For each study, STARD checklist items were scored as "yes," "no," or "not applicable." Only 10 of the 25 items were consistently reported in most (≥80%) papers, and reporting of the other items was highly variable (mostly between 30% and 60%). Three items ("number, training, and expertise of readers and testers"; "time interval between index tests and reference standard"; and "handling of indeterminate results, missing data, and outliers of the index tests") were reported in less than 10% of papers. Two items ("time interval between index tests and reference standard" and "adverse effects from testing") were considered minimally relevant to fish health because test samples usually are collected postmortem. Modification of STARD to fit finfish studies should increase use by authors and thereby improve the overall reporting quality regardless of how the study was designed. Furthermore, the use of STARD may lead to the improved design of future studies.

  19. Quantification of terrestrial laser scanner (TLS) elevation accuracy in oil palm plantation for IFSAR improvement

    Science.gov (United States)

    Muhadi, N. A.; Abdullah, A. F.; Kassim, M. S. M.

    2016-06-01

    In order to ensure that oil palm productivity is high, the plantation site should be chosen wisely. Slope is one of the essential factors that need to be taken into consideration during site selection. A high-quality map of the plantation area with elevation information is needed for decision-making, especially when dealing with hilly and steep terrain. Therefore, accurate digital elevation models (DEMs) are required. This research aims to increase the accuracy of Interferometric Synthetic Aperture Radar (IFSAR) by integrating Terrestrial Laser Scanner (TLS) data to generate DEMs. The focus of this paper, however, is to evaluate the z-value accuracy of TLS data against Real-Time Kinematic GPS (RTK-GPS) as a reference. This paper also studied the importance of the filtering process in developing accurate DEMs. From this study, it has been concluded that the differences in z-values between TLS and IFSAR were small when the points were located on a route and when the TLS data had been filtered. This paper also concludes that the laser scanner (TLS) should be set up on the route to reduce elevation error.

  20. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    Science.gov (United States)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.

  1. PCA3 and PCA3-Based Nomograms Improve Diagnostic Accuracy in Patients Undergoing First Prostate Biopsy

    Directory of Open Access Journals (Sweden)

    Virginie Vlaeminck-Guillem

    2013-08-01

    Full Text Available While now recognized as an aid to predict repeat prostate biopsy outcome, the urinary PCA3 (prostate cancer gene 3) test has also been recently advocated to predict initial biopsy results. The objective is to evaluate the performance of the PCA3 test in predicting results of initial prostate biopsies and to determine whether its incorporation into specific nomograms reinforces its diagnostic value. A prospective study included 601 consecutive patients addressed for initial prostate biopsy. The PCA3 test was performed before a ≥12-core initial prostate biopsy, along with standard risk factor assessment. Diagnostic performance of the PCA3 test was evaluated. The three available nomograms (Hansen's and Chun's nomograms, as well as the updated Prostate Cancer Prevention Trial risk calculator, PCPT) were applied to the cohort, and their predictive accuracies were assessed in terms of biopsy outcome: the presence of any prostate cancer (PCa) and high-grade prostate cancer (HGPCa). The PCA3 score provided significant predictive accuracy. While the PCPT risk calculator appeared less accurate, both Chun's and Hansen's nomograms provided good calibration and high net benefit on decision curve analyses. When applying nomogram-derived PCa probability thresholds ≤30%, ≤6% of HGPCa would have been missed, while avoiding up to 48% of unnecessary biopsies. The urinary PCA3 test and PCA3-incorporating nomograms can be considered reliable tools to aid in the initial biopsy decision.

  2. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    Science.gov (United States)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  3. Improvement of registration accuracy of a handheld augmented reality system for urban landscape simulation

    Directory of Open Access Journals (Sweden)

    Tomohiro Fukuda

    2014-12-01

    Full Text Available The need for visual landscape assessment in large-scale projects, to evaluate the effects of a particular project on the surrounding landscape, has grown in recent years. Augmented reality (AR) has been considered for use as a landscape simulation system in which a landscape assessment object created from 3D models is included in the present surroundings. With the use of this system, the time and the cost needed to perform 3DCG modeling of the present surroundings, which is a major issue in virtual reality, are drastically reduced. This research presents the development of a 3D map-oriented handheld AR system that achieves geometric consistency using a 3D map to obtain position data instead of GPS, which has low positional accuracy, particularly in urban areas. The new system also features a gyroscope sensor to obtain posture data and a video camera to capture live video of the present surroundings. All these components are mounted in a smartphone and can be used for urban landscape assessment. Registration accuracy was evaluated for simulating an urban landscape from a short- to a long-range scale, the latter involving a distance of approximately 2000 m; the accuracy achieved is within the tolerance level of landscape assessment. The developed AR system enables users to simulate a landscape from multiple and long-distance viewpoints simultaneously and to walk around the viewpoint fields using only a smartphone. In conclusion, the proposed method is evaluated as feasible and effective.

  4. Development of an Automated Bone Mineral Density Software Application: Facilitation Radiologic Reporting and Improvement of Accuracy.

    Science.gov (United States)

    Tsai, I-Ta; Tsai, Meng-Yuan; Wu, Ming-Ting; Chen, Clement Kuen-Huang

    2016-06-01

    The conventional method of bone mineral density (BMD) report production by dictation and transcription is time consuming and prone to error. We developed an automated BMD reporting system based on the raw data from a dual-energy X-ray absorptiometry (DXA) scanner to facilitate report generation. The automated BMD reporting system, a web application, digests the DXA raw data and automatically generates preliminary reports. In Jan. 2014, 500 examinations were randomized into an automatic group (AG) and a manual group (MG), and the speed of report generation was compared. For evaluation of the accuracy and analysis of errors, 5120 examinations during Jan. 2013 and Dec. 2013 were enrolled retrospectively, and the content of the automatically generated reports (AR) was compared with the formal manual reports (MR). The average time spent for report generation in AG and in MG was 264 and 1452 s, respectively (p < 0.001). The completeness of T and Z scores in AR is 100 %. The overall accuracy of AR and MR is 98.8 and 93.7 %, respectively (p < 0.001). The mis-categorization rate in AR and MR is 0.039 and 0.273 %, respectively (p = 0.0013). Errors occurring in AR can be grouped into key-in errors by technicians and cases needing additional judgement. We constructed an efficient and reliable automated BMD reporting system. It facilitates current clinical service and potentially prevents human errors from technicians, transcriptionists, and radiologists.
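The deterministic part such a reporting system automates can be as simple as mapping DXA T-scores to the standard WHO categories. This is a generic sketch of that rule, not the authors' implementation:

```python
# Generic sketch of rule-based DXA report categorization using the
# standard WHO T-score cut-offs (illustrative only).

def who_category(t_score: float) -> str:
    """WHO DXA category from a T-score."""
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"     # low bone mass
    return "osteoporosis"
```

Encoding the cut-offs once, rather than re-deriving them during dictation, is what removes the transcription-style mis-categorization errors the study measured.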

  5. Use of the isabel decision support system to improve diagnostic accuracy of pediatric nurse practitioner and family nurse practitioner students.

    Science.gov (United States)

    John, Rita Marie; Hall, Elizabeth; Bakken, Suzanne

    2012-01-01

    Patient safety is a priority for healthcare today. Despite a large proportion of malpractice claims the result of diagnostic error, the use of diagnostic decision support to improve diagnostic accuracy has not been widely used among healthcare professionals. Moreover, while the use of diagnostic decision support has been studied in attending physicians, residents, medical students and advanced practice nurses, the use of decision support among Advanced Practice Nurse (APN) students has not been studied. The authors have implemented the Isabel diagnostic decision support system into the curriculum and are evaluating its impact. The goals of the evaluation study are to describe the diagnostic accuracy and self-reported confidence levels of Pediatric Nurse Practitioner (PNP) and Family Nurse Practitioner (FNP) students over the course of their programs, to examine changes in diagnostic accuracy and self-reported confidence levels over the study period, and to evaluate differences between FNP and PNP students in diagnostic accuracy and self-reported confidence levels for pediatric cases. This paper summarizes establishment of the academic/industry collaboration, case generation, integration of Isabel into the curriculum, and evaluation design.

  6. High-accuracy extrapolated ab initio thermochemistry. II. Minor improvements to the protocol and a vital simplification

    Science.gov (United States)

    Bomble, Yannick J.; Vázquez, Juana; Kállay, Mihály; Michauk, Christine; Szalay, Péter G.; Császár, Attila G.; Gauss, Jürgen; Stanton, John F.

    2006-08-01

    The recently developed high-accuracy extrapolated ab initio thermochemistry method for theoretical thermochemistry, which is intimately related to other high-precision protocols such as the Weizmann-3 and focal-point approaches, is revisited. Some minor improvements in theoretical rigor are introduced which do not lead to any significant additional computational overhead, but are shown to have a negligible overall effect on the accuracy. In addition, the method is extended to completely treat electron correlation effects up to pentuple excitations. The use of an approximate treatment of quadruple and pentuple excitations is suggested; the former as a pragmatic approximation for standard cases and the latter when extremely high accuracy is required. For a test suite of molecules that have rather precisely known enthalpies of formation {as taken from the active thermochemical tables of Ruscic and co-workers [Lecture Notes in Computer Science, edited by M. Parashar (Springer, Berlin, 2002), Vol. 2536, pp. 25-38; J. Phys. Chem. A 108, 9979 (2004)]}, the largest deviations between theory and experiment are 0.52, -0.70, and 0.51kJmol-1 for the latter three methods, respectively. Some perspective is provided on this level of accuracy, and sources of remaining systematic deficiencies in the approaches are discussed.

  7. [Improvement of sensitivity in the second generation HCV core antigen assay by a novel concentration method using polyethylene glycol (PEG)].

    Science.gov (United States)

    Higashimoto, Makiko; Takahashi, Masahiko; Jokyu, Ritsuko; Syundou, Hiromi; Saito, Hidetsugu

    2007-11-01

    A HCV core antigen (Ag) detection assay system, Lumipulse Ortho HCV Ag, has been developed and is commercially available in Japan, with a lower detection limit of 50 fmol/l, which is equivalent to 20 KIU/ml in the quantitative PCR assay. The HCV core Ag assay has the advantage of a broader dynamic range compared with the PCR assay, but its sensitivity is lower. We developed a novel HCV core Ag concentration method using polyethylene glycol (PEG), which improves the sensitivity fivefold over the original assay. Reproducibility was examined by five consecutive measurements of HCV patient serum, in which the results of the original and concentrated HCV core Ag methods were 56.8 +/- 8.1 fmol/l (mean +/- SD), CV 14.2%, and 322.9 +/- 45.5 fmol/l, CV 14.0%, respectively. The assay results of HCV-negative samples in the original HCV core Ag assay were all 0.1 fmol/l, and the results were the same with the concentration method. The results of the concentration method were 5.7 times higher than the original assay, which was almost equal to the theoretically expected factor. The assay results of serially diluted samples also matched the expected values in both the original and concentration assays. We confirmed that the concentrated HCV core Ag method had almost the same sensitivity as the PCR high-range assay in a comparative study using serially monitored samples from five HCV patients during interferon therapy. A novel concentration method using PEG in the HCV core Ag assay system seems to be useful for assessing and monitoring interferon treatment for HCV.

  8. Improved Accuracy of Density Functional Theory Calculations for CO2 Reduction and Metal-Air Batteries

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    .e. the electrocatalytic reduction of CO2 and metal-air batteries. In theoretical studies of electrocatalytic CO2 reduction, calculated DFT-level enthalpies of reaction for CO2 reduction to various products are significantly different from experimental values[1-3]. In theoretical studies of metal-air battery reactions......, but compared to determine patterns in functional dependence. The method is exemplified by ensemble comparison of reaction enthalpy to methanol and formic acid depicted in Figure 1. The functional dependence on the calculated reaction enthalpy to methanol is twice as large as that to formic acid. This suggests...... errors in DFT-level computational electrocatalytic CO2 reduction is hence identified. The new insight adds increased accuracy e.g., for reaction to formic acid, where the experimental enthalpy of reaction is 0.15 eV. Previously, this enthalpy has been calculated without and with correctional approaches...

  9. Reconciling multiple data sources to improve accuracy of large-scale prediction of forest disease incidence

    Science.gov (United States)

    Hanks, E.M.; Hooten, M.B.; Baker, F.A.

    2011-01-01

    Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. © 2011 by the Ecological Society of America.
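The "borrowing strength" idea can be conveyed by a much-simplified conjugate sketch: treat the small pest-specific survey as binomial data and let the large operational survey (which under-detects) supply only a weak prior. All counts and pseudo-counts below are hypothetical; the paper's actual model is a full spatial hierarchical model, not this two-line update:

```python
# Purely illustrative Beta-Binomial flavor of reconciling two surveys.

def beta_posterior(alpha0, beta0, hits, n):
    """Conjugate Beta-Binomial update."""
    return alpha0 + hits, beta0 + (n - hits)

# Weak prior centred near the operational survey's 11% detection rate,
# worth about 10 pseudo-stands.
alpha0, beta0 = 1.1, 8.9
# Hypothetical pest-specific survey: 28 of 50 stands infested (56%).
a, b = beta_posterior(alpha0, beta0, hits=28, n=50)
post_mean = a / (a + b)
```

The posterior mean (about 0.49) lands between the two raw survey estimates, which is the qualitative behavior the hierarchical framework formalizes while also accounting for spatial structure and differing detection accuracy.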

  10. The Application of Digital Pathology to Improve Accuracy in Glomerular Enumeration in Renal Biopsies.

    Directory of Open Access Journals (Sweden)

    Avi Z Rosenberg

    Full Text Available In renal biopsy reporting, quantitative measurements, such as glomerular number and percentage of globally sclerotic glomeruli, are central to diagnostic accuracy and prognosis. The aim of this study is to determine the number of glomeruli and percent globally sclerotic in renal biopsies by means of registration of serial tissue sections and manual enumeration, compared to the numbers in pathology reports from routine light microscopic assessment. We reviewed 277 biopsies from the Nephrotic Syndrome Study Network (NEPTUNE) digital pathology repository, enumerating 9,379 glomeruli by means of whole slide imaging (WSI). Glomerular number and the percentage of globally sclerotic glomeruli are values routinely recorded in the official renal biopsy pathology report from the 25 participating centers. Two general trends in reporting were noted: total number per biopsy or average number per level/section. Both of these approaches were assessed for their accuracy in comparison to the analogous numbers of annotated glomeruli on WSI. The number of glomeruli annotated was consistently higher than those reported (p<0.001); this difference was proportional to the number of glomeruli. In contrast, the percentages of globally sclerotic glomeruli were similar when calculated on total glomeruli, but greater in FSGS when calculated on average number of glomeruli (p<0.01). The difference in percent globally sclerotic between annotated glomeruli and those recorded in pathology reports was significant when global sclerosis is greater than 40%. Although glass slides were not available for direct comparison to whole slide image annotation, this study indicates that routine manual light microscopy assessment of the number of glomeruli is inaccurate, and the magnitude of this error is proportional to the total number of glomeruli.

  11. Predicting sulfotyrosine sites using the random forest algorithm with significantly improved prediction accuracy

    Directory of Open Access Journals (Sweden)

    Yang Zheng

    2009-10-01

    Full Text Available Abstract Background Tyrosine sulfation is one of the most important posttranslational modifications. Due to its relevance to various disease developments, tyrosine sulfation has become the target for drug design. In order to facilitate efficient drug design, accurate prediction of sulfotyrosine sites is desirable. A predictor published seven years ago has been very successful with claimed prediction accuracy of 98%. However, it has a particularly low sensitivity when predicting sulfotyrosine sites in some newly sequenced proteins. Results A new approach has been developed for predicting sulfotyrosine sites using the random forest algorithm after a careful evaluation of seven machine learning algorithms. Peptides are formed by consecutive residues symmetrically flanking tyrosine sites. They are then encoded using an amino acid hydrophobicity scale. This new approach has increased the sensitivity by 22%, the specificity by 3%, and the total prediction accuracy by 10% compared with the previous predictor using the same blind data. Meanwhile, both negative and positive predictive powers have been increased by 9%. In addition, the random forest model has an excellent feature for ranking the residues flanking tyrosine sites, hence providing more information for further investigating the tyrosine sulfation mechanism. A web tool has been implemented at http://ecsb.ex.ac.uk/sulfotyrosine for public use. Conclusion The random forest algorithm is able to deliver a better model compared with the Hidden Markov Model, the support vector machine, artificial neural networks, and others for predicting sulfotyrosine sites. The success shows that the random forest algorithm together with an amino acid hydrophobicity scale encoding can be a good candidate for peptide classification.
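
    A sketch of the peptide-encoding step described above, assuming a symmetric four-residue flank and the standard Kyte-Doolittle hydropathy scale (the abstract does not specify which hydrophobicity scale or window size was used); vectors built this way would be the input to a random forest classifier such as scikit-learn's `RandomForestClassifier`:

```python
# Kyte-Doolittle hydropathy values (standard published scale)
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def encode_site(seq, tyr_index, flank=4):
    """Feature vector for the window of `flank` residues on each side of a
    tyrosine; positions running off the sequence are padded with 0.0."""
    assert seq[tyr_index] == "Y"
    feats = []
    for offset in range(-flank, flank + 1):
        if offset == 0:
            continue                      # the central Y itself carries no signal
        i = tyr_index + offset
        feats.append(KD[seq[i]] if 0 <= i < len(seq) else 0.0)
    return feats

X = encode_site("MDSEYDLLAK", 4)   # 8 hydropathy features for the Y at index 4
```

    The sequence and flank width here are hypothetical; the point is that each candidate site becomes a fixed-length numeric vector suitable for tree-based classification and for ranking flanking positions by importance.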

  12. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    Science.gov (United States)

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  13. Improving the accuracy of macroeconomic forecasts made by National Commission of Prognosis and Institute of Economic Forecasting for Romania

    Directory of Open Access Journals (Sweden)

    Mihaela Bratu (Simionescu)

    2012-01-01

    Full Text Available In this article, the accuracy of forecasts for the inflation rate, unemployment, exchange rate, and GDP index provided by the Institute of Economic Forecasting (IEF) and the National Commission of Prognosis (NCP) was assessed for the forecasting horizon 2004-2011. The hypothesis that combining forecasts is a suitable strategy for improving prediction accuracy was tested. Only for the unemployment rate did the combined forecasts based on IEF and NCP evaluations perform better than the initial forecasts. For inflation and the exchange rate, the Dobrescu model of IEF provided better predictions, but the combined ones were more accurate than the NCP expectations. The Dobrescu model predictions combined with ARMA static, respectively dynamic, forecasts, and the NCP estimations combined with ARMA static prognoses, respectively Dobrescu forecasts, using the EQ scheme for unemployment on a horizon of 2 years (2010-2011), improved the accuracy of the forecasts made by both institutions, the combined predictions based on Dobrescu predictions and ARMA static ones using the OPT scheme being the most accurate, according to Theil's U1 statistic.
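
    The EQ (equal-weight) and OPT (error-optimal weight) combination schemes mentioned above, together with Theil's U1 accuracy measure, can be sketched as follows; the forecast series are made-up numbers, not the Romanian data:

```python
import numpy as np

def theil_u1(actual, forecast):
    """Theil's U1 inequality coefficient: 0 = perfect forecast, 1 = worst."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    rmse = np.sqrt(np.mean((actual - forecast) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2)))

def combine_eq(f1, f2):
    """EQ scheme: simple average of the two institutions' forecasts."""
    return (np.asarray(f1, float) + np.asarray(f2, float)) / 2.0

def combine_opt(f1, f2, actual):
    """OPT scheme (in-sample): weight w minimizing the squared error of
    w*f1 + (1-w)*f2, from the first-order condition of the quadratic."""
    e1 = np.asarray(actual, float) - np.asarray(f1, float)
    e2 = np.asarray(actual, float) - np.asarray(f2, float)
    w = np.sum(e2 * (e2 - e1)) / np.sum((e2 - e1) ** 2)
    return w * np.asarray(f1, float) + (1 - w) * np.asarray(f2, float)

# Illustrative unemployment-rate series (hypothetical values, in percent)
actual = np.array([6.8, 7.2, 7.0, 7.4])
ief    = np.array([6.5, 7.0, 7.3, 7.1])
ncp    = np.array([7.2, 7.6, 6.8, 7.8])
u_eq  = theil_u1(actual, combine_eq(ief, ncp))
u_opt = theil_u1(actual, combine_opt(ief, ncp, actual))
```

    By construction the in-sample OPT combination can do no worse than the better of its two inputs, which is why combining pays off exactly where neither institution dominates the other.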

  14. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker.

    Science.gov (United States)

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse of the root sum square of the Cramér-Rao bound and the focal length drift error of each tracked star as the weight, to enhance the pointing accuracy of a star tracker. In this technique, stars that are brighter, at a low angular rate, or located towards the center of the star field are given a higher weight in the attitude determination process, and thus the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, particularly for sky zones with a non-uniform distribution of stars. Moreover, by using the iteratively weighted center of gravity algorithm as the new centroiding method for the QUEST algorithm, the attitude uncertainty can be further reduced, to 44% of its original value, with a negligible additional computing load.
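
    The weighting rule itself is simple to sketch: each tracked star gets the inverse of the root sum square of its Cramér-Rao bound and focal-length-drift error, and the normalized weights then replace the equal weights in the QUEST/Wahba cost function. The error values below are hypothetical:

```python
import numpy as np

def star_weights(crb, drift):
    """Non-average weights: inverse root-sum-square of each star's
    Cramer-Rao bound and focal-length drift error (same angular units)."""
    crb, drift = np.asarray(crb, float), np.asarray(drift, float)
    w = 1.0 / np.sqrt(crb ** 2 + drift ** 2)
    # normalized weights a_i for Wahba's loss sum_i a_i * |b_i - A @ r_i|^2
    return w / w.sum()

# Brighter stars near the field center have smaller error bounds, so they
# dominate the attitude solution (values in arcsec, made up for illustration).
w = star_weights(crb=[0.5, 1.0, 2.0], drift=[0.2, 0.4, 1.0])
```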

  15. Intellijoint HIP®: a 3D mini-optical navigation tool for improving intraoperative accuracy during total hip arthroplasty

    Science.gov (United States)

    Paprosky, Wayne G; Muir, Jeffrey M

    2016-01-01

    Total hip arthroplasty is an increasingly common procedure used to address degenerative changes in the hip joint due to osteoarthritis. Although generally associated with good results, among the challenges associated with hip arthroplasty are accurate measurement of biomechanical parameters such as leg length, offset, and cup position, discrepancies of which can lead to significant long-term consequences such as pain, instability, neurological deficits, dislocation, and revision surgery, as well as patient dissatisfaction and, increasingly, litigation. Current methods of managing these parameters are limited, with manual methods such as outriggers or calipers being used to monitor leg length; however, these are susceptible to small intraoperative changes in patient position and are therefore inaccurate. Computer-assisted navigation, while offering improved accuracy, is expensive and cumbersome, in addition to adding significantly to procedural time. To address the technological gap in hip arthroplasty, a new intraoperative navigation tool (Intellijoint HIP®) has been developed. This innovative, 3D mini-optical navigation tool provides real-time, intraoperative data on leg length, offset, and cup position and allows for improved accuracy and precision in component selection and alignment. Benchtop and simulated clinical use testing have demonstrated excellent accuracy, with the navigation tool able to measure leg length and offset to within <1 mm and cup position to within <1° in both anteversion and inclination. This study describes the indications, procedural technique, and early accuracy results of the Intellijoint HIP surgical tool, which offers an accurate and easy-to-use option for hip surgeons to manage leg length, offset, and cup position intraoperatively. PMID:27920583

  16. Intellijoint HIP®: a 3D mini-optical navigation tool for improving intraoperative accuracy during total hip arthroplasty

    Directory of Open Access Journals (Sweden)

    Paprosky WG

    2016-11-01

    Full Text Available Wayne G Paprosky,1,2 Jeffrey M Muir3 1Department of Orthopedics, Section of Adult Joint Reconstruction, Department of Orthopedics, Rush University Medical Center, Rush–Presbyterian–St Luke’s Medical Center, Chicago, 2Central DuPage Hospital, Winfield, IL, USA; 3Intellijoint Surgical, Inc, Waterloo, ON, Canada Abstract: Total hip arthroplasty is an increasingly common procedure used to address degenerative changes in the hip joint due to osteoarthritis. Although generally associated with good results, among the challenges associated with hip arthroplasty are accurate measurement of biomechanical parameters such as leg length, offset, and cup position, discrepancies of which can lead to significant long-term consequences such as pain, instability, neurological deficits, dislocation, and revision surgery, as well as patient dissatisfaction and, increasingly, litigation. Current methods of managing these parameters are limited, with manual methods such as outriggers or calipers being used to monitor leg length; however, these are susceptible to small intraoperative changes in patient position and are therefore inaccurate. Computer-assisted navigation, while offering improved accuracy, is expensive and cumbersome, in addition to adding significantly to procedural time. To address the technological gap in hip arthroplasty, a new intraoperative navigation tool (Intellijoint HIP®) has been developed. This innovative, 3D mini-optical navigation tool provides real-time, intraoperative data on leg length, offset, and cup position and allows for improved accuracy and precision in component selection and alignment. Benchtop and simulated clinical use testing have demonstrated excellent accuracy, with the navigation tool able to measure leg length and offset to within <1 mm and cup position to within <1° in both anteversion and inclination. This study describes the indications, procedural technique, and early accuracy results of the Intellijoint HIP

  17. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics

    Science.gov (United States)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    In 1977 and 1978, general-purpose, centrally space-differenced implicit finite difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since the introduction of the ARC2D and ARC3D codes, overall computational efficiency has been improved by a number of algorithmic changes. These changes are related to the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, implementation of various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained which is competitive with specialized codes.
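
    Of the algorithmic changes listed, the spatially varying time step is the easiest to illustrate: for steady-state convergence each cell may advance at its own local stability limit rather than at the global worst case. A minimal 1-D sketch with a CFL-style limit (the numbers are hypothetical, not from ARC2D/ARC3D):

```python
import numpy as np

def local_time_steps(u, c, dx, cfl=0.9):
    """Spatially varying time step: each cell's dt is set by its own
    convective-plus-acoustic speed. Steady-state accuracy is unaffected,
    but convergence accelerates in slow-flow regions."""
    return cfl * dx / (np.abs(u) + c)

u = np.array([50.0, 200.0, 400.0])    # local flow speeds, m/s (illustrative)
c = np.array([340.0, 330.0, 310.0])   # local sound speeds, m/s
dt = local_time_steps(u, c, dx=0.01)  # one dt per cell, not one global dt
```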

  18. Iterative error correction of long sequencing reads maximizes accuracy and improves contig assembly

    Science.gov (United States)

    Sameith, Katrin; Roscito, Juliana G.

    2017-01-01

    Next-generation sequencers such as Illumina can now produce reads up to 300 bp with high throughput, which is attractive for genome assembly. A first step in genome assembly is to computationally correct sequencing errors. However, correcting all errors in these longer reads is challenging. Here, we show that reads with remaining errors after correction often overlap repeats, where short erroneous k-mers occur in other copies of the repeat. We developed an iterative error correction pipeline that runs the previously published String Graph Assembler (SGA) in multiple rounds of k-mer-based correction with an increasing k-mer size, followed by a final round of overlap-based correction. By combining the advantages of small and large k-mers, this approach corrects more errors in repeats and minimizes the total amount of erroneous reads. We show that higher read accuracy increases contig lengths two to three times. We provide SGA-Iteratively Correcting Errors (https://github.com/hillerlab/IterativeErrorCorrection/) that implements iterative error correction by using modules from SGA. PMID:26868358
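
    The k-mer-correction idea at the heart of the pipeline can be sketched in a few lines: k-mers seen too rarely are suspect, and a base is corrected when a substitution makes every covering k-mer solid; running such rounds with an increasing k mimics the iterative scheme. This toy code is a conceptual stand-in, not SGA itself:

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all k-mers across a read set."""
    c = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            c[r[i:i + k]] += 1
    return c

def correct_read(read, counts, k, min_count=2):
    """Fix a base whose covering k-mers are all weak, if some substitution
    makes every covering k-mer solid. A toy version of k-mer correction."""
    read = list(read)
    for i in range(len(read)):
        covering = [("".join(read[j:j + k]), j) for j in
                    range(max(0, i - k + 1), min(i, len(read) - k) + 1)]
        if covering and all(counts[km] < min_count for km, _ in covering):
            for base in "ACGT":
                if base == read[i]:
                    continue
                trial = read[:]
                trial[i] = base
                if all(counts["".join(trial[j:j + k])] >= min_count
                       for _, j in covering):
                    read[i] = base
                    break
    return "".join(read)

reads = ["ACGTACGT"] * 5 + ["ACGAACGT"]   # one read with a T->A error at pos 3
for k in (3, 4):                          # increasing k, as in iterative rounds
    counts = kmer_counts(reads, k)
    reads = [correct_read(r, counts, k) for r in reads]
```

    Small k catches isolated errors cheaply; the later, larger-k round can distinguish repeat copies that share short k-mers, which is the motivation for iterating.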

  19. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    Science.gov (United States)

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
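
    The difference between a GLM and a GAM is that the latter replaces fixed linear terms with smooth functions fitted to the data. A minimal backfitting sketch with crude bin smoothers conveys the idea; the covariates and outage counts are made up, and a production GAM would use penalized splines and a count-data link:

```python
import numpy as np

def bin_smooth(x, r, nbins=10):
    """Crude smoother: average the partial residuals r within bins of x."""
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
    means = np.array([r[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(nbins)])
    return means[idx]

def fit_additive(x1, x2, y, iters=20):
    """Backfitting for y ~ alpha + f1(x1) + f2(x2): the core GAM idea."""
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    alpha = y.mean()
    for _ in range(iters):
        f1 = bin_smooth(x1, y - alpha - f2)
        f1 -= f1.mean()
        f2 = bin_smooth(x2, y - alpha - f1)
        f2 -= f2.mean()
    return alpha, f1, f2

rng = np.random.default_rng(0)
wind = rng.uniform(20, 60, 500)      # max gust speed, m/s (made-up)
trees = rng.uniform(0, 1, 500)       # tree-density index (made-up)
outages = 0.05 * wind ** 2 + 30 * trees + rng.normal(0, 5, 500)
alpha, f1, f2 = fit_additive(wind, trees, outages)
resid = outages - (alpha + f1 + f2)
```

    Because the wind effect here is quadratic, a straight-line GLM term would leave structured error, while the smoother recovers the curvature automatically; this flexibility is what lets a GAM outperform the earlier regression models.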

  20. Improving the accuracy and efficiency of identity-by-descent detection in population data.

    Science.gov (United States)

    Browning, Brian L; Browning, Sharon R

    2013-06-01

    Segments of identity-by-descent (IBD) detected from high-density genetic data are useful for many applications, including long-range phase determination, phasing family data, imputation, IBD mapping, and heritability analysis in founder populations. We present Refined IBD, a new method for IBD segment detection. Refined IBD achieves both computational efficiency and highly accurate IBD segment reporting by searching for IBD in two steps. The first step (identification) uses the GERMLINE algorithm to find shared haplotypes exceeding a length threshold. The second step (refinement) evaluates candidate segments with a probabilistic approach to assess the evidence for IBD. Like GERMLINE, Refined IBD allows for IBD reporting on a haplotype level, which facilitates determination of multi-individual IBD and allows for haplotype-based downstream analyses. To investigate the properties of Refined IBD, we simulate SNP data from a model with recent superexponential population growth that is designed to match United Kingdom data. The simulation results show that Refined IBD achieves a better power/accuracy profile than fastIBD or GERMLINE. We find that a single run of Refined IBD achieves greater power than 10 runs of fastIBD. We also apply Refined IBD to SNP data for samples from the United Kingdom and from Northern Finland and describe the IBD sharing in these data sets. Refined IBD is powerful, highly accurate, and easy to use and is implemented in Beagle version 4.

  1. Iterative error correction of long sequencing reads maximizes accuracy and improves contig assembly.

    Science.gov (United States)

    Sameith, Katrin; Roscito, Juliana G; Hiller, Michael

    2017-01-01

    Next-generation sequencers such as Illumina can now produce reads up to 300 bp with high throughput, which is attractive for genome assembly. A first step in genome assembly is to computationally correct sequencing errors. However, correcting all errors in these longer reads is challenging. Here, we show that reads with remaining errors after correction often overlap repeats, where short erroneous k-mers occur in other copies of the repeat. We developed an iterative error correction pipeline that runs the previously published String Graph Assembler (SGA) in multiple rounds of k-mer-based correction with an increasing k-mer size, followed by a final round of overlap-based correction. By combining the advantages of small and large k-mers, this approach corrects more errors in repeats and minimizes the total amount of erroneous reads. We show that higher read accuracy increases contig lengths two to three times. We provide SGA-Iteratively Correcting Errors (https://github.com/hillerlab/IterativeErrorCorrection/) that implements iterative error correction by using modules from SGA.

  2. Parametric Study to Improve Subpixel Accuracy of Nitric Oxide Tagging Velocimetry with Image Preprocessing

    Directory of Open Access Journals (Sweden)

    Ravi Teja Vedula

    2017-01-01

    Full Text Available Biacetyl phosphorescence has been the commonly used molecular tagging velocimetry (MTV) technique to investigate in-cylinder flow evolution and cycle-to-cycle variations in an optical engine. As the phosphorescence of the biacetyl tracer deteriorates in the presence of oxygen, nitrogen was adopted as the working medium in the past. Recently, a nitric oxide MTV technique was employed to measure the velocity profile of an air jet. The authors here plan to investigate the potential application of this technique for engine flow studies. A possible experimental setup for this task indicated different permutations of image signal-to-noise ratio (SNR) and laser line width. In the current work, a numerical analysis is performed to study the effect of these two factors on displacement error in MTV image processing. Also, several image filtering techniques were evaluated, and the performance of selected filters was analyzed in terms of enhancing the image quality and minimizing displacement errors. The flow displacement error without image preprocessing was observed to be inversely proportional to SNR and directly proportional to laser line width. The mean filter resulted in the smallest errors for line widths smaller than 9 pixels. The effect of filter size on subpixel accuracy showed that error levels increased as the filter size increased.
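
    Two ingredients of such an analysis are easy to sketch: a box (mean) filter for preprocessing and a three-point Gaussian interpolator for the subpixel tag-line center. The synthetic profile below is an assumption for demonstration; for a noise-free sampled Gaussian the three-point estimator is exact:

```python
import numpy as np

def gaussian_subpixel_peak(profile):
    """Three-point Gaussian interpolation for the subpixel peak of a 1-D
    intensity profile (a common MTV line-center estimator)."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                    # no interpolation at the borders
    lnl, lnc, lnr = np.log(profile[i - 1:i + 2])
    return i + 0.5 * (lnl - lnr) / (lnl - 2 * lnc + lnr)

def mean_filter(img, size=3):
    """Box filter via shifted sums; the study found small windows
    preserved subpixel accuracy best."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size ** 2

# Synthetic tagged line: Gaussian centered at 10.3 pixels, sigma = 2 px
x = np.arange(21)
profile = np.exp(-(x - 10.3) ** 2 / (2 * 2.0 ** 2))
center = gaussian_subpixel_peak(profile)
```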

  3. Shape Optimization of the Turbomachine Channel by a Gradient Method -Accuracy Improvement

    Institute of Scientific and Technical Information of China (English)

    Marek Rabiega

    2003-01-01

    An algorithm of the gradient method of channel shape optimization has been built on the basis of the 3D equations of mass, momentum and energy conservation in the fluid flow. The gradient of the functional posed for minimization has been calculated by two methods: via sensitivities and, for comparison, by finite difference approximation. The equations for sensitivities have been generated through a differentiate-then-discretize approach. The exemplary optimization of the blade shape of the centrifugal compressor wheel has been carried out for inviscid gas flow governed by the Euler equations with a non-uniform mass flow distribution as the inlet boundary condition. Mixing losses have been minimized downstream of the outlet of the centrifugal wheel in this exemplary optimization. The results of the optimization problem accomplished by the two above-mentioned methods have been presented. In cases where sparse grids were used, the method with the gradient approximated by finite differences was found to be more consistent. The discretization accuracy has turned out to be crucial for the consistency of the gradient method via sensitivities.

  4. Improving Inverse Dynamics Accuracy in a Planar Walking Model Based on Stable Reference Point

    Directory of Open Access Journals (Sweden)

    Alaa Abdulrahman

    2014-01-01

    Full Text Available Physiologically and biomechanically, the human body represents a complicated system with an abundance of degrees of freedom (DOF). When developing mathematical representations of the body, a researcher has to decide how many of those DOF to include in the model. Though accuracy can be enhanced at the cost of complexity by including more DOF, their necessity must be rigorously examined. In this study a planar seven-segment human body walking model with single-DOF joints was developed. A reference point was added to the model to track the body’s global position while moving. Due to the kinematic instability of the pelvis, the top of the head was selected as the reference point, which also approximates the vestibular sensor position. Inverse dynamics methods were used to formulate and solve the equations of motion based on the Newton-Euler formulae. The torques and ground reaction forces generated by the planar model during a regular gait cycle were compared with similar results from a more complex three-dimensional OpenSim model with muscles, which resulted in correlations in the range of 0.9–0.98. The close agreement between the two torque outputs supports the use of planar models in gait studies.
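
    The Newton-Euler bookkeeping is easiest to see for a single planar link pinned at the origin; the seven-segment model chains the same force/torque balance from segment to segment. The parameter values below are hypothetical, not the paper's:

```python
import numpy as np

def link_inverse_dynamics(theta, omega, alpha, m=5.0, d=0.25, I_c=0.1, g=9.81):
    """Newton-Euler inverse dynamics for one planar link pinned at the origin.
    theta is measured from the +x axis; d is the pivot-to-COM distance.
    Returns the joint torque and the pin reaction force (Fx, Fy)."""
    # COM acceleration from the rotational kinematics
    a_com = d * np.array([-omega**2 * np.cos(theta) - alpha * np.sin(theta),
                          -omega**2 * np.sin(theta) + alpha * np.cos(theta)])
    # Newton: pin reaction balances gravity and produces the COM acceleration
    F = m * a_com - np.array([0.0, -m * g])
    # Euler about the pivot: I_o * alpha = tau - m*g*d*cos(theta)
    I_o = I_c + m * d**2                    # parallel-axis theorem
    tau = I_o * alpha + m * g * d * np.cos(theta)
    return tau, F

# Static check: holding the link horizontal requires tau = m*g*d
tau, F = link_inverse_dynamics(theta=0.0, omega=0.0, alpha=0.0)
```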

  5. STEREO MATCHING ALGORITHM BASED ON ILLUMINATION CONTROL TO IMPROVE THE ACCURACY

    Directory of Open Access Journals (Sweden)

    Rostam Affendi Hamzah

    2016-02-01

    Full Text Available This paper presents a new method of pixel-based stereo matching algorithm using illumination control. The state-of-the-art algorithm for absolute difference (AD) works fast, but is only precise in low-texture areas. Besides, it is sensitive to radiometric distortions (i.e., contrast or brightness) and discontinuity areas. To overcome these problems, this paper proposes an algorithm that utilizes illumination control to enhance the image quality of absolute difference (AD) matching. Thus, pixel intensities at this step are more consistent, especially at the object boundaries. Then, the gradient difference value is added to empower the reduction of radiometric errors. The gradient characteristic is known for its robustness with regard to radiometric errors. The experimental results demonstrate that the proposed algorithm performs much better when using a standard benchmarking dataset from the Middlebury Stereo Vision dataset. The main contribution of this work is a reduction of discontinuity errors that leads to a significant enhancement in matching quality and accuracy of disparity maps.
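
    A sketch of the cost construction: truncated absolute difference of intensities blended with truncated absolute difference of horizontal gradients, followed by winner-takes-all disparity selection. The blending weight and truncation constants are assumptions, and the published algorithm's illumination-control preprocessing is omitted here:

```python
import numpy as np

def matching_cost(left, right, max_disp, alpha=0.5, trunc_ad=20.0, trunc_gr=10.0):
    """Cost volume: truncated absolute difference (AD) of intensities
    blended with truncated AD of horizontal gradients, which adds
    robustness to radiometric distortion."""
    h, w = left.shape
    grad_l = np.gradient(left, axis=1)
    grad_r = np.gradient(right, axis=1)
    cost = np.full((h, w, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        ad = np.abs(left[:, d:] - right[:, :w - d])
        gd = np.abs(grad_l[:, d:] - grad_r[:, :w - d])
        cost[:, d:, d] = alpha * np.minimum(ad, trunc_ad) + \
                         (1 - alpha) * np.minimum(gd, trunc_gr)
    return cost

# Toy pair: the right image is the left image shifted by 2 px, so the
# correct disparity is 2 everywhere away from the image borders.
rng = np.random.default_rng(1)
left = rng.uniform(0, 255, (8, 32))
right = np.roll(left, -2, axis=1)
disparity = np.argmin(matching_cost(left, right, max_disp=4), axis=2)
```

    A real implementation would follow the cost volume with aggregation and refinement; the per-pixel winner-takes-all step here is only the baseline the paper improves upon.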

  6. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    Science.gov (United States)

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
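
    The "lock" step, rigidly moving a key range map into the photogrammetric target frame, is a classical absolute-orientation problem with a closed-form solution via the Kabsch/Procrustes SVD method. The target coordinates below are synthetic stand-ins for photogrammetrically measured targets:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid motion (R, t) mapping src points onto dst
    (Kabsch/Procrustes via SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Targets in the range camera's local frame (src) vs the same targets as
# measured by photogrammetry (dst): recover the rigid motion exactly.
rng = np.random.default_rng(2)
src = rng.uniform(-1, 1, (6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(src, dst)
```

    With at least three non-collinear targets visible, this gives the rigid motion that locks a range map into the global frame; the remaining maps are then aligned to the locked ones by the usual iterative (ICP-style) algorithms.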

  7. Investigations to improve and assess the accuracy of computational fluid dynamic based explosion models

    NARCIS (Netherlands)

    Popat, N.R.; Catlin, C.A.; Arntzen, B.J.; Lindstedt, R.P.; Hjertager, B.H.; Solberg, T.; Saeter, O.; Berg, A.C. van den

    1996-01-01

    A summary is given of part of the CEC co-sponsored project MERGE (Modelling and Experimental Research into Gas Explosions). The objective of this part of the project was to provide improved Computational Fluid Dynamic explosion models with the potential for use in hazard assessments. Five organisati

  8. Electron Microprobe Analysis of Hf in Zircon: Suggestions for Improved Accuracy of a Difficult Measurement

    Science.gov (United States)

    Fournelle, J.; Hanchar, J. M.

    2013-12-01

    It is not commonly recognized as such, but the accurate measurement of Hf in zircon is not a trivial analytical issue. This is important to assess because Hf is often used as an internal standard for trace element analyses of zircon by LA-ICPMS. The issues pertaining to accuracy revolve around: (1) whether the Hf Ma or the La line is used; (2) what accelerating voltage is applied if Zr La is also measured; and (3) what standard for Hf is used. Weidenbach et al.'s (2004) study of the 91500 zircon demonstrated the spread (in accuracy) of possible EPMA values for six EPMA labs, 2 of which used Hf Ma, 3 used Hf La, and one used Hf Lb; the standards ranged over HfO2, a ZrO2-HfO2 compound, Hf metal, and hafnon. Weidenbach et al. used the ID-TIMS value as the correct value (0.695 wt.% Hf), to which not one of the EPMA labs came close (3 were low and 3 were high). Those data suggest: (1) that there is a systematic underestimation of the 0.695 wt% Hf (ID-TIMS) value if Hf Ma is used, most likely an issue with the matrix correction, as the analytical lines and absorption edges of Zr La, Si Ka and Hf Ma are rather tightly packed in the electromagnetic spectrum. Mass absorption coefficients are easily in error (e.g., Donovan's determination of the MAC of Hf by Si Ka of 5061 differs from the typically used Henke value of 5449 (Donovan et al., 2002)); and (2) for utilization of the Hf La line, the second order Zr Ka line interferes with Hf La if the accelerating voltage is greater than 17.99 keV. If this higher keV is used and differential-mode PHA is applied, only a portion of the interference is removed (e.g., removal of escape peaks), causing an overestimation of the Hf content. Unfortunately, it is virtually impossible to apply an interference correction in this case, as it is impossible to locate a Hf-free Zr probe standard.
We have examined many of the combinations used by those six EPMA labs and concluded that the optimal EPMA is done with Hf

  9. Improving accuracy for identifying related PubMed queries by an integrated approach.

    Science.gov (United States)

    Lu, Zhiyong; Wilbur, W John

    2009-10-01

    PubMed is the most widely used tool for searching biomedical literature online. As with many other online search tools, a user often types a series of multiple related queries before retrieving satisfactory results to fulfill a single information need. Meanwhile, it is also a common phenomenon to see a user type queries on unrelated topics in a single session. In order to study PubMed users' search strategies, it is necessary to be able to automatically separate unrelated queries and group together related queries. Here, we report a novel approach combining both lexical and contextual analyses for segmenting PubMed query sessions and identifying related queries and compare its performance with the previous approach based solely on concept mapping. We experimented with our integrated approach on sample data consisting of 1539 pairs of consecutive user queries in 351 user sessions. The prediction results of 1396 pairs agreed with the gold-standard annotations, achieving an overall accuracy of 90.7%. This demonstrates that our approach is significantly better than the previously published method. By applying this approach to a one day query log of PubMed, we found that a significant proportion of information needs involved more than one PubMed query, and that most of the consecutive queries for the same information need are lexically related. Finally, the proposed PubMed distance is shown to be an accurate and meaningful measure for determining the contextual similarity between biological terms. The integrated approach can play a critical role in handling real-world PubMed query log data as is demonstrated in our experiments.
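
    The lexical component of such session segmentation can be approximated with a simple word-overlap measure; the threshold and the example queries below are illustrative assumptions (the published approach combines lexical analysis with contextual, concept-based evidence):

```python
def lexical_similarity(q1, q2):
    """Jaccard overlap of word sets: a simple stand-in for the lexical
    relatedness of two consecutive queries."""
    w1, w2 = set(q1.lower().split()), set(q2.lower().split())
    return len(w1 & w2) / len(w1 | w2) if w1 | w2 else 0.0

def segment_session(queries, threshold=0.2):
    """Start a new segment whenever consecutive queries look unrelated."""
    segments = [[queries[0]]]
    for prev, cur in zip(queries, queries[1:]):
        if lexical_similarity(prev, cur) >= threshold:
            segments[-1].append(cur)
        else:
            segments.append([cur])
    return segments

session = ["breast cancer brca1",
           "brca1 mutation screening",
           "p53 apoptosis pathway"]
segments = segment_session(session)
```

    Pure word overlap misses semantically related but lexically disjoint queries (e.g., a gene name followed by its synonym), which is exactly the gap the contextual "PubMed distance" component is meant to close.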

  10. An Approach to Improving the Retrieval Accuracy of Oceanic Constituents in Case Ⅱ Waters

    Institute of Scientific and Technical Information of China (English)

    ZHANG Tinglu; Frank Fell

    2004-01-01

    In the present paper, a method is proposed to improve the performance of Artificial Neural Network (ANN)-based algorithms for the retrieval of oceanic constituents in Case Ⅱ waters. The ANN-based algorithms have been developed based on a constraint condition, which represents, to a certain degree, the correlation between suspended particulate matter (SPM) and pigment (CHL), coloured dissolved organic matter (CDOM) and CHL, as well as CDOM and SPM, found in Case Ⅱ waters. Compared with the ANN-based algorithm developed without a constraint condition, the performance of the ANN-based algorithms developed with a constraint condition is much better for the retrieval of CHL and CDOM, especially in the case of high noise levels; however, there is no significant improvement for the retrieval of SPM.

  11. Accuracy Improvement of Zenith Tropospheric Delay Estimation Based on GPS Precise Point Positioning Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHU Qinglin; ZHAO Zhenwei; LIN Leke; WU Zhensen

    2010-01-01

    In precise point positioning (PPP), systematic errors that cannot be accurately modeled remain in the GPS observations and inevitably degrade the precision of zenith tropospheric delay (ZTD) estimation. In this paper, the stochastic models used in the GPS PPP mode are compared. The results show that the precision of PPP-derived ZTD can be noticeably improved by selecting a suitable stochastic model for the GPS measurements. Low-elevation observations carry more troposphere information that can improve the estimation of ZTD. A new stochastic model based on the cosine square of the satellite elevation at low elevations is presented. The results show that this elevation-based cosine-square stochastic model performs better than previous stochastic models.
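
    The idea of down-weighting rather than discarding low-elevation observations can be sketched as an elevation-dependent variance model feeding least-squares weights. The functional form and the constants `sigma0` and `k` below are illustrative assumptions, not the paper's fitted model:

    ```python
    import math

    def obs_sigma(elev_deg, sigma0=0.003, k=3.0):
        """Illustrative elevation-dependent standard deviation (metres) for a
        carrier-phase observation: noise grows with the cosine square of the
        elevation, so low-elevation observations are kept but down-weighted
        rather than cut off. sigma0 and k are hypothetical tuning constants."""
        e = math.radians(elev_deg)
        return sigma0 * math.sqrt(1.0 + k * math.cos(e) ** 2)

    def obs_weight(elev_deg):
        """Least-squares weight for the observation: w = 1 / sigma^2."""
        return 1.0 / obs_sigma(elev_deg) ** 2
    ```

    A zenith observation keeps the nominal noise level, while a 10-degree observation receives a much smaller weight in the ZTD adjustment.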

  12. Improved PCR-Based Detection of Soil Transmitted Helminth Infections Using a Next-Generation Sequencing Approach to Assay Design

    Science.gov (United States)

    Pilotte, Nils; Papaiakovou, Marina; Grant, Jessica R.; Bierwert, Lou Ann; Llewellyn, Stacey; McCarthy, James S.; Williams, Steven A.

    2016-01-01

    Background The soil transmitted helminths are a group of parasitic worms responsible for extensive morbidity in many of the world’s most economically depressed locations. With growing emphasis on disease mapping and eradication, the availability of accurate and cost-effective diagnostic measures is of paramount importance to global control and elimination efforts. While real-time PCR-based molecular detection assays have shown great promise, to date, these assays have utilized sub-optimal targets. By performing next-generation sequencing-based repeat analyses, we have identified high copy-number, non-coding DNA sequences from a series of soil transmitted pathogens. We have used these repetitive DNA elements as targets in the development of novel, multi-parallel, PCR-based diagnostic assays. Methodology/Principal Findings Utilizing next-generation sequencing and the Galaxy-based RepeatExplorer web server, we performed repeat DNA analysis on five species of soil transmitted helminths (Necator americanus, Ancylostoma duodenale, Trichuris trichiura, Ascaris lumbricoides, and Strongyloides stercoralis). Employing high copy-number, non-coding repeat DNA sequences as targets, novel real-time PCR assays were designed, and assays were tested against established molecular detection methods. Each assay provided consistent detection of genomic DNA at quantities of 2 fg or less, demonstrated species-specificity, and showed an improved limit of detection over the existing, proven PCR-based assay. Conclusions/Significance The utilization of next-generation sequencing-based repeat DNA analysis methodologies for the identification of molecular diagnostic targets has the ability to improve assay species-specificity and limits of detection. By exploiting such high copy-number repeat sequences, the assays described here will facilitate soil transmitted helminth diagnostic efforts. We recommend similar analyses when designing PCR-based diagnostic tests for the detection of other

  13. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    Science.gov (United States)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of (i) quantifying the ability of backscatter to resolve grain size distributions, (ii) understanding complex patterns influenced by factors other than grain size variations, and (iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the Jade Bay (German North Sea), a shallow upper-mesotidal inlet, made it possible to map 100 km2 of surficial sediment with a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting, and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  14. Improving Accuracy and Coverage of Data Mining Systems that are Built from Noisy Datasets: A New Model

    Directory of Open Access Journals (Sweden)

    Luai A. Al Shalabi

    2009-01-01

    Problem statement: Noise within datasets has to be dealt with under most circumstances. This noise includes misclassified data or information as well as missing data or information. Simple human error is considered as misclassification. These errors decrease the accuracy of the data mining system, making it unlikely to be used. The objective was to propose an effective algorithm to deal with noise, represented here by missing data in datasets. Approach: A model for improving the accuracy and coverage of data mining systems was proposed, and the algorithm of this model was constructed. The algorithm deals with missing values in datasets. It splits the original dataset into two new datasets: one contains tuples that have no missing values and the other contains tuples that have missing values. The proposed algorithm was applied to each of the two new datasets. It finds the reduct of each of them and then merges the new reducts into one new dataset that is ready for training. Results: The results were promising, as the model increased the accuracy and coverage of the tested dataset compared to traditional models. Conclusion: The proposed algorithm performs effectively and generates better results than the previous ones.
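
    The first step of the proposed algorithm, splitting the original dataset into complete and incomplete tuples, can be sketched as follows (the reduct computation itself, which belongs to rough set theory, is out of scope here):

    ```python
    def split_on_missing(dataset, missing=None):
        """Split tuples into (complete, incomplete) according to the presence
        of a missing-value marker, the first step of the described algorithm."""
        complete = [row for row in dataset if missing not in row]
        incomplete = [row for row in dataset if missing in row]
        return complete, incomplete

    data = [(1, "a", 0), (2, None, 1), (3, "b", 1)]
    complete, incomplete = split_on_missing(data)
    ```

    Each sub-dataset would then be reduced independently before the reducts are merged back into a single training set.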

  15. A simple modification to improve the accuracy of methylation-sensitive restriction enzyme quantitative polymerase chain reaction.

    Science.gov (United States)

    Krygier, Magdalena; Podolak-Popinigis, Justyna; Limon, Janusz; Sachadyn, Paweł; Stanisławska-Sachadyn, Anna

    2016-05-01

    DNA digestion with endonucleases sensitive to CpG methylation such as HpaII followed by polymerase chain reaction (PCR) quantitation is commonly used in molecular studies as a simple and inexpensive solution for assessment of region-specific DNA methylation. We observed that the results of such analyses were highly overestimated if mock-digested samples were applied as the reference. We determined DNA methylation levels in several promoter regions in two setups implementing different references: mock-digested and treated with a restriction enzyme that has no recognition sites within examined amplicons. Fragmentation of reference templates allowed removing the overestimation effect, thereby improving measurement accuracy.
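
    The underlying quantitation can be illustrated with the textbook 2^-dCt relation: the methylated (digestion-resistant) fraction is estimated from the difference in threshold cycles between the HpaII-digested sample and the fragmented reference template. This is a generic formula assuming 100% amplification efficiency, not one taken from the paper:

    ```python
    def methylation_fraction(ct_hpaii, ct_reference):
        """Fraction of methylated (HpaII-resistant) template estimated from
        qPCR threshold cycles via 2^-(Ct_digested - Ct_reference), assuming
        perfect (2-fold per cycle) amplification efficiency."""
        return 2.0 ** (-(ct_hpaii - ct_reference))
    ```

    A one-cycle delay of the digested sample relative to the reference corresponds to roughly half of the template surviving digestion, i.e. about 50% methylation.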

  16. New possibilities for improving the accuracy of parameter calculations for cascade gamma-ray decay of heavy nuclei

    CERN Document Server

    Sukhovoj, A M; Khitrov, V A

    2001-01-01

    The level density and radiative strength functions which accurately reproduce the experimental intensity of two- step cascades after thermal neutron capture and the total radiative widths of the compound states were applied to calculate the total gamma-ray spectra from the (n,gamma) reaction. In some cases, analysis showed far better agreement with experiment and gave insight into possible ways in which these parameters need to be corrected for further improvement of calculation accuracy for the cascade gamma-decay of heavy nuclei.

  17. IMPROVING THE GRAMMATICAL ACCURACY OF THE SPOKEN ENGLISH OF INDONESIAN INTERNATIONAL KINDERGARTEN STUDENTS

    Directory of Open Access Journals (Sweden)

    IMELDA GOZALI

    2014-07-01

    The need to improve the spoken English of kindergarten students in an international preschool in Surabaya prompted this Classroom Action Research (CAR). It involved the implementation of the Form-Focused Instruction (FFI) strategy coupled with Corrective Feedback (CF) in grammar lessons. Four grammar topics were selected, namely the regular plural form, subject pronouns, the auxiliary verbs do/does, and irregular past tense verbs, as these were deemed to be the morpho-syntax which children acquire early in life, based on the order of acquisition in Second Language Acquisition. The results showed that FFI and CF contributed to the improvement of spoken grammar in varying degrees, depending on the academic performance, personality, and specific linguistic traits of the students. Students with high academic achievement could generally apply the grammar points taught in the FFI lessons in their daily speech. Students who were rather talkative were sensitive to the CF and could provide self-repair when prompted. Those with lower academic performance generally did not benefit much from the FFI lessons or the CF.

  18. Compensation of Environment and Motion Error for Accuracy Improvement of Ultra-Precision Lathe

    Science.gov (United States)

    Kwac, Lee-Ku; Kim, Jae-Yeol; Kim, Hong-Gun

    Manipulation of a piezo-electric actuator can compensate for machining errors during the machining process, enhancing overall precision. This approach is a convenient way to advance precision for nations without solid knowledge of ultra-precision machining technology. Two research tasks were conducted to develop the UPCU for precision enhancement of the current lathe and compensation for environmental errors, as described below. The first task was to measure and correct, in real time, deviations in a variety of areas in order to achieve a compensation system, using an optical fiber laser encoder more effective than the encoder currently used in the existing lathe. The deviations corrected in real time comprised the surrounding air temperature, the thermal deviations of the machining materials, the thermal deviations in the spindles, and the overall thermal deviation caused by the machine structure. The second task was to develop the UPCU and to improve the machining precision through ultra-precision positioning and real-time operative error compensation. The ultimate goal was to improve the machining precision of the existing lathe by completing the two research tasks mentioned above.

  19. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    Science.gov (United States)

    Starn, J. J.

    2013-12-01

    -flow finite-difference transport simulations (MT3DMS). Results show more accurate simulation of pumping-well BTCs for a given grid cell size when using analytical solutions. The code base is extended to transient flow and BTCs are compared to results from MT3DMS simulations. Results show the particle-based solutions can resolve transient behavior using coarser model grids with far less computational effort than MT3DMS. The effect of simulation accuracy on parameter estimates (porosity) also is investigated. Porosity estimates obtained using the more accurate analytical solutions are less biased than in synthetic finite-difference transport simulations, which tend to be biased by the coarseness of the grid. Eliminating the bias by using a finer grid comes at the expense of much larger computational effort. Finally, the code base was applied to an actual groundwater-flow model of Salt Lake Valley, Utah. Particle simulations using the Python code base compare well with finite-difference simulations, but with less computational effort, and have the added advantage of delineating flow paths, thus explicitly connecting solute source areas with receptors, and producing complete particle-age distributions. Knowledge of source areas and age distribution greatly enhances the analysis of dissolved solids data in Salt Lake Valley.

  20. Improving the accuracy: volatility modeling and forecasting using high-frequency data and the variational component

    Directory of Open Access Journals (Sweden)

    Manish Kumar

    2010-06-01

    In this study, we predict the daily volatility of the S&P CNX NIFTY market index of India using the basic 'heterogeneous autoregressive' (HAR) model and its variants. In doing so, we estimated several HAR and log-form HAR models using different regressors. The different regressors were obtained by extracting the jump and continuous components, and the threshold jump and continuous components, from the realized volatility. We also investigated whether dividing volatility into simple and threshold jumps and continuous variation yields a substantial improvement in volatility forecasting. The results provide evidence that the inclusion of realized bipower variance in the HAR models helps in predicting future volatility.
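
    A minimal sketch of the basic HAR-RV regression the study builds on: next-day realized volatility is regressed on the daily value and on weekly and monthly averages of past RV. The data are synthetic, and the conventional 5- and 22-day windows are assumptions, not the study's exact specification:

    ```python
    import numpy as np

    def har_design(rv):
        """Build the standard HAR-RV regressors: an intercept, daily RV,
        weekly (5-day mean) RV, and monthly (22-day mean) RV; the target
        is next-day RV."""
        rv = np.asarray(rv, dtype=float)
        rows, y = [], []
        for t in range(21, len(rv) - 1):
            rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()])
            y.append(rv[t + 1])
        return np.array(rows), np.array(y)

    rng = np.random.default_rng(0)
    rv = rng.lognormal(mean=-1.0, sigma=0.3, size=300)   # synthetic realized volatility
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS fit of the HAR model
    ```

    The jump/continuous variants in the abstract replace these regressors with components extracted from realized and bipower variation; the regression machinery stays the same.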

  1. Improved Accuracy and Safety of Intracorporeal Transpedicular Bone Grafting - using Contrast Impregnated Bone: A Case Report

    Directory of Open Access Journals (Sweden)

    CK Chiu

    2014-11-01

    A method of transpedicular bone grafting using contrast-impregnated bone to improve the visualization of the bone graft on the image intensifier is reported. A 36-year-old man who had sustained a traumatic burst fracture of the T12 vertebra, with a Load-Sharing Classification (LSC) score of 8, was treated with posterior short-segment fusion from T11 to L1 with transpedicular bone grafting of the T12 vertebra. We were able to correct the kyphotic end plate angle (EPA) from 19º to 1.4º. Anterior bone graft augmentation was achieved with contrast-enhanced transpedicular bone grafts. At six months' follow-up, CT scan showed good bony integration of the anterior column with an EPA of 4.5º, and two years later, radiographs showed an EPA of 7.6º.

  2. Improving ECG classification accuracy using an ensemble of neural network modules.

    Directory of Open Access Journals (Sweden)

    Mehrdad Javadi

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for the classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for the classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
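
    The modification can be sketched in a few lines: the combiner of a conventional stack sees only the base classifiers' outputs, while the modified stack appends the input pattern to them. The toy example below, with two threshold classifiers and a linear least-squares combiner on synthetic data, illustrates the idea only; it is not the paper's neural network modules:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    # two weak "base classifiers": each thresholds a single feature
    base_out = np.stack([(X[:, 0] > 0), (X[:, 1] > 0)], axis=1).astype(float)

    # conventional stacking: combiner sees base outputs only
    Z_plain = np.hstack([np.ones((len(X), 1)), base_out])
    # modified stacking: input pattern appended to the base outputs
    Z_mod = np.hstack([Z_plain, X])

    def fit_predict(Z, y):
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)   # linear combiner
        return ((Z @ w) > 0.5).astype(float)

    acc_plain = (fit_predict(Z_plain, y) == y).mean()
    acc_mod = (fit_predict(Z_mod, y) == y).mean()
    ```

    On this synthetic task the base outputs alone cannot resolve the mixed-sign cases, while the combiner that also sees the raw inputs can, mirroring the claimed benefit.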

  3. Improving the Accuracy of a Heliocentric Potential (HCP) Prediction Model for the Aviation Radiation Dose

    Science.gov (United States)

    Hwang, Junga; Yoon, Kyoung-Won; Jo, Gyeongbok; Noh, Sung-Jun

    2016-12-01

    The space radiation dose over air routes, including polar routes, should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are posted with a one-month delay on the FAA official webpage, which makes it difficult to obtain real-time information on the aviation route dose. In order to overcome this critical limitation for space weather customers, we developed an HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method to derive the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. We also derived an HCP prediction algorithm using ground neutron counts. With the compensation provided by the ground neutron count data, the newly developed HCP prediction model was improved.

  4. Improvement of orbit determination accuracy for Beidou Navigation Satellite System with Two-way Satellite Time Frequency Transfer

    Science.gov (United States)

    Tang, Chengpan; Hu, Xiaogong; Zhou, Shanshi; Guo, Rui; He, Feng; Liu, Li; Zhu, Lingfeng; Li, Xiaojie; Wu, Shan; Zhao, Gang; Yu, Yang; Cao, Yueling

    2016-10-01

    The Beidou Navigation Satellite System (BDS) manages to estimate simultaneously the orbits and clock offsets of navigation satellites, using code and carrier phase measurements of a regional network within China. The satellite clock offsets are also directly measured with Two-way Satellite Time Frequency Transfer (TWSTFT). Satellite laser ranging (SLR) residuals and comparisons with the precise ephemeris indicate that the radial error of GEO satellites is much larger than that of IGSO and MEO satellites and that the BDS orbit accuracy is worse than GPS. In order to improve the orbit determination accuracy for BDS, a new orbit determination strategy is proposed, in which the satellite clock measurements from TWSTFT are fixed as known values, and only the orbits of the satellites are solved. However, a constant systematic error at the nanosecond level can be found in the clock measurements; it is estimated and then corrected by differencing the clock measurements and the clock estimates from orbit determination. The effectiveness of the new strategy is verified by a GPS regional network orbit determination experiment. With the IGS final clock products fixed, the orbit determination and prediction accuracy for GPS satellites improve by more than 50% and the 12-h prediction User Range Error (URE) is better than 0.12 m. By processing 25 days of measurements from the BDS regional network, an optimal strategy for the satellite-clock-fixed orbit determination is identified. User Equivalent Ranging Error is reduced by 27.6% for GEO satellites, but no apparent reduction is found for IGSO/MEO satellites. The SLR residuals exhibit reductions of 59% and 32% for IGSO satellites but no reductions for GEO and MEO satellites.

  5. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements

    Science.gov (United States)

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-01-01

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject’s movements. However, the use of these motion capture tools suffers from the lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation, when compared to inertial measurement units and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation. PMID:27854288
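
    The benefit of fusing two noisy estimates can be illustrated in one dimension with a variance-weighted (Kalman-style) update; the paper's quaternion-based extended Kalman filter generalizes this idea to orientation states. The numbers below are invented for illustration:

    ```python
    def fuse(angle_imu, var_imu, angle_kinect, var_kinect):
        """Variance-weighted fusion of two noisy estimates of the same joint
        angle: a one-dimensional stand-in for the quaternion-based extended
        Kalman filter described in the abstract."""
        k = var_imu / (var_imu + var_kinect)          # Kalman-style gain
        fused = angle_imu + k * (angle_kinect - angle_imu)
        fused_var = (1.0 - k) * var_imu               # posterior variance
        return fused, fused_var

    angle, var = fuse(42.0, 4.0, 40.0, 1.0)
    ```

    The fused variance is always smaller than either input variance, which is the statistical reason a fused estimate can beat the inertial sensor or the Kinect alone.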

  6. Improvement of Accuracy in Flow Immunosensor System by Introduction of Poly-2-[3-(methacryloylaminopropylammonio]ethyl 3-aminopropyl Phosphate

    Directory of Open Access Journals (Sweden)

    Yusuke Fuchiwaki

    2011-01-01

    Full Text Available In order to improve the accuracy of immunosensor systems, poly-2-[3-(methacryloylaminopropylammonio]ethyl 3-aminopropyl phosphate (poly-3MAm3AP, which includes both phosphorylcholine and amino groups, was synthesized and applied to the preparation of antibody-immobilized beads. Acting as an antibody-immobilizing material, poly-3MAm3AP is expected to significantly lower nonspecific adsorption due to the presence of the phosphorylcholine group and recognize large numbers of analytes due to the increase in antibody-immobilizing sites. The elimination of nonspecific adsorption was compared between the formation of a blocking layer on antibody-immobilized beads and the introduction of a material to combine antibody with beads. Determination with specific and nonspecific antibodies was then investigated for the estimation of signal-to-noise ratio. Signal intensities with superior signal-to-noise ratios were obtained when poly-3MAm3AP was introduced. This may be due to the increase in antibody-immobilizing sites and the extended space for antigen-antibody interaction resulting from the electrostatic repulsion of poly-3MAm3AP. Thus, the application of poly-3MAm3AP coatings to immunoassay beads was able to improve the accuracy of flow immunosensor systems.

  7. Classification algorithms to improve the accuracy of identifying patients hospitalized with community-acquired pneumonia using administrative data.

    Science.gov (United States)

    Yu, O; Nelson, J C; Bounds, L; Jackson, L A

    2011-09-01

    In epidemiological studies of community-acquired pneumonia (CAP) that utilize administrative data, cases are typically defined by the presence of a pneumonia hospital discharge diagnosis code. However, not all such hospitalizations represent true CAP cases. We identified 3991 hospitalizations during 1997-2005 in a managed care organization, and validated them as CAP or not by reviewing medical records. To improve the accuracy of CAP identification, classification algorithms that incorporated additional administrative information associated with the hospitalization were developed using the classification and regression tree analysis. We found that a pneumonia code designated as the primary discharge diagnosis and duration of hospital stay improved the classification of CAP hospitalizations. Compared to the commonly used method that is based on the presence of a primary discharge diagnosis code of pneumonia alone, these algorithms had higher sensitivity (81-98%) and positive predictive values (82-84%) with only modest decreases in specificity (48-82%) and negative predictive values (75-90%).
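
    A CART-derived algorithm of this kind ultimately reduces to simple decision rules. The sketch below is a hypothetical illustration combining the two factors the study found useful (primary-diagnosis position and duration of stay); the cutoff value is invented, not one reported by the authors:

    ```python
    def classify_cap(primary_pneumonia_code, los_days, los_cutoff=2):
        """Toy decision rule in the spirit of a CART-derived algorithm:
        accept a hospitalization as CAP when pneumonia is the primary
        discharge diagnosis and the stay is at least `los_cutoff` days.
        The cutoff is illustrative, not a value from the study."""
        return bool(primary_pneumonia_code and los_days >= los_cutoff)
    ```

    In practice the tree would be learned from chart-validated cases, and the resulting rules evaluated for sensitivity, specificity, and predictive values as in the abstract.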

  8. Improving Ocean Color Data Products using a Purely Empirical Approach: Reducing the Requirement for Radiometric Calibration Accuracy

    Science.gov (United States)

    Gregg, Watson

    2008-01-01

    Radiometric calibration is the foundation upon which ocean color remote sensing is built. Quality derived geophysical products, such as chlorophyll, are assumed to be critically dependent upon the quality of the radiometric calibration. Unfortunately, the goals of radiometric calibration are not typically met in global and large-scale regional analyses, and are especially deficient in coastal regions. The consequences of the uncertainty in calibration are very large in terms of global and regional ocean chlorophyll estimates. In fact, stability in global chlorophyll requires calibration uncertainty much greater than the goals, and outside of modern capabilities. Using a purely empirical approach, we show that stable and consistent global chlorophyll values can be achieved over very wide ranges of uncertainty. Furthermore, the approach yields statistically improved comparisons with in situ data, suggesting improved quality. The results suggest that accuracy requirements for radiometric calibration can be reduced if alternative empirical approaches are used.

  9. Improved accuracy of acute graft-versus-host disease staging among multiple centers.

    Science.gov (United States)

    Levine, John E; Hogan, William J; Harris, Andrew C; Litzow, Mark R; Efebera, Yvonne A; Devine, Steven M; Reshef, Ran; Ferrara, James L M

    2014-01-01

    The clinical staging of acute graft-versus-host disease (GVHD) varies significantly among bone marrow transplant (BMT) centers, but adherence to long-standing practices poses formidable barriers to standardization among centers. We have analyzed the sources of variability and developed a web-based remote data entry system that can be used by multiple centers simultaneously and that standardizes data collection in key areas. This user-friendly, intuitive interface resembles an online shopping site and eliminates error-prone entry of free text with drop-down menus and pop-up detailed guidance available at the point of data entry. Standardized documentation of symptoms and therapeutic response reduces errors in grade assignment and allows creation of confidence levels regarding the diagnosis. Early review and adjudication of borderline cases improves consistency of grading and further enhances consistency among centers. If this system achieves widespread use it may enhance the quality of data in multicenter trials to prevent and treat acute GVHD.

  10. Improved Haptic Linear Lines for Better Movement Accuracy in Upper Limb Rehabilitation

    Directory of Open Access Journals (Sweden)

    Joan De Boeck

    2012-01-01

    Force feedback has proven to be beneficial in the domain of robot-assisted rehabilitation. According to the patients' personal needs, the generated forces may either assist, support, or oppose their movements. In our current research project, we focus on upper limb training for MS (multiple sclerosis) and CVA (cerebrovascular accident) patients, for which a basic building block to implement many rehabilitation exercises was found. This building block is a haptic linear path: a second-order continuous path, defined by a list of points in space. Earlier, different attempts to realize haptic linear paths were investigated. In order to achieve a good training quality, it is important that the haptic simulation is continuous up to the second derivative while the patient is forced to follow the path tightly, even when low or no guiding forces are provided. In this paper, we describe our best solution to these haptic linear paths, discuss the weaknesses found in practice, and propose and validate an improvement.

  11. A patient-centered methodology that improves the accuracy of prognostic predictions in cancer.

    Directory of Open Access Journals (Sweden)

    Mohammed Kashani-Sabet

    Individualized approaches to prognosis are crucial to the effective management of cancer patients. We developed a methodology to assign individualized 5-year disease-specific death probabilities to 1,222 patients with melanoma and to 1,225 patients with breast cancer. For each cancer, three risk subgroups were identified by stratifying patients according to initial stage, and prediction probabilities were generated based on the factors most closely related to 5-year disease-specific death. Separate subgroup probabilities were merged to form a single composite index, and its predictive efficacy was assessed by several measures, including the area under the curve (AUC) of its receiver operating characteristic (ROC) curve. The patient-centered methodology achieved an AUC of 0.867 in the prediction of 5-year disease-specific death, compared with 0.787 using the AJCC staging classification alone. When applied to breast cancer patients, it achieved an AUC of 0.907, compared with 0.802 using the AJCC staging classification alone. A prognostic algorithm produced from a randomly selected training subsample of 800 melanoma patients preserved 92.5% of its prognostic efficacy (as measured by AUC) when the same algorithm was applied to a validation subsample containing the remaining patients. Finally, the tailored prognostic approach enhanced the identification of high-risk candidates for adjuvant therapy in melanoma. These results describe a novel patient-centered prognostic methodology with improved predictive efficacy when compared with AJCC stage alone in two distinct malignancies drawn from two separate populations.
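
    AUC figures of the kind reported above can be computed directly from predicted risks via the Mann-Whitney statistic: the probability that a randomly chosen case that died outscores one that survived, with ties counting one half. A minimal implementation on made-up scores:

    ```python
    def auc(scores_pos, scores_neg):
        """Area under the ROC curve as the Mann-Whitney win probability of
        positive-class scores over negative-class scores (ties count 0.5)."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))

    score = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
    ```

    This O(n*m) form is fine for illustration; a rank-based computation is preferred for large cohorts.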

  12. Drift Removal for Improving the Accuracy of Gait Parameters Using Wearable Sensor Systems

    Directory of Open Access Journals (Sweden)

    Ryo Takeda

    2014-12-01

    Accumulated signal noise causes integrated values to drift from the true value when measuring the orientation angles of wearable sensors. This work proposes a novel method to reduce the effect of this drift so as to accurately measure human gait using wearable sensors. Firstly, an infinite impulse response (IIR) digital 4th-order Butterworth filter was implemented to remove the noise from the raw gyro sensor data. Secondly, the mode value of the static-state gyro sensor data was subtracted from the measured data to remove offset values. Thirdly, a robust double derivative and integration method was introduced to remove any remaining drift error from the data. Lastly, sensor attachment errors were minimized by establishing the gravitational acceleration vector from the acceleration data in upright standing and sitting postures. The proposed improvements removed the drift effect and yielded average differences of 2.1°, 33.3°, and 15.6° for the hip, knee, and ankle joint flexion/extension angles, respectively, when compared to not implementing them. Kinematic and spatio-temporal gait parameters were also calculated from the heel-contact and toe-off timing of the foot. The data provided in this work show the potential of using wearable sensors in the clinical evaluation of patients with gait-related diseases.
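
    The offset-removal step (the second step above) can be sketched as follows. The synthetic signal is invented, and rounding the static interval before taking its mode is our assumption, not a detail from the paper:

    ```python
    import numpy as np

    fs = 100.0                                        # sampling rate, Hz
    n = 1000
    # deg/s: sensor is still for the first half, then rotates at 10 deg/s
    true_rate = np.where(np.arange(n) >= n // 2, 10.0, 0.0)
    gyro = true_rate + 0.37                           # constant sensor offset

    def remove_offset(signal, static_samples=100):
        """Subtract the most frequent value of an initial static interval,
        mirroring the paper's mode-based offset removal."""
        static = np.round(signal[:static_samples], 2)
        values, counts = np.unique(static, return_counts=True)
        return signal - values[np.argmax(counts)]

    angle_raw = np.cumsum(gyro) / fs                  # integrates the offset into drift
    angle_fix = np.cumsum(remove_offset(gyro)) / fs   # drift removed before integration
    ```

    Without the correction the integrated angle keeps growing even while the sensor is still; after mode subtraction the still phase integrates to zero.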

  13. Improving patient care and accuracy of given doses in radiation therapy using in vivo dosimetry verification*

    Institute of Scientific and Technical Information of China (English)

    Ahmed Shawky Shawata; Tarek El Nimr; Khaled M. Elshahat

    2015-01-01

    Objective This work aims to verify and improve the dose given to cancer patients in radiation therapy by using diodes to enhance patient in vivo dosimetry on a routine basis. Some characteristics of two available semiconductor diode dosimetry systems were evaluated.Methods The diodes were calibrated to read the dose at Dmax below the surface. Correction factors of clinical relevance were quantified to convert the diode readings into patient dose. The diode was irradiated at various gantry angles (in increments of 45°), various field sizes, and various source-to-surface distances (SSDs).Results The maximal variation in the angular response with respect to an arbitrary angle of 0° was 1.9%, and the minimum variation was 0.5%. The response of the diode with respect to various field sizes showed minimum and maximum variations between the measured dose from the diode and the calculated dose of -1.6% (for a 5 cm x 5 cm field size) and 6.6% (for a 40 cm x 40 cm field size). The diode exhibited a significant perturbation in the response, which decreased with increasing SSD. No discrepancies larger than 5% were detected between the expected dose and the measured dose.Conclusion The results indicate that the diodes exhibit excellent linearity, dose reproducibility, and minimal anisotropy, and that they can be used with confidence for patient dose verification. Furthermore, diodes provide real-time verification of the dose delivered to patients.
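
    Converting a diode reading into patient dose follows the workflow described above: a Dmax calibration factor plus multiplicative correction factors for field size, SSD, and gantry angle. The factor values below are invented for illustration, not measured values from the study:

    ```python
    def patient_dose(reading, calib_cgy_per_count, corrections):
        """Convert an in vivo diode reading into dose: reading times the
        Dmax calibration factor times the product of clinical correction
        factors (field size, SSD, gantry angle)."""
        dose = reading * calib_cgy_per_count
        for factor in corrections.values():
            dose *= factor
        return dose

    dose = patient_dose(100.0, 1.0, {"field": 1.02, "ssd": 0.99, "angle": 1.01})
    ```

    A measured dose that deviates from the expected dose by more than the tolerance (5% in this study) would then trigger investigation of the treatment setup.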

  14. Drift Removal for Improving the Accuracy of Gait Parameters Using Wearable Sensor Systems

    Science.gov (United States)

    Takeda, Ryo; Lisco, Giulia; Fujisawa, Tadashi; Gastaldi, Laura; Tohyama, Harukazu; Tadano, Shigeru

    2014-01-01

    Accumulated signal noise will cause the integrated values to drift from the true value when measuring orientation angles of wearable sensors. This work proposes a novel method to reduce the effect of this drift and accurately measure human gait using wearable sensors. Firstly, an infinite impulse response (IIR) digital 4th-order Butterworth filter was implemented to remove the noise from the raw gyro sensor data. Secondly, the mode value of the static-state gyro sensor data was subtracted from the measured data to remove offset values. Thirdly, a robust double derivative and integration method was introduced to remove any remaining drift error from the data. Lastly, sensor attachment errors were minimized by establishing the gravitational acceleration vector from the acceleration data in standing and sitting postures. These proposed improvements removed the drift effect and yielded average differences of 2.1°, 33.3° and 15.6° for the hip, knee and ankle joint flexion/extension angles, respectively, compared to the uncorrected implementation. Kinematic and spatio-temporal gait parameters were also calculated from the heel-contact and toe-off timing of the foot. The data provided in this work show the potential of using wearable sensors in the clinical evaluation of patients with gait-related diseases. PMID:25490587
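The offset-removal and drift-correction steps can be sketched in plain Python. This is a simplified stand-in: the 4th-order Butterworth pre-filter is omitted, and the end-point drift model below replaces the paper's double derivative and integration method.

```python
from statistics import mode

def remove_offset(gyro, static_samples):
    """Step 2: subtract the mode of static-state samples (the sensor bias)."""
    bias = mode(static_samples)
    return [g - bias for g in gyro]

def integrate(rate, dt):
    """Integrate angular rate (deg/s) into angle (deg) by the trapezoidal rule."""
    angle = [0.0]
    for i in range(1, len(rate)):
        angle.append(angle[-1] + 0.5 * (rate[i - 1] + rate[i]) * dt)
    return angle

def correct_drift(angle):
    """Step 3 (simplified): remove residual linear drift by forcing the final
    sample back to the starting value."""
    slope = (angle[-1] - angle[0]) / (len(angle) - 1)
    return [a - i * slope for i, a in enumerate(angle)]
```

A linear drift model is the simplest assumption; it works when the gait trial starts and ends in the same static posture, so any net angle change must be integration error.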

  15. Analyses to Verify and Improve the Accuracy of the Manufactured Home Energy Audit (MHEA)

    Energy Technology Data Exchange (ETDEWEB)

    Ternes, Mark P [ORNL; Gettings, Michael B [ORNL

    2008-12-01

    A series of analyses were performed to determine the reasons that the Manufactured Home Energy Audit (MHEA) over-predicted space-heating energy savings as measured in a recent field test, and to develop appropriate corrections to improve its performance. The study used the Home Energy Rating System (HERS) Building Energy Simulation Test (BESTEST) to verify that MHEA accurately calculates the UA-values of mobile home envelope components and space-heating energy loads as compared with other well-accepted hourly energy simulation programs. The study also used the Procedures for Verification of RESNET Accredited HERS Software Tools to determine that MHEA accurately calculates space-heating energy consumption for gas furnaces, heat pumps, and electric-resistance furnaces. Even though MHEA's calculations were shown to be correct from an engineering point of view, three modifications to MHEA's algorithms and a 0.6 correction factor were incorporated into MHEA to true-up its predicted savings to values measured in a recent field test. A simulated use of the revised version of MHEA in a weatherization program revealed that MHEA would likely still recommend a significant number of cost-effective weatherization measures in mobile homes (including ceiling, floor, and even wall insulation, but far fewer storm windows). Based on the findings from this study, it was recommended that a revised version of MHEA with all the changes and modifications outlined in this report be finalized and made available to the weatherization community as soon as possible, preferably in time for use within the 2009 Program Year.
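The two calculations named in the abstract, an envelope UA-value and a multiplicative true-up of predicted savings, reduce to simple arithmetic. A minimal sketch, assuming US customary units and hypothetical helper names (the 0.6 factor is the only value taken from the abstract):

```python
def ua_value(area_ft2, r_value):
    """UA-value of an envelope component: area divided by thermal resistance
    (Btu/h-F when area is in ft^2 and R is in h-ft^2-F/Btu)."""
    return area_ft2 / r_value

def heating_load(ua_total, hdd):
    """Simple degree-day estimate of seasonal space-heating load (Btu)."""
    return ua_total * hdd * 24.0

def trued_up_savings(predicted, factor=0.6):
    """Apply the empirical correction factor that trues-up predicted savings
    to field-measured values."""
    return predicted * factor
```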

  16. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    Science.gov (United States)

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers.
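The core idea of an interpolated Markov model, mixing maximum-likelihood estimates from several context lengths so that rare long contexts back off to shorter ones, can be sketched as follows. The fixed interpolation weight and helper names are assumptions for illustration; the actual method handles variable-length words and tuned weights.

```python
from collections import defaultdict

def kmer_counts(seqs, k):
    """Count all k-mers in the training CRM sequences."""
    counts = defaultdict(int)
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts

def imm_prob(base, context, counts_by_order, lam=0.5, alphabet="ACGT"):
    """Interpolated probability of `base` following `context`,
    mixing orders 0..len(context)."""
    prob = 1.0 / len(alphabet)                    # order-0 backstop: uniform
    for order in range(1, len(context) + 1):
        ctx = context[-order:]
        counts = counts_by_order[order + 1]       # (order+1)-mers = ctx + base
        total = sum(counts[ctx + b] for b in alphabet)
        if total:                                 # back off when context is unseen
            prob = (1 - lam) * prob + lam * counts[ctx + base] / total
    return prob
```

A candidate window is then scored by summing log-probabilities of its bases and compared against a background model trained on non-CRM sequence.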

  17. Identifying the procedural gap and improved methods for maintaining accuracy during total hip arthroplasty.

    Science.gov (United States)

    Gross, Allan; Muir, Jeffrey M

    2016-09-01

    Osteoarthritis is a ubiquitous condition, affecting 26 million Americans each year, with up to 17% of adults over age 75 suffering from one variation of arthritis. The hip is one of the most commonly affected joints and while there are conservative options for treatment, as symptoms progress, many patients eventually turn to surgery to manage their pain and dysfunction. Early surgical options such as osteotomy or arthroscopy are reserved for younger, more active patients with less severe disease and symptoms. Total hip arthroplasty offers a viable solution for patients with severe degenerative changes; however, post-surgical discrepancies in leg length, offset and component malposition are common and cause significant complications. Such discrepancies are associated with consequences such as low back pain, neurological deficits, instability and overall patient dissatisfaction. Current methods for managing leg length and offset during hip arthroplasty are either inaccurate and susceptible to error or are cumbersome, expensive and lengthen surgical time. There is currently no viable option that provides accurate, real-time data to surgeons regarding leg length, offset and cup position in a cost-effective manner. As such, we hypothesize that a procedural gap exists in hip arthroplasty, a gap into which fall a large majority of arthroplasty patients who are at increased risk of complications following surgery. These complications and associated treatments place significant stress on the healthcare system. The costs associated with addressing leg length and offset discrepancies can be minor, requiring only heel lifts and short-term rehabilitation, but can also be substantial, with revision hip arthroplasty costs of up to $54,000 per procedure. The need for a cost-effective, simple to use and unobtrusive technology to address this procedural gap in hip arthroplasty and improve patient outcomes is of increasing importance. Given the aging of the population, the projected

  18. Improving Mars-GRAM: Increasing the Accuracy of Sensitivity Studies at Large Optical Depths

    Science.gov (United States)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    Extensively utilized for numerous mission applications, the Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model. In a Monte-Carlo mode, Mars-GRAM's perturbation modeling capability is used to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM has been found to be inexact when used during the Mars Science Laboratory (MSL) site selection process for sensitivity studies for MapYear=0 and large optical depth values such as tau=3. Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM) from the surface to 80 km altitude. Mars-GRAM with the MapYear parameter set to 0 utilizes results from a MGCM run with a fixed value of tau=3 at all locations for the entire year. Imprecise atmospheric density and pressure at all altitudes is a consequence of this use of MGCM with tau=3. Density factor values have been determined for tau=0.3, 1 and 3 as a preliminary fix to this pressure-density problem. These factors adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. These density factors are fixed values for all latitudes and Ls and are included in Mars-GRAM Release 1.3. Work currently being done, to derive better multipliers by including variations with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data, will be highlighted in the presentation. The TES limb data utilized in this process has been validated by a comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS). This comparison study was undertaken for locations on Mars of varying latitudes, Ls, and LTST. The more precise density factors will be included in Mars-GRAM 2005 Release 1.4 and thus improve the results of future sensitivity studies done for large
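The density-factor correction described above amounts to scaling MGCM MapYear-0 output by a multiplier; extending it with latitude and Ls dependence turns the fixed value into a lookup. The table values and band boundaries below are invented for illustration and are not Mars-GRAM's actual factors.

```python
# Hypothetical density factors by (latitude band, Ls band); values invented.
DENSITY_FACTOR = {
    ("low", "first-half"): 1.05, ("low", "second-half"): 0.97,
    ("high", "first-half"): 1.12, ("high", "second-half"): 0.92,
}

def bands(lat_deg, ls_deg):
    """Classify a location/season into the lookup bands used above."""
    return ("high" if abs(lat_deg) > 45.0 else "low",
            "first-half" if ls_deg < 180.0 else "second-half")

def adjusted_density(mgcm_density, lat_deg, ls_deg):
    """Scale an MGCM MapYear-0 density toward TES-observed values."""
    return mgcm_density * DENSITY_FACTOR[bands(lat_deg, ls_deg)]
```

In practice the factors would be fitted by comparing MapYear-0 output directly against TES limb data at matching latitude and Ls, then interpolated rather than binned.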

  19. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    Science.gov (United States)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2009-01-01

    at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai (2008), who showed that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander were further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those encountered upon EDL. The solution of this problem would be important to the Mars Program, since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.

  20. Improving accuracy in shallow-landslide susceptibility analyses at regional scale

    Science.gov (United States)

    Iovine, Giulio G. R.; Rago, Valeria; Frustaci, Francesco; Bruno, Claudia; Giordano, Stefania; Muto, Francesco; Gariano, Stefano L.; Pellegrino, Annamaria D.; Conforti, Massimo; Pascale, Stefania; Distilo, Daniela; Basile, Vincenzo; Soleri, Sergio; Terranova, Oreste G.

    2015-04-01

    Calabria (southern Italy) is particularly exposed to geo-hydrological risk. In recent decades, slope instabilities, mainly related to rainfall-induced landslides, have repeatedly affected its territory. Among these, shallow landslides, characterized by abrupt onset and extremely rapid movements, are among the most destructive and dangerous phenomena for people and infrastructure. In this study, a susceptibility analysis for shallow landslides has been performed by refining a method recently applied in Costa Viola, central Calabria (Iovine et al., 2014), focusing only on landslide source activations (regardless of their possible evolution as debris flows). A multivariate approach has been applied to estimate the presence/absence of sources, based on linear statistical relationships with a set of causal variables. The classes of the numeric causal variables have been determined by means of a data clustering method designed to find the best arrangement. A multi-temporal inventory map of sources, mainly obtained from interpretation of air photographs taken in 1954-1955 and in 2000, has been adopted for selecting the training and validation sets. Due to the wide extent of the territory, the analysis has been performed iteratively with a step-by-step decreasing cell-size approach, adopting greater spatial resolutions and thematic detail (e.g. lithology, land-use, soil, morphometry, rainfall) for highly susceptible sectors. Through a sensitivity analysis, the weight of the considered factors in predisposing shallow landslides has been evaluated. The best set of variables has been identified by iteratively including one variable at a time and comparing the results in terms of performance. Furthermore, susceptibility evaluations obtained through logistic regression have been compared to those obtained by applying neural networks. The results obtained may be useful to improve land-use planning, and to select proper mitigation measures in shallow
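The logistic-regression step, mapping a grid cell's causal-variable values to a probability of hosting a landslide source, can be sketched with a plain logistic function. The weights shown are placeholders standing in for coefficients that would come from fitting on the training set.

```python
import math

def source_probability(weights, bias, factors):
    """Logistic-regression estimate of the presence of a shallow-landslide
    source in a grid cell, given its causal-variable values."""
    z = bias + sum(w * x for w, x in zip(weights, factors))
    return 1.0 / (1.0 + math.exp(-z))

def classify(prob, threshold=0.5):
    """Binary susceptibility call, compared against the validation inventory."""
    return prob >= threshold
```

Variable selection as described above would wrap this in a loop: add one candidate variable, refit, and keep it only if validation performance improves.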

  1. Solvent effects and improvements in the deoxyribose degradation assay for hydroxyl radical-scavenging.

    Science.gov (United States)

    Li, Xican

    2013-12-01

    The deoxyribose degradation assay is widely used to evaluate the hydroxyl (•OH) radical-scavenging ability of foods or medicines. We compared the hydroxyl radical-scavenging activity of 25 antioxidant samples prepared in ethanol solution with samples prepared after removing the ethanol (residue). The data revealed an approximately 9-fold difference between the assay results for the ethanol-solution and residue samples, indicating strong alcoholic interference. To further study the mechanism, the scavenging activities of 18 organic solvents (including ethanol) were measured by the deoxyribose assay. Most pure organic solvents (especially alcohols) could effectively scavenge hydroxyl radicals. As hydroxyl radicals are extremely reactive, they quickly react with surrounding solvent molecules. This shows that any organic solvent should be completely evaporated before measurement. With that precaution, the proposed method is regarded as a reliable hydroxyl radical-scavenging assay, suitable for all types of antioxidants.
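The assay's readout reduces to a percent-inhibition calculation from the absorbance of the control and sample reactions. A minimal sketch, with the helper name and the optional blank subtraction assumed for illustration:

```python
def scavenging_pct(a_control, a_sample, a_blank=0.0):
    """Hydroxyl-radical scavenging (%) from deoxyribose-degradation absorbance:
    a_control is the reaction without antioxidant, a_sample with it, and
    a_blank an optional sample blank."""
    return 100.0 * (a_control - (a_sample - a_blank)) / a_control
```

The ethanol interference reported above shows up here directly: residual solvent lowers a_sample and inflates the apparent scavenging percentage of the antioxidant.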

  2. Improved accuracy of multiple ncRNA alignment by incorporating structural information into a MAFFT-based framework

    Directory of Open Access Journals (Sweden)

    Toh Hiroyuki

    2008-04-01

    Full Text Available Abstract Background Structural alignment of RNAs is becoming important, since the discovery of functional non-coding RNAs (ncRNAs). Recent studies, mainly based on various approximations of the Sankoff algorithm, have resulted in considerable improvement in the accuracy of pairwise structural alignment. In contrast, for the cases with more than two sequences, the practical merit of structural alignment remains unclear as compared to traditional sequence-based methods, although the importance of multiple structural alignment is widely recognized. Results We took a different approach from a straightforward extension of the Sankoff algorithm to the multiple alignments from the viewpoints of accuracy and time complexity. As a new option of the MAFFT alignment program, we developed a multiple RNA alignment framework, X-INS-i, which builds a multiple alignment with an iterative method incorporating structural information through two components: (1) pairwise structural alignments by an external pairwise alignment method such as SCARNA or LaRA and (2) a new objective function, Four-way Consistency, derived from the base-pairing probability of every sub-aligned group at every multiple alignment stage. Conclusion The BRAliBASE benchmark showed that X-INS-i outperforms other methods currently available in the sum-of-pairs score (SPS) criterion. As a basis for predicting common secondary structure, the accuracy of the present method is comparable to or rather higher than those of the current leading methods such as RNA Sampler. The X-INS-i framework can be used for building a multiple RNA alignment from any combination of algorithms for pairwise RNA alignment and base-pairing probability. The source code is available at the webpage found in the Availability and requirements section.
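The benchmark's sum-of-pairs score (SPS) counts how many residue pairs aligned in the reference alignment are recovered by a test alignment. A minimal sketch, assuming rows are gap-padded strings with '-' as the gap character:

```python
from itertools import combinations

def aligned_pairs(rows):
    """Set of residue pairs ((seq_i, pos_i), (seq_j, pos_j)) that share a column."""
    pos = [-1] * len(rows)          # running residue index per sequence
    pairs = set()
    for col in range(len(rows[0])):
        cells = []
        for r, row in enumerate(rows):
            if row[col] != '-':
                pos[r] += 1
                cells.append((r, pos[r]))
        pairs.update(combinations(cells, 2))
    return pairs

def sps(test, reference):
    """Fraction of reference residue pairs recovered by the test alignment."""
    ref = aligned_pairs(reference)
    return len(aligned_pairs(test) & ref) / len(ref)
```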

  3. Development of an improved RT-LAMP assay for detection of currently circulating rubella viruses.

    Science.gov (United States)

    Abo, H; Okamoto, K; Anraku, M; Otsuki, N; Sakata, M; Icenogle, J; Zheng, Q; Kurata, T; Kase, T; Komase, K; Takeda, M; Mori, Y

    2014-10-01

    Rubella virus is the causative agent of rubella. The symptoms are usually mild and characterized by a maculopapular rash and fever. However, rubella infection in pregnant women can sometimes result in the birth of infants with congenital rubella syndrome (CRS). Global efforts have been made to reduce and eliminate CRS. Although a reverse transcription-loop-mediated isothermal amplification (RT-LAMP) assay for detection of rubella virus has been reported previously, its primers contained several nucleotide mismatches with the genomes of currently circulating rubella virus strains. In the present study, a new RT-LAMP assay was established. The detection limit of this assay was 100-1000 PFU/reaction for all rubella genotypes, except for genotype 2C, which is not commonly found in the current era. The new RT-LAMP assay can therefore successfully detect all current rubella virus genotypes, and does not require sophisticated devices such as TaqMan real-time PCR systems. It should be a useful assay for the laboratory diagnosis of rubella and CRS.

  4. Can physiological endpoints improve the sensitivity of assays with plants in the risk assessment of contaminated soils?

    Directory of Open Access Journals (Sweden)

    Ana Gavina

    Full Text Available Site-specific risk assessment of contaminated areas indicates priority areas for intervention, and provides helpful information for risk managers. This study was conducted in the Ervedosa mine area (Bragança, Portugal), where both underground and open-pit exploration of tin and arsenic minerals was performed for about a century (1857-1969). We aimed to obtain ecotoxicological information with terrestrial and aquatic plant species to integrate into the risk assessment of this mine area. Further, we also intended to evaluate whether the assessment of other parameters, in standard assays with terrestrial plants, can improve the identification of phytotoxic soils. For this purpose, soil samples were collected at 16 sampling sites distributed along four transects defined within the mine area, and at one reference site. General soil physical and chemical parameters and total and extractable metal contents were analyzed. Assays were performed for soil elutriates and for the whole soil matrix following standard guidelines for the growth inhibition assay with Lemna minor and the emergence and seedling growth assay with Zea mays. At the end of the Z. mays assay, relative water content, membrane permeability, leaf area, content of photosynthetic pigments (chlorophylls and carotenoids), malondialdehyde levels, proline content, and chlorophyll fluorescence (Fv/Fm and ΦPSII) parameters were evaluated. In general, the soils near the exploration area revealed high levels of Al, Mn, Fe and Cu. Almost all the soils from transects C, D and F presented total concentrations of arsenic well above available soil screening benchmark values. Elutriates of several soils from sampling sites near the exploration and ore treatment areas were toxic to L. minor, suggesting that the retention function of these soils was seriously compromised. In the Z. mays assay, plant performance parameters (other than those recommended by standard protocols) allowed the identification of more phytotoxic soils

  5. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points – A Review

    Directory of Open Access Journals (Sweden)

    Xiaoli Ding

    2009-02-01

    Full Text Available Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, one of the important factors in improving the accuracy of InSAR image co-registration, are reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie point matching and the measurement of interferogram quality.

  6. Indexing Large Visual Vocabulary by Randomized Dimensions Hashing for High Quantization Accuracy: Improving the Object Retrieval Quality

    Science.gov (United States)

    Yang, Heng; Wang, Qing; He, Zhoucan

    The bag-of-visual-words approach, inspired by text retrieval methods, has proven successful in achieving high performance in object retrieval on large-scale databases. A key step of these methods is the quantization stage, which maps the high-dimensional image feature vectors to discriminatory visual words. In this paper, we consider the quantization step as a nearest neighbor search in a large visual vocabulary, and thus propose a randomized dimensions hashing (RDH) algorithm to efficiently index and search the large visual vocabulary. The experimental results demonstrate that the proposed algorithm can effectively increase the quantization accuracy compared to the vocabulary tree based methods which represent the state-of-the-art. Consequently, the object retrieval performance can be significantly improved by our method on large-scale databases.
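Hashing on random subsets of dimensions can be sketched as follows. This is a generic sign-based variant under assumed names, not the paper's exact RDH construction: each table keys a descriptor by the signs of a few randomly chosen dimensions, and a query probes the matching bucket in every table before an exact distance check.

```python
import random

def make_tables_dims(dim, n_tables, n_dims, seed=0):
    """Pick a random subset of dimensions for each hash table."""
    rng = random.Random(seed)
    return [rng.sample(range(dim), n_dims) for _ in range(n_tables)]

def rdh_key(vec, dims):
    """Hash a descriptor by the signs of the selected dimensions."""
    return tuple(vec[d] > 0 for d in dims)

def build_index(vocabulary, tables_dims):
    """Bucket every visual word into each table."""
    tables = [{} for _ in tables_dims]
    for word_id, vec in enumerate(vocabulary):
        for table, dims in zip(tables, tables_dims):
            table.setdefault(rdh_key(vec, dims), []).append(word_id)
    return tables

def candidate_words(query, tables, tables_dims):
    """Union of colliding words; an exact distance check then picks the nearest."""
    out = set()
    for table, dims in zip(tables, tables_dims):
        out.update(table.get(rdh_key(query, dims), []))
    return out
```

Multiple tables trade memory for recall: a near neighbor missed by one random subset is likely caught by another.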

  7. General formula for on-axis sun-tracking system and its application in improving tracking accuracy of solar collector

    Energy Technology Data Exchange (ETDEWEB)

    Chong, K.K.; Wong, C.W. [Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Off Jalan Genting Kelang, Setapak, 53300 Kuala Lumpur (Malaysia)

    2009-03-15

    Azimuth-elevation and tilt-roll tracking mechanisms are among the most commonly used sun-tracking methods for aiming a solar collector towards the sun at all times. For many decades, each of these two sun-tracking methods has had its own specific sun-tracking formula, and the two have not been interrelated. In this paper, the most general form of sun-tracking formula, which embraces all possible on-axis tracking methods, is presented. The general sun-tracking formula not only provides a general mathematical solution, but, more significantly, can improve sun-tracking accuracy by accounting for the installation error of the solar collector. (author)
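As a concrete instance of the azimuth-elevation case, the standard spherical-astronomy relations (not the paper's general formula) give the two drive angles from latitude, solar declination and hour angle:

```python
import math

def sun_angles(lat_deg, decl_deg, hour_angle_deg):
    """Solar elevation and azimuth (degrees, azimuth clockwise from north)
    from the standard solar-position equations."""
    lat, decl, h = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    sin_el = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(h)
    el = math.asin(sin_el)
    cos_az = (math.sin(decl) - math.sin(lat) * sin_el) / (math.cos(lat) * math.cos(el))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))   # clamp rounding error
    if hour_angle_deg > 0:          # afternoon: sun is west of the meridian
        az = 360.0 - az
    return math.degrees(el), az
```

An ideal tracker drives its two axes to these angles; the paper's contribution is a general formulation that additionally absorbs the collector's installation (misalignment) errors.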

  8. Pooling Ocular Swab Specimens from Tanzania for testing by Roche Amplicor and Aptima Combo 2 Assays for the detection of Chlamydia trachomatis: Accuracy and Cost Savings

    Science.gov (United States)

    Dize, Laura; West, Sheila; Quinn, Thomas C.; Gaydos, Charlotte A.

    2014-01-01

    Ocular swabs collected in Tanzania were evaluated by Amplicor CT and Aptima Combo 2 assays for the detection of Chlamydia trachomatis (CT) to determine whether pooling could be used to reduce the cost of detection. Pooling was found to be an accurate method and resulted in a cost savings of 62.2%. PMID:24079951
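The economics of pooling follow from Dorfman two-stage testing: specimens are assayed in pools, and only positive pools are retested individually. A sketch of the expected assay count under that model (the 62.2% figure above came from the study's actual specimens and pool design, not from this formula):

```python
def expected_tests_per_specimen(prevalence, pool_size):
    """Dorfman two-stage pooling: one pooled assay per pool, plus individual
    retests whenever the pool tests positive."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

def savings_pct(prevalence, pool_size):
    """Percent reduction in assays relative to testing every specimen once."""
    return 100.0 * (1.0 - expected_tests_per_specimen(prevalence, pool_size))
```

Savings shrink as prevalence rises, because more pools test positive and trigger retesting; pooling pays off most in low-prevalence trachoma surveillance settings like this one.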

  9. Improved Activity Assay Method for Arginine Kinase Based on a Ternary Heteropolyacid System

    Institute of Scientific and Technical Information of China (English)

    陈宝玉; 郭勤; 郭智; 王希成

    2003-01-01

    This paper presents a new system for the activity assay of arginine kinase (AK), based on the spectrophotometric determination of an ascorbic acid-reduced blue ternary heteropolyacid composed of bismuth, molybdate and the phosphate released from N-phospho-L-arginine (PArg) in the forward catalysis reaction. The assay conditions, including the formulation of the phosphate determination reagent (PDR), the assay timing, and the linear activity range of the enzyme concentration, have been tested and optimized. Under these conditions, the ternary heteropolyacid color develops completely within 1 min and is stable for at least 15 min, with an absorbance maximum at 700 nm and a molar extinction coefficient of 15.97 (mmol/L)⁻¹·cm⁻¹ for the phosphate. Standard curves for phosphate show a good linearity of 0.999. Compared with previous activity assay methods for AK, this system exhibits superior sensitivity, reproducibility, and adaptability to various conditions in enzymological studies. This method also reduces the assay time and avoids the use of some expensive instruments and reagents.
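With the reported extinction coefficient, the Beer-Lambert law converts the A700 reading into released phosphate, from which activity follows. The helper names and the unit definition of activity below are assumptions for illustration:

```python
def phosphate_mmol_per_l(a700, epsilon=15.97, path_cm=1.0):
    """Beer-Lambert: A = epsilon * c * l, with epsilon in (mmol/L)^-1 cm^-1."""
    return a700 / (epsilon * path_cm)

def ak_activity(a700, minutes, volume_ml):
    """Arginine kinase activity as micromoles of phosphate released per minute
    (one common unit definition, assumed here)."""
    conc = phosphate_mmol_per_l(a700)      # mmol/L is numerically umol/mL
    return conc * volume_ml / minutes      # umol released per minute
```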

  10. Reassessment of CT images to improve diagnostic accuracy in patients with suspected acute appendicitis and an equivocal preoperative CT interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyun Cheol; Yang, Dal Mo; Kim, Sang Won [Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Department of Radiology, Seoul (Korea, Republic of); Park, Seong Jin [Kyung Hee University Hospital, College of Medicine, Kyung Hee University, Department of Radiology, Seoul (Korea, Republic of)

    2012-06-15

    To identify CT features that discriminate individuals with and without acute appendicitis in patients with equivocal CT findings, and to assess whether knowledge of these findings improves diagnostic accuracy. 53 patients who underwent appendectomy with an indeterminate preoperative CT interpretation were selected and allocated to an acute appendicitis group or a non-appendicitis group. The 53 CT examinations were reviewed by two radiologists in consensus to identify CT findings that could aid in the discrimination of those with and without appendicitis. In addition, two further radiologists were requested to evaluate independently the 53 CT examinations using a 4-point scale, both before and after being informed of the potentially discriminating criteria. CT findings found to be significantly different between the two groups were: the presence of appendiceal wall enhancement, intraluminal air in the appendix, a coexistent inflammatory lesion, and appendiceal wall thickening (P < 0.05). Areas under the curves of reviewers 1 and 2 significantly increased from 0.516 and 0.706 to 0.677 and 0.841, respectively, when the reviewers were told which CT variables were significant (P = 0.0193 and P = 0.0397, respectively). Knowledge of the identified CT findings was found to improve diagnostic accuracy for acute appendicitis in patients with equivocal CT findings. Key points: • Numerous patients with clinically equivocal appendicitis do not have acute appendicitis. • Computed tomography (CT) helps to reduce the negative appendectomy rate. • CT is not always infallible and may also demonstrate indeterminate findings. • However, knowledge of significant CT variables can further reduce the negative appendectomy rate. • An equivocal CT interpretation of appendicitis should be reassessed with this knowledge. (orig.)

  11. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A. [Department of Radiotherapy, Academic Medical Center, Meiberdreef 9, 1105 AZ Amsterdam (Netherlands); Bondar, L.; Zolnay, A. G.; Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2013-02-15

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  12. Diagnostic Accuracy of GeneXpert MTB/RIF Assay in Comparison to Conventional Drug Susceptibility Testing Method for the Diagnosis of Multidrug-Resistant Tuberculosis

    Science.gov (United States)

    Pandey, Pratikshya; Rijal, Komal Raj; Shrestha, Bhawana; Kattel, Sirita; Banjara, Megha Raj; Maharjan, Bhagwan; KC, Rajendra

    2017-01-01

    Xpert MTB/RIF assay is regarded as a great achievement of modern medicine for the rapid diagnosis of multidrug-resistant tuberculosis (MDR-TB). The main purpose of this study was to determine the performance of the Xpert MTB/RIF assay compared to the conventional drug susceptibility testing (DST) method for the diagnosis of MDR-TB. A comparative cross-sectional study was carried out at the German-Nepal Tuberculosis Project, Kathmandu, Nepal, from April 2014 to September 2014. A total of 88 culture-positive clinical samples (83 pulmonary and 5 extra-pulmonary) received during the study period were analyzed for detection of multidrug-resistant tuberculosis by both the GeneXpert MTB/RIF assay and the conventional DST method. McNemar's chi-square test was used to compare the performance of Xpert with that of the DST method. A p-value of less than 0.05 was considered statistically significant. Of the 88 culture-positive samples, one was reported as invalid while 2 were found to contain nontuberculous Mycobacteria (NTM). Among the remaining 85 Mycobacterium tuberculosis culture-positive samples, 69 were found to be MDR-TB positive by both methods. The overall sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the GeneXpert MTB/RIF assay were found to be 98.6%, 100%, 100% and 93.8% respectively. Statistically, there was no significant difference between the diagnostic performance of Xpert and the conventional DST method for detection of MDR-TB. The GeneXpert MTB/RIF assay was found to be highly sensitive, specific and comparable to the gold standard conventional DST method for the diagnosis of MDR-TB. PMID:28081227
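The reported figures are consistent with a 2x2 table of TP=69, FN=1, FP=0, TN=15 against DST as the reference standard; these counts are inferred from the percentages, so treat them as an assumption. The standard definitions reproduce the reported values:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-accuracy measures, as percentages."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "ppv":         100.0 * tp / (tp + fp),
        "npv":         100.0 * tn / (tn + fn),
    }

# Counts inferred from the abstract's percentages (an assumption, not source data).
m = diagnostic_metrics(tp=69, fp=0, fn=1, tn=15)
```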

  13. Improving the accuracy of estimates of animal path and travel distance using GPS drift-corrected dead reckoning.

    Science.gov (United States)

    Dewhirst, Oliver P; Evans, Hannah K; Roskilly, Kyle; Harvey, Richard J; Hubel, Tatjana Y; Wilson, Alan M

    2016-09-01

    Route taken and distance travelled are important parameters for studies of animal locomotion. They are often measured using a collar equipped with GPS. Collar weight restrictions limit battery size, which leads to a compromise between collar operating life and GPS fix rate. In studies that rely on linear interpolation between intermittent GPS fixes, path tortuosity will often lead to inaccurate path and distance travelled estimates. Here, we investigate whether GPS-corrected dead reckoning can improve the accuracy of localization and distance travelled estimates while maximizing collar operating life. Custom-built tracking collars were deployed on nine freely exercising domestic dogs to collect high fix rate GPS data. Simulations were carried out to measure the extent to which combining accelerometer-based speed and magnetometer heading estimates (dead reckoning) with low fix rate GPS drift correction could improve the accuracy of path and distance travelled estimates. In our study, median 2-dimensional root-mean-squared (2D-RMS) position error was between 158 and 463 m (median path length 16.43 km) and distance travelled was underestimated by between 30% and 64% when a GPS position fix was taken every 5 min. Dead reckoning with GPS drift correction (1 GPS fix every 5 min) reduced 2D-RMS position error to between 15 and 38 m and distance travelled to between an underestimation of 2% and an overestimation of 5%. Achieving this accuracy from GPS alone would require approximately 12 fixes every minute and result in a battery life of approximately 11 days; dead reckoning reduces the number of fixes required, enabling a collar life of approximately 10 months. Our results are generally applicable to GPS-based tracking studies of quadrupedal animals and could be applied to studies of energetics, behavioral ecology, and locomotion. This low-cost approach overcomes the limitation of low fix rate GPS and enables the long-term deployment of lightweight GPS collars.
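
    The approach above can be sketched as dead reckoning (integrating speed and heading into a track) followed by linearly redistributing the end-point error against an intermittent GPS fix. This is an illustrative simplification, not the authors' collar firmware, and all numbers are invented:

```python
import math

def dead_reckon(start, speeds, headings, dt=1.0):
    """Integrate speed (m/s) and heading (rad from north) into an x/y track.
    Real collars derive speed from accelerometers and heading from
    magnetometers at much higher rates; this is a minimal sketch."""
    x, y = start
    track = [(x, y)]
    for v, h in zip(speeds, headings):
        x += v * dt * math.sin(h)  # east component
        y += v * dt * math.cos(h)  # north component
        track.append((x, y))
    return track

def drift_correct(track, gps_fix):
    """Distribute the end-point error (dead-reckoned end vs GPS fix)
    linearly back along the segment, the simplest drift correction."""
    ex = gps_fix[0] - track[-1][0]
    ey = gps_fix[1] - track[-1][1]
    n = len(track) - 1
    return [(x + ex * i / n, y + ey * i / n) for i, (x, y) in enumerate(track)]

# Four 1 m/s steps due north, then correct against a hypothetical GPS fix.
track = dead_reckon((0.0, 0.0), speeds=[1.0] * 4, headings=[0.0] * 4)
corrected = drift_correct(track, gps_fix=(0.4, 4.2))
print(corrected[-1])
```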

  14. Application of immunomagnetic particles to enzyme-linked immunosorbent assay (ELISA) for improvement of detection sensitivity of HCG.

    Science.gov (United States)

    Kuo, Hsiao-Ting; Yeh, Jay Z; Wu, Po-Hua; Jiang, Chii-Ming; Wu, Ming-Chang

    2012-01-01

    This investigation aimed to apply superparamagnetic particles to an enzyme-linked immunosorbent assay (SPIO-ELISA) for human chorionic gonadotropin (hCG) in order to enhance the detection sensitivity for hCG. We found that N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC) was the best cross-linking reagent to link anti-hCG α antibody to the superparamagnetic particles (SPIO-anti-hCG α antibody immunomagnetic particles). To improve the specificity of the assay, a horseradish peroxidase (HRP)-labeled anti-hCG β monoclonal antibody was used to detect captured hCG in a double-antibody sandwich ELISA. Application of SPIO-ELISA to the determination of hCG increased the sensitivity to 1 mIU/mL, a level of sensitivity enabling the diagnosis of pregnancy during the early gestational period.

  15. Ultrasonication of pyrogenic microorganisms improves the detection of pyrogens in the Mono Mac 6 assay

    DEFF Research Database (Denmark)

    Moesby, Lise; Hansen, E W; Christensen, J D

    2000-01-01

    of the assay. The interleukin-6 inducing capacity of a broad spectrum of UV-killed and ultrasonicated microorganisms is examined in Mono Mac 6 cells. The interleukin-6 secretion is determined in a sandwich immunoassay (DELFIA). The Mono Mac 6 assay is able to detect UV-killed Bacillus subtilis, Staphylococcus aureus and Salmonella typhimurium, but neither Candida albicans nor Aspergillus niger. After ultrasonication of the microorganisms it is possible to detect C. albicans and A. niger. The interleukin-6 inducing ability of the examined microorganisms is in no case reduced after ultrasonic treatment. However...

  16. Bovine Tuberculosis: Analyzing the Parameters of the Interferon Gamma Assay and Improved Diagnosis with New Antigens

    Science.gov (United States)

    Bovine tuberculosis (TB), a zoonotic disease with a major economic impact, continues to be a significant problem with a global perspective. The BOVIGAM® interferon gamma (IFN-gamma) assay constitutes a laboratory-based tuberculosis test and is widely used complementary to the tuberculin skin test....

  17. Application of an Improved Enzyme-Linked Immunosorbent Assay Method for Serological Diagnosis of Canine Leishmaniasis

    NARCIS (Netherlands)

    N. Santarem; R. Silvestre; L. Cardoso; H. Schallig; S.G. Reed; A. Cordeiro-da-Silva

    2010-01-01

    Accurate diagnosis of canine leishmaniasis (CanL) is essential toward a more efficient control of this zoonosis, but it remains problematic due to the high incidence of asymptomatic infections. In this study, we present data on the development of enzyme-linked immunosorbent assay (ELISA)-based techniques ...

  18. Improvements in dose accuracy delivered with static-MLC IMRT on an integrated linear accelerator control system

    Energy Technology Data Exchange (ETDEWEB)

    Li Ji; Wiersma, Rodney D.; Stepaniak, Christopher J.; Farrey, Karl J.; Al-Hallaq, Hania A. [Department of Radiation and Cellular Oncology, University of Chicago, 5758 South Maryland Avenue, MC9006, Chicago, Illinois 60637 (United States)

    2012-05-15

    Trilogy and the TrueBeam up to 10 MU/segment, at all dose rates greater than 100 MU/min. The linear trend of decreasing dose accuracy as a function of increasing dose rate on the Trilogy is no longer apparent on the TrueBeam, even for dose rates as high as 2400 MU/min. Dose inaccuracy averaged over all ten segments in each beam delivery sequence was larger for the Trilogy than the TrueBeam, with the largest discrepancy (0.2% vs 3%) occurring for 1 MU/segment beams at both 300 and 600 MU/min. Conclusions: Earlier generations of Varian LINACs exhibited large dose variations for small MU segments in SMLC-IMRT delivery. Our results confirmed these findings. The dose delivery accuracy for SMLC-IMRT is significantly improved on the TrueBeam compared to the Trilogy for every combination of low MU/segment (1-10) and high dose rate (200-600 MU/min), in part due to the faster sampling rate (100 vs 20 Hz) and enhanced electronic integration of the MLC controller with the LINAC. SMLC-IMRT can be implemented on the TrueBeam with higher dose accuracy per beam (±0.2% vs ±3%) than previous generations of Varian C-series LINACs for 1 MU/segment delivered at 600 MU/min.

  19. Evaluating Landsat 8 Satellite Sensor Data for Improved Vegetation Mapping Accuracy of the New Hampshire Coastal Watershed Area

    Science.gov (United States)

    Ledoux, Lindsay

    the previous Landsat sensor (Landsat 7). Once classification had been performed, traditional and area-based accuracy assessments were implemented. Comparison measures were also calculated (i.e. Kappa, Z test statistic). The results from this study indicate that, while using Landsat 8 imagery is useful, the additional spectral bands provided in the Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) do not provide an improvement in vegetation classification accuracy in this study.

  20. Applying machine learning approaches to improving the accuracy of breast-tumour diagnosis via fine needle aspiration

    Institute of Scientific and Technical Information of China (English)

    YUAN Qian-fei; CAI Cong-zhong; XIAO Han-guang; LIU Xing-hua

    2007-01-01

    Diagnosis and treatment of breast cancer have improved during the last decade; however, breast cancer is still a leading cause of death among women worldwide. Early detection and accurate diagnosis of this disease have been demonstrated to be an approach to long survival of the patients. As an attempt to develop a reliable method for diagnosing breast cancer, we integrated support vector machine (SVM), k-nearest neighbor and probabilistic neural network into a complex machine learning approach to detect malignant breast tumours through a set of indicators consisting of age and ten cellular features of fine-needle aspirates of the breast, which were ranked according to signal-to-noise ratio to identify determinants distinguishing benign breast tumours from malignant ones. The method turned out to significantly improve the diagnosis, with a sensitivity of 94.04%, a specificity of 97.37%, and an overall accuracy of up to 96.24% when SVM was adopted with the sigmoid kernel function under 5-fold cross-validation. The results suggest that SVM is a promising methodology to be further developed into a practical adjunct to help discern benign and malignant breast tumours and thus reduce the incidence of misdiagnosis.
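
    The k-fold validation protocol described above can be illustrated with a self-contained sketch. Rather than the full SVM/k-NN/PNN ensemble, this shows plain 5-fold cross-validation around a minimal k-nearest-neighbor classifier; the two-cluster data are a hypothetical stand-in for the fine-needle aspiration features, not the study's data:

```python
import random

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training points (squared Euclidean)."""
    nearest = sorted(
        train, key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query))
    )[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

def cross_validate(data, folds=5, k=3):
    """Mean accuracy over `folds` disjoint train/test splits (plain k-fold CV)."""
    data = list(data)
    random.Random(0).shuffle(data)
    size = len(data) // folds
    accuracies = []
    for f in range(folds):
        test = data[f * size:(f + 1) * size]
        train = data[:f * size] + data[(f + 1) * size:]
        hits = sum(knn_predict(train, features, k) == label for features, label in test)
        accuracies.append(hits / len(test))
    return sum(accuracies) / folds

# Synthetic, well-separated Gaussian clusters standing in for the indicators.
rng = random.Random(1)
data = [((rng.gauss(0, 1), rng.gauss(0, 1)), 0) for _ in range(50)]
data += [((rng.gauss(4, 1), rng.gauss(4, 1)), 1) for _ in range(50)]
print(f"mean 5-fold accuracy: {cross_validate(data):.2f}")
```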

  1. Improving the accuracy of low level quantum chemical calculation for absorption energies: the genetic algorithm and neural network approach.

    Science.gov (United States)

    Gao, Ting; Shi, Li-Li; Li, Hai-Bin; Zhao, Shan-Shan; Li, Hui; Sun, Shi-Ling; Su, Zhong-Min; Lu, Ying-Hua

    2009-07-07

    The combination of genetic algorithm and back-propagation neural network correction approaches (GABP) has successfully improved the calculation accuracy of absorption energies. In this paper, the absorption energies of 160 organic molecules are corrected to test this method. First, GABP1 is introduced to determine the quantitative relationship between the experimental results and calculations obtained using quantum chemical methods. After GABP1 correction, the root-mean-square (RMS) deviations of the calculated absorption energies reduce from 0.32, 0.95 and 0.46 eV to 0.14, 0.19 and 0.18 eV for the B3LYP/6-31G(d), B3LYP/STO-3G and ZINDO methods, respectively. The corrected results of the B3LYP/6-31G(d)-GABP1 method are in good agreement with experimental results. Then, GABP2 is introduced to determine the quantitative relationship between the results of the B3LYP/6-31G(d)-GABP1 method and calculations of the lower-accuracy methods (B3LYP/STO-3G and ZINDO). After GABP2 correction, the RMS deviations of the calculated absorption energies reduce to 0.20 and 0.19 eV for the B3LYP/STO-3G and ZINDO methods, respectively. The results show that the RMS deviations after GABP1 and GABP2 correction are similar for the B3LYP/STO-3G and ZINDO methods. Thus, B3LYP/6-31G(d)-GABP1 is a better method to predict absorption energies and can be used as an approximation of experimental results where experimental results are unknown or uncertain. This method may be used for predicting absorption energies of larger organic molecules that are unavailable by experimental methods and by high-accuracy theoretical methods with larger basis sets. The performance of this method was demonstrated by application to the absorption energy of the aldehyde carbazole precursor.
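
    The RMS deviations reported above are the standard figure of merit for such correction schemes. A minimal sketch with invented energies, using a constant additive shift as a stand-in for the learned GABP correction (the real correction is a fitted nonlinear model, not a constant):

```python
def rms_deviation(calculated, reference):
    """Root-mean-square deviation (eV) between calculated and reference values."""
    n = len(calculated)
    return (sum((c - r) ** 2 for c, r in zip(calculated, reference)) / n) ** 0.5

# Invented absorption energies (eV); the correction is mimicked here by
# removing a constant systematic overestimate of 0.35 eV.
experimental = [3.10, 2.85, 3.40, 2.60]
raw = [3.45, 3.20, 3.75, 2.95]
corrected = [x - 0.35 for x in raw]
print(rms_deviation(raw, experimental), rms_deviation(corrected, experimental))
```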

  2. Antisense sequencing improves the accuracy and precision of A-to-I editing measurements using the peak height ratio method

    Directory of Open Access Journals (Sweden)

    Rinkevich Frank D

    2012-01-01

    Background: A-to-I RNA editing is found in all phyla of animals and contributes to transcript diversity that may have profound impacts on behavior and physiology. Many transcripts of genes involved in axonal conductance, synaptic transmission and modulation are the targets of A-to-I RNA editing. There are a number of methods to measure the extent of A-to-I RNA editing, but they are generally costly and time consuming. One way to determine the frequency of A-to-I RNA editing is the peak height ratio method, which compares the size of peaks on electropherograms that represent unedited and edited sites. Findings: Sequencing of 4 editing sites of the Dα6 nicotinic acetylcholine receptor subunit with an antisense primer (which uses T/C peaks to measure unedited and edited sites, respectively) showed very accurate and precise measurements of A-to-I RNA editing. The accuracy and precision were excellent for all editing sites, including those edited at high or low frequencies. The frequency of A-to-I RNA editing was comparable to the editing frequency as measured by clone counting from the same sample. Sequencing these same sites with the sense primer (which uses A/G peaks) yielded inaccurate and imprecise measurements. Conclusions: We have validated and improved the accuracy and precision of the peak height ratio method to measure the frequency of A-to-I RNA editing, and shown that results are primer specific. Thus, the correct sequencing primer must be utilized for the most dependable data. When compared to other methods used to measure the frequency of A-to-I RNA editing, the major benefits of the peak height ratio method are that it is inexpensive, fast, non-labor-intensive and easily adaptable to many laboratory and field settings.
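
    The peak height ratio itself is a one-line computation. A sketch, assuming antisense reads where the unedited site appears as a T peak and the edited site as a C peak; the peak heights are illustrative, not measured values:

```python
def editing_frequency(unedited_peak, edited_peak):
    """A-to-I editing frequency from electropherogram peak heights.
    With the antisense primer, the unedited base reads as a T peak and
    the edited base as a C peak, so frequency = C / (T + C)."""
    return edited_peak / (unedited_peak + edited_peak)

# Illustrative peak heights in arbitrary fluorescence units.
freq = editing_frequency(unedited_peak=300, edited_peak=100)
print(f"editing frequency: {freq:.0%}")
```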

  3. Deriving bio-equivalents from in vitro bioassays: assessment of existing uncertainties and strategies to improve accuracy and reporting.

    Science.gov (United States)

    Wagner, Martin; Vermeirssen, Etiënne L M; Buchinger, Sebastian; Behr, Maximilian; Magdeburg, Axel; Oehlmann, Jörg

    2013-08-01

    Bio-equivalents (e.g., 17β-estradiol or dioxin equivalents) are commonly employed to quantify the in vitro effects of complex human or environmental samples. However, there is no generally accepted data analysis strategy for estimating and reporting bio-equivalents. Therefore, the aims of the present study are to 1) identify common mathematical models for the derivation of bio-equivalents from the literature, 2) assess the ability of those models to correctly predict bio-equivalents, and 3) propose measures to reduce uncertainty in their calculation and reporting. We compiled a database of 234 publications that report bio-equivalents. From the database, we extracted 3 data analysis strategies commonly used to estimate bio-equivalents. These models are based on linear or nonlinear interpolation, and the comparison of effect concentrations (ECx). To assess their accuracy, we employed simulated data sets in different scenarios. The results indicate that all models lead to a considerable misestimation of bio-equivalents if certain mathematical assumptions (e.g., goodness of fit, parallelism of dose-response curves) are violated. However, nonlinear interpolation is most suitable to predict bio-equivalents from single-point estimates. Regardless of the model, subsequent linear extrapolation of bio-equivalents generates additional inaccuracy if the prerequisite of parallel dose-response curves is not met. When all these factors are taken into consideration, it becomes clear that data analysis introduces considerable uncertainty in the derived bio-equivalents. To improve accuracy and transparency of bio-equivalents, we propose a novel data analysis strategy and a checklist for reporting Minimum Information about Bio-equivalent ESTimates (MIBEST).
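
    The ECx-comparison model mentioned above reduces to a ratio of effect concentrations, which is only meaningful under the parallel dose-response assumption the study shows is frequently violated. A sketch with invented EC50 values (not taken from the paper):

```python
def bioequivalent_factor(ec_reference, ec_sample):
    """Relative potency by ECx comparison: the ratio of the reference
    compound's effect concentration to the sample's. Valid only when the
    two dose-response curves are parallel."""
    return ec_reference / ec_sample

# Invented values: reference EC50 of 10 (arbitrary concentration units)
# versus a sample extract EC50 of 2500 in the same units.
rep = bioequivalent_factor(ec_reference=10.0, ec_sample=2500.0)
print(rep)
```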

  4. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Prashant, E-mail: p.patel@bham.ac.uk [University of Birmingham, School of Cancer Sciences, Vincent Drive (United Kingdom); Rangarajan, Balaji; Mangat, Kamarjit, E-mail: kamarjit.mangat@uhb.nhs.uk, E-mail: kamarjit.mangat@nhs.net [University Hospital Birmingham NHS Trust, Department of Radiology (United Kingdom)

    2015-08-15

    Purpose: Various methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease. Materials and Methods: Fifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique with the use of a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open within the stricture, the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively. Results: We report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication. Conclusion: We propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  5. Combined hepatitis C virus (HCV) antigen-antibody detection assay does not improve diagnosis for seronegative individuals with occult HCV infection.

    Science.gov (United States)

    Quiroga, Juan A; Castillo, Inmaculada; Pardo, Margarita; Rodríguez-Iñigo, Elena; Carreño, Vicente

    2006-12-01

    A combined hepatitis C virus (HCV) antigen-antibody assay was evaluated for 115 seronegative individuals with occult HCV infection. The assay was reactive in one patient and negative to weakly reactive in three others (all four gave indeterminate results by supplemental assay) but failed to detect HCV in the remaining patients. Despite increased sensitivity the combined assay does not improve serodiagnosis of occult HCV infection.

  6. Combined Hepatitis C Virus (HCV) Antigen-Antibody Detection Assay Does Not Improve Diagnosis for Seronegative Individuals with Occult HCV Infection▿

    OpenAIRE

    Quiroga, Juan A.; Castillo, Inmaculada; Pardo, Margarita; Rodríguez-Iñigo, Elena; CARREÑO, VICENTE

    2006-01-01

    A combined hepatitis C virus (HCV) antigen-antibody assay was evaluated for 115 seronegative individuals with occult HCV infection. The assay was reactive in one patient and negative to weakly reactive in three others (all four gave indeterminate results by supplemental assay) but failed to detect HCV in the remaining patients. Despite increased sensitivity the combined assay does not improve serodiagnosis of occult HCV infection.

  7. Influence of Vitamin D Binding Protein on Accuracy of 25-Hydroxyvitamin D Measurement Using the ADVIA Centaur Vitamin D Total Assay

    Directory of Open Access Journals (Sweden)

    James Freeman

    2014-01-01

    Vitamin D status in different populations relies on accurate measurement of total serum 25-hydroxyvitamin D [25(OH)D] concentrations [i.e., 25(OH)D3 and 25(OH)D2]. This study evaluated agreement between the ADVIA Centaur Vitamin D Total assay for 25(OH)D testing (traceable to the NIST-Ghent reference method procedure) and a liquid chromatography tandem mass spectrometry (LC-MS/MS) method for various populations with different levels of vitamin D binding protein (DBP). Total serum 25(OH)D concentrations were measured for 36 pregnant women, 40 hemodialysis patients, and 30 samples (DBP-spiked or not) from healthy subjects. DBP levels were measured by ELISA. The mean serum DBP concentrations were higher for pregnancy (415 μg/mL) and lower for hemodialysis subjects (198 μg/mL) than for healthy subjects, and were highest for spiked serum (545 μg/mL). The average bias between the ADVIA Centaur assay and the LC-MS/MS method was −1.4% (healthy), −6.1% (pregnancy), and 4.4% (hemodialysis). The slightly greater bias for samples from some pregnancy and hemodialysis subjects with serum DBP levels outside of the normal healthy range fell within a clinically acceptable range, as reflected by analysis of their low-range (≤136 μg/mL), medium-range (137–559 μg/mL), and high-range (≥560 μg/mL) DBP groups. Thus, the ADVIA Centaur Vitamin D Total assay demonstrates acceptable performance compared with an LC-MS/MS method for populations containing different amounts of DBP.

  8. Dissolution Behavior and Content Uniformity of An Improved Tablet Formulation Assayed by Spectrofluorometric and RIA Methods

    Directory of Open Access Journals (Sweden)

    Morteza Rafiee-Tehrani

    1990-06-01

    Digoxin 0.25 mg tablets were manufactured by pregranulation of lactose-corn starch with 10% corn starch paste and deposition of solvent on pregranules to make digoxin granules. In the preparation of tablet A, granules of lactose-corn starch were uniformly moistened with a 5% chloroform-ethanol solution (2:1 v/v) of digoxin by simple blending. Tablet B was produced by a spray granulation system in which the solvent was sprayed on the granules of lactose-corn starch using a laboratory-size fluidized bed drier (Uniglatt). The content uniformity and dissolution of both tablets were determined by spectrofluorometric and radioimmunoassay (RIA) methods modified for the assay of tablet solutions. One commercially available brand of digoxin tablet (C) was included in the dissolution study for comparison. The spectrofluorometric technique is based on fluorometric measurement of the dehydration product of the cardiotonic steroid resulting from its reaction with hydrogen peroxide in concentrated hydrochloric acid. For the RIA method, the filtrate was diluted to a theoretical concentration of 2.5 ng/mL. Aliquots of this dilution were then assayed for digoxin content using a commercial digoxin 125I RIA kit. Results from both assay methods were extrapolated to the total tablet content and compared with the labeled amount of 20 individual tablets. All tablet assay results were within the USP standards for content uniformity and dissolution. The individual tablet deviations from the labeled amount were smaller with the RIA method than with the spectrofluorometric method. There was no significant difference between the release of digoxin from the three products; thus, it is suggested that procedure B could easily be applied to the manufacture of digoxin tablets at industrial scale. It was also concluded that the RIA method could be used for digoxin tablet determination.

  9. Improved PCR assay for the species-specific identification and quantitation of Legionella pneumophila in water.

    Science.gov (United States)

    Cho, Min Seok; Ahn, Tae-Young; Joh, Kiseong; Lee, Eui Seok; Park, Dong Suk

    2015-11-01

    Legionellosis outbreaks are a major global health care problem. However, current Legionella risk assessments may be compromised by uncertainties in Legionella detection methods, infectious dose, and strain infectivity. These limitations may place public health at significant risk, leading to significant monetary losses in health care, and there are still unmet needs for the rapid identification and monitoring of legionellae in water systems. Therefore, in the present study, a primer set was designed based on a LysR-type transcriptional regulator (LTTR) family protein gene of Legionella pneumophila subsp. pneumophila str. Philadelphia 1, because BLAST searches showed this gene to be structurally diverse among species. The specificity of the primer set was evaluated using genomic DNA from 6 strains of L. pneumophila, 5 type strains of other related Legionella species, and 29 other reference pathogenic bacteria. The primer set used in the PCR assay amplified a 264-bp product for only the six targeted strains of L. pneumophila. The assay was able to detect at least 1.39 × 10³ copies/μl of cloned amplified target DNA when using purified DNA, or 7.4 × 10⁰ colony-forming units per reaction when using a calibrated cell suspension. In addition, the sensitivity and specificity of this assay were confirmed by successful detection of Legionella pneumophila in environmental water samples.

  10. Improved peroxyl radical scavenging TOSC assay to quantify antioxidant capacity using SIFT-MS.

    Science.gov (United States)

    Senthilmohan, Senti T; Davis, Brett M; Wilson, Paul F; McEwan, Murray J

    2009-01-01

    We report a new, fast, sensitive variation of the total oxyradical scavenging capacity (TOSC) assay for measuring the antioxidant capacity of pure compounds, plant extracts and biological fluids using selected ion flow tube mass spectrometry (SIFT-MS). The TOSC assay examines the partial inhibition of ethene formation in the presence of antioxidants that compete with alpha-keto-gamma-methiolbutyric acid (KMBA) for reactive oxygen species. The SIFT-MS-TOSC assay takes 15 s for each ethene analysis and the time interval between consecutive analyses is 20 s. We demonstrate the method by monitoring the antioxidant capacity of several standard radical scavengers of peroxyl radicals. For peroxyl radicals the measured SIFT-MS-TOSC concentrations necessary to produce 50% inhibition of radical reaction with KMBA are 6.1 +/- 0.3 microM for Trolox, 5.7 +/- 0.3 microM for ascorbic acid, 8.4 +/- 0.4 microM for uric acid and 38 +/- 2 microM for reduced glutathione.

  11. Temporary shielding of hot spots in the drainage areas of cutaneous melanoma improves accuracy of lymphoscintigraphic sentinel lymph node diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Maza, S.; Valencia, R.; Geworski, L.; Zander, A.; Munz, D.L. [Clinic for Nuclear Medicine, University Hospital Charite, Humboldt University of Berlin, Schumannstrasse 20-21, 10117 Berlin (Germany); Draeger, E.; Winter, H.; Sterry, W. [Clinic for Dermatology, Venereology and Allergology, University Hospital Charite, Humboldt University of Berlin, Berlin (Germany)

    2002-10-01

    Detection of the “true” sentinel lymph nodes, permitting correct staging of regional lymph nodes, is essential for management and prognostic assessment in malignant melanoma. In this study, it was prospectively evaluated whether simple temporary shielding of hot spots in lymphatic drainage areas could improve the accuracy of sentinel lymph node diagnostics. In 100 consecutive malignant melanoma patients (45 women, 55 men; age 11-91 years), dynamic and static lymphoscintigraphy in various views was performed after strict intracutaneous application of technetium-99m nanocolloid (40-150 MBq; 0.05 ml/deposit) around the tumour (31 patients) or the biopsy scar (69 patients, safety distance 1 cm). The images were acquired with and without temporary lead shielding of the most prominent hot spots in the drainage area. In 33/100 patients, one or two additional sentinel lymph nodes that showed less tracer accumulation or were smaller (<1.5 cm) were detected after shielding. Four of these patients had metastases in the sentinel lymph nodes; the non-sentinel lymph nodes were tumour negative. In 3/100 patients, hot spots in the drainage area proved to be lymph vessels, lymph vessel intersections or lymph vessel ectasias after temporary shielding; hence, a node interpreted as a non-sentinel lymph node at first glance proved to be the real sentinel lymph node. In two of these patients, lymph node metastasis was histologically confirmed; the non-sentinel lymph nodes were tumour free. In 7/100 patients the exact course of lymph vessels could be mapped after shielding. In one of these patients, two additional sentinel lymph nodes (with metastasis) were detected. Overall, in 43/100 patients the temporary shielding yielded additional information, with sentinel lymph node metastases in 7%. In conclusion, when used in combination with dynamic acquisition in various views, temporary shielding of prominent hot spots in the drainage area of a malignant melanoma of the

  12. The Diagnostic Accuracy of the M2 Pyruvate Kinase Quick Stool Test--A Rapid Office Based Assay Test for the Detection of Colorectal Cancer.

    Directory of Open Access Journals (Sweden)

    Suresh Sithambaram

    M2 pyruvate kinase (M2PK) is an oncoprotein secreted by colorectal cancers into stools. This is the first report on the accuracy of a rapid stool test in the detection of colorectal cancer (CRC). The aim was to determine the sensitivity, specificity, and positive and negative predictive values of a rapid, point-of-care stool test for M2PK, the M2PK Quick. Consecutive cases of endoscopically diagnosed and histologically proven CRC were recruited. Stools were collected by patients and tested with the immunochromatographic M2PK Quick Test (ScheBo Biotech AG, Giessen, Germany). Controls were consecutively chosen from patients without any significant colorectal or gastrointestinal disease undergoing colonoscopy. CRC was staged according to the AJCC staging manual (7th edition) and location of tumor defined as proximal or distal. The sensitivity, specificity, positive predictive value, negative predictive value and overall accuracy were 93%, 97.5%, 94.9%, 96.5% and 96.0%, respectively. The positive predictive value for proximal tumors was significantly lower compared to distal tumors. No differences were seen between the different stages of the tumor. The M2PK Quick rapid, point-of-care test is a highly accurate test for the detection of CRC. It is easy and convenient to perform and a useful diagnostic test for the detection of CRC in a clinical practice setting.

  13. Reflections on Improving the Accuracy of Weather Forecasts

    Institute of Scientific and Technical Information of China (English)

    李学欣

    2014-01-01

    Weather forecasting is the most fundamental task in meteorological services. This paper analyzes the importance of weather forecast accuracy and the factors that affect it, and proposes several measures for improving the accuracy of weather forecasts, for reference.

  14. Novel molecular and computational methods improve the accuracy of insertion site analysis in Sleeping Beauty-induced tumors.

    Directory of Open Access Journals (Sweden)

    Benjamin T Brett

    The recent development of the Sleeping Beauty (SB) system has led to the development of novel mouse models of cancer. Unlike spontaneous models, SB causes cancer through the action of mutagenic transposons that are mobilized in the genomes of somatic cells to induce mutations in cancer genes. While previous methods have successfully identified many transposon-tagged mutations in SB-induced tumors, limitations in DNA sequencing technology have prevented a comprehensive analysis of large tumor cohorts. Here we describe a novel method for producing genetic profiles of SB-induced tumors using Illumina sequencing. This method has dramatically increased the number of transposon-induced mutations identified in each tumor sample to reveal a level of genetic complexity much greater than previously appreciated. In addition, Illumina sequencing has allowed us to more precisely determine the depth of sequencing required to obtain a reproducible signature of transposon-induced mutations within tumor samples. The use of Illumina sequencing to characterize SB-induced tumors should significantly reduce sampling error that undoubtedly occurs using previous sequencing methods. As a consequence, the improved accuracy and precision provided by this method will allow candidate cancer genes to be identified with greater confidence. Overall, this method will facilitate ongoing efforts to decipher the genetic complexity of the human cancer genome by providing more accurate comparative information from Sleeping Beauty models of cancer.

  15. Improving forecasting accuracy of medium and long-term runoff using artificial neural network based on EEMD decomposition.

    Science.gov (United States)

    Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo

    2015-05-01

    Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for the effective reservoir management. In this research, an artificial neural network (ANN) model coupled with the ensemble empirical mode decomposition (EEMD) is presented for forecasting medium and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using EEMD technique for attaining deeper insight into the data characteristics. Then all IMF components and residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series from Biuliuhe and Mopanshan in China, are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and the proposed EEMD-ANN model can attain significant improvement over ANN approach in medium and long-term runoff time series forecasting.
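The decompose-forecast-recombine idea described in this record can be sketched in a few lines. The sketch below is illustrative only: an edge-padded moving-average split stands in for EEMD (which would yield several IMFs plus a residue), and a least-squares autoregressive forecaster stands in for the per-component ANN; `decompose`, `fit_ar_forecast`, and the synthetic runoff series are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

def decompose(series, window=12):
    """Stand-in for EEMD: split the series into a smooth trend
    (edge-padded moving average) and a residual. Real EEMD would
    yield several intrinsic mode functions (IMFs) plus a residue."""
    padded = np.pad(series, window // 2, mode="edge")
    kernel = np.ones(window + 1) / (window + 1)
    trend = np.convolve(padded, kernel, mode="valid")
    return [series - trend, trend]

def fit_ar_forecast(component, lags=3):
    """Stand-in for the per-component ANN: a least-squares AR(lags)
    one-step-ahead forecaster."""
    X = np.column_stack([component[i:len(component) - lags + i]
                         for i in range(lags)])
    y = component[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(component[-lags:] @ coef)

def ensemble_forecast(series):
    # Forecast each component separately, then sum the forecasts.
    return sum(fit_ar_forecast(c) for c in decompose(series))

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
runoff = 10 + np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(200)
print(ensemble_forecast(runoff))
```

The design point is that each component is simpler to model than the raw series; the component forecasts are summed to form the ensemble forecast of the original series.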

  16. Does gadolinium-based contrast material improve diagnostic accuracy of local invasion in rectal cancer MRI? A multireader study.

    Science.gov (United States)

    Gollub, Marc J; Lakhman, Yulia; McGinty, Katrina; Weiser, Martin R; Sohn, Michael; Zheng, Junting; Shia, Jinru

    2015-02-01

    OBJECTIVE. The purpose of this study was to compare reader accuracy and agreement on rectal MRI with and without gadolinium administration in the detection of T4 rectal cancer. MATERIALS AND METHODS. In this study, two radiologists and one fellow independently interpreted all posttreatment MRI studies for patients with locally advanced or recurrent rectal cancer using unenhanced images alone or combined with contrast-enhanced images, with a minimum interval of 4 weeks. Readers evaluated involvement of surrounding structures on a 5-point scale and were blinded to pathology and disease stage. Sensitivity, specificity, negative predictive value, positive predictive value, and AUC were calculated and kappa statistics were used to describe interreader agreement. RESULTS. Seventy-two patients (38 men and 34 women) with a mean age of 61 years (range, 32-86 years) were evaluated. Fifteen patients had 32 organs invaded. Global AUCs without and with gadolinium administration were 0.79 and 0.77, 0.91 and 0.86, and 0.83 and 0.78 for readers 1, 2, and 3, respectively. AUCs before and after gadolinium administration were similar. Kappa values before and after gadolinium administration for pairs of readers ranged from 0.5 to 0.7. CONCLUSION. On the basis of pathology as a reference standard, the use of gadolinium during rectal MRI did not significantly improve radiologists' agreement or ability to detect T4 disease.

  17. Investigation of polymerase chain reaction assays to improve detection of bacterial involvement in bovine respiratory disease.

    Science.gov (United States)

    Bell, Colin J; Blackburn, Paul; Elliott, Mark; Patterson, Tony I A P; Ellison, Sean; Lahuerta-Marin, Angela; Ball, Hywel J

    2014-09-01

    Bovine respiratory disease (BRD) causes severe economic losses to the cattle farming industry worldwide. The major bacterial organisms contributing to the BRD complex are Mannheimia haemolytica, Histophilus somni, Mycoplasma bovis, Pasteurella multocida, and Trueperella pyogenes. The postmortem detection of these organisms in pneumonic lung tissue is generally conducted using standard culture-based techniques where the presence of therapeutic antibiotics in the tissue can inhibit bacterial isolation. In the current study, conventional and real-time polymerase chain reaction (PCR) assays were used to assess the prevalence of these 5 organisms in grossly pneumonic lung samples from 150 animals submitted for postmortem examination, and the results were compared with those obtained using culture techniques. Mannheimia haemolytica was detected in 51 cases (34%) by PCR and in 33 cases (22%) by culture, H. somni was detected in 35 cases (23.3%) by PCR and in 6 cases (4%) by culture, Myc. bovis was detected in 53 cases (35.3%) by PCR and in 29 cases (19.3%) by culture, P. multocida was detected in 50 cases (33.3%) by PCR and in 31 cases (20.7%) by culture, and T. pyogenes was detected in 42 cases (28%) by PCR and in 31 cases (20.7%) by culture, with all differences being statistically significant. The PCR assays indicated positive results for 111 cases (74%) whereas 82 cases (54.6%) were culture positive. The PCR assays have demonstrated a significantly higher rate of detection of all 5 organisms in cases of pneumonia in cattle in Northern Ireland than was detected by current standard procedures.

  18. Improving the accuracies of bathymetric models based on multiple regression for calibration (case study: Sarca River, Italy)

    Science.gov (United States)

    Niroumand-Jadidi, Milad; Vitti, Alfonso

    2016-10-01

    Optical imagery has the potential to extract spatially and temporally explicit bathymetric information in inland and coastal waters. Lyzenga's model and optimal band ratio analysis (OBRA) are the main bathymetric models; both provide linear relations with water depth. The former model is sensitive, and the latter quite robust, to substrate variability. Simple regression is the widely used approach for calibrating either model. In this research, multiple regression is examined for empirical calibration of the models in order to take advantage of all spectral channels of the imagery. This method is applied to both the Lyzenga and OBRA models for the bathymetry of a shallow Alpine river in Italy, using WorldView-2 (WV-2) and GeoEye images. In-situ depths were recorded using RTK GPS in two reaches. One half of the data is used for calibration of the models and the remaining half as independent check-points for accuracy assessment. In addition, a radiative transfer model is used to simulate a set of spectra across a range of depths, substrate types, and water column properties. The simulated spectra are convolved with the sensors' spectral bands for further bathymetric analysis. Analysis of the simulated spectra indicates that multiple regression improves the robustness of Lyzenga's model with respect to substrate variability. The improvements from the multiple regression approach are much more pronounced for Lyzenga's model than for the OBRA model. This is in line with findings from the real imagery; for instance, multiple regression applied to the calibration of the Lyzenga and OBRA models demonstrated, respectively, 22% and 9% higher determination coefficients (R2) as well as 3 cm and 1 cm better RMSEs compared to simple regression using the WV-2 image.
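Both calibration strategies compared in this record reduce to ordinary least squares; the difference is only the predictor set (a single band-ratio term versus all log-transformed bands). A minimal sketch on synthetic data, with hypothetical band slopes and noise levels rather than the paper's imagery:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
depth = rng.uniform(0.2, 2.0, n)           # metres
# Synthetic log-transformed radiance for 4 bands: each responds
# linearly to depth with a band-specific slope plus sensor noise.
slopes = np.array([-1.2, -0.8, -0.5, -0.3])
X = depth[:, None] * slopes + 0.05 * rng.standard_normal((n, 4))

def fit_predict(predictors, y):
    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(y)), predictors])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Simple regression: one band-ratio predictor (OBRA-style;
# a log ratio is a difference of log bands).
ratio = X[:, 0] - X[:, 2]
rmse_simple = rmse(fit_predict(ratio[:, None], depth), depth)
# Multiple regression: all four bands as predictors.
rmse_multi = rmse(fit_predict(X, depth), depth)
print(rmse_simple, rmse_multi)
```

Because the multiple regression pools information from every channel, its depth residuals are smaller than those of the single-ratio fit on this synthetic example, mirroring the direction of the paper's result.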

  19. Improving the Accuracy of the Water Surface Cover Type in the 30 m FROM-GLC Product

    Directory of Open Access Journals (Sweden)

    Luyan Ji

    2015-10-01

    Full Text Available The Finer Resolution Observation and Monitoring of Global Land Cover (FROM-GLC) product is the first 30 m resolution global land cover product from which a global water mask can be extracted. However, two major types of misclassification exist in this product, due to spectral similarity and spectral mixing. Mountain and cloud shadows are often incorrectly classified as water, since both have very low reflectance, while water pixels at the boundaries of water bodies tend to be misclassified as land. In this paper, we aim to improve the accuracy of the 30 m FROM-GLC water mask by addressing these two types of error. For the first, we adopt an object-based method: for every water object in the FROM-GLC water mask we compute topographical and spectral features and the geometric relation to clouds, and apply specific rules to determine whether the object is misclassified. For the second, we perform a local spectral unmixing using a two-endmember linear mixing model for each pixel falling in the water-land boundary zone, i.e., pixels 8-neighbourhood-connected to water-land boundary pixels. Pixels with sufficiently large water fractions are classified as water. The procedure is automatic. Experimental results show that the total area of inland water decreased by 15.83% in the new global water mask compared with the FROM-GLC water mask. Specifically, more than 30% of the FROM-GLC water objects were relabeled as shadows, and nearly 8% of land pixels in the water-land boundary zone were relabeled as water, whereas fewer than 2% of water pixels in the same zone were relabeled as land. As a result, both the user's accuracy and the Kappa coefficient of the new water mask (UA = 88.39%, Kappa = 0.87) are substantially higher than those of the FROM-GLC product (UA = 81.97%, Kappa = 0.81).
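For the boundary-zone correction in this record, a two-endmember linear mixing model has a closed-form least-squares solution for the water fraction of a pixel. A minimal sketch with hypothetical endmember spectra (in practice the endmembers would be estimated locally from the image):

```python
import numpy as np

def water_fraction(pixel, water_em, land_em):
    """Least-squares fraction f for the two-endmember model
    pixel ≈ f * water_em + (1 - f) * land_em, clipped to [0, 1]."""
    d = water_em - land_em
    f = float(np.dot(pixel - land_em, d) / np.dot(d, d))
    return min(max(f, 0.0), 1.0)

# Hypothetical 4-band endmember reflectances (water is dark in NIR).
water = np.array([0.06, 0.05, 0.03, 0.01])
land = np.array([0.10, 0.12, 0.15, 0.30])

mixed = 0.7 * water + 0.3 * land          # a 70% water boundary pixel
f = water_fraction(mixed, water, land)
print(round(f, 2))
# A rule such as "f above a threshold -> relabel as water" implements
# the boundary-zone fix described in the record.
print(f >= 0.5)
```

On this noiseless example the model recovers the mixed pixel's true water fraction exactly; with real imagery the fraction is a least-squares estimate.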

  20. Improving the Accuracy of Surveying and Mapping Drawing Printing

    Institute of Scientific and Technical Information of China (English)

    孔凡合; 赵卫常; 董军朝

    2011-01-01

    This paper studies plotter drawing errors and analyzes the DXF file format. It proposes a method, and a corresponding program design, for improving the accuracy of printed drawings: when the precision of the plotter itself cannot be improved, the drawing's DXF file is modified so as to compress or stretch the graphics, thereby achieving higher drawing accuracy.

  1. Measures for Improving the Accuracy of Engineering Budget Compilation

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    The paper analyzes the necessity of improving the accuracy of building engineering budget compilation, studies the factors that influence that accuracy, and puts forward measures for improving it, such as raising the level of compilation, accurately calculating the bill of quantities (BOQ), staying familiar with market conditions, and accurately determining construction technologies.

  2. Improving the reproducibility of the MCF-7 cell proliferation assay for the detection of xenoestrogens.

    Science.gov (United States)

    Payne, J; Jones, C; Lakhani, S; Kortenkamp, A

    2000-03-29

    The MCF-7 cell proliferation assay is potentially a simple and highly reproducible tool for the identification of estrogenic compounds. However, its widespread use has been complicated by the lack of a standardised protocol, resulting in considerable inter-laboratory variability. We have explored the sources of variability both in relation to cell lines and test regimens and report on optimised procedures for the identification of estrogenic agents. Two supposedly identical MCF-7 parent cell lines (designated UCL and SOP), and the BUS subline, were cultured according to an existing protocol, and responses to 17β-estradiol (E2) assessed. Despite yielding almost identical EC50 values, the proliferative response varied widely between cell lines, from 0.98-fold over controls (UCL) to 8.9-fold (BUS), indicating major differences between them. The underlying causes may be genetic, and to assess this we used comparative genomic hybridisation (CGH), a technique which allows the detection of DNA sequence copy number changes on a genome-wide scale. Although numerous similarities existed between the different cell lines, the least oestrogen-responsive line (MCF-7/UCL) exhibited the greatest number of cytogenetic changes, many of which were not seen in MCF-7/SOP cells. We suggest that care must be taken, therefore, when choosing a cell line for MCF-7 cell-based experiments. Selecting the MCF-7/SOP line for further work, we carried out a thorough and systematic optimisation of the MCF-7 cell proliferation assay, finding that a 72-h period in oestrogen-free medium before treatment strongly influenced the cells' response to E2. With 1 nM E2, proliferation increased from 1.5-fold to 6.5-fold relative to vehicle-treated controls, a response similar to that seen with MCF-7/BUS cells in the E-SCREEN protocol devised by Soto et al. With parent MCF-7 cells, other laboratories have reported only 4.5-fold increases as maximal. Here we present evidence that the choice of cell line and culture

  3. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improved method for the pose accuracy of a robot manipulator using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor, and a serial robot. The visual sensor measures the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. This makes robot operation more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the repeatability of the visual sensor was studied experimentally. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
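As a sketch of why fusing two sensors improves accuracy, the static special case of the Kalman filter update combines two independent measurements of the same quantity by inverse-variance weighting, and the fused variance is always smaller than either input variance. The one-dimensional numbers below are hypothetical, not taken from the paper:

```python
def fuse(z1, var1, z2, var2):
    """Minimum-variance fusion of two independent measurements:
    the static (no-dynamics) form of the Kalman filter update."""
    k = var1 / (var1 + var2)           # gain weights toward the better sensor
    z = z1 + k * (z2 - z1)             # fused estimate
    var = var1 * var2 / (var1 + var2)  # fused variance < min(var1, var2)
    return z, var

# Hypothetical position readings: a coarse estimate and a more
# precise visual-sensor measurement, in metres.
z, var = fuse(1.00, 0.04, 1.10, 0.01)
print(z, var)
```

The fused estimate lands closer to the lower-variance measurement, and the fused variance (0.008) is smaller than either input, which is the mechanism behind the reported accuracy gain from combining sensors.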

  4. A Method for Accuracy of Genetic Evaluation by Utilization of Canadian Genetic Evaluation Information to Improve Heilongjiang Holstein Herds

    Institute of Scientific and Technical Information of China (English)

    DING Ke-wei; TAKEO Kayaba

    2004-01-01

    The objectives of this study were to set up a new genetic evaluation procedure to predict the breeding values of Holstein herds in Heilongjiang Province of China for milk and fat production by utilizing Canadian pedigree and genetic evaluation information, and to compare the breeding values of sires from different countries. The data used for evaluating young sires for the Chinese Holstein population consisted of records selected from 21 herds in Heilongjiang Province. The first lactation records of 2 496 daughters collected in 1989 and 2000 were analyzed. A single-trait animal model including a fixed herd-year effect and random animal and residual effects was used, drawing on Canadian pedigree and genetic evaluation information for 5 126 sires released by the Canadian Dairy Network in August 2000. The BLUP procedure was used to evaluate all cattle in this study, and the Estimated Breeding Values (EBV) for milk and fat production of 6 697 cattle (673 sires and 6 024 cows) were predicted. The genetic levels of the top 100 sires originating from different countries were compared. Unlike the BLUP procedure currently used in conjunction with the single-trait sire model in Heilongjiang Province of China, the genetic evaluation procedure used in this study can not only evaluate sires and cows simultaneously but also increases the accuracy of evaluation by using the relationships and genetic values of the Canadian-evaluated sires with more daughters. The results showed that the new procedure was useful for genetic evaluation of dairy herds, and the comparison of the breeding values of sires imported from different countries showed that significant genetic improvement has been achieved for milk production in the Heilongjiang Holstein dairy population by importing sires from foreign countries, especially from the United States, owing to their higher breeding values.

  5. msCentipede: Modeling Heterogeneity across Genomic Sites and Replicates Improves Accuracy in the Inference of Transcription Factor Binding.

    Science.gov (United States)

    Raj, Anil; Shim, Heejung; Gilad, Yoav; Pritchard, Jonathan K; Stephens, Matthew

    2015-01-01

    Understanding global gene regulation depends critically on accurate annotation of regulatory elements that are functional in a given cell type. CENTIPEDE, a powerful, probabilistic framework for identifying transcription factor binding sites from tissue-specific DNase I cleavage patterns and genomic sequence content, leverages the hypersensitivity of factor-bound chromatin and the information in the DNase I spatial cleavage profile characteristic of each DNA binding protein to accurately infer functional factor binding sites. However, the model for the spatial profile in this framework fails to account for the substantial variation in the DNase I cleavage profiles across different binding sites. Neither does it account for variation in the profiles at the same binding site across multiple replicate DNase I experiments, which are increasingly available. In this work, we introduce new methods, based on multi-scale models for inhomogeneous Poisson processes, to account for such variation in DNase I cleavage patterns both within and across binding sites. These models account for the spatial structure in the heterogeneity in DNase I cleavage patterns for each factor. Using DNase-seq measurements assayed in a lymphoblastoid cell line, we demonstrate the improved performance of this model for several transcription factors by comparing against the ChIP-seq peaks for those factors. Finally, we explore the effects of DNase I sequence bias on inference of factor binding using a simple extension to our framework that allows for a more flexible background model. The proposed model can also be easily applied to paired-end ATAC-seq and DNase-seq data. msCentipede, a Python implementation of our algorithm, is available at http://rajanil.github.io/msCentipede.

  6. msCentipede: Modeling Heterogeneity across Genomic Sites and Replicates Improves Accuracy in the Inference of Transcription Factor Binding.

    Directory of Open Access Journals (Sweden)

    Anil Raj

    Full Text Available Understanding global gene regulation depends critically on accurate annotation of regulatory elements that are functional in a given cell type. CENTIPEDE, a powerful, probabilistic framework for identifying transcription factor binding sites from tissue-specific DNase I cleavage patterns and genomic sequence content, leverages the hypersensitivity of factor-bound chromatin and the information in the DNase I spatial cleavage profile characteristic of each DNA binding protein to accurately infer functional factor binding sites. However, the model for the spatial profile in this framework fails to account for the substantial variation in the DNase I cleavage profiles across different binding sites. Neither does it account for variation in the profiles at the same binding site across multiple replicate DNase I experiments, which are increasingly available. In this work, we introduce new methods, based on multi-scale models for inhomogeneous Poisson processes, to account for such variation in DNase I cleavage patterns both within and across binding sites. These models account for the spatial structure in the heterogeneity in DNase I cleavage patterns for each factor. Using DNase-seq measurements assayed in a lymphoblastoid cell line, we demonstrate the improved performance of this model for several transcription factors by comparing against the ChIP-seq peaks for those factors. Finally, we explore the effects of DNase I sequence bias on inference of factor binding using a simple extension to our framework that allows for a more flexible background model. The proposed model can also be easily applied to paired-end ATAC-seq and DNase-seq data. msCentipede, a Python implementation of our algorithm, is available at http://rajanil.github.io/msCentipede.

  7. A novel immunofluorescent assay to investigate oxidative phosphorylation deficiency in mitochondrial myopathy: understanding mechanisms and improving diagnosis.

    Science.gov (United States)

    Rocha, Mariana C; Grady, John P; Grünewald, Anne; Vincent, Amy; Dobson, Philip F; Taylor, Robert W; Turnbull, Doug M; Rygiel, Karolina A

    2015-10-15

    Oxidative phosphorylation defects in human tissues are often challenging to quantify due to a mosaic pattern of deficiency. Biochemical assays are difficult to interpret due to the varying enzyme deficiency levels found in individual cells. Histochemical analysis allows semi-quantitative assessment of complex II and complex IV activities, but there is no validated histochemical assay to assess complex I activity which is frequently affected in mitochondrial pathology. To help improve the diagnosis of mitochondrial disease and to study the mechanisms underlying mitochondrial abnormalities in disease, we have developed a quadruple immunofluorescent technique enabling the quantification of key respiratory chain subunits of complexes I and IV, together with an indicator of mitochondrial mass and a cell membrane marker. This assay gives precise and objective quantification of protein abundance in large numbers of individual muscle fibres. By assessing muscle biopsies from subjects with a range of different mitochondrial genetic defects we have demonstrated that specific genotypes exhibit distinct biochemical signatures in muscle, providing evidence for the diagnostic use of the technique, as well as insight into the underlying molecular pathology. Stringent testing for reproducibility and sensitivity confirms the potential value of the technique for mechanistic studies of disease and in the evaluation of therapeutic approaches.

  8. Plasmodium serine hydroxymethyltransferase as a potential anti-malarial target: inhibition studies using improved methods for enzyme production and assay

    Directory of Open Access Journals (Sweden)

    Sopitthummakhun Kittipat

    2012-06-01

    Full Text Available Abstract Background There is an urgent need for the discovery of new anti-malarial drugs. Thus, it is essential to explore different potential new targets that are unique to the parasite or that are required for its viability in order to develop new interventions for treating the disease. Plasmodium serine hydroxymethyltransferase (SHMT), an enzyme in the dTMP synthesis cycle, is a potential target for such new drugs, but convenient methods for producing and assaying the enzyme are still lacking, hampering the ability to screen inhibitors. Methods Production of recombinant Plasmodium falciparum SHMT (PfSHMT) and Plasmodium vivax SHMT (PvSHMT) using auto-induction media was compared to production in conventional Luria-Bertani medium with isopropyl thio-β-D-galactoside induction (LB-IPTG media). Plasmodium SHMT activity, kinetic parameters, and response to inhibitors were measured spectrophotometrically by coupling the reaction to that of 5,10-methylenetetrahydrofolate dehydrogenase (MTHFD). The identity of the intermediate formed upon inactivation of Plasmodium SHMTs by thiosemicarbazide was investigated by spectrophotometry, high performance liquid chromatography (HPLC), and liquid chromatography-mass spectrometry (LC-MS). The active site environment of Plasmodium SHMT was probed based on changes in the fluorescence emission spectrum upon addition of amino acids and folate. Results Auto-induction media resulted in a two- to three-fold higher yield of Pf- and PvSHMT (7.38 and 29.29 mg/L) compared to that produced in cells induced in LB-IPTG media. A convenient spectrophotometric activity assay coupling Plasmodium SHMT and MTHFD gave kinetic parameters similar to those previously obtained from the anaerobic assay coupling SHMT and 5,10-methylenetetrahydrofolate reductase (MTHFR); thus demonstrating the validity of the new assay procedure. The improved method was adopted to screen for Plasmodium SHMT inhibitors, of which some were originally designed

  9. Sensitivity and specificity of the empirical lymphocyte genome sensitivity (LGS) assay: implications for improving cancer diagnostics.

    Science.gov (United States)

    Anderson, Diana; Najafzadeh, Mojgan; Gopalan, Rajendran; Ghaderi, Nader; Scally, Andrew J; Britland, Stephen T; Jacobs, Badie K; Reynolds, P Dominic; Davies, Justin; Wright, Andrew L; Al-Ghazal, Shariff; Sharpe, David; Denyer, Morgan C

    2014-10-01

    Lymphocyte responses from 208 individuals were examined: 20 with melanoma, 34 with colon cancer, and 4 with lung cancer (58 in total); 18 with suspected melanoma, 28 with polyposis, and 10 with COPD (56 in total); and 94 healthy volunteers. The natural logarithm of the Olive tail moment (OTM) was plotted for exposure to UVA through 5 different agar depths (100 cell measurements/depth) and analyzed using a repeated-measures regression model. Responses of patients with cancer plateaued after treatment with different UVA intensities, whereas those of healthy volunteers returned toward control values. For precancerous conditions and suspected cancers, intermediate responses occurred. ROC analysis of mean log OTMs, for cancers plus precancerous/suspect conditions vs. controls, cancer vs. precancerous/suspect conditions plus controls, and cancer vs. controls, gave areas under the curve of 0.87, 0.89, and 0.93, respectively (P<0.001). Optimization allowed test sensitivity or specificity to approach 100% with acceptable complementary measures. This modified comet assay could represent a stand-alone test or an adjunct to other investigative procedures for detecting cancer.
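The ROC figures quoted in this record can be reproduced in form (not in value) with the standard empirical AUC, which equals the probability that a randomly chosen case scores above a randomly chosen control. The log(OTM) samples below are synthetic stand-ins, not the study's measurements:

```python
import random

def auc(cases, controls):
    """Empirical AUC: probability that a randomly chosen case scores
    higher than a randomly chosen control (ties count half)."""
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

random.seed(0)
# Hypothetical log(OTM) responses: cancer cases shifted upward
# relative to healthy controls, group sizes matching the record.
controls = [random.gauss(0.0, 1.0) for _ in range(94)]
cases = [random.gauss(1.5, 1.0) for _ in range(58)]
print(round(auc(cases, controls), 2))
```

An AUC near 1 means the two score distributions barely overlap; the study's values of 0.87-0.93 indicate strong but imperfect separation between patient groups and controls.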

  10. Towards improvement of aluminium assay in quartz for in situ cosmogenic 26Al analysis at ANSTO

    Science.gov (United States)

    Fujioka, Toshiyuki; Fink, David; Mifsud, Charles

    2015-10-01

    Accuracy and precision in the measurement of natural aluminium abundances in quartz can affect the reliability of 26Al exposure dating and 26Al/10Be burial dating. At ANSTO, aliquots extracted from the HF solutions of dissolved quartz are treated in our laboratory, whereas ICP-OES analysis is performed at a commercial laboratory. The long-term inter-run reproducibility of our in-house standards shows a limiting precision in Al measurements of 3-4% (1σ), which is poorer than the claimed precision of Al analysis by ICP-OES. This indicates that unaccounted random errors are introduced during our aliquot preparation. In this study, we performed several controlled tests to investigate the effects of possible inconsistencies and variances in our aliquot preparation procedure. The results indicate that our procedure is robust against subtle changes in the preparation procedure, e.g., fuming temperatures, fuming reagents, and drying conditions. We found that the density of the solutions dispatched for ICP analysis is occasionally variable due to the presence of residual fuming reagents in the solution. A comparison of the results between the calibration-curve and standard-addition methods shows that the former are consistently lower than the latter by up to ∼14%. Similar offsets have been reported by previous studies. The reason for these discrepancies is most likely a matrix effect, which is not accounted for by the calibration-curve method. Further tests varying the matrix with impurities such as HF, HClO4, H2SO4 and Si identified that Si could cause a downward offset in Al measurements; however, our ICP solutions are confirmed to be free of Si, and the cause of the matrix effect remains to be investigated. Hence, care must be taken in the measurement of Al concentrations in quartz by ICP-OES, either by ensuring that the matrix effect is fully accounted for or by routinely employing standard additions when required.
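The calibration-curve vs. standard-additions discrepancy described in this record is the classic signature of a matrix effect, and the standard-additions arithmetic is simple to illustrate: spike the sample with known increments, regress signal against added concentration, and read the unknown off the x-intercept. The numbers below are hypothetical (a 12% signal suppression), not ANSTO's data:

```python
import numpy as np

def standard_addition(added, signal):
    """Fit signal = a + b*added; the unknown concentration is the
    magnitude of the x-intercept, a / b."""
    b, a = np.polyfit(added, signal, 1)
    return a / b

# Hypothetical Al determination: true concentration 10 mg/L, but the
# sample matrix suppresses the ICP-OES response by 12%.
true_conc, sens, suppression = 10.0, 2.0, 0.88
added = np.array([0.0, 5.0, 10.0, 15.0])          # spike levels, mg/L
signal = sens * suppression * (true_conc + added)  # measured intensities

# External calibration (matrix-free standards at full sensitivity)
# under-reads because the sample signal is suppressed:
external = signal[0] / sens
# Standard additions share the sample matrix, so the bias cancels:
print(external, standard_addition(added, signal))
```

The external calibration returns a result biased low by exactly the suppression factor, while standard additions recover the true concentration, which is why the record recommends routinely employing standard additions when the matrix effect cannot be accounted for.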

  11. An improved behavioural assay demonstrates that ultrasound vocalizations constitute a reliable indicator of chronic cancer pain and neuropathic pain

    Directory of Open Access Journals (Sweden)

    Selvaraj Deepitha

    2010-03-01

    Full Text Available Abstract Background On-going pain is one of the most debilitating symptoms associated with a variety of chronic pain disorders. An understanding of the mechanisms underlying on-going pain, i.e. stimulus-independent pain, has so far been hampered by a lack of behavioural parameters which enable studying it in experimental animals. Ultrasound vocalizations (USVs) have been proposed to correlate with pain evoked by an acute activation of nociceptors. However, the literature on the utility of USVs as an indicator of chronic pain is very controversial. A majority of these inconsistencies arise from parameters confounding behavioural experiments, which include novelty, fear and stress due to restraint, amongst others. Results We have developed an improved assay which overcomes these confounding factors and enables studying USVs in freely moving mice repetitively over several weeks. Using this improved assay, we report here that USVs increase significantly in mice with bone metastases-induced cancer pain or neuropathic pain for several weeks, in comparison to sham-treated mice. Importantly, analgesic drugs which are known to alleviate tumour pain or neuropathic pain in human patients significantly reduce USVs as well as mechanical allodynia in corresponding mouse models. Conclusions We show that studying USVs and mechanical allodynia in the same cohort of mice enables comparing the temporal progression of on-going pain (i.e. stimulus-independent pain) and stimulus-evoked pain in these clinically highly relevant forms of chronic pain.

  12. An improved sensitive assay for the detection of PSP toxins with neuroblastoma cell-based impedance biosensor.

    Science.gov (United States)

    Zou, Ling; Wu, Chunsheng; Wang, Qin; Zhou, Jie; Su, Kaiqi; Li, Hongbo; Hu, Ning; Wang, Ping

    2015-05-15

    Paralytic shellfish poisoning (PSP) toxins are well-known sodium channel-blocking marine toxins, which block the conduction of nerve impulses and lead to a series of neurological symptoms. However, PSP toxins can inhibit the cytotoxic effect of compounds such as ouabain and veratridine. Under treatment with ouabain and veratridine, neuroblastoma cells swell and die gradually, since veratridine causes a persistent inflow of Na(+) and ouabain inhibits the activity of Na(+)/K(+)-ATPases. By blocking the inflow of Na(+), PSP toxins antagonize this effect and raise the chance of cell survival. Based on this antagonistic effect, we designed an improved cell-based assay to detect PSP toxins using a neuroblastoma cell-based impedance biosensor. The results demonstrated that this biosensor shows high sensitivity and good specificity for saxitoxin detection. The detection limit of this biosensor was as low as 0.03 ng/ml, lower than previously reported cell-based assays and mouse bioassays. With this improvement in performance, the neuroblastoma cell-based impedance biosensor has great potential to become a universal PSP screening method.

  13. Use of Low-Level Sensor Data to Improve the Accuracy of Bluetooth-Based Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Christensen, Lars Tørholm; Krishnan, Rajesh

    2013-01-01

    by a single device. The latter situation could lead to location ambiguity and could reduce the accuracy of travel time estimation. Therefore, the accuracy of travel time estimation by Bluetooth technology depends on how location ambiguity is handled by the estimation method. The issue of multiple detection events in the context of travel time estimation by Bluetooth technology has been considered by various researchers. However, treatment of this issue has been simplistic. Most previous studies have used the first detection event (enter-enter) as the best estimate. No systematic analysis has been conducted to explore the most accurate method of travel time estimation with multiple detection events. In this study, different aspects of the Bluetooth detection zone, including size and impact on the accuracy of travel time estimation, were discussed. Four methods were applied to estimate travel time: enter
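A vehicle passing a Bluetooth detection zone typically produces several timestamped detection events at each roadside sensor, and different pairing rules (enter-enter, leave-leave, and so on) give different travel-time estimates. A minimal sketch with a hypothetical helper and hypothetical timestamps (the record's own four methods are truncated in the snippet above):

```python
# Each sensor logs every detection of a MAC address while the device
# is inside its zone; "enter" = first detection, "leave" = last.
def travel_times(upstream, downstream):
    enter_u, leave_u = min(upstream), max(upstream)
    enter_d, leave_d = min(downstream), max(downstream)
    return {
        "enter-enter": enter_d - enter_u,
        "leave-leave": leave_d - leave_u,
        "enter-leave": leave_d - enter_u,
        "average":     (enter_d + leave_d) / 2 - (enter_u + leave_u) / 2,
    }

# Hypothetical detection timestamps (seconds) at two roadside units.
upstream = [100.0, 101.5, 103.0]      # detected 3 times in zone A
downstream = [160.0, 163.5]           # detected twice in zone B
print(travel_times(upstream, downstream))
```

The spread among the four estimates for the same trip is exactly the location ambiguity the abstract describes: the wider the detection zone, the further apart the pairing rules fall.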

  14. Dose evaluation using multiple-aliquot quartz OSL: Test of methods and a new protocol for improved accuracy and precision

    DEFF Research Database (Denmark)

    Jain, M.; Bøtter-Jensen, L.; Singhvi, A.K.

    2003-01-01

    Multiple-aliquot quartz OSL dose-response curves often suffer from substantial variability in the luminescence output from identically treated aliquots (scatter) that leads to large uncertainties in the equivalent-dose estimates. In this study, normalisation and its bearing on the accuracy...

  15. The Impact of Implicit Tasks on Improving the Learners' Writing in Terms of Autonomy and Grammatical Accuracy

    Science.gov (United States)

    Nazari, Nastaran

    2014-01-01

    This paper aims to explore the Iranian EFL (English as a Foreign Language) learners' ability to gain grammatical accuracy in their writing by noticing and correcting their own grammatical errors. Recent literature in language acquisition has emphasized the role of implicit tasks in encouraging learners to develop autonomous language learning…

  16. EpCAM-based flow cytometry in cerebrospinal fluid greatly improves diagnostic accuracy of leptomeningeal metastases from epithelial tumors

    NARCIS (Netherlands)

    Milojkovic Kerklaan, B.; Pluim, Dick; Bol, Mijke; Hofland, Ingrid; Westerga, Johan; van Tinteren, Harm; Beijnen, Jos H; Boogerd, Willem; Schellens, Jan H M; Brandsma, Dieta

    2016-01-01

    BACKGROUND: Moderate diagnostic accuracy of MRI and initial cerebrospinal fluid (CSF) cytology analysis results in at least 10%-15% false negative diagnoses of leptomeningeal metastases (LM) of solid tumors, thus postponing start of therapy. The aim of this prospective clinical study was to determine

  17. Development of C-reactive protein certified reference material NMIJ CRM 6201-b: optimization of a hydrolysis process to improve the accuracy of amino acid analysis.

    Science.gov (United States)

    Kato, Megumi; Kinumi, Tomoya; Yoshioka, Mariko; Goto, Mari; Fujii, Shin-Ichiro; Takatsu, Akiko

    2015-04-01

    To standardize C-reactive protein (CRP) assays, the National Metrology Institute of Japan (NMIJ) has developed a C-reactive protein solution certified reference material, CRM 6201-b, which is intended for use as a primary reference material to enable the SI-traceable measurement of CRP. This study describes the development process of CRM 6201-b. As a candidate material of the CRM, recombinant human CRP solution was selected because of its higher purity and homogeneity than the purified material from human serum. Gel filtration chromatography was used to examine the homogeneity and stability of the present CRM. The total protein concentration of CRP in the present CRM was determined by amino acid analysis coupled to isotope-dilution mass spectrometry (IDMS-AAA). To improve the accuracy of IDMS-AAA, we optimized the hydrolysis process by examining the effect of parameters such as the volume of protein samples taken for hydrolysis, the procedure of sample preparation prior to the hydrolysis, hydrolysis temperature, and hydrolysis time. Under optimized conditions, we conducted two independent approaches in which the following independent hydrolysis and liquid chromatography-isotope dilution mass spectrometry (LC-IDMS) were combined: one was vapor-phase acid hydrolysis (130 °C, 24 h) and hydrophilic interaction liquid chromatography-mass spectrometry (HILIC-MS) method, and the other was microwave-assisted liquid-phase acid hydrolysis (150 °C, 3 h) and pre-column derivatization liquid chromatography-tandem mass spectrometry (LC-MS/MS) method. The quantitative values of the two different amino acid analyses were in agreement within their uncertainties. The certified value was the weighted mean of the results of the two methods. Uncertainties from the value-assignment method, between-method variance, homogeneity, long-term stability, and short-term stability were taken into account in evaluating the uncertainty for a certified value. The certified value and the

  18. Improving the measurement of longitudinal change in renal function: automated detection of changes in laboratory creatinine assay

    Directory of Open Access Journals (Sweden)

    Norman Poh

    2015-04-01

    Introduction: Renal function is reported using estimates of glomerular filtration rate (eGFR). However, eGFR values are recorded without reference to the particular serum creatinine (SCr) assays used to derive them, and newer assays were introduced at different time points across the laboratories in the United Kingdom. These changes may cause systematic bias in eGFR reported in routinely collected data, even though laboratory-reported eGFR values have a correction factor applied. Design: An algorithm to detect changes in SCr that in turn affect the eGFR calculation method was developed. It compares the mapping of SCr values onto eGFR values across a time series of paired eGFR and SCr measurements. Setting: Routinely collected primary care data from 20,000 people with the richest renal function data from the quality improvement in chronic kidney disease trial. Results: The algorithm identified a change in eGFR calculation method in 114 (90%) of the 127 included practices. This change was identified in 4736 (23.7%) of the patient time series analysed. The change in calibration method was found to cause a significant step change in the reported eGFR values, producing a systematic bias. The eGFR values could not be recalibrated by applying the Modification of Diet in Renal Disease equation to the laboratory-reported SCr values. Conclusions: This algorithm can identify laboratory changes in eGFR calculation methods and changes in SCr assay. Failure to account for these changes may misconstrue renal function changes over time. Researchers using routine eGFR data should account for these effects.
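
A minimal sketch of the core idea of comparing how SCr maps onto eGFR over a time series: recompute eGFR from SCr with the 4-variable MDRD equation and flag an abrupt shift in the reported-to-recomputed ratio. The series, demographics, and change-point rule are hypothetical; the study's actual algorithm is more involved.

```python
# Recompute MDRD eGFR from serum creatinine and compare with the
# laboratory-reported value; a persistent shift in the ratio suggests a change
# in assay or calculation method. Illustrative sketch, not the study's algorithm.

def mdrd_egfr(scr_umol_l, age, female):
    """4-variable MDRD equation (creatinine converted from umol/L to mg/dL)."""
    scr_mgdl = scr_umol_l / 88.4
    egfr = 175.0 * scr_mgdl ** -1.154 * age ** -0.203
    return egfr * 0.742 if female else egfr

# (record_index, SCr umol/L, lab-reported eGFR) -- hypothetical series in which
# the laboratory switches method half-way through.
series = [(0, 120, 54.0), (1, 118, 55.1), (2, 121, 53.5),
          (3, 119, 60.2), (4, 122, 58.4), (5, 120, 59.6)]

ratios = [reported / mdrd_egfr(scr, age=65, female=False)
          for _, scr, reported in series]

# Flag the largest jump between consecutive ratios as a candidate change point.
jumps = [abs(b - a) for a, b in zip(ratios, ratios[1:])]
change_at = jumps.index(max(jumps)) + 1
print("candidate change point at record", change_at)
```

In this toy series the ratio sits near 1.02 for the first three records and near 1.13 afterwards, so the jump between records 2 and 3 is flagged.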

  19. SPH accuracy improvement through the combination of a quasi-Lagrangian shifting transport velocity and consistent ALE formalisms

    Science.gov (United States)

    Oger, G.; Marrone, S.; Le Touzé, D.; de Leffe, M.

    2016-05-01

    This paper addresses the accuracy of the weakly-compressible SPH method. Interpolation defects due to the presence of anisotropic particle structures inherent to the Lagrangian character of the Smoothed Particle Hydrodynamics (SPH) method are highlighted. To avoid the appearance of these structures which are detrimental to the quality of the simulations, a specific transport velocity is introduced and its inclusion within an Arbitrary Lagrangian Eulerian (ALE) formalism is described. Unlike most of existing particle disordering/shifting methods, this formalism avoids the formation of these anisotropic structures while a full consistency with the original Euler or Navier-Stokes equations is maintained. The gain in accuracy, convergence and numerical diffusion of this formalism is shown and discussed through its application to various challenging test cases.

  20. Cost-effective improvements of a rotating platform by integration of a high-accuracy inclinometer and encoders for attitude evaluation

    Science.gov (United States)

    Wen, Chenyang; He, Shengyang; Bu, Changgen; Hu, Peida

    2017-01-01

    Attitude heading reference systems (AHRSs) based on micro-electromechanical system (MEMS) inertial sensors are widely used because of their low cost, light weight, and low power consumption. However, low-cost AHRSs suffer from large inertial sensor errors. Therefore, experimental performance evaluation of MEMS-based AHRSs after system implementation is necessary. High-accuracy turntables can be used to verify the performance of MEMS-based AHRSs indoors, but they are expensive and unsuitable for outdoor tests. This study developed a low-cost two-axis rotating platform for indoor and outdoor attitude determination. A high-accuracy inclinometer and encoders were integrated into the platform to improve the achievable attitude test accuracy. An attitude error compensation method was proposed to calibrate the initial attitude errors caused by the movements and misalignment angles of the platform. The proposed compensation method was examined through rotation experiments, which showed that the standard deviations of the pitch and roll errors were 0.050° and 0.090°, respectively. The pitch and roll errors both decreased to 0.024° when the compensation method was used, validating its effectiveness. Experimental results demonstrated that integrating the inclinometer and encoders improved the attitude accuracy of the low-cost two-axis rotating platform.

  1. Methodology Improvement for the Biological Assay of Heparin (Plasma Method)

    Institute of Scientific and Technical Information of China (English)

    吴超权; 周智; 覃君良; 方珍文

    2015-01-01

    Objective: To improve the plasma method for the biological assay of heparin in the current Chinese Pharmacopoeia. Methods: The national heparin standard and one batch of heparin sodium injection were chosen, and the coagulation times of the standard and the injection were measured with a platelet aggregation and coagulation analyzer, using rabbit plasma diluted 1:1 with sodium chloride injection, according to the method in appendix XII B of the Chinese Pharmacopoeia Volume II (2010 edition). Results: Relative to the standard's known target potency, the recovery rate of the standard was 98.88%~101.86%, and the spiked recovery rate of the heparin sodium injection was 102.30%~103.60%. Conclusion: The improved heparin assay is easy to operate, has an objective end point with good accuracy and reproducibility, and its confidence limits meet the requirement; we therefore recommend it as the improved biological potency assay for heparin.

  2. Improving the accuracy of simulation of radiation-reaction effects with implicit Runge-Kutta-Nyström methods.

    Science.gov (United States)

    Elkina, N V; Fedotov, A M; Herzing, C; Ruhl, H

    2014-05-01

    The Landau-Lifshitz equation provides an efficient way to account for the effects of radiation reaction without acquiring the nonphysical solutions typical for the Lorentz-Abraham-Dirac equation. We solve the Landau-Lifshitz equation in its covariant four-vector form in order to control both the energy and momentum of radiating particles. Our study reveals that implicit time-symmetric collocation methods of the Runge-Kutta-Nyström type are superior in accuracy and better at maintaining the mass-shell condition than their explicit counterparts. We carry out an extensive study of numerical accuracy by comparing the analytical and numerical solutions of the Landau-Lifshitz equation. Finally, we present the results of the simulation of particle scattering by a focused laser pulse. Due to radiation reaction, particles are less capable of penetrating into the focal region compared to the case where radiation reaction is neglected. Our results are important for designing forthcoming experiments with high intensity laser fields.
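
The advantage of implicit time-symmetric integrators at preserving invariants (here, the mass-shell condition) can be illustrated on a much simpler system: the implicit midpoint rule, the lowest-order implicit Runge-Kutta collocation method, conserves the quadratic energy of a harmonic oscillator exactly, while explicit Euler inflates it every step. This is an analogy only, not the Landau-Lifshitz solver.

```python
# Invariant preservation: explicit Euler vs the implicit midpoint rule
# (the simplest implicit Runge-Kutta collocation method) on x'' = -x.
# Analogy for the mass-shell discussion above; not the Landau-Lifshitz solver.

def energy(x, v):
    return 0.5 * (x * x + v * v)

def explicit_euler(x, v, h, n):
    for _ in range(n):
        x, v = x + h * v, v - h * x
    return x, v

def implicit_midpoint(x, v, h, n):
    # For this linear system the implicit stage equation has a closed form:
    # v1 = ((1 - h^2/4) v0 - h x0) / (1 + h^2/4),  x1 = x0 + h (v0 + v1) / 2.
    a = h * h / 4.0
    for _ in range(n):
        v_new = ((1.0 - a) * v - h * x) / (1.0 + a)
        x, v = x + 0.5 * h * (v + v_new), v_new
    return x, v

x0, v0, h, n = 1.0, 0.0, 0.1, 1000
ee = energy(*explicit_euler(x0, v0, h, n))
im = energy(*implicit_midpoint(x0, v0, h, n))
print(f"energy after {n} steps: Euler {ee:.1f}, midpoint {im:.12f} (exact 0.5)")
```

The midpoint map is orthogonal (a Cayley transform of the rotation generator), so the quadratic invariant is preserved to rounding error, while explicit Euler multiplies the energy by (1 + h²) per step.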

  3. Improving the accuracy of the structure prediction of the third hypervariable loop of the heavy chains of antibodies.

    KAUST Repository

    Messih, Mario Abdel

    2014-06-13

    MOTIVATION: Antibodies are able to recognize a wide range of antigens through their complementary determining regions formed by six hypervariable loops. Predicting the 3D structure of these loops is essential for the analysis and reengineering of novel antibodies with enhanced affinity and specificity. The canonical structure model allows high accuracy prediction for five of the loops. The third loop of the heavy chain, H3, is the hardest to predict because of its diversity in structure, length and sequence composition. RESULTS: We describe a method, based on the Random Forest automatic learning technique, to select structural templates for H3 loops among a dataset of candidates. These can be used to predict the structure of the loop with a higher accuracy than that achieved by any of the presently available methods. The method also has the advantage of being extremely fast and returning a reliable estimate of the model quality. AVAILABILITY AND IMPLEMENTATION: The source code is freely available at http://www.biocomputing.it/H3Loopred/ .

  4. Development of Phage Lysin LysA2 for Use in Improved Purity Assays for Live Biotherapeutic Products

    Directory of Open Access Journals (Sweden)

    Sheila M. Dreher-Lesnick

    2015-12-01

    Live biotherapeutic products (LBPs), commonly referred to as probiotics, are typically preparations of live bacteria, such as Lactobacillus and Bifidobacterium species, that are considered normal human commensals. Popular interest in probiotics has been increasing, with general health benefits being attributed to their consumption, but there is also growing interest in evaluating such products for treatment of specific diseases. While over-the-counter probiotics are generally viewed as very safe, at least in healthy individuals, it must be remembered that clinical studies to assess these products may be done in individuals whose defenses are compromised, such as through a disease process, immunosuppressive clinical treatment, or an immature or aging immune system. One of the major safety criteria for LBPs used in clinical studies is microbial purity, i.e., the absence of extraneous, undesirable microorganisms. The main goal of this project is to develop recombinant phage lysins as reagents for improved purity assays for LBPs. Phage lysins are hydrolytic enzymes containing a cell-binding domain that provides specificity and a catalytic domain responsible for lysis and killing. Our approach is to use recombinant phage lysins to selectively kill target product bacteria, which when used for purity assays will allow for outgrowth of potential contaminants under non-selective conditions, thus allowing an unbiased assessment of the presence of contaminants. To develop our approach, we used LysA2, a phage lysin with reported activity against a broad range of Lactobacillus species. We report the lytic profile of a non-tagged recombinant LysA2 against Lactobacillus strains in our collection. We also present a proof-of-concept experiment, showing that addition of partially purified LysA2 to a culture of Lactobacillus jensenii (L. jensenii) spiked with low numbers of Escherichia coli (E. coli) or Staphylococcus aureus (S. aureus) effectively eliminates or knocks

  5. Point-of-Care Multi-Organ Ultrasound Improves Diagnostic Accuracy in Adults Presenting to the Emergency Department with Acute Dyspnea

    Directory of Open Access Journals (Sweden)

    Daniel Mantuani, MD

    2016-01-01

    Introduction: Determining the etiology of acute dyspnea in emergency department (ED) patients is often difficult. Point-of-care ultrasound (POCUS) holds promise for improving immediate diagnostic accuracy (after history and physical), thus improving use of focused therapies. We evaluate the impact of a three-part POCUS exam, or "triple scan" (TS), composed of abbreviated echocardiography, lung ultrasound, and inferior vena cava (IVC) collapsibility assessment, on the treating physician's immediate diagnostic impression. Methods: A convenience sample of adults presenting to our urban academic ED with acute dyspnea (Emergency Severity Index 1, 2) were prospectively enrolled when investigator sonographers were available. The method for performing the components of the TS has been previously described in detail. Treating physicians rated the most likely diagnosis after history and physical but before other studies (except electrocardiogram) returned. An investigator then performed TS and disclosed the results, after which the most likely diagnosis was reassessed. Final diagnosis (criterion standard) was based on medical record review by expert emergency medicine faculty blinded to the TS result. We compared accuracy of pre-TS and post-TS impressions (primary outcome) with McNemar's test. Test characteristics for the treating physician's impression were also calculated by dichotomizing acute decompensated heart failure (ADHF), chronic obstructive pulmonary disease (COPD), and pneumonia as present or absent. Results: 57 patients were enrolled, with the leading final diagnoses being ADHF (26%), COPD/asthma (30%), and pneumonia (28%). Overall accuracy of the treating physician's impression increased from 53% before TS to 77% after TS (p=0.003). The post-TS impression was 100% sensitive and 84% specific for ADHF. Conclusion: In this small study, POCUS evaluation of the heart, lungs, and IVC improved the treating physician's immediate overall diagnostic accuracy for ADHF.
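
McNemar's test, used above to compare paired pre- and post-scan accuracy, depends only on the discordant pairs. The sketch below uses hypothetical discordant counts; the abstract reports only the accuracies (53% vs 77%, n=57, p=0.003).

```python
import math

# McNemar's test on paired diagnostic impressions (pre vs post triple scan).
# b and c are hypothetical discordant-pair counts, NOT the study's raw data.
b = 16  # wrong before the scan, correct after
c = 2   # correct before, wrong after

# Continuity-corrected chi-square statistic (1 degree of freedom).
chi2 = (abs(b - c) - 1) ** 2 / (b + c)

# Exact two-sided version: under H0 the smaller count ~ Binomial(b + c, 1/2).
n = b + c
p_exact = min(1.0, 2 * sum(math.comb(n, k) for k in range(min(b, c) + 1)) / 2 ** n)
print(f"chi2 = {chi2:.2f}, exact p = {p_exact:.4f}")
```

Because only the 18 discordant pairs enter the statistic, even a modest study can show a significant paired improvement when almost all changes of impression go in one direction.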

  6. Improving optical fiber current sensor accuracy using artificial neural networks to compensate temperature and minor non-ideal effects

    Science.gov (United States)

    Zimmermann, Antonio C.; Besen, Marcio; Encinas, Leonardo S.; Nicolodi, Rosane

    2011-05-01

    This article presents a practical signal-processing methodology, based on artificial neural networks (ANNs), to process the measurement signals of typical fiber optic current sensors (FOCS), achieving higher accuracy through temperature and non-linearity compensation. The proposed approach addresses the primary problems of FOCS, which arise when it is difficult to determine all error sources present in the physical phenomenon or when the measurement equation becomes too nonlinear to be applied over a wide measurement range. The great benefit of an ANN is that it yields a transfer function for the measurement system that accounts for all unknowns, even unwanted and unknown effects, providing a compensated output after the training session. The ANN training is treated as a black box, based on experimental data, in which the transfer function of the measurement system, its unknowns, and its non-idealities are processed and compensated at once, giving a fast and robust alternative to the theoretical FOCS method. A real FOCS system was built, and the signals acquired from the photodetectors were processed both by the Faraday-law formulas and by the ANN method, giving measurement results for both signal-processing strategies. The coil temperature measurements are also included as an ANN input. To compare the results, a current-measuring instrument standard was used together with a metrological calibration procedure. Preliminary results from a variable-temperature experiment show the higher accuracy of the ANN methodology, with a maximum error better than 0.2%, resulting in a quick and robust way to handle the non-ideality compensation difficulties of FOCS.
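
A sketch of the black-box compensation idea: learn the sensor's inverse transfer function, including temperature dependence, directly from data. For determinism this uses a single-hidden-layer network with random tanh features whose output weights are solved by least squares (an extreme-learning-machine variant), not the authors' trained ANN; the sensor model and all scales are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented FOCS-like data: the measured signal is a nonlinear function of the
# true current with a temperature-dependent gain (both effects are made up).
n = 500
current = rng.uniform(0.0, 8.0, n)                     # true current, A
temp = rng.uniform(-20.0, 60.0, n)                     # coil temperature, degC
measured = np.sin(0.12 * current) * (1.0 + 0.004 * (temp - 25.0))

# Single-hidden-layer network with random tanh features; only the output
# weights are fitted, by least squares, keeping the sketch deterministic
# (an extreme-learning-machine variant, not the paper's trained ANN).
X = np.column_stack([measured, temp / 60.0])
W = rng.normal(size=(2, 100))
bias = rng.normal(size=100)
H = np.column_stack([np.tanh(X @ W + bias), np.ones(n)])
beta, *_ = np.linalg.lstsq(H, current, rcond=None)

rmse = float(np.sqrt(np.mean((H @ beta - current) ** 2)))
print(f"temperature-compensated fit RMSE: {rmse:.4f} A")
```

Feeding temperature in as an extra input is what lets the fitted transfer function absorb the drift, mirroring the article's use of coil temperature as an ANN input.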

  7. A pilot study to determine whether using a lightweight, wearable micro-camera improves dietary assessment accuracy and offers information on macronutrients and eating rate.

    Science.gov (United States)

    Pettitt, Claire; Liu, Jindong; Kwasnicki, Richard M; Yang, Guang-Zhong; Preston, Thomas; Frost, Gary

    2016-01-14

    A major limitation in nutritional science is the lack of understanding of the nutritional intake of free-living people. There is an inverse relationship between accuracy of reporting of energy intake by all current nutritional methodologies and body weight. In this pilot study we aim to explore whether using a novel lightweight, wearable micro-camera improves the accuracy of dietary intake assessment. Doubly labelled water (DLW) was used to estimate energy expenditure and intake over a 14-d period, over which time participants (n 6) completed a food diary and wore a micro-camera on 2 of the days. Comparisons were made between the estimated energy intake from the reported food diary alone and together with the images from the micro-camera recordings. There was an average daily deficit of 3912 kJ using food diaries to estimate energy intake compared with estimated energy expenditure from DLW (P=0·0118), representing an under-reporting rate of 34 %. Analysis of food diaries alone showed a significant deficit in estimated daily energy intake compared with estimated intake from food diary analysis with images from the micro-camera recordings (405 kJ). Use of the micro-camera images in conjunction with food diaries improves the accuracy of dietary assessment and provides valuable information on macronutrient intake and eating rate. There is a need to develop this recording technique to remove user and assessor bias.

  8. On improving the accuracy of time synchronization in the power system (FPGA-based design and implementation)

    Institute of Scientific and Technical Information of China (English)

    宋鹏; 田乐

    2014-01-01

    To improve timing accuracy, the transmitted time information is demodulated on an FPGA. With the timing accuracy of the COMPASS (BeiDou) satellite system as the basis, the timing information is carried as IRIG-B code. Introducing an all-digital Costas loop into the FPGA demodulation of the IRIG-B code extracts the zero-crossing information reliably and avoids the zero-drift and pulse-jitter problems of analog zero-crossing detection circuits. Simulation results show that the algorithm reduces the IRIG-B code synchronization error and improves the timing accuracy, meeting the accuracy requirements for time synchronization in the power system.

  9. Dynamic sea surface topography, gravity, and improved orbit accuracies from the direct evaluation of Seasat altimeter data

    Science.gov (United States)

    Marsh, J. G.; Koblinsky, C. J.; Lerch, F.; Klosko, S. M.; Robbins, J. W.

    1990-01-01

    A gravitational model incorporating Seasat altimetry, surface gravimetry, and satellite tracking data has been determined in terms of global spherical harmonics complete to degree and order 50. This model, PGS-3337, uses altimeter data as a dynamic observation of the satellite's height above the sea surface. A solution for the ocean's dynamic topography is recovered simultaneously with the orbit parameters, gravity, and ocean tidal terms. The recovered dynamic topography reveals the global long wavelength circulation of the oceans with a resolution of 2000 km and is very similar to the mean upper ocean dynamic height derived from historical ship observations. The PGS-3337 geoid has an uncertainty of 60 cm rms globally but 25 cm rms over the ocean because of the altimeter measurements. Seasat orbits determined in this solution have an estimated accuracy for the radial position of 20 cm rms. The difference between the altimeter observed sea height and the geoid plus dynamic topography model is 30 cm rms. Contained in these residuals are the sea height variability, as well as errors from the geoid, orbits, tidal models, and altimeter range measurement. This performance level is 2 to 3 times better than that achieved with previous Goddard gravitational models.

  10. Improvements in the percent range. Machine learning improves accuracy of forecasting for wind power generation; Verbesserungen im Prozentbereich. Maschinelles Lernen steigert Prognosegenauigkeit bei Windkrafterzeugung

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2011-07-01

    The more accurately the availability of wind power can be predicted, the better the electricity can be placed on the market and the less balancing energy is required. Machine-learning methods allow a further increase in forecasting accuracy, but they require enormous computational resources. Researchers at the Centre for Solar Energy and Hydrogen Research (Stuttgart, Germany) use powerful graphics processors for this purpose. Project partner EWC Weather Consult GmbH (Karlsruhe, Germany) combines various weather models into the best possible prediction by means of machine learning. This new offering is of particular interest to producers who want to market their wind power directly.

  11. Improving accuracy in coronary lumen segmentation via explicit calcium exclusion, learning-based ray detection and surface optimization

    Science.gov (United States)

    Lugauer, Felix; Zhang, Jingdan; Zheng, Yefeng; Hornegger, Joachim; Kelm, B. Michael

    2014-03-01

    Invasive cardiac angiography (catheterization) is still the standard in clinical practice for diagnosing coronary artery disease (CAD) but it involves a high amount of risk and cost. New generations of CT scanners can acquire high-quality images of coronary arteries which allow for an accurate identification and delineation of stenoses. Recently, computational fluid dynamics (CFD) simulation has been applied to coronary blood flow using geometric lumen models extracted from CT angiography (CTA). The computed pressure drop at stenoses proved to be indicative for ischemia-causing lesions, leading to non-invasive fractional flow reserve (FFR) derived from CTA. Since the diagnostic value of non-invasive procedures for diagnosing CAD relies on an accurate extraction of the lumen, a precise segmentation of the coronary arteries is crucial. As manual segmentation is tedious, time-consuming and subjective, automatic procedures are desirable. We present a novel fully-automatic method to accurately segment the lumen of coronary arteries in the presence of calcified and non-calcified plaque. Our segmentation framework is based on three main steps: boundary detection, calcium exclusion and surface optimization. A learning-based boundary detector enables a robust lumen contour detection via dense ray-casting. The exclusion of calcified plaque is assured through a novel calcium exclusion technique which allows us to accurately capture stenoses of diseased arteries. The boundary detection results are incorporated into a closed set formulation whose minimization yields an optimized lumen surface. On standardized tests with clinical data, a segmentation accuracy is achieved which is comparable to clinical experts and superior to current automatic methods.

  12. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    Science.gov (United States)

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and

  13. Improving the accuracy of ultrafast ligand-based screening: incorporating lipophilicity into ElectroShape as an extra dimension.

    Science.gov (United States)

    Armstrong, M Stuart; Finn, Paul W; Morris, Garrett M; Richards, W Graham

    2011-08-01

    In a previous paper, we presented the ElectroShape method, which we used to achieve successful ligand-based virtual screening. It extended classical shape-based methods by applying them to the four-dimensional shape of the molecule where partial charge was used as the fourth dimension to capture electrostatic information. This paper extends the approach by using atomic lipophilicity (alogP) as an additional molecular property and validates it using the improved release 2 of the Directory of Useful Decoys (DUD). When alogP replaced partial charge, the enrichment results were slightly below those of ElectroShape, though still far better than purely shape-based methods. However, when alogP was added as a complement to partial charge, the resulting five-dimensional enrichments shows a clear improvement in performance. This demonstrates the utility of extending the ElectroShape virtual screening method by adding other atom-based descriptors.
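
The ElectroShape family extends USR-style shape descriptors by appending extra per-atom properties as additional coordinates. Below is a rough USR-like sketch over 5D points (x, y, z, scaled charge, scaled alogP); the reference-point choice, scale factors, and similarity formula are generic assumptions, not the published parameterization.

```python
import numpy as np

def moments(d):
    """First three USR-style moments of a distance set: mean, std, cbrt(skew)."""
    mu = d.mean()
    return [mu, d.std(), np.cbrt(((d - mu) ** 3).mean())]

def electroshape5d(coords, charges, alogp, qscale=25.0, lscale=5.0):
    """USR-like descriptor over 5D points (x, y, z, scaled q, scaled alogP).

    Reference points and scale factors are generic assumptions, not the
    published ElectroShape parameterization.
    """
    pts = np.column_stack([coords, qscale * charges, lscale * alogp])
    c1 = pts.mean(axis=0)                                # centroid
    d1 = np.linalg.norm(pts - c1, axis=1)
    c2 = pts[d1.argmin()]                                # closest to centroid
    c3 = pts[d1.argmax()]                                # farthest from centroid
    c4 = pts[np.linalg.norm(pts - c3, axis=1).argmax()]  # farthest from c3
    desc = []
    for ref in (c1, c2, c3, c4):
        desc += moments(np.linalg.norm(pts - ref, axis=1))
    return np.array(desc)

def similarity(a, b):
    return 1.0 / (1.0 + np.abs(a - b).mean())

# Tiny hypothetical 4-atom fragment.
xyz = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0],
                [1.5, 1.2, 0.0], [0.0, 1.2, 0.8]])
q = np.array([-0.4, 0.1, 0.1, 0.2])
lp = np.array([0.5, -0.2, 0.3, -0.1])
d = electroshape5d(xyz, q, lp)
print("descriptor length:", d.size, "self-similarity:", similarity(d, d))
```

Because the descriptor is a fixed-length vector of 12 numbers, screening a large library reduces to cheap vector comparisons, which is what makes the method "ultrafast".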

  14. Symptom-dependent cut-offs of urine metanephrines improve diagnostic accuracy for detecting pheochromocytomas in two separate cohorts, compared to symptom-independent cut-offs.

    Science.gov (United States)

    Cho, Yoon Young; Song, Kee-Ho; Kim, Young Nam; Ahn, Seong Hee; Kim, Hyeonmok; Park, Sooyoun; Suh, Sunghwan; Kim, Beom-Jun; Lee, Soo-Youn; Chun, Sail; Koh, Jung-Min; Lee, Seung Hun; Kim, Jae Hyeon

    2016-10-01

    The development of advanced imaging techniques has increased the detection of subclinical pheochromocytomas. Because of the substantial proportion of subclinical pheochromocytomas, measurement of urine metanephrine concentrations is crucial to detect or exclude pheochromocytoma. Although urine metanephrines are elevated in symptomatic subjects, diagnostic cut-offs conditioned on the presence of adrenergic symptoms have not been studied. Pheochromocytoma patients who underwent adrenalectomy at Samsung Medical Center and a control group were compared to determine cut-off concentrations of urine metanephrines. An independent population was analyzed for urine metanephrines with different kits to validate the improvement in diagnostic accuracy using the adjusted cut-offs. Symptom-dependent cut-offs of urine metanephrines were higher for symptomatic patients (307 μg/day in males and 235 μg/day in females for urine metanephrine; 1,045 μg/day in males and 457 μg/day in females for urine normetanephrine) than for asymptomatic patients (206 μg/day in males and 199 μg/day in females for urine metanephrine; 489 μg/day in males and 442 μg/day in females for urine normetanephrine). Symptom-dependent cut-offs of urine metanephrines improved specificity from 92.7% to 96.3% while maintaining a high sensitivity of 97.8%. Using the symptom-dependent cut-offs raised diagnostic accuracy by 5.5% (p<0.001). A similar trend was observed in an independent population using different hormone kits. Using symptom-dependent cut-offs of urine metanephrines in symptomatic patients resulted in a significant improvement in diagnostic accuracy in two separate cohorts.
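
Applying cut-offs conditioned on symptoms and sex is mechanically simple; the sketch below uses the urine metanephrine cut-offs quoted above and an entirely hypothetical patient list to compute sensitivity and specificity.

```python
# Symptom- and sex-dependent cut-offs for urine metanephrine (ug/day), taken
# from the abstract; the patient data below are hypothetical.

CUTOFFS = {  # (symptomatic, male) -> urine metanephrine cut-off (ug/day)
    (True, True): 307.0, (True, False): 235.0,
    (False, True): 206.0, (False, False): 199.0,
}

def positive(metanephrine, symptomatic, male):
    return metanephrine > CUTOFFS[(symptomatic, male)]

# (metanephrine, symptomatic, male, has_pheochromocytoma) -- hypothetical
patients = [
    (420.0, True, True, True),
    (250.0, True, True, False),   # below the symptomatic-male cut-off
    (210.0, False, True, True),   # above the asymptomatic-male cut-off
    (150.0, False, False, False),
    (240.0, True, False, True),
]

tp = sum(positive(m, s, g) and d for m, s, g, d in patients)
tn = sum(not positive(m, s, g) and not d for m, s, g, d in patients)
sens = tp / sum(d for *_, d in patients)
spec = tn / sum(not d for *_, d in patients)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Note how the 250 ug/day symptomatic male is correctly classified negative only because the higher symptomatic cut-off is used, which is the specificity gain the study reports.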

  15. An Improved Droop Control Method for DC Microgrids Based on Low Bandwidth Communication with DC Bus Voltage Restoration and Enhanced Current Sharing Accuracy

    DEFF Research Database (Denmark)

    Lu, Xiaonan; Guerrero, Josep M.; Sun, Kai;

    2014-01-01

    resistance in a droop-controlled dc microgrid, since the output voltage of each converter cannot be exactly the same, the output current sharing accuracy is degraded. Second, the DC bus voltage deviation increases with the load due to the droop action. In this paper, in order to improve the performance......, and the LBC system is only used for changing the values of the dc voltage and current. Hence, a decentralized control scheme is accomplished. The simulation test based on Matlab/Simulink and the experimental validation based on a 2×2.2 kW prototype were implemented to demonstrate the proposed approach....

  16. Diagnosis of heart failure with preserved ejection fraction: improved accuracy with the use of markers of collagen turnover.

    LENUS (Irish Health Repository)

    Martos, Ramon

    2012-02-01

    AIMS: Heart failure with preserved ejection fraction (HF-PEF) can be difficult to diagnose in clinical practice. Myocardial fibrosis is a major determinant of diastolic dysfunction (DD), potentially contributing to the progression of HF-PEF. The aim of this study was to analyse whether serological markers of collagen turnover may predict HF-PEF and DD. METHODS AND RESULTS: We included 85 Caucasian treated hypertensive patients (DD n=65; both DD and HF-PEF n=32). Serum carboxy (PICP), amino (PINP), and carboxytelo (CITP) peptides of procollagen type I, amino (PIIINP) peptide of procollagen type III, matrix metalloproteinases (MMP-1, MMP-2, and MMP-9), and tissue inhibitor of MMP levels were assayed. Using receiver operating characteristic curve analysis, MMP-2 (AUC=0.91; 95% CI: 0.84, 0.98), CITP (0.83; 0.72, 0.92), PICP (0.82; 0.72, 0.92), B-type natriuretic peptide (BNP) (0.82; 0.73, 0.91), MMP-9 (0.79; 0.68, 0.89), and PIIINP (0.78; 0.66, 0.89) levels were significant predictors of HF-PEF (P<0.01 for all). Carboxytelo peptides of procollagen type I (AUC=0.74; 95% CI: 0.62, 0.86), MMP-2 (0.73; 0.62, 0.84), PIIINP (0.73; 0.60, 0.85), BNP (0.69; 0.55, 0.83) and PICP (0.66; 0.54, 0.78) levels were significant predictors of DD (P<0.05 for all). A cutoff of 1585 ng/mL for MMP-2 provided 91% sensitivity and 76% specificity for predicting HF-PEF and combinations of biomarkers could be used to adjust either sensitivity or specificity. CONCLUSION: Markers of collagen turnover identify patients with HF-PEF and DD. Matrix metalloproteinase 2 may be more useful than BNP in the identification of HF-PEF. This suggests that these new biochemical tools may assist in identifying patients with these diagnostically challenging conditions.
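The AUC values reported above are equivalent to the Mann-Whitney rank statistic: the probability that a randomly chosen case scores higher on the marker than a randomly chosen control. A minimal sketch of that computation, with synthetic marker values standing in for the real MMP-2 data (which are not reproduced here):

```python
import numpy as np

def auc(pos, neg):
    """ROC AUC as P(random positive scores higher than random negative),
    counting ties as half-wins (the Mann-Whitney U statistic)."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

hf_pef = [1900, 1700, 1650, 1500, 1800]   # synthetic marker values, cases
control = [1200, 1400, 1600, 1100, 1300]  # synthetic marker values, controls
print(auc(hf_pef, control))  # 0.96
```

Sliding a single cutoff (such as the 1585 ng/mL reported for MMP-2) along the marker axis traces out the ROC curve whose area this statistic summarises.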

  17. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2016-01-01

    modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards...... powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density and thermal homogeneity....... establishing reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters...

  18. Improve the Absolute Accuracy of Ozone Intensities in the 9-11 μm Region via MW/IR Multi-Wavelength Spectroscopy

    Science.gov (United States)

    Yu, Shanshan; Drouin, Brian

    2016-06-01

    Ozone (O_3) is crucial for studies of air quality, human and crop health, and radiative forcing. Spectroscopic remote sensing techniques have been extensively employed to investigate ozone globally and regionally. Infrared intensities of ≤1% accuracy are desired by the remote sensing community. The accuracy of the current state-of-the-art infrared ozone intensities is on the order of 4-10%, resulting in ad hoc intensity scaling factors for consistent atmospheric retrievals. The large uncertainties on the infrared ozone intensities arise from the fact that pure ozone is very difficult to generate and sustain in the laboratory. Best estimates have employed IR/UV cross beam experiments to determine the accurate O_3 volume mixing ratio of the sample through its standard cross section value at 254 nm. This presentation reports our effort to improve the absolute accuracy of ozone intensities in the 9-11 μm region via a transfer of the precision of the rotational dipole moment onto the infrared measurement (MW/IR). Our approach was to use MW/IR cross beam experiments and determine the O_3 mixing ratio through alternately measuring pure rotation ozone lines from 692 to 779 GHz. The uncertainty of these pure rotation line intensities is better than 0.1%. The sample cell was a slow flow cross cell and the total pressure inside the sample cell was maintained constant through a proportional-integral-derivative (PID) flow control. Five infrared O_3 spectra were obtained, with a path length of 3.74 m, pressures ranging from 30 to 120 mTorr, and mixing ratio ranging from 0.5 to 0.9. A multi spectrum fitting technique was employed to fit all the FTS spectra simultaneously. The results show that we can determine intensities of the 9.6μm band with absolute accuracy better than 4%.

  19. hARACNe: improving the accuracy of regulatory model reverse engineering via higher-order data processing inequality tests.

    Science.gov (United States)

    Jang, In Sock; Margolin, Adam; Califano, Andrea

    2013-08-01

    A key goal of systems biology is to elucidate molecular mechanisms associated with physiologic and pathologic phenotypes based on the systematic and genome-wide understanding of cell context-specific molecular interaction models. To this end, reverse engineering approaches have been used to systematically dissect regulatory interactions in a specific tissue, based on the availability of large molecular profile datasets, thus improving our mechanistic understanding of complex diseases, such as cancer. In this paper, we introduce the higher-order Algorithm for the Reconstruction of Accurate Cellular Networks (hARACNe), an extension of the ARACNe algorithm for the dissection of transcriptional regulatory networks. ARACNe uses the data processing inequality (DPI), from information theory, to detect and prune indirect interactions that are unlikely to be mediated by an actual physical interaction. Whereas ARACNe considers only first-order indirect interactions, i.e. those mediated by one extra regulator, hARACNe considers a generalized form of indirect interactions mediated by two, three, or more other regulators. We show that the use of higher-order DPI resulted in significantly improved performance, based on transcription factor (TF)-specific ChIP-chip data, as well as on gene expression profiles following RNAi-mediated TF silencing.
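The first-order DPI pruning step that hARACNe generalizes can be sketched on a mutual-information matrix as follows. The matrix values and function are illustrative assumptions; ARACNe's actual implementation also includes MI estimation and significance thresholds not shown here.

```python
import itertools
import numpy as np

def dpi_prune(mi, tol=0.0):
    """First-order data processing inequality pruning in the ARACNe style:
    drop edge (i, j) if some third gene k satisfies
    MI(i, j) < min(MI(i, k), MI(k, j)) - tol, i.e. the i-j dependence
    could be explained by an indirect path through k.
    `mi` is a symmetric matrix of mutual-information estimates."""
    n = mi.shape[0]
    keep = mi > 0
    for i, j in itertools.combinations(range(n), 2):
        for k in range(n):
            if k in (i, j):
                continue
            if mi[i, j] < min(mi[i, k], mi[k, j]) - tol:
                keep[i, j] = keep[j, i] = False
                break
    return keep

# toy 3-gene example: strong 0-1 and 1-2 edges, weak direct 0-2 edge
mi = np.array([[0.0, 0.9, 0.2],
               [0.9, 0.0, 0.8],
               [0.2, 0.8, 0.0]])
print(dpi_prune(mi))  # the 0-2 edge is pruned as indirect
```

The higher-order extension replaces the single intermediate k with chains of two or more regulators, which is what the abstract means by second- and third-order DPI tests.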

  20. Identifying parentage using molecular markers: improving accuracy of studbook records for a captive flock of marabou storks (Leptoptilos crumeniferus).

    Science.gov (United States)

    Ferrie, Gina M; Cohen, Ocean R; Schutz, Paul; Leighty, Katherine A; Plasse, Chelle; Bettinger, Tammie L; Hoffman, Eric A

    2013-01-01

    Extra-pair copulations (EPCs) leading to extra-pair fertilization (EPF) are common in avian mating systems, despite the prevalence of observed social monogamy in many species. Colonially breeding birds are interesting species in which to investigate the prevalence of EPCs and EPF because their nesting habits, including close proximity of nest sites and sexual partners, are proposed to promote alternative reproductive tactics. Endemic to Africa, the colonial marabou stork (Leptoptilos crumeniferus) is one of the most commonly held avian species in North American zoos. The aims of this study were to use genetic information, based on five microsatellite loci, to verify parentage in a population of marabou storks housed at Disney's Animal Kingdom® and to investigate reproductive behavior. We compared genetic analyses of parents and offspring to studbook data collected through behavioral observations of parental behavior at the nest. Reconstructing the pedigree of the marabou stork flock with the program COLONY improved studbook records by determining parentage for an individual whose parents were previously unknown and by identifying one individual whose genetic sire differed from studbook records. An important contribution of our analyses was the identification and verification of the most likely parents for offspring hatched in this colony and the correction of incorrect or undocumented parentage in the studbook. Additionally, the colonial nature of this species makes it difficult to observe and understand reproductive behavior. A better understanding of the mating system of a species is essential for successful breeding and captive management.

  1. Improved assay for differential diagnosis between Pompe disease and acid α-glucosidase pseudodeficiency on dried blood spots.

    Science.gov (United States)

    Shigeto, Shohei; Katafuchi, Tatsuya; Okada, Yuya; Nakamura, Kimitoshi; Endo, Fumio; Okuyama, Torayuki; Takeuchi, Hiroaki; Kroos, Marian A; Verheijen, Frans W; Reuser, Arnold J J; Okumiya, Toshika

    2011-05-01

    The high frequency (3.3-3.9%) of acid α-glucosidase pseudodeficiency, the c.[1726G>A; 2065G>A] homozygote (AA homozygote), in Asian populations complicates newborn screening for Pompe disease (glycogen storage disease type II or acid maltase deficiency) on dried blood spots, since AA homozygotes have considerably reduced enzyme activity. We observed that hemoglobin in the enzyme reaction solution strongly interferes with the fluorescence of 4-methylumbelliferone released from 4-methylumbelliferyl α-D-glucopyranoside (4MU-αGlc) by acid α-glucosidase. Therefore, we searched for a method to effectively eliminate hemoglobin from the reaction solution. Hemoglobin precipitation with barium hydroxide and zinc sulfate (Ba/Zn method), carried out after the enzyme reaction, considerably enhances the fluorescence intensity without the signal loss that can occur with conventional deproteinization agents such as trichloroacetic acid. The Ba/Zn method greatly improved the separation between 18 Japanese patients with Pompe disease and 70 unaffected AA homozygotes in a population of Japanese newborns in the assay with 4MU-αGlc on dried blood spots. No overlap was observed between the two groups. We further examined acid α-glucosidase activity in fibroblasts from 11 Japanese patients and 57 Japanese unaffected individuals, including 31 c.[1726G; 2065G] homozygotes, 18 c.[1726G; 2065G]/[1726A; 2065A] heterozygotes and 8 AA homozygotes, to confirm that fibroblasts can be used for definitive diagnosis. The patients were reliably distinguished from all three control groups. These data provide advanced information for the development of a simple and reliable newborn screening program with dried blood spots for Pompe disease in Asian populations.

  2. Flat panel X-ray detector with reduced internal scattering for improved attenuation accuracy and dynamic range

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Peter D. (Santa Fe, NM); Claytor, Thomas N. (White Rock, NM); Berry, Phillip C. (Albuquerque, NM); Hills, Charles R. (Los Alamos, NM)

    2010-10-12

    An x-ray detector is disclosed that has had all unnecessary material removed from the x-ray beam path, and all of the remaining material in the beam path made as light and as low in atomic number as possible. The resulting detector is essentially transparent to x-rays and, thus, has greatly reduced internal scatter. The result of this is that x-ray attenuation data measured for the object under examination are much more accurate and have an increased dynamic range. The benefits of this improvement are that beam hardening corrections can be made accurately, that computed tomography reconstructions can be used for quantitative determination of material properties including density and atomic number, and that lower exposures may be possible as a result of the increased dynamic range.

  3. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2016-01-01

    Repeatability and reproducibility of parts produced by selective laser melting is a standing issue, and coupled with a lack of standardized quality control presents a major hindrance towards maturing of selective laser melting as an industrial scale process. Consequently, numerical process...... modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures.In this paper, a systematic approach towards...... establishing reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using same processing parameters...

  4. Control Factors for Improving the Accuracy of Food Inspection

    Institute of Scientific and Technical Information of China (English)

    强克娟; 张哲

    2015-01-01

    Food quality and safety bear directly on people's lives and health, and as food inspection is the main method of food quality and safety control, its accuracy and reliability have become a focus of attention. To improve the accuracy of food inspection, inspection staff should attend to sample collection and preparation, the choice of reagents, the preparation of instruments, the test methods used, the cleanliness of the laboratory environment, and the competence of test personnel, so as to reduce food safety risks and ensure food safety. This paper analyzes the factors that control the accuracy of food inspection and puts forward several reasonable suggestions.

  5. MO-DE-210-05: Improved Accuracy of Liver Feature Motion Estimation in B-Mode Ultrasound for Image-Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    O’Shea, T; Bamber, J; Harris, E [The Institute of Cancer Research & Royal Marsden, Sutton and London (United Kingdom)

    2015-06-15

    Purpose: In similarity-measure based motion estimation, incremental tracking (or template update) is challenging due to quantization, bias and accumulation of tracking errors. A method is presented which aims to improve the accuracy of incrementally tracked liver feature motion in long ultrasound sequences. Methods: Liver ultrasound data from five healthy volunteers under free breathing were used (15 to 17 Hz imaging rate, 2.9 to 5.5 minutes in length). A normalised cross-correlation template matching algorithm was implemented to estimate tissue motion. Blood vessel motion was manually annotated for comparison with three tracking code implementations: (i) naive incremental tracking (IT), (ii) IT plus a similarity threshold (ST) template-update method and (iii) ST coupled with a prediction-based state observer, known as the alpha-beta filter (ABST). Results: The ABST method produced substantial improvements in vessel tracking accuracy for two-dimensional vessel motion ranging from 7.9 mm to 40.4 mm (with mean respiratory period: 4.0 ± 1.1 s). The mean and 95% tracking errors were 1.6 mm and 1.4 mm, respectively (compared to 6.2 mm and 9.1 mm, respectively, for naive incremental tracking). Conclusions: High confidence in the output motion estimation data is required for ultrasound-based motion estimation for radiation therapy beam tracking and gating. The method presented has potential for monitoring liver vessel translational motion in high frame rate B-mode data with the required accuracy. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
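The alpha-beta filter used as the state observer in the ABST method follows the standard predict/correct recursion. A one-dimensional sketch under assumed gains (the paper's gain values and its 2-D vessel-tracking implementation are not reproduced here):

```python
# Minimal alpha-beta (g-h) filter step: predict the position from the
# current velocity estimate, then blend in the measurement z.
# Gains alpha/beta and the constant-velocity trace are illustrative.

def alpha_beta_step(x_est, v_est, z, dt, alpha=0.5, beta=0.1):
    x_pred = x_est + v_est * dt        # predict position
    r = z - x_pred                     # innovation (residual)
    x_est = x_pred + alpha * r         # correct position
    v_est = v_est + (beta / dt) * r    # correct velocity
    return x_est, v_est

# track a constant-velocity trace sampled at 15 Hz (as in the paper's
# imaging rate); true motion is 2 mm/s
dt = 1.0 / 15
x, v = 0.0, 0.0
for n in range(1, 31):
    z = 2.0 * n * dt
    x, v = alpha_beta_step(x, v, z, dt)
print(round(v, 2))  # velocity estimate converges near 2.0
```

Feeding the filter's prediction back into the template search is what damps the quantization and drift errors that plague naive incremental tracking.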

  6. How to improve the accuracy of the project cost

    Institute of Scientific and Technical Information of China (English)

    毛晓丽; 刘滢

    2014-01-01

    The project cost is an accounting of the materials, labor, machinery consumption, and capital required for a construction project. It is the basis for project decisions, for preparing investment plans and controlling investment, and for raising construction funds; it is also an important index for evaluating investment effectiveness and a means of distributing benefits reasonably and adjusting the industrial structure. Improving the accuracy of the project cost therefore has important significance. This paper briefly analyzes the factors that affect the cost and the measures and methods for improving its accuracy.

  7. Method of improving passive positioning accuracy based on the Beidou satellite

    Institute of Scientific and Technical Information of China (English)

    张瑜; 刘莹; 贺秋瑞

    2013-01-01

    The Beidou satellite has been used as an external illuminator because of features such as continuous coverage of China, little motion relative to the Earth, an inconspicuous Doppler shift, simple adjacent-channel interference, and high security. Taking into account the location error caused by atmospheric refraction, a location error correction method is given to further improve positioning accuracy. Simulation results show that radar positioning errors decrease correspondingly as elevation increases or target altitude decreases. Passive radar positioning accuracy is greatly improved after correction for atmospheric refraction.

  8. Improving accuracy in the quantitation of overlapping, asymmetric, chromatographic peaks by deconvolution: theory and application to coupled gas chromatography atomic absorption spectrometry

    Science.gov (United States)

    Johansson, M.; Berglund, M.; Baxter, D. C.

    1993-09-01

    Systematic errors in the measurement of overlapping, asymmetric chromatographic peaks are observed using the perpendicular-drop and tangent-skimming algorithms incorporated in commercial integrators. The magnitude of such errors increases with the degree of tailing and with differences in peak size, and was found to be as great as 80% for peak-area and 100% for peak-height measurements made on the smaller, second component of simulated, noise-free chromatograms containing peaks at a size ratio of 10 to 1. Initial deconvolution of overlapping peaks, by mathematical correction for asymmetry, leads to significant improvements in the accuracy of both peak-area and peak-height measurements using the simple perpendicular-drop algorithm. A comparison of analytical data for the separation and determination of three organolead species by coupled gas chromatography atomic absorption spectrometry using peak-height and peak-area measurements also demonstrates the improved accuracy obtained following deconvolution. It is concluded that the deconvolution method described could be beneficial in a variety of chromatographic applications where overlapping, asymmetric peaks are observed.
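The perpendicular-drop bias described above can be demonstrated numerically. The sketch below builds two tailing peaks at a 10:1 size ratio (a Gaussian convolved with an exponential decay, a common model for a tailing peak), splits the summed trace at the valley minimum, and shows that the major peak's tail inflates the area assigned to the minor peak. All peak parameters are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 6001)
dt = t[1] - t[0]

def tailed_peak(area, tr, sigma, tau):
    """Tailing peak: Gaussian at tr convolved with an exponential decay,
    normalised so its integral over the window equals `area`."""
    g = np.exp(-0.5 * ((t - tr) / sigma) ** 2)
    e = np.exp(-t / tau)
    p = np.convolve(g, e)[: t.size]
    return area * p / (p.sum() * dt)

big = tailed_peak(10.0, 20.0, 1.0, 2.0)   # major component
small = tailed_peak(1.0, 28.0, 1.0, 2.0)  # minor component, true area 1.0
y = big + small

# perpendicular drop: split the summed trace at the valley minimum
lo, hi = np.searchsorted(t, 22.0), np.searchsorted(t, 29.0)
valley = lo + np.argmin(y[lo:hi])
area_small_pd = y[valley:].sum() * dt

# the major peak's tail runs under the minor peak, so the drop line
# assigns part of it to the minor peak and overestimates its true area of 1
print(round(area_small_pd, 2))
```

Correcting the trace for asymmetry before applying the drop line (the deconvolution step the paper proposes) removes most of this tail contribution.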

  9. Improving face detection accuracy in the Adaboost algorithm with SVM

    Institute of Scientific and Technical Information of China (English)

    王志伟; 张晓龙; 梁文豪

    2011-01-01

    This paper presents an approach that uses SVM classification to improve the face detection accuracy of the Adaboost algorithm. The method first finds candidate face regions in the image with the Adaboost algorithm and trains a support vector machine (SVM) classifier on the face and non-face samples in the training set; the SVM classifier then makes the final determination of face regions among the candidates. Experimental results show that the SVM classification stage improves detection accuracy and gives the detection algorithm better overall detection performance.
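The two-stage scheme, a boosted detector proposing candidates and an SVM making the final decision, can be sketched with scikit-learn on synthetic data standing in for image windows. All data, models, and parameters here are illustrative assumptions, not the paper's features or training set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# synthetic feature vectors standing in for face / non-face windows
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
svm = SVC(kernel="rbf").fit(Xtr, ytr)

# stage 1: every window AdaBoost flags becomes a candidate face region
cand = ada.predict(Xte) == 1
# stage 2: the SVM re-examines only the candidates
final = cand.copy()
final[cand] = svm.predict(Xte[cand]) == 1

stage1_acc = (cand == (yte == 1)).mean()
cascade_acc = (final == (yte == 1)).mean()
print(f"stage 1: {stage1_acc:.2f}  cascade: {cascade_acc:.2f}")
```

The cascade keeps the boosted detector's speed on the vast majority of windows while spending the SVM's more expensive decision only on the few candidates.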

  10. A method to improve the stability and accuracy of ANN- and SVM-based time series models for long-term groundwater level predictions

    Science.gov (United States)

    Yoon, Heesung; Hyun, Yunjung; Ha, Kyoochul; Lee, Kang-Kun; Kim, Gyoo-Bum

    2016-05-01

    The prediction of long-term groundwater level fluctuations is necessary to effectively manage groundwater resources and to assess the effects of changes in rainfall patterns on groundwater resources. In the present study, a weighted error function approach was utilised to improve the performance of artificial neural network (ANN)- and support vector machine (SVM)-based recursive prediction models for the long-term prediction of groundwater levels in response to rainfall. The developed time series models were applied to groundwater level data from 5 groundwater-monitoring stations in South Korea. The results demonstrated that the weighted error function approach can improve the stability and accuracy of recursive prediction models, especially for ANN models. The comparison of the model performance showed that the recursive prediction performance of the SVM was superior to the performance of the ANN in this case study.
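A weighted error function in a least-squares time-series fit simply assigns unequal weights to the training errors before recursive (multi-step) prediction is performed. The paper's specific weighting scheme is not reproduced here; the linearly increasing weights and the simple autoregressive model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "groundwater level" series: slow oscillation plus noise
series = np.sin(np.arange(200) * 0.1) + 0.05 * rng.standard_normal(200)

p = 5  # AR order: predict x[t] from the previous p values
X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
target = series[p:]

# weighted least squares: scaling rows by sqrt(w) minimises
# sum_t w_t * (X_t . coef - y_t)^2
w = np.linspace(0.1, 1.0, len(target))   # emphasise later errors
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(X * sw[:, None], target * sw, rcond=None)

# recursive prediction: feed each prediction back in as an input
window = list(series[-p:])
preds = []
for _ in range(20):
    nxt = float(np.dot(coef, window))
    preds.append(nxt)
    window = window[1:] + [nxt]
print(np.round(preds[:3], 2))
```

Because recursive prediction feeds its own outputs back in, errors compound; weighting the fit is one way to keep the multi-step recursion stable, which matches the paper's finding that the approach helped the ANN models most.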

  11. Improving accuracy of medication identification in an older population using a medication bottle color symbol label system

    Directory of Open Access Journals (Sweden)

    Cardarelli, Roberto

    2011-12-01

    Background: The purpose of this pilot study was to evaluate and refine an adjuvant system of color-specific symbols added to medication bottles and to assess whether this system would increase the ability of patients 65 years of age or older to match their medication to the indication for which it was prescribed. Methods: This study was conducted in two phases, consisting of three focus groups of patients from a family medicine clinic (n = 25) and a pre-post medication identification test in a second group of patient participants (n = 100). Results of focus group discussions were used to refine the medication label symbols according to themes and messages identified through qualitative triangulation mechanisms and data analysis techniques. A pre-post medication identification test was conducted in the second phase of the study to assess differences between standard labeling alone and the addition of the refined color-specific symbols. The pre-post test examined the impact of the added labels on participants' ability to accurately match their medication to the indication for which it was prescribed, first when placed in front of participants and then at a distance of two feet. Results: Participants appreciated the addition of a visual aid to existing medication labels, because it would not be necessary to learn a completely new labeling system, and they generally found the colors and symbols used in the proposed labeling system easy to understand and relevant. Concerns were raised about space constraints on medication bottles, having too much information on the bottle, and having to remember what the colors meant. Symbols and colors were modified if focus group participants found them unclear or inappropriate.
Pre-post medication identification test results in a second set of participants demonstrated that the addition of the symbol label significantly improved the ability of participants to match their medication to the appropriate

  12. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatics, and fluid flow. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density, called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near-field computation.

  13. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    Science.gov (United States)

    Mohanty, Sankhya; Hattel, Jesper H.

    2016-04-01

    Repeatability and reproducibility of parts produced by selective laser melting is a standing issue, and coupled with a lack of standardized quality control presents a major hindrance towards maturing of selective laser melting as an industrial scale process. Consequently, numerical process modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards establishing reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters, but at different locations in a powder bed and in different laser scanning directions. The difference in melt track widths and depths captures the effect of changes in incident beam power distribution due to location and processing direction. The experimental results are used in combination with the numerical model, and subjected to uncertainty and reliability analysis. Cumulative probability distribution functions obtained for melt track widths and depths are found to be coherent with observed experimental values. The technique is subsequently extended for reliability characterization of single layers produced on a thick powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density and thermal homogeneity.

  14. Dynamic contrast-enhanced MRI improves accuracy for detecting focal splenic involvement in children and adolescents with Hodgkin disease

    Energy Technology Data Exchange (ETDEWEB)

    Punwani, Shonit; Taylor, Stuart A.; Halligan, Steve [University College London, Centre for Medical Imaging, London (United Kingdom); University College London Hospital, Department of Radiology, London (United Kingdom); Cheung, King Kenneth; Skipper, Nicholas [University College London, Centre for Medical Imaging, London (United Kingdom); Bell, Nichola; Humphries, Paul D. [University College London Hospital, Department of Radiology, London (United Kingdom); Bainbridge, Alan [University College London, Department of Medical Physics and Bioengineering, London (United Kingdom); Groves, Ashley M.; Hain, Sharon F.; Ben-Haim, Simona [University College Hospital, Institute of Nuclear Medicine, London (United Kingdom); Shankar, Ananth; Daw, Stephen [University College London Hospital, Department of Paediatrics, London (United Kingdom)

    2013-08-15

    Accurate assessment of splenic disease is important for staging Hodgkin lymphoma. The purpose of this study was to assess T2-weighted imaging with and without dynamic contrast-enhanced (DCE) MRI for evaluation of splenic Hodgkin disease. Thirty-one children with Hodgkin lymphoma underwent whole-body T2-weighted MRI with supplementary DCE splenic imaging, and whole-body PET-CT before and following chemotherapy. Two experienced nuclear medicine physicians derived a PET-CT reference standard for splenic disease, augmented by follow-up imaging. Unaware of the PET-CT, two experienced radiologists independently evaluated MRI exercising a locked sequential read paradigm (T2-weighted then DCE review) and recorded the presence/absence of splenic disease at each stage. Performance of each radiologist was determined prior to and following review of DCE-MRI. Incorrect MRI findings were ascribed to reader (lesion present on MRI but missed by reader) or technical (lesion not present on MRI) error. Seven children had splenic disease. Sensitivity/specificity of both radiologists for the detection of splenic involvement using T2-weighted images alone was 57%/100% and increased to 100%/100% with DCE-MRI. There were three instances of technical error on T2-weighted imaging; all lesions were visible on DCE-MRI. T2-weighted imaging, when complemented by DCE-MRI, may improve evaluation of splenic involvement in Hodgkin disease. (orig.)

  15. Improvement in laboratory diagnosis of wound botulism and tetanus among injecting illicit-drug users by use of real-time PCR assays for neurotoxin gene fragments.

    Science.gov (United States)

    Akbulut, D; Grant, K A; McLauchlin, J

    2005-09-01

    An upsurge in wound infections due to Clostridium botulinum and Clostridium tetani among users of illegal injected drugs (IDUs) occurred in the United Kingdom during 2003 and 2004. A real-time PCR assay was developed to detect a fragment of the neurotoxin gene of C. tetani (TeNT) and was used in conjunction with previously described assays for C. botulinum neurotoxin types A, B, and E (BoNTA, -B, and -E). The assays were sensitive, specific, rapid to perform, and applicable to investigating infections among IDUs using DNA extracted directly from wound tissue, as well as bacteria growing among mixed microflora in enrichment cultures and in pure culture on solid media. A combination of bioassay and PCR test results confirmed the clinical diagnosis in 10 of 25 cases of suspected botulism and two of five suspected cases of tetanus among IDUs. The PCR assays were in almost complete agreement with the conventional bioassays when considering results from different samples collected from the same patient. The replacement of bioassays by real-time PCR for the isolation and identification of both C. botulinum and C. tetani demonstrates a sensitivity and specificity similar to those of conventional approaches. However, the real-time PCR assays substantially improve the diagnostic process in terms of the speed of results and by the replacement of experimental animals. Recommendations are given for an improved strategy for the laboratory investigation of suspected wound botulism and tetanus among IDUs.

  16. Construct measurement quality improves predictive accuracy in violence risk assessment: an illustration using the personality assessment inventory.

    Science.gov (United States)

    Hendry, Melissa C; Douglas, Kevin S; Winter, Elizabeth A; Edens, John F

    2013-01-01

    Much of the risk assessment literature has focused on the predictive validity of risk assessment tools. However, these tools often comprise a list of risk factors that are themselves complex constructs, and focusing on the quality of measurement of individual risk factors may improve the predictive validity of the tools. The present study illustrates this concern using the Antisocial Features (ANT) and Aggression (AGG) scales of the Personality Assessment Inventory (PAI; Morey, 1991). In a sample of 1,545 prison inmates and offenders undergoing treatment for substance abuse (85% male), we evaluated (a) the factorial validity of the ANT and AGG scales, (b) the utility of the original and newly derived ANT and AGG scales for predicting antisocial outcomes (recidivism and institutional infractions), and (c) whether items with a stronger relationship to the underlying constructs (higher factor loadings) were in turn more strongly related to antisocial outcomes. Confirmatory factor analyses (CFAs) indicated that the ANT and AGG items were not structured optimally in these data in terms of correspondence to the subscale structure identified in the PAI manual. Exploratory factor analyses were conducted on a random split-half of the sample to derive optimized alternative factor structures, which were cross-validated in the second split-half using CFA. Four-factor models emerged for both the ANT and AGG scales, and, as predicted, the size of item factor loadings was associated with the strength with which items were associated with institutional infractions and community recidivism. This suggests that the quality with which a construct is measured is associated with its predictive strength. Implications for risk assessment are discussed.

  17. Colorimetric protein assay techniques.

    Science.gov (United States)

    Sapan, C V; Lundblad, R L; Price, N C

    1999-04-01

    There has been an increase in the number of colorimetric assay techniques for the determination of protein concentration over the past 20 years. This has resulted in a perceived increase in sensitivity and accuracy with the advent of new techniques. The present review considers these advances with emphasis on the potential use of such technologies in the assay of biopharmaceuticals. The techniques reviewed include Coomassie Blue G-250 dye binding (the Bradford assay), the Lowry assay, the bicinchoninic acid assay and the biuret assay. It is shown that each assay has advantages and disadvantages relative to sensitivity, ease of performance, acceptance in the literature, accuracy and reproducibility/coefficient of variation/laboratory-to-laboratory variation. A comparison of the use of several assays with the same sample population is presented. It is suggested that the most critical issue in the use of a chromogenic protein assay for the characterization of a biopharmaceutical is the selection of a standard for the calibration of the assay; it is crucial that the standard be representative of the sample. If it is not possible to match the standard with the sample from the perspective of protein composition, then it is preferable to use an assay that is not sensitive to the composition of the protein such as a micro-Kjeldahl technique, quantitative amino acid analysis or the biuret assay. In a complex mixture it might be inappropriate to focus on a general method of protein determination and much more informative to use specific methods relating to the protein(s) of particular interest, using either specific assays or antibody-based methods. The key point is that whatever method is adopted as the 'gold standard' for a given protein, this method needs to be used routinely for calibration.
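
    The calibration issue the review stresses can be made concrete with a short sketch: fitting a standard curve to invented BSA absorbance readings and inverting it for an unknown. The numbers are illustrative only, and the caveat from the text still applies: the answer is only as good as the match between standard and sample.

```python
import numpy as np

# Hypothetical Bradford calibration: A595 readings for BSA standards.
bsa_ug_per_ml = np.array([0.0, 125.0, 250.0, 500.0, 750.0, 1000.0])
a595 = np.array([0.00, 0.12, 0.23, 0.44, 0.62, 0.78])

# Least-squares line through the linear range of the standards.
slope, intercept = np.polyfit(bsa_ug_per_ml, a595, 1)

def protein_conc(absorbance):
    """Invert the calibration line for an unknown sample (ug/mL)."""
    return (absorbance - intercept) / slope
```

    An unknown reading of A595 = 0.30 would map to roughly 350 ug/mL on this invented curve, but only relative to the chosen standard.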

  18. Development of an improved IP(1) assay for the characterization of 5-HT(2C) receptor ligands.

    Science.gov (United States)

    Zhang, Jean Y; Kowal, Dianne M; Nawoschik, Stanley P; Dunlop, John; Pausch, Mark H; Peri, Ravikumar

    2010-02-01

    The 5-hydroxytryptamine 2C (5-HT(2C)) receptor is a member of the serotonin 5-HT(2) subfamily of G-protein-coupled receptors signaling predominantly via the phospholipase C (PLC) pathway. Stimulation of phosphoinositide (PI) hydrolysis upon 5-HT(2C) receptor activation is traditionally assessed by measuring inositol monophosphate (IP(1)) using time-consuming and labor-intensive anion exchange radioactive assays. In this study, we have developed and optimized a cellular IP(1) assay using homogeneous time-resolved fluorescence (HTRF), a fluorescence resonance energy transfer (FRET)-based technology (Cisbio; Gif sur Yvette, France). The measurement is simple to carry out without the cumbersome steps associated with radioactive assays and may therefore be used as an alternative tool to evaluate PI hydrolysis activated by 5-HT(2C) agonists. In Chinese hamster ovary (CHO) cells stably expressing 5-HT(2C) receptors, characterization of 5-HT(2C) agonists with the HTRF platform revealed a rank order of potency (EC(50), nM) comparable to that from intracellular calcium mobilization studies measured by the fluorometric imaging plate reader (FLIPR). A similar rank order of potency was seen with conventional radioactive PI assay with the exception of 5-HT. Lastly, the new assay data correlated better with agonist-induced calcium responses in FLIPR (R(2) = 0.78) than with values determined by radioactive IP(1) method (R(2) = 0.64). Our study shows that the HTRF FRET-based assay detects IP(1) with good sensitivity and may be streamlined for high-throughput (HTS) applications.
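
    Agonist potency (EC(50)) in such assays is typically obtained by fitting a four-parameter logistic model to the concentration-response data. A minimal sketch with simulated data follows (the agonist, the EC(50) of 30 nM, and the Hill slope are invented); here the EC(50) is read back by interpolation rather than the full nonlinear fit a real analysis would use:

```python
import numpy as np

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Simulated titration of a hypothetical agonist (concentrations in nM).
conc = np.logspace(-1, 4, 11)
resp = four_pl(conc, bottom=0.0, top=100.0, ec50=30.0, hill=1.0)

# EC50 = concentration giving the half-maximal response; estimated here
# by interpolating the monotonically increasing simulated curve.
ec50_est = float(np.interp(50.0, resp, conc))
```

    Rank orders of potency such as those compared across HTRF, FLIPR, and radioactive platforms are simply orderings of these fitted EC(50) values.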

  19. Ionospheric corrections estimation in a local GNSS permanent stations network: improvement of Code Point Positioning at sub-metric accuracy level

    Science.gov (United States)

    Brunini, C.; Crespi, M.; Mazzoni, A.

    2009-04-01

    It is well known that GNSS permanent networks for real-time positioning were mainly designed to generate and transmit products for RTK (or Network-RTK) positioning. In this context, RTK products are restricted to users equipped with geodetic-class receivers. This work is a first step toward using a local network of permanent GNSS stations to generate and transmit real-time products that could remarkably improve positioning accuracy for C/A receiver users. A simple experiment was carried out based on 3 consecutive days of data from 3 permanent stations that belong to the RESNAP-GPS network (w3.uniroma1.it/resnap-gps), located in the Lazio Region (Central Italy) and managed by DITS-Area di Geodesia e Geomatica, Sapienza University of Rome. In the first step, the RINEX files were corrected for the differential code biases according to IGS recommendations and then processed with the Bernese 5.0 CODSPP module (single point positioning using code measurements), using IGS precise ephemeris and clocks. One position per epoch (every 30 seconds) was estimated for P1 and for the ionosphere-free combination (P3). The accuracy obtained with the P3 combination for the vertical component, which ranged from -1 to +1 m, was taken as the reference for the following discussion. For P1 observations, the vertical coordinate errors showed a typical signature due to the ionospheric activity: higher errors for day-time (up to 5 m) and smaller ones for night-time (around 1.5 m). In order to improve the accuracy of the P1 solution, ionospheric corrections were estimated using the La Plata Ionospheric Model, based on the dual-frequency observations from the RESNAP-GPS network. Those corrections were applied to the RINEX files of a probing station located within the reference network. With this procedure, the vertical coordinate errors were reduced to the range from -0.8 to 0.8 m. This methodological approach shows the possibility to remarkably improve real-time positioning based on Code Point Positioning.
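
    The ionosphere-free combination used as the reference above exploits the 1/f² frequency dependence of the first-order ionospheric code delay. A worked sketch for GPS L1/L2 (the pseudoranges below are invented; the frequencies are the standard GPS values):

```python
# Standard GPS L1/L2 carrier frequencies (Hz).
F1 = 1575.42e6
F2 = 1227.60e6

def iono_free(p1, p2):
    """Ionosphere-free code combination P3, in metres:
    P3 = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2)."""
    return (F1 ** 2 * p1 - F2 ** 2 * p2) / (F1 ** 2 - F2 ** 2)

# Invented example: geometric range 20,000 km, first-order ionospheric
# delay of 3.0 m on L1; the L2 delay scales by (F1/F2)**2.
rho = 20_000_000.0
i1 = 3.0
p1 = rho + i1
p2 = rho + i1 * (F1 / F2) ** 2

p3 = iono_free(p1, p2)  # the 3 m ionospheric term cancels, leaving rho
```

    The first-order delay cancels exactly in P3, at the cost of amplified code noise, which is why single-frequency P1 users instead need external ionospheric corrections such as those estimated here.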

  20. Improvement and optimization of a multiplex real-time reverse transcription polymerase chain reaction assay for the detection and typing of Vesicular stomatitis virus.

    Science.gov (United States)

    Hole, Kate; Velazquez-Salinas, Lauro; Clavijo, Alfonso

    2010-05-01

    An improvement to a previously reported real-time reverse transcription polymerase chain reaction (real-time RT-PCR) assay for the detection of Vesicular stomatitis virus (VSV) is described. Results indicate that the new assay is capable of detecting a panel of genetically representative strains of VSV present in North, Central, and South America. The assay is specific for VSV and allows for simultaneous differentiation between Vesicular stomatitis Indiana virus and Vesicular stomatitis New Jersey virus. This real-time RT-PCR is able to detect current circulating strains of VSV and can be used for rapid diagnosis of VSV and differentiation of VSV from other vesicular diseases, such as foot-and-mouth disease.

  1. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial in preventing unexpected machine breakdown and reducing economic loss, because gear cracks lead to gear tooth breakage. In this paper, an intelligent fault diagnosis method for the identification of different gear crack levels under different working conditions is proposed. First, superhigh-dimensional statistical features are extracted from the continuous wavelet transform at different scales. The proposed method extracts 920 statistical features, so the extracted feature set is superhigh dimensional. To reduce the dimensionality of the extracted statistical features and generate new, significant low-dimensional features, a simple and effective method, principal component analysis, is used. To further improve identification accuracy for different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method, and comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracy among all the methods compared.
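
    The pipeline described (superhigh-dimensional features, PCA for reduction, then a classifier) can be sketched compactly. The data below are synthetic, and a nearest-centroid rule stands in for the SVM so the example stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 60 samples x 920 statistical features, two crack
# levels; the class signal lives in the first 10 features.
n, d = 60, 920
labels = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[labels == 1, :10] += 6.0

# PCA via SVD of the centred data; keep the top-k principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
Z = Xc @ Vt[:k].T  # low-dimensional scores, shape (60, 10)

# Nearest-centroid classification in PCA space (a lightweight stand-in
# for the SVM used in the paper).
c0 = Z[labels == 0].mean(axis=0)
c1 = Z[labels == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = float((pred == labels).mean())
```

    Because the class separation dominates the leading principal components, even this simple classifier separates the two synthetic crack levels; the paper's SVM adds margin maximization and kernels on top of the same reduced features.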

  2. Predicting antimicrobial peptides with improved accuracy by incorporating the compositional, physico-chemical and structural features into Chou’s general PseAAC

    Science.gov (United States)

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Saini, Varsha; Rao, Atmakuri Ramakrishna

    2017-01-01

    Antimicrobial peptides (AMPs) are important components of the innate immune system that have been found to be effective against disease-causing pathogens. Identification of AMPs through wet-lab experiments is expensive. Therefore, the development of an efficient computational tool is essential to identify the best candidate AMPs prior to in vitro experimentation. In this study, we made an attempt to develop a support vector machine (SVM) based computational approach for the prediction of AMPs with improved accuracy. Initially, compositional, physico-chemical and structural features of the peptides were generated and subsequently used as input to the SVM for prediction of AMPs. The proposed approach achieved higher accuracy than several existing approaches when compared on a benchmark dataset. Based on the proposed approach, an online prediction server, iAMPpred, has also been developed to help the scientific community in predicting AMPs; it is freely accessible at http://cabgrid.res.in:8080/amppred/. The proposed approach is believed to supplement the tools and techniques that have been developed in the past for the prediction of AMPs. PMID:28205576

  3. On Improving Accuracy of Finite-Element Solutions of the Effective-Mass Schrödinger Equation for Interdiffused Quantum Wells and Quantum Wires

    Science.gov (United States)

    Topalović, D. B.; Arsoski, V. V.; Pavlović, S.; Čukarić, N. A.; Tadić, M. Ž.; Peeters, F. M.

    2016-01-01

    We use the Galerkin approach and the finite-element method to numerically solve the effective-mass Schrödinger equation. The accuracy of the solution is explored as it varies with the range of the numerical domain. The model potentials are those of interdiffused semiconductor quantum wells and axially symmetric quantum wires. The model of a linear harmonic oscillator is also considered for comparison. It is demonstrated that the absolute error of the electron ground-state energy level exhibits a minimum at a certain domain range, which is thus considered to be optimal. This range is found to depend on the number of mesh nodes N approximately as α0 [ln(α2N)]^α1, where the values of the constants α0, α1, and α2 are determined by fitting the numerical data. The optimal range is also found to be a weak function of the diffusion length. Moreover, it is demonstrated that adapting the domain range to the optimal value leads to a substantial improvement in the accuracy of the solution of the Schrödinger equation. Supported by the Ministry of Education, Science, and Technological Development of Serbia and the Flemish fund for Scientific Research (FWO Vlaanderen)

  4. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy.

    Science.gov (United States)

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-03-11

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure fitted with targets are taken at specified intervals using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-square image matching so that sub-pixel targeting can be achieved to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments were performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, vibrating in a complex configuration with mixed bending and torsional motions at multiple frequencies simultaneously. The results of the present method showed good agreement with measurements by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape with sub-pixel accuracy. It has abundant potential applications in various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility.
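
    Sub-pixel targeting of the kind described (a correlation peak refined by local fitting) can be illustrated in one dimension. The sketch below uses a three-point parabola fit around the integer correlation peak, a common lightweight substitute for full least-square image matching; the signal and shift are invented:

```python
import numpy as np

def parabolic_subpixel(corr):
    """Refine the integer correlation peak to sub-pixel precision by
    fitting a parabola through the peak and its two neighbours."""
    i = int(np.argmax(corr))
    if i == 0 or i == len(corr) - 1:
        return float(i)  # peak at the edge: no refinement possible
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# Invented 1-D demo: a Gaussian target shifted by a non-integer amount.
x = np.arange(200, dtype=float)
true_shift = 3.4
template = np.exp(-0.5 * ((x - 100.0) / 5.0) ** 2)
image = np.exp(-0.5 * ((x - 100.0 - true_shift) / 5.0) ** 2)

# Correlate at integer lags 0..9, then refine the peak location.
corr = np.array([np.dot(template[: len(x) - L], image[L:]) for L in range(10)])
shift_est = float(parabolic_subpixel(corr))
```

    The refined estimate lands very close to the true non-integer shift of 3.4 pixels, whereas the raw correlation peak can only resolve whole pixels; least-square image matching generalizes this idea to full 2-D patches with geometric and radiometric parameters.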

  5. Improvement of the BALB/c-3T3 cell transformation assay: a tool for investigating cancer mechanisms and therapies.

    Science.gov (United States)

    Poburski, Doerte; Thierbach, René

    2016-01-01

    The identification of cancer-preventive or therapeutic substances, as well as the carcinogenic risk assessment of chemicals, is nowadays mostly dependent on animal studies. In vitro cell transformation assays mimic different stages of the in vivo neoplastic process and represent an excellent alternative for studying carcinogenesis and therapeutic options. In the BALB/c-3T3 two-stage transformation assay, cells are chemically transformed by treatment with MCA and TPA, and morphologically aberrant foci are finally visualized by Giemsa staining. In addition to the standard method, we show that it is possible to apply other chemicals in parallel to identify potentially preventive or therapeutic substances during the transformation process. Furthermore, we successfully combined the BALB/c cell transformation assay with several endpoint applications for protein analysis (immunoblot, subcellular fractionation and immunofluorescence) or energy parameter measurements (glucose and oxygen consumption) to elucidate cancer mechanisms in more detail. In our opinion, the BALB/c cell transformation assay proves to be an excellent model for investigating alterations in key proteins or energy parameters during the different stages of transformation, as well as therapeutic substances and their mode of action.

  6. Improved removal of ascorbate interference in the folin-ciocalteu assay of “total phenolic content”

    Science.gov (United States)

    The venerable Folin-Ciocalteu (F-C) assay for total phenolics can have severe limitations due to interference by ascorbic acid (AsA). For common fruit juices AsA interference can substantially exceed the magnitude of the total phenolic signal. Ascorbate oxidase (AO) has been a promising approach to ...

  7. Improved Folin-Ciocalteu assay of “total phenolic content” by removal of ascorbate and dehydroascorbate

    Science.gov (United States)

    The venerable and operationally simple Folin-Ciocalteu (F-C) assay for total phenolics can have severe limitations due to interference by ascorbic acid (AsA). For common fruit juices AsA interference can easily exceed the magnitude of the total phenolic signal itself. Ascorbate oxidase (AO) has been...

  8. Improved removal of ascorbate interference in the Folin-Ciocalteu assay of “total phenolic content"

    Science.gov (United States)

    The venerable Folin-Ciocalteu (F-C) assay for total phenolics can have severe limitations due to interference by ascorbic acid (AsA). For common fruit juices AsA interference can easily exceed the magnitude of the total phenolic signal itself. Ascorbate oxidase (AO) has been a promising approach to ...

  9. Improved HF183 quantitative real-time PCR assay for characterization of human fecal pollution in ambient surface water samples

    Science.gov (United States)

    Real-time quantitative PCR assays that target the human-associated HF183 bacterial cluster have been found to be some of the top performing methods for the characterization of human fecal pollution in ambient surface waters. The United States Environmental Protection Agency is planning to conduct a ...

  10. A practical method for improving the accuracy of rapid earthquake reporting with MSDP software

    Institute of Scientific and Technical Information of China (English)

    苏莉华; 赵晖; 李源; 魏玉霞

    2012-01-01

    Seismic events recorded by the Henan digital seismic network from 2008 to 2011, both inside the network and outside it (within 100 km of its boundary), were selected. These events were analyzed and compared using the MSDP software and, combined with day-to-day operational experience, practical methods for improving the accuracy of rapid earthquake reporting were summarized.

  11. Improved methods for capture, extraction, and quantitative assay of environmental DNA from Asian bigheaded carp (Hypophthalmichthys spp.).

    Directory of Open Access Journals (Sweden)

    Cameron R Turner

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species.

  12. Improved methods for capture, extraction, and quantitative assay of environmental DNA from Asian bigheaded carp (Hypophthalmichthys spp.).

    Science.gov (United States)

    Turner, Cameron R; Miller, Derryl J; Coyne, Kathryn J; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species.

  13. IMPROVEMENT OF ACCURACY OF RADIATIVE HEAT TRANSFER DIFFERENTIAL APPROXIMATION METHOD FOR MULTI DIMENSIONAL SYSTEMS BY MEANS OF AUTO-ADAPTABLE BOUNDARY CONDITIONS

    Directory of Open Access Journals (Sweden)

    K. V. Dobrego

    2015-01-01

    Differential approximation is derived from the radiation transfer equation by averaging over the solid angle. It is one of the more effective methods for engineering calculations of radiative heat transfer in complex three-dimensional thermal power systems with selective and scattering media. A new method for improving the accuracy of the differential approximation, based on the use of auto-adaptable boundary conditions, is introduced in the paper. The efficiency of the method is demonstrated for test 2D systems. Self-consistent auto-adaptable boundary conditions taking into consideration the nonorthogonal component of the radiation flux incident on the boundary are formulated. It is demonstrated that accounting for the nonorthogonal incident flux in multi-dimensional systems, such as furnaces, boilers, and combustion chambers, improves the accuracy of radiant flux simulations, particularly in the zones adjacent to the edges of the chamber. Test simulations utilizing the differential approximation method with traditional boundary conditions, the new self-consistent boundary conditions, and the "precise" discrete ordinates method were performed. The mean square errors of the resulting radiative fluxes calculated along the boundary of rectangular and triangular test areas were decreased 1.5-2 times by using auto-adaptable boundary conditions. Radiation flux gaps in the corner points of non-symmetric systems are revealed by using auto-adaptable boundary conditions; these cannot be obtained with the conventional boundary conditions.

  14. Improving the Accuracy of the Admission and Discharge Diagnosis Coincidence Rate by ICD Coding

    Institute of Scientific and Technical Information of China (English)

    王志国; 朱佳怀; 邹郢; 黄丽丽; 朱智明

    2015-01-01

    Objective: To improve the accuracy of the admission and discharge diagnosis coincidence rate with ICD-assisted judgment. Methods: Diagnosis data were extracted from the hospital information system. The consistency between physician judgment and ICD-code judgment was compared, and the inconsistent cases were analyzed. Results: The coincidence rate judged by ICD code was higher than that judged by physicians (99.21% vs. 54.31%, P < 0.0001). Of all 2145 cases, 70 judged by physicians were wrong. Conclusion: ICD-code-assisted judgment can improve the accuracy of the admission and discharge diagnosis coincidence rate and thus the quality of diagnosis.

  15. Investigation of Multi-Functional Ferroelectric Nanorod/Carbon Nanotube/Polymer Composites and Shape Memory Alloy Treatment for Vibration Control of Fire Control System to Improve Firing Accuracy

    Science.gov (United States)

    2015-08-10

    [Only fragments of this report abstract survive extraction.] Chemicals, glass equipment, a homogenizer, and hydrothermal chemical reactors were all required to produce the materials and to measure certain material properties. We have created and tested several sensors: one is a PANI/MWCNT composite; a second ...

  16. Diagnostic accuracy of real-time PCR assays targeting 16S rRNA and lipL32 genes for human leptospirosis in Thailand: a case-control study.

    Directory of Open Access Journals (Sweden)

    Janjira Thaipadungpanit

    BACKGROUND: Rapid PCR-based tests for the diagnosis of leptospirosis can provide information that contributes towards early patient management, but these have not been adopted in Thailand. Here, we compare the diagnostic sensitivity and specificity of two real-time PCR assays targeting rrs or lipL32 for the diagnosis of leptospirosis in northeast Thailand. METHODS/PRINCIPAL FINDINGS: A case-control study of 266 patients (133 cases of leptospirosis and 133 controls) was constructed to evaluate the diagnostic sensitivity and specificity (DSe & DSp) of both PCR assays. The median duration of illness prior to admission of cases was 4 days (IQR 2-5 days; range 1-12 days). DSe and DSp were determined using positive culture and/or microscopic agglutination test (MAT) as the gold standard. The DSe was higher for the rrs assay than for the lipL32 assay (56% (95% CI 47-64%) versus 43% (95% CI 34-52%), p<0.001). No cases were positive for the lipL32 assay alone. There was borderline evidence to suggest that the DSp of the rrs assay was lower than that of the lipL32 assay (90% (95% CI 83-94%) versus 93% (95% CI 88-97%), p = 0.06). Nine controls gave positive reactions for both assays and 5 controls gave a positive reaction for the rrs assay alone. The DSe of the rrs and lipL32 assays was high in the subgroup of 39 patients who were culture positive for Leptospira spp. (95% and 87%, respectively, p = 0.25). CONCLUSIONS/SIGNIFICANCE: Early detection of Leptospira using PCR is possible for more than half of patients presenting with leptospirosis and could contribute to individual patient care.
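
    Sensitivity figures like those above come from simple 2x2 counts, with interval estimates from a binomial method such as the Wilson score interval. A sketch (the count 74/133 is back-calculated from the reported 56% and is therefore illustrative):

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2.0 * n)) / denom
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n)) / denom
    return centre - half, centre + half

# rrs assay: roughly 74 of 133 confirmed cases PCR-positive (~56%).
dse = 74 / 133
lo, hi = wilson_ci(74, 133)
```

    With these counts the interval works out to roughly 47-64%, consistent with the confidence interval reported in the abstract.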

  17. Network time transfer accuracy improvement with a Kalman filter method

    Institute of Scientific and Technical Information of China (English)

    邢开亮; 尹义蓉; 黄永华; 高勇

    2011-01-01

    In communication networks, the transmission of coordinated universal time (UTC) is affected by transmission delay, which reduces timing accuracy. In this paper, a correction method based on the Kalman filter is proposed. Network delay is decomposed into three parts: fixed delay, jitter delay, and burst delay. A Kalman filter is employed to reduce the jitter delay, while a tracking gate is used to eliminate the burst delay, so that the total network delay approaches the fixed delay and timing accuracy is improved. To verify the improvement achieved by the Kalman filter, the SimEvents expansion module in MATLAB was used to build a terminal-to-terminal computer network transmission model. When the algorithm reaches steady state, the accuracy of the estimated time delay is improved by roughly 100 times compared with the measured time delay. On this basis, a real-time test platform embedding the MATLAB program was designed in LabVIEW to acquire and filter real communication network delays. Results on delay data measured over both wired and wireless networks indicate that the proposed correction method is effective.
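
    The jitter-reduction step can be sketched with a scalar Kalman filter tracking a nearly constant delay. Everything below (fixed delay, jitter variance, tuning constants) is invented for illustration, and the burst-delay tracking gate is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated one-way delay measurements: 50 ms fixed delay plus jitter.
fixed, jitter_sd = 50.0, 4.0
z = fixed + rng.normal(0.0, jitter_sd, size=500)

# Scalar Kalman filter with a random-walk state model:
#   x_k = x_{k-1} + w,  w ~ N(0, q);   z_k = x_k + v,  v ~ N(0, r)
q, r = 0.01, jitter_sd ** 2
x, p = z[0], r  # initialise from the first measurement
est = []
for zk in z:
    p = p + q                # predict: state variance grows by q
    k = p / (p + r)          # Kalman gain
    x = x + k * (zk - x)     # update with the innovation
    p = (1.0 - k) * p
    est.append(x)

err_raw = float(np.abs(z - fixed).mean())
err_kf = float(np.abs(np.array(est) - fixed).mean())
```

    With these settings the filtered series hugs the fixed delay far more tightly than the raw measurements, which is the mechanism behind the reported accuracy gain.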

  18. Revealing the essentiality of multiple archaeal pcna genes using a mutant propagation assay based on an improved knockout method

    DEFF Research Database (Denmark)

    Zhang, Changyi; Guo, Li; Deng, Ling;

    2010-01-01

    Organisms belonging to the Crenarchaeota lineage contain three PCNA subunits (proliferating cell nuclear antigen), while those in Euryarchaeota have only one, as in Eukarya. To study the mechanism of archaeal sliding clamps, we sought to generate knockouts for each pcna gene in Sulfolobus islandicus. ... genes are absolutely required for host cell viability. Because the only prerequisite for this assay is to generate a MID transformant, this approach can be applied generally to any microorganism proficient in homologous recombination.

  19. Improved diagnostic PCR assay for Actinobacillus pleuropneumoniae based on the nucleotide sequence of an outer membrane lipoprotein

    DEFF Research Database (Denmark)

    Gram, Trine; Ahrens, Peter

    1998-01-01

    The gene (omlA) coding for an outer membrane protein of Actinobacillus pleuropneumoniae serotypes 1 and 5 has been described earlier and has formed the basis for the development of a specific PCR assay. The corresponding regions of all 12 A. pleuropneumoniae reference strains of biovar 1 were sequenced. ... Species related to A. pleuropneumoniae or isolated from pigs were assayed. They were all found negative in the PCR, as were tonsil cultures from 50 pigs of an A. pleuropneumoniae-negative herd. The sensitivity assessed by agarose gel analysis of the PCR product was 10(2) CFU/PCR test tube. The specificity and sensitivity of this PCR compared to those of culture suggest the use of this PCR for routine identification of A. pleuropneumoniae.

  20. Improved Method for Rotational Accuracy of Flexure Hinges

    Institute of Scientific and Technical Information of China (English)

    裴旭; 宗光华; 于靖军

    2013-01-01

    The rotation pivot of a traditional notch-type flexure hinge drifts as the hinge rotates. When the rotation angle of a flexure hinge is small and the axial displacement of the rotation pivot is far smaller than the displacement perpendicular to the axis, the center-shift model of the flexure hinge can be simplified. A design method is proposed in which two notch-type hinges connecting two rigid bodies are placed orthogonally with their rotation axes coincident, in order to restrain the center-shift of the flexure hinges. A planar virtual center of motion (VCM) mechanism is introduced to realize a virtual cross constraint between the hinges. With this method, hinges with different notch shapes can be improved to increase their rotational accuracy. Two combined flexure structures, consisting respectively of circular-notch and right-angle-notch hinges, were designed and compared with traditional flexure hinges by finite element simulation; the results confirm that the method improves rotational accuracy.

  1. Improving the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins by guided-learning through a two-layer neural network.

    Science.gov (United States)

    Faraggi, Eshel; Xue, Bin; Zhou, Yaoqi

    2009-03-01

    This article attempts to increase the prediction accuracy of residue solvent accessibility and real-value backbone torsion angles of proteins through improved learning. Most methods developed for improving the backpropagation algorithm of artificial neural networks are limited to small neural networks. Here, we introduce a guided-learning method suitable for networks of any size. The method employs a part of the weights for guiding and the other part for training and optimization. We demonstrate this technique by predicting residue solvent accessibility and real-value backbone torsion angles of proteins. In this application, the guiding factor is designed to satisfy the intuitive condition that, for most residues, the contribution of a residue to the structural properties of another residue is smaller for greater separation in the protein-sequence distance between the two residues. We show that the guided-learning method achieves a 2-4% reduction in 10-fold cross-validated mean absolute errors (MAE) for predicting residue solvent accessibility and backbone torsion angles, regardless of the size of the database, the number of hidden layers and the size of the input windows. This, together with the introduction of a two-layer neural network with a bipolar activation function, leads to a new method that has an MAE of 0.11 for residue solvent accessibility, 36 degrees for psi, and 22 degrees for phi. The method is available as a Real-SPINE 3.0 server at http://sparks.informatics.iupui.edu.
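    The guiding idea above, that a residue's influence on another should shrink with their sequence separation, can be sketched as distance-dependent damping of the input-window weights. This is an illustrative sketch only; the exponential decay form, the `tau` constant and the window size are assumptions, not the parameters of the published method.

```python
import math

def guiding_factor(separation, tau=4.0):
    """Hypothetical guiding factor: the influence of a neighboring
    residue decays with its sequence distance from the center."""
    return math.exp(-abs(separation) / tau)

def guided_weights(raw_weights, window=7):
    """Scale the raw input-window weights so that positions far from
    the central residue contribute less to the prediction."""
    half = window // 2
    return [w * guiding_factor(i - half) for i, w in enumerate(raw_weights)]

# a uniform window: the center keeps full weight, the edges are damped
w = guided_weights([1.0] * 7)
```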

  2. Improving protein identification sensitivity by combining MS and MS/MS information for shotgun proteomics using LTQ-Orbitrap high mass accuracy data.

    Science.gov (United States)

    Lu, Bingwen; Motoyama, Akira; Ruse, Cristian; Venable, John; Yates, John R

    2008-03-15

    We investigated and compared three approaches for shotgun protein identification by combining MS and MS/MS information using LTQ-Orbitrap high mass accuracy data. In the first approach, we employed a unique mass identifier method where MS peaks matched to peptides predicted from proteins identified from an MS/MS database search are first subtracted before using the MS peaks as unique mass identifiers for protein identification. In the second method, we used an accurate mass and time tag method by building a potential mass and retention time database from previous MudPIT analyses. For the third method, we used a peptide mass fingerprinting-like approach in combination with a randomized database for protein identification. We show that we can improve protein identification sensitivity for low-abundance proteins by combining MS and MS/MS information. Furthermore, "one-hit wonders" from MS/MS database searching can be further substantiated by MS information and the approach improves the identification of low-abundance proteins. The advantages and disadvantages for the three approaches are then discussed.
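    The subtraction step of the first (unique mass identifier) approach, removing MS peaks already explained by MS/MS-identified peptides before using the remainder as identifiers, might look like the following. The function name and the 5 ppm tolerance are assumptions for the sketch, not values from the study.

```python
def subtract_matched_peaks(ms_peaks, identified_peptide_masses, tol_ppm=5.0):
    """Drop MS peaks already explained by peptides from the MS/MS
    database search, leaving candidate unique-mass identifiers."""
    def matches(peak, mass):
        # relative mass deviation in parts per million
        return abs(peak - mass) / mass * 1e6 <= tol_ppm
    return [p for p in ms_peaks
            if not any(matches(p, m) for m in identified_peptide_masses)]

# 742.40 lies within 5 ppm of an identified peptide mass and is removed
remaining = subtract_matched_peaks([500.25, 742.40, 1023.55], [742.401])
```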

  3. The Extension of the German CERAD Neuropsychological Assessment Battery with Tests Assessing Subcortical, Executive and Frontal Functions Improves Accuracy in Dementia Diagnosis

    Directory of Open Access Journals (Sweden)

    Nicole S. Schmid

    2014-08-01

    Full Text Available Background/Aims: Alzheimer's disease (AD) is the most common form of dementia. Neuropsychological assessment of individuals with AD primarily focuses on tests of cortical functioning. However, in clinical practice, the underlying pathologies of dementia are unknown, and a focus on cortical functioning may neglect other domains of cognition, including subcortical and executive functioning. The current study aimed to improve the diagnostic discrimination ability of the Consortium to Establish a Registry for Alzheimer's Disease - Neuropsychological Assessment Battery (CERAD-NAB) by adding three tests of executive functioning and mental speed (Trail Making Tests A and B, S-Words). Methods: Logistic regression analyses of 594 normal controls (NC), 326 patients with mild AD and 224 patients with other types of dementia (OD) were carried out, and the area under the curve values were compared to those of the CERAD-NAB alone. Results: All comparisons except AD-OD (65.5%) showed excellent classification rates (NC-AD: 92.7%; NC-OD: 89.0%; NC-all patients: 91.0%) and a superior diagnostic accuracy of the extended version. Conclusion: Our findings suggest that these three tests provide a sensible addition to the CERAD-NAB and can improve the neuropsychological diagnosis of dementia.

  4. Urinary Biomarker Panel to Improve Accuracy in Predicting Prostate Biopsy Result in Chinese Men with PSA 4–10 ng/mL

    Science.gov (United States)

    Zhou, Yongqiang; Li, Yun; Li, Xiangnan

    2017-01-01

    This study aims to evaluate the effectiveness and clinical performance of a panel of urinary biomarkers to diagnose prostate cancer (PCa) in Chinese men with PSA levels between 4 and 10 ng/mL. A total of 122 patients with PSA levels between 4 and 10 ng/mL who underwent consecutive prostate biopsy at three hospitals in China were recruited. First-catch urine samples were collected after an attentive prostate massage. Urinary mRNA levels were measured by quantitative real-time polymerase chain reaction (qRT-PCR). The predictive accuracy of these biomarkers and prediction models was assessed by the area under the curve (AUC) of the receiver-operating characteristic (ROC) curve. The diagnostic accuracy of PCA3, PSGR, and MALAT-1 was superior to that of PSA. PCA3 performed best, with an AUC of 0.734 (95% CI: 0.641, 0.828) followed by MALAT-1 with an AUC of 0.727 (95% CI: 0.625, 0.829) and PSGR with an AUC of 0.666 (95% CI: 0.575, 0.749). The diagnostic panel with age, prostate volume, % fPSA, PCA3 score, PSGR score, and MALAT-1 score yielded an AUC of 0.857 (95% CI: 0.780, 0.933). At a threshold probability of 20%, 47.2% of unnecessary biopsies may be avoided whereas only 6.2% of PCa cases may be missed. This urinary panel may improve the current diagnostic modality in Chinese men with PSA levels between 4 and 10 ng/mL.
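    AUC values like those reported above can be computed for any score list with a small rank-based helper. This is a generic sketch of ROC-AUC via the Mann-Whitney identity, not the authors' statistical pipeline; the toy scores below are invented.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    identity: the probability that a randomly chosen positive
    outscores a randomly chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy panel scores: higher score -> more likely positive biopsy
print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))  # -> 1.0
```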

  5. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    Science.gov (United States)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused by x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy of using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused by x-ray scattering and retain consistent HU values of breast tissues.
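    At its core, an SPR-based correction recovers the primary (scatter-free) signal from the measured total in each projection pixel. A minimal sketch, assuming the SPR has already been modeled from the BPA measurements:

```python
def scatter_corrected_primary(total_signal, spr):
    """Estimate the primary projection signal from the measured total,
    given a modeled scatter-to-primary ratio (SPR).
    Since total = primary * (1 + SPR), primary = total / (1 + SPR)."""
    return total_signal / (1.0 + spr)

print(scatter_corrected_primary(1.5, 0.5))  # -> 1.0
```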

  6. A novel computer-assisted image analysis of [123I]β-CIT SPECT images improves the diagnostic accuracy of parkinsonian disorders

    Energy Technology Data Exchange (ETDEWEB)

    Goebel, Georg [Innsbruck Medical University, Department of Medical Statistics, Informatics and Health Economics, Innsbruck (Austria); Seppi, Klaus; Wenning, Gregor K.; Poewe, Werner; Scherfler, Christoph [Innsbruck Medical University, Department of Neurology, Innsbruck (Austria); Donnemiller, Eveline; Warwitz, Boris; Virgolini, Irene [Innsbruck Medical University, Department of Nuclear Medicine, Innsbruck (Austria)

    2011-04-15

    The purpose of this study was to develop an observer-independent algorithm for the correct classification of dopamine transporter SPECT images as Parkinson's disease (PD), multiple system atrophy parkinson variant (MSA-P), progressive supranuclear palsy (PSP) or normal. A total of 60 subjects with clinically probable PD (n = 15), MSA-P (n = 15) and PSP (n = 15), and 15 age-matched healthy volunteers, were studied with the dopamine transporter ligand [123I]β-CIT. Parametric images of the specific-to-nondisplaceable equilibrium partition coefficient (BPND) were generated. Following a voxel-wise ANOVA, cut-off values were calculated from the voxel values of the resulting six post-hoc t-test maps. The percentages of the volume of an individual BPND image remaining below and above the cut-off values were determined. The higher percentage of image volume from all six cut-off matrices was used to classify an individual's image. For validation, the algorithm was compared to a conventional region of interest analysis. The predictive diagnostic accuracy of the algorithm in the correct assignment of a [123I]β-CIT SPECT image was 83.3% and increased to 93.3% on merging the MSA-P and PSP groups. In contrast, the multinomial logistic regression of mean region of interest values of the caudate, putamen and midbrain revealed a diagnostic accuracy of 71.7%. In contrast to a rater-driven approach, this novel method was superior in classifying [123I]β-CIT SPECT images as one of four diagnostic entities. In combination with the investigator-driven visual assessment of SPECT images, this clinical decision support tool would help to improve the diagnostic yield of [123I]β-CIT SPECT in patients presenting with parkinsonism at their initial visit. (orig.)

  7. Urinary Biomarker Panel to Improve Accuracy in Predicting Prostate Biopsy Result in Chinese Men with PSA 4–10 ng/mL

    Directory of Open Access Journals (Sweden)

    Yongqiang Zhou

    2017-01-01

    Full Text Available This study aims to evaluate the effectiveness and clinical performance of a panel of urinary biomarkers to diagnose prostate cancer (PCa) in Chinese men with PSA levels between 4 and 10 ng/mL. A total of 122 patients with PSA levels between 4 and 10 ng/mL who underwent consecutive prostate biopsy at three hospitals in China were recruited. First-catch urine samples were collected after an attentive prostate massage. Urinary mRNA levels were measured by quantitative real-time polymerase chain reaction (qRT-PCR). The predictive accuracy of these biomarkers and prediction models was assessed by the area under the curve (AUC) of the receiver-operating characteristic (ROC) curve. The diagnostic accuracy of PCA3, PSGR, and MALAT-1 was superior to that of PSA. PCA3 performed best, with an AUC of 0.734 (95% CI: 0.641, 0.828), followed by MALAT-1 with an AUC of 0.727 (95% CI: 0.625, 0.829) and PSGR with an AUC of 0.666 (95% CI: 0.575, 0.749). The diagnostic panel with age, prostate volume, % fPSA, PCA3 score, PSGR score, and MALAT-1 score yielded an AUC of 0.857 (95% CI: 0.780, 0.933). At a threshold probability of 20%, 47.2% of unnecessary biopsies may be avoided whereas only 6.2% of PCa cases may be missed. This urinary panel may improve the current diagnostic modality in Chinese men with PSA levels between 4 and 10 ng/mL.

  8. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    Energy Technology Data Exchange (ETDEWEB)

    Lambron, Julien; Frampas, Eric; Toulgoat, Frederique [University Hospital, Department of Radiology, Nantes (France); Rakotonjanahary, Josue [University Hospital, Department of Pediatric Oncology, Angers (France); University Paris Diderot, INSERM CIE5 Robert Debre Hospital, Assistance Publique-Hopitaux de Paris (AP-HP), Paris (France); Loisel, Didier [University Hospital, Department of Radiology, Angers (France); Carli, Emilie de; Rialland, Xavier [University Hospital, Department of Pediatric Oncology, Angers (France); Delion, Matthieu [University Hospital, Department of Neurosurgery, Angers (France)

    2016-02-15

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm³ (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)
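    The interobserver agreement statistic used above is Cohen's kappa. A minimal stdlib sketch for two raters' categorical readings (a generic implementation, not the authors' statistical software):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: agreement beyond chance,
    (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal category frequencies
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

print(cohens_kappa([1, 2, 1], [1, 2, 1]))  # perfect agreement -> 1.0
```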

  9. Combining pseudo dinucleotide composition with the Z curve method to improve the accuracy of predicting DNA elements: a case study in recombination spots.

    Science.gov (United States)

    Dong, Chuan; Yuan, Ya-Zhou; Zhang, Fa-Zhan; Hua, Hong-Li; Ye, Yuan-Nong; Labena, Abraham Alemayehu; Lin, Hao; Chen, Wei; Guo, Feng-Biao

    2016-08-16

    Pseudo dinucleotide composition (PseDNC) and Z curve showed excellent performance in the classification issues of nucleotide sequences in bioinformatics. Inspired by the principle of Z curve theory, we improved PseDNC to give the phase-specific PseDNC (psPseDNC). In this study, we used the prediction of recombination spots as a case to illustrate the capability of psPseDNC and also PseDNC fused with Z curve theory based on a novel machine learning method named large margin distribution machine (LDM). We verified that combining the two widely used approaches could generate better performance compared to only using PseDNC with a support vector machine based (SVM-based) model. The best Matthews correlation coefficient (MCC) achieved by our LDM-based model was 0.7037 through the rigorous jackknife test and improved by ∼6.6%, ∼3.2%, and ∼2.4% compared with three previous studies. Similarly, the accuracy was improved by 3.2% compared with our previous iRSpot-PseDNC web server through an independent data test. These results demonstrate that the joint use of PseDNC and Z curve enhances performance and can extract more information from a biological sequence. To facilitate research in this area, we constructed a user-friendly web server for predicting hot/cold spots, HcsPredictor, which can be freely accessed from . In summary, we provided a united algorithm by integrating Z curve with PseDNC. We hope this united algorithm could be extended to other classification issues in DNA elements.
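    The Z curve component of the combined feature set maps a DNA sequence to three cumulative coordinates. A sketch of the standard transform (the psPseDNC fusion itself is not reproduced here):

```python
def z_curve(seq):
    """Standard Z curve of a DNA sequence: three cumulative components
    separating purines vs pyrimidines (x), amino vs keto bases (y),
    and weak vs strong hydrogen-bonding bases (z)."""
    counts = {"A": 0, "C": 0, "G": 0, "T": 0}
    xs, ys, zs = [], [], []
    for base in seq.upper():
        counts[base] += 1
        a, c, g, t = counts["A"], counts["C"], counts["G"], counts["T"]
        xs.append((a + g) - (c + t))  # purine vs pyrimidine
        ys.append((a + c) - (g + t))  # amino vs keto
        zs.append((a + t) - (g + c))  # weak vs strong H-bonding
    return xs, ys, zs

xs, ys, zs = z_curve("ATGC")
```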

  10. Significant Improvement of Puncture Accuracy and Fluoroscopy Reduction in Percutaneous Transforaminal Endoscopic Discectomy With Novel Lumbar Location System: Preliminary Report of Prospective Hello Study.

    Science.gov (United States)

    Fan, Guoxin; Guan, Xiaofei; Zhang, Hailong; Wu, Xinbo; Gu, Xin; Gu, Guangfei; Fan, Yunshan; He, Shisheng

    2015-12-01

    Prospective nonrandomized control study. The study aimed to investigate the implication of the HE's Lumbar LOcation (HELLO) system in improving puncture accuracy and reducing fluoroscopy in percutaneous transforaminal endoscopic discectomy (PTED). Percutaneous transforaminal endoscopic discectomy is one of the most popular minimally invasive spine surgeries and depends heavily on repeated fluoroscopy. Increased fluoroscopy induces higher radiation exposure for surgeons and patients. Accurate puncture in PTED can be achieved by accurate preoperative location and a definite trajectory. The HELLO system mainly consists of a self-made surface locator and a puncture-assisted device. The surface locator was used to identify the exact puncture target, and the puncture-assisted device was used to optimize the puncture trajectory. Patients who had single L4/5 or L5/S1 lumbar intervertebral disc herniation and underwent PTED were included in the study. Patients receiving the HELLO system were assigned to Group A, and those receiving the conventional method were assigned to Group B. The primary endpoints were puncture times and fluoroscopy times; the secondary endpoints were location time and operation time. A total of 62 patients who received PTED were included in this study. The average age was 45.35 ± 8.70 years in Group A and 46.61 ± 7.84 years in Group B (P = 0.552). There were no significant differences in gender, body mass index, conservative time, or surgical segment between the 2 groups (P > 0.05). The puncture times were 1.19 ± 0.48 in Group A and 6.03 ± 1.87 in Group B (P < 0.001). The advantage of the HELLO system is accurate preoperative location and a definite trajectory. This preliminary report indicates that the HELLO system significantly improves the puncture accuracy of PTED and reduces fluoroscopy times, preoperative location time, and operation time. (ChiCTR-ICR-15006730)

  11. An improved method for the isolation of rat alveolar type II lung cells: Use in the Comet assay to determine DNA damage induced by cigarette smoke.

    Science.gov (United States)

    Dalrymple, Annette; Ordoñez, Patricia; Thorne, David; Dillon, Debbie; Meredith, Clive

    2015-06-01

    Smoking is a cause of serious diseases, including lung cancer, emphysema, chronic bronchitis and heart disease. DNA damage is thought to be one of the mechanisms by which cigarette smoke (CS) initiates disease in the lung. Indeed, CS-induced DNA damage can be measured in vitro and in vivo. The potential of the Comet assay to measure DNA damage in isolated rat lung alveolar type II epithelial cells (AEC II) was explored as a means to include a genotoxicity end-point in rodent sub-chronic inhalation studies. In this study, published AEC II isolation methods were improved to yield viable cells suitable for use in the Comet assay. The improved method reduced the level of basal DNA damage and DNA repair in isolated AEC II. CS-induced DNA damage could also be quantified in isolated cells following a single or a 5-day CS exposure. In conclusion, the Comet assay has the potential to determine CS- or other aerosol-induced DNA damage in AEC II isolated from rodents used in sub-chronic inhalation studies.

  12. Fast T2 Mapping With Improved Accuracy Using Undersampled Spin-Echo MRI and Model-Based Reconstructions With a Generating Function

    Science.gov (United States)

    Petrovic, Andreas; Uecker, Martin; Knoll, Florian; Frahm, Jens

    2015-01-01

    A model-based reconstruction technique for accelerated T2 mapping with improved accuracy is proposed using undersampled Cartesian spin-echo magnetic resonance imaging (MRI) data. The technique employs an advanced signal model for T2 relaxation that accounts for contributions from indirect echoes in a train of multiple spin echoes. An iterative solution of the nonlinear inverse reconstruction problem directly estimates spin-density and T2 maps from undersampled raw data. The algorithm is validated for simulated data as well as phantom and human brain MRI at 3T. The performance of the advanced model is compared to conventional pixel-based fitting of echo-time images from fully sampled data. The proposed method yields more accurate T2 values than the mono-exponential model and allows for retrospective undersampling factors of at least 6. Although limitations are observed for very long T2 relaxation times, respective reconstruction problems may be overcome by a gradient dampening approach. The analytical gradient of the utilized cost function is included as an Appendix. The source code is made available to the community. PMID:24988592
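    The conventional baseline the model-based method is compared against, pixelwise mono-exponential fitting of echo-time images, can be sketched as a log-linear least-squares fit. This simplified helper ignores noise floors and indirect-echo contributions, which is precisely the limitation the abstract's advanced signal model addresses:

```python
import math

def fit_t2_monoexp(echo_times, signals):
    """Conventional pixelwise T2 estimate: log-linear least-squares
    fit of S(TE) = S0 * exp(-TE / T2). Returns (S0, T2)."""
    n = len(echo_times)
    ys = [math.log(s) for s in signals]
    mean_t = sum(echo_times) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(echo_times, ys))
             / sum((t - mean_t) ** 2 for t in echo_times))
    s0 = math.exp(mean_y - slope * mean_t)
    return s0, -1.0 / slope

# synthetic noiseless decay with S0 = 100 and T2 = 80 ms
tes = [10, 20, 40, 80]
sig = [100 * math.exp(-t / 80) for t in tes]
s0, t2 = fit_t2_monoexp(tes, sig)
```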

  13. Artificial Neural Network-Based Constitutive Relationship of Inconel 718 Superalloy Construction and Its Application in Accuracy Improvement of Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Junya Lv

    2017-01-01

    Full Text Available The application of an accurate constitutive relationship in finite element simulation contributes significantly to accurate simulation results, which play critical roles in process design and optimization. In this investigation, the true stress-strain data of an Inconel 718 superalloy were obtained from a series of isothermal compression tests conducted over a wide temperature range of 1153–1353 K and strain rate range of 0.01–10 s−1 on a Gleeble 3500 testing machine (DSI, St. Paul, DE, USA). The constitutive relationship was then modeled by an optimally-constructed and well-trained back-propagation artificial neural network (ANN). Evaluation of the ANN model revealed that it has admirable performance in characterizing and predicting the flow behaviors of the Inconel 718 superalloy. Consequently, the developed ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions and to construct the continuous mapping relationship among temperature, strain rate, strain and stress. Finally, the constructed ANN was implanted in a finite element solver through the interface of the "URPFLO" subroutine to simulate the isothermal compression tests. The results show that integrating the finite element method with the ANN model can significantly improve the accuracy of numerical simulations of hot forming processes.

  14. Fast T2 Mapping with Improved Accuracy Using Undersampled Spin-echo MRI and Model-based Reconstructions with a Generating Function

    CERN Document Server

    Sumpf, Tilman J; Uecker, Martin; Knoll, Florian; Frahm, Jens

    2014-01-01

    A model-based reconstruction technique for accelerated T2 mapping with improved accuracy is proposed using undersampled Cartesian spin-echo MRI data. The technique employs an advanced signal model for T2 relaxation that accounts for contributions from indirect echoes in a train of multiple spin echoes. An iterative solution of the nonlinear inverse reconstruction problem directly estimates spin-density and T2 maps from undersampled raw data. The algorithm is validated for simulated data as well as phantom and human brain MRI at 3 T. The performance of the advanced model is compared to conventional pixel-based fitting of echo-time images from fully sampled data. The proposed method yields more accurate T2 values than the mono-exponential model and allows for undersampling factors of at least 6. Although limitations are observed for very long T2 relaxation times, respective reconstruction problems may be overcome by a gradient dampening approach. The analytical gradient of the utilized cost function is included...

  15. Improvement on the competitive binding assay for the measurement of cyclic AMP by using ammonium sulphate precipitation.

    Science.gov (United States)

    Santa-Coloma, T A; Bley, M A; Charreau, E H

    1987-08-01

    The protein-binding assay developed by Brown, Albano, Ekins, Sgherzi & Tampion [(1971) Biochem. J. 121, 561-562] and Brown, Ekins & Albano [(1972) Adv. Cyclic Nucleotide Res. 2, 25-40] was modified by using precipitation of the protein-cyclic AMP complex with (NH4)2SO4 instead of adsorption of the free nucleotide on charcoal. The half-life of the protein-cyclic AMP complex obtained in the presence of charcoal was lower than that of the (NH4)2SO4-precipitated complex. Consequently, owing to the greater stability of the precipitated protein-cyclic AMP complex, this method allows more accurate and reproducible determinations.

  16. Effect of N-acetylcysteine on the accuracy of the prothrombin time assay of plasma coagulation factor II plus VII plus X activity in subjects infused with the drug. Influence of time and temperature

    DEFF Research Database (Denmark)

    Thorsen, S.; Teisner, A.; Jensen, S.A.;

    2009-01-01

    Objectives: The prothrombin time (PT) assay of factor II+VII+X activity is an important predictor of liver damage in paracetamol-poisoned patients. Interpretation of results is complicated by the fact that the antidote, acetylcysteine (NAC), depresses this activity. The aim was to investigate if NAC influences

  17. Effect of N-acetylcysteine on the accuracy of the prothrombin time assay of plasma coagulation factor II+VII+X activity in subjects infused with the drug. Influence of time and temperature

    DEFF Research Database (Denmark)

    Thorsen, Sixtus; Teisner, Ane; Jensen, Søren Astrup;

    2009-01-01

    OBJECTIVES: The prothrombin time (PT) assay of factor II+VII+X activity is an important predictor of liver damage in paracetamol-poisoned patients. Interpretation of results is complicated by the fact that the antidote, acetylcysteine (NAC), depresses this activity. The aim was to investigate if NAC influences

  18. Stage-specific adhesion of Leishmania promastigotes to sand fly midguts assessed using an improved comparative binding assay.

    Directory of Open Access Journals (Sweden)

    Raymond Wilson

    Full Text Available BACKGROUND: The binding of Leishmania promastigotes to the midgut epithelium is regarded as an essential part of the life-cycle in the sand fly vector, enabling the parasites to persist beyond the initial blood meal phase and establish the infection. However, the precise nature of the promastigote stage(s that mediate binding is not fully understood. METHODOLOGY/PRINCIPAL FINDINGS: To address this issue we have developed an in vitro gut binding assay in which two promastigote populations are labelled with different fluorescent dyes and compete for binding to dissected sand fly midguts. Binding of procyclic, nectomonad, leptomonad and metacyclic promastigotes of Leishmania infantum and L. mexicana to the midguts of blood-fed, female Lutzomyia longipalpis was investigated. The results show that procyclic and metacyclic promastigotes do not bind to the midgut epithelium in significant numbers, whereas nectomonad and leptomonad promastigotes both bind strongly and in similar numbers. The assay was then used to compare the binding of a range of different parasite species (L. infantum, L. mexicana, L. braziliensis, L. major, L. tropica to guts dissected from various sand flies (Lu. longipalpis, Phlebotomus papatasi, P. sergenti. The results of these comparisons were in many cases in line with expectations, the natural parasite binding most effectively to its natural vector, and no examples were found where a parasite was unable to bind to its natural vector. However, there were interesting exceptions: L. major and L. tropica being able to bind to Lu. longipalpis better than L. infantum; L. braziliensis was able to bind to P. papatasi as well as L. major; and significant binding of L. major to P. sergenti and L. tropica to P. papatasi was observed. CONCLUSIONS/SIGNIFICANCE: The results demonstrate that Leishmania gut binding is strictly stage-dependent, is a property of those forms found in the middle phase of development (nectomonad and leptomonad

  19. The cytotoxicity of polycationic iron oxide nanoparticles: Common endpoint assays and alternative approaches for improved understanding of cellular response mechanism

    Directory of Open Access Journals (Sweden)

    Hoskins Clare

    2012-04-01

    Full Text Available Abstract Background: Iron oxide magnetic nanoparticles (MNPs) have an increasing number of biomedical applications. As such, in vitro characterisation is essential to ensure the bio-safety of these particles. Little is known about the cellular interaction or the effect on membrane integrity upon exposure to these MNPs. Here we synthesised Fe3O4 and surface-coated it with poly(ethylenimine) (PEI) and poly(ethylene glycol) (PEG) to achieve particles of varying positive surface charge, and used them as model MNPs to evaluate the relative utility and limitations of cellular assays commonly applied for nanotoxicity assessment. An alternative approach, atomic force microscopy (AFM), was explored for the analysis of membrane structure and cell morphology upon interaction with the MNPs. The particles were tested in vitro on human SH-SY5Y, MCF-7 and U937 cell lines for reactive oxygen species (ROS) production, lipid peroxidation (LPO), LDH leakage and their overall cytotoxic effect. These results were compared with AFM topography imaging carried out on fixed cell lines. Results: Successful particle synthesis and coating were characterised using FTIR, PCS, TEM and ICP. The particle size from TEM was 30 nm (−16.9 mV), which increased to 40 nm (+55.6 mV) upon coating with PEI and subsequently to 50 nm (+31.2 mV) with PEG coating. Both particles showed excellent stability not only at neutral pH but also in the acidic environment of pH 4.6 in the presence of sodium citrate. The higher-surface-charge MNP-PEI resulted in an increased cytotoxic effect and ROS production in all cell lines compared with MNP-PEI-PEG. In general, an effect on cell membrane integrity was observed only in SH-SY5Y and MCF-7 cells with MNP-PEI, as determined by LDH leakage and LPO production. AFM topography images showed consistently that both the highly charged MNP-PEI and the less charged MNP-PEI-PEG caused cell morphology changes, possibly due to membrane disruption and cytoskeleton remodelling. Conclusions

  20. Improved dose calculation accuracy for low energy brachytherapy by optimizing dual energy CT imaging protocols for noise reduction using sinogram affirmed iterative reconstruction.

    Science.gov (United States)

    Landry, Guillaume; Gaudreault, Mathieu; van Elmpt, Wouter; Wildberger, Joachim E; Verhaegen, Frank

    2016-03-01

    The goal of this study was to evaluate the noise reduction achievable from dual energy computed tomography (CT) imaging (DECT) using filtered backprojection (FBP) and iterative image reconstruction algorithms combined with increased imaging exposure. We evaluated the data in the context of imaging for brachytherapy dose calculation, where accurate quantification of electron density ρe and effective atomic number Zeff is beneficial. A dual source CT scanner was used to scan a phantom containing tissue mimicking inserts. DECT scans were acquired at 80 kVp/140Sn kVp (where Sn stands for tin filtration) and 100 kVp/140Sn kVp, using the same values of the CT dose index CTDIvol for both settings as a measure for the radiation imaging exposure. Four CTDIvol levels were investigated. Images were reconstructed using FBP and sinogram affirmed iterative reconstruction (SAFIRE) with strengths 1, 3 and 5. From the DECT scans two material quantities were derived, Zeff and ρe. DECT images were used to assign material types and the amount of improperly assigned voxels was quantified for each protocol. The dosimetric impact of improperly assigned voxels was evaluated with Geant4 Monte Carlo (MC) dose calculations for an (125)I source in numerical phantoms. Standard deviations for Zeff and ρe were reduced by up to a factor of ∼2 when using SAFIRE with strength 5 compared to FBP. Standard deviations on Zeff and ρe as low as 0.15 and 0.006 were achieved for the muscle insert representing typical soft tissue using a CTDIvol of 40 mGy and 3 mm slice thickness. Dose calculation accuracy was generally improved when using SAFIRE. Mean (maximum absolute) dose errors of up to 1.3% (21%) with FBP were reduced to less than 1% (6%) with SAFIRE at a CTDIvol of 10 mGy. Using a CTDIvol of 40 mGy and SAFIRE yielded mean dose calculation errors of the order of 0.6%, which was the MC dose calculation precision in this study, and no error was larger than ±2.5% as opposed to errors of up to -4% with FBP. This

  1. Quantitation of minimal disease levels in chronic lymphocytic leukemia using a sensitive flow cytometric assay improves the prediction of outcome and can be used to optimize therapy.

    Science.gov (United States)

    Rawstron, A C; Kennedy, B; Evans, P A; Davies, F E; Richards, S J; Haynes, A P; Russell, N H; Hale, G; Morgan, G J; Jack, A S; Hillmen, P

    2001-07-01

    Previous studies have suggested that the level of residual disease at the end of therapy predicts outcome in chronic lymphocytic leukemia (CLL). However, available methods for detecting CLL cells are either insensitive or not routinely applicable. A flow cytometric assay was developed that can differentiate CLL cells from normal B cells on the basis of their CD19/CD5/CD20/CD79b expression. The assay is rapid and can detect one CLL cell in 10(4) to 10(5) leukocytes in all patients. We have compared this assay to conventional assessment in 104 patients treated with CAMPATH-1H and/or autologous transplant. During CAMPATH-1H therapy, circulating CLL cells were rapidly depleted in responding patients, but remained detectable in nonresponders. Patients with more than 0.01 x 10(9)/L circulating CLL cells always had significant (> 5%) marrow disease, and blood monitoring could be used to time marrow assessments. Among the 25 of 104 patients achieving complete remission by National Cancer Institute (NCI) criteria, the detection of residual bone marrow disease at more than 0.05% of leukocytes in 6 of 25 patients predicted significantly poorer event-free (P = .0001) and overall survival (P = .007). CLL cells were detectable at a median of 15.8 months (range, 5.5-41.8) posttreatment in 9 of 18 evaluable patients with less than 0.05% CLL cells at the end of treatment. All patients with detectable disease have progressively increasing disease levels on follow-up. The use of sensitive techniques, such as the flow assay described here, allows accurate quantitation of disease levels and provides an accurate method for guiding therapy and predicting outcome. These results suggest that the eradication of detectable disease may lead to improved survival and should be tested in future studies.

  2. Two Methods to Derive Ground-level Concentrations of PM2.5 with Improved Accuracy in the North China, Calibrating MODIS AOD and CMAQ Model Predictions

    Science.gov (United States)

    Lyu, Baolei; Hu, Yongtao; Chang, Howard; Russell, Armistead; Bai, Yuqi

    2016-04-01

    Reliable and accurate characterizations of ground-level PM2.5 concentrations are essential to understand pollution sources and evaluate human exposures. Monitoring networks can only provide direct point-level observations at a limited number of locations. At locations without monitors, there are generally two ways to estimate PM2.5 pollution levels. One is observations of aerosol properties from satellite-based remote sensing, such as Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD). The other is deterministic atmospheric chemistry models, such as the Community Multi-Scale Air Quality Model (CMAQ). In this study, we used a statistical spatio-temporal downscaler to calibrate the two datasets to monitor observations to derive fine-scale ground-level concentrations of PM2.5 with improved accuracy. We treated both MODIS AOD and CMAQ model predictions as biased proxy estimates of PM2.5 pollution levels. The downscaler employs a Bayesian framework to model the spatially and temporally varying coefficients of the two types of estimates in a linear regression setting, in order to correct their biases. For calibrating MODIS AOD in particular, a city-specific linear model was established to fill in missing AOD values, and a novel interpolation-based variable, the PM2.5 Spatial Interpolator, was introduced to account for the spatial dependence among grid cells. We selected the heavily polluted and densely populated North China region as our study area, in a grid setting of 81×81 12-km cells. In the evaluation of calibration performance for retrieved MODIS AOD, R2 was 0.61 for the full model with the PM2.5 Spatial Interpolator included, and 0.48 with the PM2.5 Spatial Interpolator excluded. The constructed AOD values effectively predicted PM2.5 concentrations under our model structure, with R2 = 0.78. In the evaluation of calibrated CMAQ predictions, R2 was 0.51, a little less than that of the calibrated AOD. Finally we
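The heart of the downscaler described above is a regression that calibrates a biased proxy (AOD or CMAQ output) against monitor observations. A minimal sketch of that idea with synthetic data, using plain ordinary least squares in place of the paper's Bayesian spatio-temporally varying coefficients (all numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: PM2.5 at monitor sites and a biased, noisy proxy
# (e.g. satellite AOD already converted to concentration units).
pm25_obs = rng.uniform(20, 150, size=200)              # monitor observations
proxy = 0.6 * pm25_obs + 15 + rng.normal(0, 8, 200)   # biased proxy

# Ordinary least-squares calibration: pm25 ≈ a + b * proxy
b, a = np.polyfit(proxy, pm25_obs, 1)   # polyfit returns slope first
calibrated = a + b * proxy

# Calibration should remove the systematic bias of the raw proxy.
raw_bias = np.mean(proxy - pm25_obs)
cal_bias = np.mean(calibrated - pm25_obs)
print(round(raw_bias, 1), round(cal_bias, 1))
```

The full model additionally lets `a` and `b` vary smoothly over space and time under a Bayesian prior, which is what corrects location-specific biases.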

  3. Dynamic Performance Comparison of Two Kalman Filters for Rate Signal Direct Modeling and Differencing Modeling for Combining a MEMS Gyroscope Array to Improve Accuracy.

    Science.gov (United States)

    Yuan, Guangmin; Yuan, Weizheng; Xue, Liang; Xie, Jianbing; Chang, Honglong

    2015-10-30

    In this paper, the performance of two Kalman filter (KF) schemes based on the direct estimated model and differencing estimated model for input rate signal was thoroughly analyzed and compared for combining measurements of a sensor array to improve the accuracy of microelectromechanical system (MEMS) gyroscopes. The principles for noise reduction were presented and KF algorithms were designed to obtain the optimal rate signal estimates. The input rate signal in the direct estimated KF model was modeled with a random walk process and treated as the estimated system state. In the differencing estimated KF model, a differencing operation was established between outputs of the gyroscope array, and then the optimal estimation of input rate signal was achieved by compensating for the estimations of bias drifts for the component gyroscopes. Finally, dynamic simulations and experiments with a six-gyroscope array were implemented to compare the dynamic performance of the two KF models. The 1σ error of the gyroscopes was reduced from 1.4558°/s to 0.1203°/s by the direct estimated KF model in a constant rate test and to 0.5974°/s by the differencing estimated KF model. The estimated rate signal filtered by both models could reflect the amplitude variation of the input signal in the swing rate test and displayed a reduction factor of about three for the 1σ noise. Results illustrate that the performance of the direct estimated KF model is much higher than that of the differencing estimated KF model, with a constant input signal or lower dynamic variation. A similarity in the two KFs' performance is observed if the input signal has a high dynamic variation.
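The direct estimated model above can be sketched as a scalar Kalman filter whose state is the common input rate, modeled as a random walk and observed simultaneously by every gyroscope in the array. The process-noise value below is an assumption for illustration; the per-gyro 1σ of 1.4558°/s is taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 6                # gyroscopes in the array (the paper uses six)
sigma_g = 1.4558     # per-gyro measurement noise 1-sigma, deg/s (from abstract)
q = 1e-4             # random-walk process noise variance (assumed value)
R = sigma_g**2 * np.eye(N)
H = np.ones((N, 1))  # every gyro observes the same rate

true_rate = 10.0     # constant input rate, deg/s
x, P = 0.0, 1.0      # state estimate and its variance
est = []
for _ in range(2000):
    # Predict: random-walk model x_k = x_{k-1} + w
    P += q
    # Update with the N simultaneous gyro readings
    z = true_rate + rng.normal(0, sigma_g, N)
    S = H @ H.T * P + R                  # innovation covariance
    K = P * H.T @ np.linalg.inv(S)       # Kalman gain, shape (1, N)
    x = x + (K @ (z - H.flatten() * x)).item()
    P = (P - K @ H * P).item()
    est.append(x)

# After convergence the residual error is far below the single-gyro 1-sigma.
err = np.std(np.array(est[500:]) - true_rate)
print(err)
```

With six gyros the effective measurement variance drops by a factor of N, and the small process noise lets the filter average over time as well, which is why the combined estimate beats any single sensor.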

  4. Dynamic Performance Comparison of Two Kalman Filters for Rate Signal Direct Modeling and Differencing Modeling for Combining a MEMS Gyroscope Array to Improve Accuracy

    Directory of Open Access Journals (Sweden)

    Guangmin Yuan

    2015-10-01

    Full Text Available In this paper, the performance of two Kalman filter (KF) schemes based on the direct estimated model and differencing estimated model for input rate signal was thoroughly analyzed and compared for combining measurements of a sensor array to improve the accuracy of microelectromechanical system (MEMS) gyroscopes. The principles for noise reduction were presented and KF algorithms were designed to obtain the optimal rate signal estimates. The input rate signal in the direct estimated KF model was modeled with a random walk process and treated as the estimated system state. In the differencing estimated KF model, a differencing operation was established between outputs of the gyroscope array, and then the optimal estimation of input rate signal was achieved by compensating for the estimations of bias drifts for the component gyroscopes. Finally, dynamic simulations and experiments with a six-gyroscope array were implemented to compare the dynamic performance of the two KF models. The 1σ error of the gyroscopes was reduced from 1.4558°/s to 0.1203°/s by the direct estimated KF model in a constant rate test and to 0.5974°/s by the differencing estimated KF model. The estimated rate signal filtered by both models could reflect the amplitude variation of the input signal in the swing rate test and displayed a reduction factor of about three for the 1σ noise. Results illustrate that the performance of the direct estimated KF model is much higher than that of the differencing estimated KF model, with a constant input signal or lower dynamic variation. A similarity in the two KFs’ performance is observed if the input signal has a high dynamic variation.

  5. Improving the diagnostic accuracy of acute myocardial infarction with the use of high-sensitive cardiac troponin T in different chronic kidney disease stages

    Science.gov (United States)

    Yang, Hongliu; Liu, Jing; Luo, Han; Zeng, Xiaoxi; Tang, Xi; Ma, Liang; Mai, Hongxia; Gou, Shenju; Liu, Fang; Fu, Ping

    2017-01-01

    High-sensitive cardiac troponin T (hs-TnT) is a critical biomarker in the diagnosis of acute myocardial infarction (AMI). However, individuals with chronic kidney disease (CKD) usually have elevated hs-TnT even in the absence of AMI. Our study aimed to explore the optimal cutoff value of hs-TnT and thereby to improve the diagnostic accuracy for AMI in CKD patients. Clinical data of 489 patients were collected from a maintained database between September 2010 and June 2014. CKD patients with AMI were assigned to the CKD+AMI group and CKD patients without AMI were assigned to the CKD group. Receiver operating characteristic curves were utilized to derive the optimal cutoff value. In the CKD+STEMI and CKD groups, hs-TnT increased with descending eGFR. In the CKD+NSTEMI group, hs-TnT showed an upward trend with increasing SYNTAX Score. In patients with CKD+STEMI, hs-TnT was significantly correlated with SYNTAX Score in CKD stage 2, stage 4 and overall. In CKD patients, the optimal cutoff value of hs-TnT for diagnosis of AMI was 129.45 ng/l with 75.2% sensitivity and 83.2% specificity. The cutoff value was 99.55 ng/l in CKD stage 3, 129.45 ng/l in CKD stage 4, 105.50 ng/l in CKD stage 5 and 149.35 ng/l in dialysis patients, respectively. In different stages of CKD, eGFR-range-specific optimal cutoff values should be considered. PMID:28145489
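The receiver operating characteristic analysis used above to derive an optimal cutoff value can be sketched by scanning candidate thresholds and maximizing Youden's J statistic (sensitivity + specificity − 1). The hs-TnT values below are synthetic, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hs-TnT values (ng/l): CKD without AMI vs. CKD with AMI.
no_ami = rng.lognormal(mean=3.5, sigma=0.8, size=300)   # lower levels
ami = rng.lognormal(mean=5.2, sigma=0.8, size=300)      # elevated levels

values = np.concatenate([no_ami, ami])
labels = np.concatenate([np.zeros(300), np.ones(300)])  # 1 = AMI

# Scan every observed value as a candidate cutoff; keep the one
# maximizing Youden's J = sensitivity + specificity - 1.
best_j, best_cut = -1.0, None
for cut in np.unique(values):
    pred = values >= cut
    sens = np.mean(pred[labels == 1])
    spec = np.mean(~pred[labels == 0])
    j = sens + spec - 1
    if j > best_j:
        best_j, best_cut = j, cut

print(best_cut, best_j)
```

Running the same scan within each eGFR stratum is how stage-specific cutoffs like those reported above would be obtained.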

  6. Mapping and improving frequency, accuracy, and interpretation of land cover change: Classifying coastal Louisiana with 1990, 1993, 1996, and 1999 Landsat Thematic Mapper image data

    Science.gov (United States)

    Nelson, G.; Ramsey, Elijah W.; Rangoonwala, A.

    2005-01-01

    Landsat Thematic Mapper images and collateral data sources were used to classify the land cover of the Mermentau River Basin within the chenier coastal plain and the adjacent uplands of Louisiana, USA. Land cover classes followed those of the National Oceanic and Atmospheric Administration's Coastal Change Analysis Program; however, classification methods needed to be developed to meet these national standards. Our first classification was limited to the Mermentau River Basin (MRB) in south-central Louisiana and the years 1990, 1993, and 1996. To overcome problems due to class spectral inseparability, spatial and spectral continuums, mixed land covers, and abnormal transitions, we separated the coastal area into regions of commonality and applied masks to specific land mixtures. Over the three years and 14 land cover classes (aggregating the cultivated land and grassland, and water and floating vegetation classes), overall accuracies ranged from 82% to 90%. To enhance land cover change interpretation, three indicators were introduced: Location Stability, Residence Stability, and Turnover. Implementing methods substantiated in the multiple-date MRB classification, we spatially extended the classification to the entire Louisiana coast and temporally extended the original 1990, 1993, and 1996 classifications to 1999 (Figure 1). We also advanced the operational functionality of the classification and increased the credibility of change detection results. Increased operational functionality, resulting in diminished user input, was for the most part gained by implementing a classification logic based on forbidden transitions. The logic detected and corrected misclassifications and largely alleviated the necessity of subregion separation prior to classification. The new methods provided an improved ability for more timely detection of and response to land cover impacts. © 2005 IEEE.

  7. Towards improvement of aluminium assay in quartz for in situ cosmogenic {sup 26}Al analysis at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Fujioka, Toshiyuki, E-mail: tyf@ansto.gov.au; Fink, David; Mifsud, Charles

    2015-10-15

    Accuracy and precision in the measurement of natural aluminium abundances in quartz can affect the reliability of {sup 26}Al exposure dating and {sup 26}Al/{sup 10}Be burial dating. At ANSTO, aliquots extracted from the HF solutions of dissolved quartz are treated in our laboratory, whereas ICP-OES analysis is performed at a commercial laboratory. The long-term inter-run reproducibility of our in-house standards shows a limiting precision in Al measurements of 3–4% (1σ), which is poorer than the claimed precision of Al analysis by ICP-OES. This indicates that unaccounted random errors are incorporated during our aliquot preparation. In this study, we performed several controlled tests to investigate the effects of possible inconsistencies and variances during our aliquot preparation procedure. The results indicate that our procedure is robust against subtle changes in the preparation procedure, e.g., fuming temperatures, fuming reagents, and drying conditions. We found that the density of the solutions dispatched for ICP analysis is occasionally variable due to the presence of residual fuming reagents in the solution. A comparison of the results between the calibration curve and standard addition methods shows that the former results are consistently lower than the latter by up to ∼14%. Similar offsets have been reported by previous studies. The reason for these discrepancies is most likely a matrix effect, which is not accounted for by the calibration curve method. Further tests varying the matrix with impurities such as HF, HClO{sub 4}, H{sub 2}SO{sub 4} and Si identified that Si could cause a low offset in Al measurements; however, our ICP solutions are confirmed to be free from Si, and the cause of the matrix effect remains to be investigated. Hence, care must be taken in the measurement of Al concentrations in quartz by ICP-OES, either by ensuring that the matrix effect is fully accounted for or by routinely employing standard additions when required.
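The standard addition method mentioned above corrects for matrix effects by spiking the sample itself with known amounts of analyte and extrapolating the fitted response back to zero signal. A minimal sketch with made-up ICP-OES readings (the concentrations and responses below are illustrative, not ANSTO data):

```python
import numpy as np

# Hypothetical standard-addition data for Al by ICP-OES: the sample is
# measured unspiked and after known Al additions.
added = np.array([0.0, 5.0, 10.0, 20.0])      # mg/L Al added
signal = np.array([1.20, 1.80, 2.40, 3.60])   # instrument response (made up)

# Linear fit: signal = m * added + b. The unknown sample concentration
# is the magnitude of the x-intercept of the fitted line, i.e. b / m,
# because the calibration is built inside the sample's own matrix.
m, b = np.polyfit(added, signal, 1)
c_sample = b / m
print(round(c_sample, 2))   # → 10.0 for this perfectly linear synthetic data
```

Because every point shares the sample's matrix, any proportional suppression or enhancement of the signal cancels out of the intercept-to-slope ratio, which is exactly what an external calibration curve cannot guarantee.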

  8. Non-Destructive Assay (NDA) Uncertainties Impact on Physical Inventory Difference (ID) and Material Balance Determination: Sources of Error, Precision/Accuracy, and ID/Propagation of Error (POV)

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-10

    These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
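The "sum in quadrature" rule for propagation of variance mentioned above can be illustrated with a toy material balance: the inventory difference (ID) combines four measured terms, and for independent measurements its 1σ uncertainty is the square root of the sum of the squared term uncertainties, regardless of the sign each term carries. All values below are hypothetical:

```python
import math

# Hypothetical material balance:
#   ID = beginning inventory + receipts - shipments - ending inventory
# Each term: (measured value, absolute 1-sigma uncertainty), in kg.
terms = {
    "beginning": (120.0, 0.8),
    "receipts":  (40.0,  0.5),
    "shipments": (35.0,  0.4),
    "ending":    (124.0, 0.9),
}

inv_diff = (terms["beginning"][0] + terms["receipts"][0]
            - terms["shipments"][0] - terms["ending"][0])

# Propagation of variance for independent terms: sigmas add in quadrature.
sigma_id = math.sqrt(sum(s**2 for _, s in terms.values()))

print(inv_diff, round(sigma_id, 3))   # → 1.0 1.364
```

An ID of 1.0 kg with a 1σ of about 1.36 kg is well within measurement uncertainty; the same arithmetic, with NDA-derived sigmas for each item, is what underlies the ID/POV evaluation the slides describe.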

  9. Test Expectancy Affects Metacomprehension Accuracy

    Science.gov (United States)

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  10. Improved dose calculation accuracy for low energy brachytherapy by optimizing dual energy CT imaging protocols for noise reduction using sinogram affirmed iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Landry, Guillaume [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); Munich Univ. (Germany). Dept. of Medical Physics; Gaudreault, Mathieu [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); Laval Univ., QC (Canada). Dept. de Radio-Oncologie et Centre de Recherche en Cancerologie; Laval Univ., QC (Canada). Dept. de Physique, de Genie Physique et d' Optique; Elmpt, Wouter van [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); Wildberger, Joachim E. [Maastricht University Medical Center (Netherlands). Dept. of Radiology; Verhaegen, Frank [Maastricht University Medical Center (Netherlands). Dept. of Radiation Oncology (MAASTRO); McGill Univ. Montreal, QC (Canada). Dept. of Oncology

    2016-05-01

    The goal of this study was to evaluate the noise reduction achievable from dual energy computed tomography (CT) imaging (DECT) using filtered backprojection (FBP) and iterative image reconstruction algorithms combined with increased imaging exposure. We evaluated the data in the context of imaging for brachytherapy dose calculation, where accurate quantification of electron density ρ{sub e} and effective atomic number Z{sub eff} is beneficial. A dual source CT scanner was used to scan a phantom containing tissue mimicking inserts. DECT scans were acquired at 80 kVp/140Sn kVp (where Sn stands for tin filtration) and 100 kVp/140Sn kVp, using the same values of the CT dose index CTDI{sub vol} for both settings as a measure for the radiation imaging exposure. Four CTDI{sub vol} levels were investigated. Images were reconstructed using FBP and sinogram affirmed iterative reconstruction (SAFIRE) with strengths 1, 3 and 5. From the DECT scans two material quantities were derived, Z{sub eff} and ρ{sub e}. DECT images were used to assign material types and the amount of improperly assigned voxels was quantified for each protocol. The dosimetric impact of improperly assigned voxels was evaluated with Geant4 Monte Carlo (MC) dose calculations for an {sup 125}I source in numerical phantoms. Standard deviations for Z{sub eff} and ρ{sub e} were reduced by up to a factor of ∼2 when using SAFIRE with strength 5 compared to FBP. Standard deviations on Z{sub eff} and ρ{sub e} as low as 0.15 and 0.006 were achieved for the muscle insert representing typical soft tissue using a CTDI{sub vol} of 40 mGy and 3 mm slice thickness. Dose calculation accuracy was generally improved when using SAFIRE. Mean (maximum absolute) dose errors of up to 1.3% (21%) with FBP were reduced to less than 1% (6%) with SAFIRE at a CTDI{sub vol} of 10 mGy. Using a CTDI{sub vol} of 40 mGy and SAFIRE yielded mean dose calculation errors of the order of 0.6%, which was the MC dose calculation precision in this study, and

  11. Is furosemide administration effective in improving the accuracy of determination of differential renal function by means of technetium-99m DMSA in patients with hydronephrosis

    Energy Technology Data Exchange (ETDEWEB)

    Kabasakal, Levent; Turkmen, Cuneyt; Ozmen, Ozlem; Alan, Nalan; Onsel, Cetin; Uslu, Ilhami [Department of Nuclear Medicine, Cerrahpasa Medical Faculty, Aksaray Istanbul, 34303 (Turkey)

    2002-11-01

    activity during the second phase of the study) (P>0.1). In conclusion, we did not observe interference from pelvicalyceal activity in patients with documented pelvic retention and infer that diuretic administration may be a useless intervention for improving the accuracy of determination of DRF. (orig.)

  12. Microfluidic assay without blocking for rapid HIV screening and confirmation.

    Science.gov (United States)

    Song, Lusheng; Zhang, Yi; Wang, Wenjun; Ma, Liying; Liu, Yong; Hao, Yanlin; Shao, Yiming; Zhang, Wei; Jiang, Xingyu

    2012-08-01

    The essential first step in limiting the spread of HIV is screening. However, current screening assays have multiple disadvantages and require a further confirmation test. Herein we developed a rapid HIV assay combining screening and confirmation by using a microfluidic network assay. The assay is further accelerated by bypassing the blocking step. We call this method microfluidic assay without blocking (MAWB). Both the limit of detection and the reagent incubation time of MAWB were determined by screening one model protein pair: ovalbumin and its antibody. The assay time was reduced by about 25% while the limit of detection (LOD) was well preserved. Formatting the method for both HIV screening (testing 8 HIV-related samples) and confirmation (assaying 6 kinds of HIV antibodies in each sample) within 30 min was successful. Fast HIV screening and confirmation of 20 plasma samples were also demonstrated by this method. MAWB improved the assay speed while keeping the LOD of conventional ELISA. Meanwhile, both the accuracy and the throughput of MAWB were well improved, making it an excellent candidate for a quick HIV test for both screening and confirmation. Methods like this one will find wide applications in clinical diagnosis and biochemical analysis based on the interactions between pairs of molecules.

  13. Improving accuracy of Tay Sachs carrier screening of the non-Jewish population: analysis of 34 carriers and six late-onset patients with HEXA enzyme and DNA sequence analysis.

    Science.gov (United States)

    Park, Noh Jin; Morgan, Craig; Sharma, Rajesh; Li, Yuanyin; Lobo, Raynah M; Redman, Joy B; Salazar, Denise; Sun, Weimin; Neidich, Julie A; Strom, Charles M

    2010-02-01

    The purpose of this study was to determine whether combining different testing modalities, namely beta-hexosaminidase A (HEXA) enzyme analysis, a HEXA DNA common mutation assay, and HEXA gene sequencing, could improve the sensitivity of carrier detection in non-Ashkenazi Jewish (AJ) individuals. We performed a HEXA gene sequencing assay, a HEXA DNA common mutation assay, and a HEXA enzyme assay on 34 self-reported Tay-Sachs disease (TSD) carriers, six late-onset patients with TSD, and one pseudodeficiency allele carrier. Sensitivity of TSD carrier detection was 91% for gene sequencing compared with 91% for the enzyme assay and 52% for the DNA mutation assay. Gene sequencing combined with enzyme testing had the highest sensitivity (100%) for carrier detection. Gene sequencing detected four novel mutations, three of which are predicted to be disease causing [118delT, 965A-->T (D322V), and 775A-->G (T259A)]. Gene sequencing is useful for identifying rare mutations in patients with TSD and their families, for evaluating spouses of known TSD carriers who have indeterminate enzyme analysis results and are negative on common mutation analysis, and for resolving ambiguous enzyme testing results.

  14. Improving immunogenicity, efficacy and safety of vaccines through innovation in clinical assay development and trial design: the Phacilitate Vaccine Forum, Washington D.C. 2011.

    Science.gov (United States)

    Moldovan, Ioana R; Tary-Lehmann, Magdalena

    2011-06-01

    The 9th Annual Vaccine Forum, organized by Phacilitate in Washington D.C. in 2011, brought together 50+ senior-level speakers and over 400 participants representing all the key stakeholders in vaccines. The main focus of the meeting was to define priorities in the global vaccines sector, from funding to manufacturing and evaluation of vaccine efficacy. A special session was devoted to improving immunogenicity, efficacy and safety of vaccines through innovation in clinical assay development and trial design. The current regulatory approach to clinical assay specification, validation and standardization, which enables more direct comparisons of efficacy between trials, was illustrated by the success in meningococcal vaccine development. The industry approach to validation strategies was exemplified by a new serologic test used in the diagnosis of pneumococcal pneumonia. The application of the Animal Rule to bridge clinical and non-clinical studies in botulism has allowed significant progress in developing one of the first vaccines to seek approval under the FDA Animal Efficacy Rule. An example of pushing the boundaries in correlating immunological responses with efficacy endpoints was a recent cell-based influenza vaccine, for which the same correlates of protection apply as for the traditional egg-based flu vaccine. In the field of HIV, phase 2b studies are underway, based on promising results obtained with some vaccine candidates. The conclusion of this session was that creativity in vaccine design and evaluation is beneficial and can lead to innovative new vaccine designs as well as to validated assays to assess vaccine efficacy.

  15. Improved performance of Brucella melitensis native hapten over Brucella abortus OPS tracer on goat antibody detection by the fluorescence polarization assay.

    Science.gov (United States)

    Ramírez-Pfeiffer, C; Díaz-Aparicio, E; Rodríguez-Padilla, C; Morales-Loredo, A; Alvarez-Ojeda, G; Gomez-Flores, R

    2008-06-15

    The current method for goat brucellosis diagnosis is based on World Organization for Animal Health (OIE) guidelines, using the screening card test (CT), with antigen at 8% (CT8) or 3% (CT3) cell concentration, and the confirmatory complement fixation test (CFT). However, these tests do not differentiate antibodies induced by vaccination from those derived from field infections by Brucella species or other bacterial agents; in places like Mexico, where the prevalence of brucellosis and the vaccination rates are high, there is a considerable percentage of false positive reactions that causes significant unnecessary slaughter of animals. Furthermore, results of the fluorescence polarization assay (FPA) using the Brucella abortus O-polysaccharide (OPS) tracer in goats are poorer than those with cattle. The present study was undertaken to investigate a tracer prepared from the native hapten (NH) of the Rev. 1 strain of Brucella melitensis to improve FPA performance in goat brucellosis diagnosis. Evaluation of 48 positive samples and 96 negative samples showed that the NH tracer was more accurate (p < 0.05) in goat sera samples selected by test series approved by the OIE (card test 3% and CFT). We demonstrated a new application for the NH lipopolysaccharide in detecting antibodies against Brucella using the FPA, which may yield faster results (minutes vs. 24-72 h) than the immunodiagnostic assays frequently used in bovine brucellosis. In addition, the NH tracer produces similar or better performance results than the conventional OPS tracer, using the FPA in goat sera samples.

  16. The APTIMA HPV assay versus the Hybrid Capture 2 test in triage of women with ASC-US or LSIL cervical cytology: a meta-analysis of the diagnostic accuracy.

    Science.gov (United States)

    Arbyn, Marc; Roelens, Jolien; Cuschieri, Kate; Cuzick, Jack; Szarewski, Ann; Ratnam, Sam; Reuschenbach, Miriam; Belinson, Suzanne; Belinson, Jerome L; Monsonego, Joseph

    2013-01-01

    Testing for DNA of 13 high-risk HPV types with the Hybrid Capture 2 (HC2) test has consistently been shown to perform better in triage of women with cervical cytology results showing atypical squamous cells of undetermined significance (ASC-US), but often not in triage of low-grade squamous intraepithelial lesions (LSIL) detected in cervical cancer screening. In a meta-analysis, we compared the accuracy of the APTIMA HPV test, which identifies RNA of 14 high-risk HPV types, to HC2 for the triage of women with ASC-US or LSIL. The literature search targeted studies in which the accuracy of APTIMA HPV and HC2 for detection of underlying CIN2/3+ was assessed concomitantly, including verification of all cases of ASC-US and LSIL. HSROC (Hierarchical Summary ROC) curve regression was used to compute the pooled absolute and relative sensitivity and specificity. Eight studies, comprising 1,839 ASC-US and 1,887 LSIL cases, were retrieved. The pooled sensitivity and specificity of APTIMA to triage ASC-US to detect underlying CIN3 or worse were 96.2% (95% CI = 91.7-98.3%) and 54.9% (95% CI = 43.5-65.9%), respectively. APTIMA and HC2 showed similar pooled sensitivity; however, the specificity of the former was significantly higher (ratio: 1.19; 95% CI = 1.08-1.31 for CIN2+). The pooled sensitivity and specificity of APTIMA to triage LSIL were 96.7% (95% CI = 91.4-98.9%) and 38.7% (95% CI = 30.5-47.6%) for CIN3+. APTIMA was as sensitive as HC2 but more specific (ratio: 1.35; 95% CI = 1.11-1.66). Results were similar for detection of CIN2 or worse. In triage of both ASC-US and LSIL, APTIMA is as sensitive as but more specific than HC2 for detecting cervical precancer.
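The per-study inputs to the pooled estimates above are ordinary sensitivity and specificity computed from 2×2 triage tables. A minimal sketch with illustrative counts (not the meta-analysis data):

```python
# Sensitivity and specificity from a 2x2 triage table: rows are disease
# status (CIN3+ vs. not), columns are the triage test result.
tp, fn = 96, 4     # CIN3+ cases: test positive / test negative
tn, fp = 55, 45    # non-cases:   test negative / test positive

sensitivity = tp / (tp + fn)   # fraction of true cases flagged
specificity = tn / (tn + fp)   # fraction of non-cases cleared
print(sensitivity, specificity)   # → 0.96 0.55
```

The HSROC regression then pools such per-study pairs while modeling the trade-off between them across studies, which is why it reports both absolute values and between-test ratios.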

  17. Increased throwing accuracy improves children’s catching performance in a ball-catching task from the movement assessment battery (MABC-2

    Directory of Open Access Journals (Sweden)

    Tim Dirksen

    2016-07-01

    Full Text Available The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether performance merely depends on catching skills or also to some extent on throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent throwing accuracy has an effect on the children's catching performance and (2) to what extent throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that increased throwing accuracy is significantly correlated with increased catching performance. Moreover, higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of

  18. An Automated Micro-Total Immunoassay System for Measuring Cancer-Associated α2,3-linked Sialyl N-Glycan-Carrying Prostate-Specific Antigen May Improve the Accuracy of Prostate Cancer Diagnosis

    Science.gov (United States)

    Ishikawa, Tomokazu; Yoneyama, Tohru; Tobisawa, Yuki; Hatakeyama, Shingo; Kurosawa, Tatsuo; Nakamura, Kenji; Narita, Shintaro; Mitsuzuka, Koji; Duivenvoorden, Wilhelmina; Pinthus, Jehonathan H.; Hashimoto, Yasuhiro; Koie, Takuya; Habuchi, Tomonori; Arai, Yoichi; Ohyama, Chikara

    2017-01-01

    The low specificity of the prostate-specific antigen (PSA) test for early detection of prostate cancer (PCa) is a major issue worldwide. The aim of this study was to examine whether the serum PCa-associated α2,3-linked sialyl N-glycan-carrying PSA (S2,3PSA) ratio measured by an automated micro-total immunoassay system (μTAS system) can be applied as a diagnostic marker of PCa. The μTAS system can utilize affinity-based separation involving noncovalent interaction between the immunocomplex of S2,3PSA and Maackia amurensis lectin to simultaneously determine concentrations of free PSA and S2,3PSA. To validate the quantitative performance, both recombinant S2,3PSA and benign-associated α2,6-linked sialyl N-glycan-carrying PSA (S2,6PSA), purified from the culture supernatant of PSA cDNA transiently transfected Chinese hamster ovary (CHO)-K1 cells, were used as standard proteins. Between 2007 and 2016, fifty patients with biopsy-proven PCa were pair-matched for age and PSA levels with the same number of benign prostatic hyperplasia (BPH) patients to validate the diagnostic performance of the serum S2,3PSA ratio. Recombinant S2,3PSA- and S2,6PSA-spiked samples were clearly discriminated by the μTAS system. The limit of detection of S2,3PSA was 0.05 ng/mL and the coefficient of variation was less than 3.1%. The area under the curve (AUC) for detection of PCa for the S2,3PSA ratio (%S2,3PSA) with a cutoff value of 43.85% (AUC, 0.8340) was much superior to total PSA (AUC, 0.5062) using the validation sample set. Although the present results are preliminary, the newly developed μTAS platform for measuring %S2,3PSA can achieve the required assay performance specifications for practical and clinical use and may improve the accuracy of PCa diagnosis. Additional validation studies are warranted. PMID:28241428
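
    The AUC comparison reported above can be illustrated with a minimal sketch. The marker values below are synthetic stand-ins for %S2,3PSA measurements, and the AUC is computed as the Mann-Whitney rank statistic (the probability that a randomly chosen case scores higher than a randomly chosen control):

    ```python
    import numpy as np

    def auc(cases, controls):
        """AUC as the fraction of case/control pairs ranked correctly
        (ties count as 0.5), i.e. the normalized Mann-Whitney U statistic."""
        cases = np.asarray(cases, dtype=float)
        controls = np.asarray(controls, dtype=float)
        diff = cases[:, None] - controls[None, :]
        return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

    # Hypothetical %S2,3PSA values for 50 PCa and 50 BPH patients
    rng = np.random.default_rng(1)
    pca = rng.normal(50, 8, 50)
    bph = rng.normal(38, 8, 50)
    print(f"AUC = {auc(pca, bph):.3f}")
    ```

    A perfectly separating marker yields an AUC of 1.0 and an uninformative one 0.5, which frames the reported contrast between %S2,3PSA (0.8340) and total PSA (0.5062).
    
    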

  19. Enzyme assays.

    Science.gov (United States)

    Reymond, Jean-Louis; Fluxà, Viviana S; Maillard, Noélie

    2009-01-07

    Enzyme assays are analytical tools to visualize enzyme activities. In recent years a large variety of enzyme assays have been developed to assist the discovery and optimization of industrial enzymes, in particular for "white biotechnology" where selective enzymes are used with great success for economically viable, mild and environmentally benign production processes. The present article highlights the aspects of fluorogenic and chromogenic substrates, sensors, and enzyme fingerprinting, which are our particular areas of interest.

  20. Improving the accuracy of derivation of the Williams’ series parameters under mixed (I+II) mode loading by compensation of measurement bias in the stress field components data

    Science.gov (United States)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-12-01

    A new method for compensating bias in the stress field component measurement data used for the derivation of Williams' series parameters is presented. A substantial increase in the accuracy of the derived SIF-related leading terms of the series under mixed (I+II) mode loading was demonstrated. It was shown that even a relatively low bias in the stress field component data can result in substantial deviations of the derived Williams' coefficients and the crack tip coordinates.
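
    A toy fit can show why a constant measurement bias matters. This is a simplified illustration, not the authors' method: it fits only the two leading mode-I Williams terms on the crack plane, sigma_yy(r) ~ K/sqrt(2*pi*r) + T, and shows that a constant bias b leaks entirely into the recovered T-stress, motivating its compensation before the fit. All numerical values are assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    r = np.linspace(0.01, 0.1, 30)            # distances from crack tip [m]
    K_true, T_true, b = 1.5e6, 2.0e5, 5.0e4   # assumed SIF term, T-stress, bias
    sigma = K_true / np.sqrt(2 * np.pi * r) + T_true + b
    sigma += rng.normal(0.0, 1e4, r.size)     # measurement noise

    # Least-squares fit of the two leading terms to the biased data
    A = np.column_stack([1.0 / np.sqrt(2 * np.pi * r), np.ones_like(r)])
    K_fit, T_fit = np.linalg.lstsq(A, sigma, rcond=None)[0]

    print(f"K: {K_fit:.3e} vs true {K_true:.3e}")
    print(f"T: {T_fit:.3e} vs true {T_true:.3e} (shifted by the bias)")
    ```

    The singular K-term is recovered well because its r-dependence is distinct from a constant, while the T-stress absorbs the full bias; with angular (theta-dependent) data, as in the actual mixed-mode setting, a bias column can instead be included and separated in the fit.
    
    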

  1. Improving the accuracy of mass-lumped finite-elements in the first-order formulation of the wave equation by defect correction

    Science.gov (United States)

    Shamasundar, R.; Mulder, W. A.

    2016-10-01

    Finite-element discretizations of the acoustic wave equation in the time domain often employ mass lumping to avoid the cost of inverting a large sparse mass matrix. For the second-order formulation of the wave equation, mass lumping on Legendre-Gauss-Lobatto points does not harm the accuracy. Here, we consider a first-order formulation of the wave equation. In that case, the numerical dispersion for odd-degree polynomials exhibits super-convergence with a consistent mass matrix but mass lumping destroys that property. We consider defect correction as a means to restore the accuracy, in which the consistent mass matrix is approximately inverted using the lumped one as preconditioner. For the lowest-degree element on a uniform mesh, fourth-order accuracy in 1D can
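
    The defect-correction idea described above can be sketched in a deliberately simplified setting (an assumption: 1D linear elements on a uniform periodic mesh, whereas the paper considers higher-degree elements on Legendre-Gauss-Lobatto points). The consistent-mass solve M_c x = b is replaced by a few iterations preconditioned with the diagonal lumped matrix M_l:

    ```python
    import numpy as np

    # 1D linear elements, uniform periodic mesh: consistent mass matrix is
    # circulant tridiagonal with stencil (h/6) * [1, 4, 1].
    n, h = 64, 1.0 / 64
    Mc = np.diag(np.full(n, 4 * h / 6))
    Mc += np.diag(np.full(n - 1, h / 6), 1) + np.diag(np.full(n - 1, h / 6), -1)
    Mc[0, -1] = Mc[-1, 0] = h / 6             # periodic wrap-around
    Ml_inv = 1.0 / Mc.sum(axis=1)             # row-sum lumping -> diagonal inverse

    b = np.sin(2 * np.pi * np.arange(n) * h)  # smooth right-hand side

    # Defect correction: x_{k+1} = x_k + Ml^{-1} (b - Mc x_k),
    # starting from the plain lumped solve.
    x = Ml_inv * b
    for _ in range(3):
        x = x + Ml_inv * (b - Mc @ x)

    x_exact = np.linalg.solve(Mc, b)
    rel = np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact)
    print(f"relative error after 3 corrections: {rel:.2e}")
    ```

    Each sweep costs only a sparse matrix-vector product and a diagonal scaling, yet for smooth data a handful of sweeps reproduces the consistent-mass solution to high accuracy, which is the mechanism by which defect correction can restore the super-convergent dispersion behavior lost to lumping.
    
    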