WorldWideScience

Sample records for accuracy mt method

  1. Three-dimensional modeling in the electromagnetic/magnetotelluric methods. Accuracy of various finite-element and finite difference methods; Denjiho MT ho ni okeru sanjigen modeling. Shushu no yugen yosoho to sabunho no seido

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, Y. [Kyushu University, Fukuoka (Japan). Faculty of Engineering

    1997-05-27

    To enhance the reliability of electromagnetic/magnetotelluric (MT) surveys, the calculation results of finite-element methods (FEMs) and finite difference methods (FDMs) were compared, and the accuracy of the individual methods and the convergence of the iterative solutions were examined. The investigation found that adequate accuracy is obtained from the edge FEM and the FDM for the vertical-magnetic-dipole example, and that the FDM gives the best accuracy among the four methods for the MT-survey example. The ICBCG (incomplete Cholesky bi-conjugate gradient) method proved to be an excellent solver for the simultaneous equations from the viewpoint of accuracy and calculation time. For the joint FEM, the SOR solutions converged for both examples; the error was attributed not to numerical round-off but to neglecting the discontinuity of the electric field. The condition number of the coefficient matrix increases with decreasing frequency, which makes the numerical calculation unstable; it would be necessary to incorporate a constraint in some form. 4 refs., 12 figs.

  2. Application of remote sensing analysis and MT method for identification geothermal prospect zone in Mt. Endut

    Science.gov (United States)

    Akbar, A. M.; Permadi, A. N.; Wildan, D.; Sobirin, R.; Supriyanto

    2017-07-01

    Mount Endut is located in Banten Province, 40 km south of Rangkasbitung City, between UTM coordinates 9261000-9274000 N and 639000-652000 E. Preliminary work at Mt. Endut comprised geological and geochemical surveys in 2006 and resistivity and MT surveys in 2007 with 27 measurement points, all conducted by Pusat Sumber Daya Geologi (PSDG). According to the preliminary surveys, Mt. Endut is dominated by Quaternary volcanic rock produced by Mt. Endut, which breaks through the Tertiary sedimentary layer. An NE-SW normal fault produced the surface manifestations, namely the Cikawah (CKW) and Handeleum (HDL) hot springs. SiO2 and NaK geothermometers indicate subsurface temperatures at Mt. Endut ranging from 162 to 180 °C. Apparent resistivity maps show that the thermal manifestation areas coincide with a pronounced high anomaly caused by resistive intrusive bodies contrasting with the conductive sedimentary basement. To delineate the permeability zone, fracture fault density (FFD) analysis from remote sensing imagery was carried out. FFD analysis of a Landsat 7 image shows that the western flank of Mt. Endut has a high fracture fault density (162-276 m/km2), higher than the surrounding area, so it can be assumed to be a weak zone with high permeability. This structure-density anomaly coincides with low resistivity in the magnetotelluric data. The resistivity structure from the magnetotelluric data shows that the western flank has a low-resistivity layer (14-27 Ohmm) with an average thickness of 250 m. Below this lies a layer of higher resistivity (37-100 Ohmm) at about 1000 m depth, interpreted as a shallow reservoir. Massive resistive intrusive bodies control the surface manifestations and act as a boundary that bounds the geothermal system in the western part of Mt. Endut.

  3. An improved method with a wider applicability to isolate plant mitochondria for mtDNA extraction

    OpenAIRE

    2015-01-01

    Background Mitochondria perform a principal role in eukaryotic cells. Mutations in mtDNA can cause mitochondrial dysfunction and are frequently associated with various abnormalities during plant development. Extraction of plant mitochondria and mtDNA is the basic requirement for the characterization of mtDNA mutations and other molecular studies. However, currently available methods for mitochondria isolation are either tissue specific or species specific. Extracted mtDNA may contain substant...

  4. Simplified qPCR method for detecting excessive mtDNA damage induced by exogenous factors.

    Science.gov (United States)

    Gureev, Artem P; Shaforostova, Ekaterina A; Starkov, Anatoly A; Popov, Vasily N

    2017-05-01

    Damage to mitochondrial DNA (mtDNA) is a meaningful biomarker for evaluating the genotoxicity of drugs and environmental toxins. Existing PCR methods utilize long mtDNA fragments (∼8-10 kb), which complicates detecting the exact sites of mtDNA damage. To identify the mtDNA regions most susceptible to damage, we have developed and validated a set of primers to amplify ∼2 kb long fragments, while covering over 95% of mouse mtDNA. We have modified the detection method by greatly increasing the enrichment of mtDNA, which allowed us to solve the problem of non-specific primer annealing to nuclear DNA. To validate our approach, we determined the most damage-susceptible mtDNA regions in mice treated in vivo and in vitro with rotenone and H2O2. The GTGR-sequence-enriched mtDNA segments located in the D-loop region were found to be especially susceptible to damage. Further, we demonstrate that H2O2-induced mtDNA damage facilitates the relaxation of the mtDNA supercoiled conformation, making the sequences with minimal damage more accessible to DNA polymerase, which, in turn, results in a decrease in the threshold cycle value. Overall, our modified PCR method is simpler and more selective for the specific sites of damage in mtDNA. Copyright © 2017 Elsevier B.V. All rights reserved.
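    In long-amplicon qPCR damage assays of this kind, the readout is commonly converted from threshold-cycle shifts to a lesion frequency via a Poisson model (any lesion is assumed to block the polymerase). A minimal sketch of that standard conversion, assuming ~100% PCR efficiency; the function name and parameters are illustrative, not taken from the paper:

```python
import math

def lesion_frequency(ct_control, ct_treated, fragment_bp, per_bp=10_000):
    """Estimate lesions per `per_bp` bases from qPCR threshold cycles.

    Assumes ~100% PCR efficiency (amplification doubles each cycle) and
    Poisson-distributed damage: any lesion blocks the polymerase, so the
    fraction of undamaged templates equals exp(-lambda).
    """
    # Relative amplification of treated vs. control template
    ratio = 2.0 ** (ct_control - ct_treated)
    # Poisson zero class: P(0 lesions) = exp(-lambda)  =>  lambda = -ln(ratio)
    lesions_per_fragment = -math.log(ratio)
    return lesions_per_fragment * per_bp / fragment_bp
```

    For example, a one-cycle delay on a 2 kb fragment corresponds to a relative amplification of 0.5 and hence ln 2 ≈ 0.69 lesions per fragment, or about 3.5 lesions per 10 kb.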

  5. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

    The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between the mathematicians creating the theory and the researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control. The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from the theory; and (3) discuss their advantages, drawbacks, and areas of applicability, with recommendations and examples.

  6. A new method of migration imaging for MT data

    Institute of Scientific and Technical Information of China (English)

    宋维琪; 邹文勇

    2005-01-01

    Based on the study of electromagnetic field migration by Zhdanov, we propose an improved method that addresses weak points in that work. First, the initial background resistivity is determined using 1-D inversion results; then, in the continuation process, the results are corrected and calculated layer by layer by an iterative method, so that a more accurate resistivity can be obtained. Second, an improved algorithm for the finite-difference equation is studied. Exploiting the properties of the electromagnetic migration field, the algorithm uses grids that vary in geometric progression in the longitudinal direction. With these improvements, the new method yields better results, as verified on both a theoretical model and field data.

  7. A fast RCS accuracy assessment method for passive radar calibrators

    Science.gov (United States)

    Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, Qi

    2016-10-01

    In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity when permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation is proposed in this paper for tracking variation in the characteristics of the corner reflector. In the first step, an RCS simulation algorithm is selected and its simulation accuracy assessed. In the second step, a 3-D measuring instrument is selected and its measuring accuracy evaluated. Once the accuracies of the selected RCS simulation algorithm and 3-D measuring instrument satisfy the requirements of the RCS accuracy assessment, the 3-D structure of the corner reflector is measured with the 3-D instrument, and the RCSs of the measured 3-D structure and the corresponding ideal structure are then calculated with the selected RCS simulation algorithm. The final RCS accuracy is the absolute difference between the two RCS calculation results. The advantage of the proposed method is that it can easily be applied outdoors and avoids the correlation among plate edge-length error, plate orthogonality error, and plate curvature error. Its accuracy is higher than that of the method using the distortion equation. A measurement example is presented at the end of the paper to show the performance of the proposed method.

  8. Testing an Automated Accuracy Assessment Method on Bibliographic Data

    Directory of Open Access Journals (Sweden)

    Marlies Olensky

    2014-12-01

    Full Text Available This study investigates automated data accuracy assessment, as described in the data quality literature, for its suitability to assess bibliographic data. The data samples comprise the publications of two Nobel Prize winners in the field of Chemistry over a 10-year publication period, retrieved from the two bibliometric data sources Web of Science and Scopus. The bibliographic records are assessed against the original publication (the gold standard), and an automatic assessment method is compared with a manual one. The results show that the manual assessment method reflects the true accuracy scores more faithfully. The automated assessment method would need to be extended by additional rules that reflect specific characteristics of bibliographic data. Both data sources had higher accuracy scores per field than accumulated per record. This study contributes to the research on finding a standardized assessment method for bibliographic data accuracy, as well as on defining the impact of data accuracy on the citation matching process.
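    The per-field versus per-record distinction in the findings can be made concrete: a record only counts as accurate if every field matches the gold standard, so record-level scores can never exceed field-level ones. A hypothetical sketch (field names and data invented for illustration):

```python
def accuracy_scores(records, gold):
    """Field-level and record-level accuracy of bibliographic records
    against a gold standard (parallel lists of dicts with the same keys)."""
    fields = gold[0].keys()
    # Fraction of records whose value matches the gold standard, per field
    field_acc = {f: sum(r[f] == g[f] for r, g in zip(records, gold)) / len(gold)
                 for f in fields}
    # Fraction of records where *all* fields match
    record_acc = sum(all(r[f] == g[f] for f in fields)
                     for r, g in zip(records, gold)) / len(gold)
    return field_acc, record_acc
```

    With one mismatched year out of two records, the year field scores 0.5 while the title field scores 1.0, and the record-level accuracy drops to 0.5.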

  9. Comparison of mtDNA Extracting Methods for Common Sarcosaphagous Insects

    Institute of Scientific and Technical Information of China (English)

    陈瑶清; 郭亚东; 李茂枝; 熊枫; 李剑波; 蔡继峰

    2011-01-01

    Objective: To compare the effects of three different methods of mtDNA extraction from common sarcosaphagous insects: the cetyl trimethyl ammonium bromide (CTAB) method, the sodium dodecyl sulfate-potassium acetate (SDS-KAc) method, and the sodium dodecyl sulfate-proteinase K (SDS-PK) method. Methods: Seventy-two insects of four species [Chrysomya megacephala (Fabricius, 1784), Eusilpha bicolor (Fairmaire, 1896), Paraeutrichopus pecoudi (Mateu, 1954), Vespa velutina (Lepeletier, 1836)] were collected from rabbit carcasses placed on outdoor grassland in the Changsha district. Total DNA was extracted from the samples by the CTAB, SDS-KAc, and SDS-PK methods. The purity and concentration of the DNA were examined by nucleic acid-protein spectrophotometry, mtDNA was amplified by PCR with specific primers, and the PCR products were examined by agarose gel electrophoresis, sequenced, and submitted to GenBank. Results: All three methods successfully extracted mtDNA from the four sarcosaphagous species. The SDS-PK method gave the best extraction results; the CTAB method outperformed the other two methods on aged samples; and the SDS-KAc method gave similar results across all sample types. Conclusion: The most appropriate extraction method should be chosen according to the circumstances: the SDS-PK method is recommended for preparing high-quality DNA, the CTAB method for aged samples, and the low-cost SDS-KAc method for preliminary experiments.

  10. Modeling the 3D Terrain Effect on MT by the Boundary Element Method

    Institute of Scientific and Technical Information of China (English)

    Ruan Baiyao; Xu Shizhe; Xu Zhifeng

    2006-01-01

    A numerical method is put forward in this paper that uses the boundary element method (BEM) to model 3D terrain effects on magnetotelluric (MT) surveys. Using vector integral theory and electromagnetic field boundary conditions, the boundary problem for the two electromagnetic fields in the upper half space (air) and lower half space (earth medium) is transformed into two vector integral equations related only to the topography: a magnetic equation for computing the magnetic field and an electric equation for computing the electric field. The topography integral is decomposed into a series of integrals over triangular elements. For the integral over a triangular element, we assume that the electromagnetic field within it is the superposition of the field in the homogeneous earth and the topographic response, taken as constant; the computation thus becomes simple, convenient, and highly accurate. By this decomposition, each vector integral equation can be calculated by solving three linear systems, one for each Cartesian direction. The matrix of these linear systems is diagonally dominant and can be solved using the Symmetric Successive Over-Relaxation (SSOR) method. Apparent resistivity curves of MT over two 3D terrains calculated by BEM are shown in this paper.
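    The SSOR iteration the abstract relies on is a forward SOR sweep followed by a backward sweep; on a diagonally dominant matrix it converges without further preconditioning. A minimal dense-matrix sketch (not the authors' implementation; the relaxation factor is an arbitrary choice):

```python
import numpy as np

def ssor_solve(A, b, omega=1.2, tol=1e-10, max_iter=500):
    """Symmetric SOR for a diagonally dominant system Ax = b."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        # Forward Gauss-Seidel/SOR sweep (uses freshly updated entries)
        for i in range(n):
            x[i] += omega * (b[i] - A[i] @ x) / A[i, i]
        # Backward sweep, which makes the iteration symmetric
        for i in reversed(range(n)):
            x[i] += omega * (b[i] - A[i] @ x) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x
    return x
```

    For 0 < omega < 2 and a symmetric positive-definite matrix, the sweeps are guaranteed to converge; omega near 1 is a safe default when the optimal value is unknown.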

  11. Method for Improving the Ranging Accuracy of Radio Fuze

    Institute of Scientific and Technical Information of China (English)

    HU Xiu-juan; DENG Jia-hao; SANG Hui-ping

    2006-01-01

    A stepped-frequency radar waveform is put forward to improve the accuracy of radio fuze ranging. An IFFT is adopted to synthesize a one-dimensional high-resolution range profile. Furthermore, the same-range-reject method and the selection-maximum method are used to remove target redundancy, and simulation results are given. The characteristics of the two methods are analyzed and, assuming a Weibull-distributed clutter envelope, the CFAR same-range selection-maximum method is adopted, achieving an accurate profile and ranging.
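    The IFFT synthesis step works because a point target at range R contributes a linear phase ramp across the frequency steps, which the IFFT concentrates into a single range bin of width c/(2NΔf). A hedged sketch with invented radar parameters (not the paper's simulation):

```python
import numpy as np

def range_profile(returns):
    """Synthesize a high-resolution range profile from stepped-frequency
    returns by IFFT."""
    return np.abs(np.fft.ifft(returns))

# Simulated point target: the echo phase at frequency step k is -4*pi*f_k*R/c
c, n, df, R = 3e8, 64, 1e6, 37.5          # illustrative values
k = np.arange(n)
echo = np.exp(-1j * 4 * np.pi * k * df * R / c)
profile = range_profile(echo)
bin_size = c / (2 * n * df)               # range resolution: 2.34 m here
```

    With these numbers the target falls exactly on bin 16, i.e. 16 x 2.34 m = 37.5 m; a target between bins would spread over neighbours, which is where redundancy-removal methods come in.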

  12. Diagnostic accuracy and safety of semirigid thoracoscopy in exudative pleural effusions in Denmark

    DEFF Research Database (Denmark)

    Willendrup, Fatin; Bødtger, Uffe; Colella, Sara

    2014-01-01

    BACKGROUND: To assess the diagnostic accuracy and the safety of medical thoracoscopy (MT) performed with a semirigid thoracoscope. METHODS: We retrospectively evaluated patients who underwent MT with a semirigid thoracoscope under local anesthesia for unexplained exudative pleural effusion from...

  13. The 2016 Seismic sequence in central Italy: a multi-method approach to constrain the geometry of the Mt. Vettore - Mt. Bove fault system

    Science.gov (United States)

    Luiso, Paola; Paoletti, Valeria; Gaudiosi, Germana; Nappi, Rosa; Cella, Federico; Fedi, Maurizio

    2017-04-01

    Since August 24, 2016, a destructive seismic sequence has been occurring in Central Italy, between the towns of Amatrice and Norcia. The sequence started with an Mw 6.0 event, followed one hour later by an Mw 5.4 earthquake and by thousands of aftershocks along the NW-SE fault system, which extends for about 30 km. On October 26 an Mw 5.9 event struck the area, followed by the strong Mw 6.5 earthquake of October 30, at a depth of 9 km with epicenter between the towns of Norcia and Visso. The three months of seismicity activated the nearby 60-km-long normal fault system of Mt. Vettore - Mt. Porche - Mt. Bove. The area was struck by several moderate to large earthquakes in historical times. In the Amatrice sector we mention the earthquakes of 1627 (Io=7-8 MCS, Mw=5.3), 1639 (Io=9-10 MCS, Mw=6.2), and 1672 (Io=7-8 MCS, Mw=5.3) A.D. The main historical earthquakes of Valnerina, the area closest to the epicentre of the October 30, 2016 earthquake, occurred in 1328 (Io=10 MCS, Mw=6.5), 1719 (Io=8 MCS, Mw=5.6), 1730 (Io=9 MCS, Mw=6), and 1859 (Io=8-9 MCS, Mw=5.7) A.D. It is also important to remember the complex sequence of 1703 A.D. (January 14, Valnerina, Io=11, Mw=6.9; February 2, Aquilano, Io=10, Mw=6.7), which had a considerably devastating impact on the area. Nevertheless, the historical seismicity correlated with the more external fault system of the Umbria-Marche-Abruzzi Apennine ridge is characterized by an absence of strong-energy seismicity along the Mt. Bove - Mt. Vettore - Vettoretto sector, suggesting that the fault system was "silent" until the 2016 seismic sequence. Our study consists of a multiparametric data analysis in a GIS (Geographic Information System) environment that integrates tectonic, seismic, and gravimetric datasets with the aim of investigating the neotectonic activity of the area. The gravimetric dataset contains the Multiscale Derivative Analysis (MDA) data of the gravity field, in which each maximum

  14. Effect of calibration method on Tekscan sensor accuracy.

    Science.gov (United States)

    Brimacombe, Jill M; Wilson, David R; Hodgson, Antony J; Ho, Karen C T; Anglin, Carolyn

    2009-03-01

    Tekscan pressure sensors are used in biomechanics research to measure joint contact loads. While the overall accuracy of these sensors has been reported previously, the effects of different calibration algorithms on sensor accuracy have not been compared. The objectives of this validation study were to determine the most appropriate calibration method supplied in the Tekscan program software and to compare its accuracy to the accuracy obtained with two user-defined calibration protocols. We evaluated the calibration accuracies for test loads within the low range, high range, and full range of the sensor. Our experimental setup used materials representing those found in standard prosthetic joints, i.e., metal against plastic. The Tekscan power calibration was the most accurate of the algorithms provided with the system software, with an overall rms error of 2.7% of the tested sensor range, whereas the linear calibrations resulted in an overall rms error of up to 24% of the tested range. The user-defined ten-point cubic calibration was almost five times more accurate, on average, than the power calibration over the full range, with an overall rms error of 0.6% of the tested range. The user-defined three-point quadratic calibration was almost twice as accurate as the Tekscan power calibration, but was sensitive to the calibration loads used. We recommend that investigators design their own calibration curves not only to improve accuracy but also to understand the range(s) of highest error and to choose the optimal points within the expected sensing range for calibration. Since output and sensor nonlinearity depend on the experimental protocol (sensor type, interface shape and materials, sensor range in use, loading method, etc.), sensor behavior should be investigated for each different application.
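    The two user-defined calibrations compared in the study can be reproduced in outline: a power law fitted in log-log space versus a cubic polynomial through the calibration points, each scored by RMS error as a percentage of the tested range. A sketch using synthetic power-law sensor data (the numbers are invented, not from the study):

```python
import numpy as np

def fit_power(raw, load):
    """Power-law calibration load = a * raw**b, fitted by log-log least squares."""
    b, log_a = np.polyfit(np.log(raw), np.log(load), 1)
    return lambda r: np.exp(log_a) * r ** b

def fit_cubic(raw, load):
    """Multi-point cubic calibration (degree-3 polynomial fit)."""
    return np.poly1d(np.polyfit(raw, load, 3))

def rms_error_pct(cal, raw, load):
    """RMS calibration error as a percentage of the tested load range."""
    err = cal(raw) - load
    return 100 * np.sqrt(np.mean(err ** 2)) / (load.max() - load.min())
```

    On data that truly follow a power law the power fit is exact, while the cubic fit's residual depends on where the calibration points sit, which mirrors the study's advice to choose calibration points within the expected sensing range.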

  15. Comparative Study Between METEOR and BLEU Methods of MT: Arabic into English Translation as a Case Study

    Directory of Open Access Journals (Sweden)

    Laith S. Hadla

    2015-11-01

    Full Text Available The Internet provides its users with a variety of services, including free online machine translators that translate at no charge between many of the world's languages, such as Arabic, English, Chinese, German, Spanish, French, and Russian. Machine translators facilitate the transfer of information between different languages and thus lower the language barrier, since the amount of information and knowledge available varies from one language to another. Arabic content, for example, accounts for 1% of total Internet content, while Arabs constitute 5% of the world's population; Arabic content on the Internet is thus at about 20% of its natural proportion, which has encouraged some Arab parties to work on improving it. Many interested specialists therefore rely on machine translators to bridge the knowledge gap between the information available in Arabic and that in other living languages such as English. This empirical study aims to identify the best Arabic-to-English machine translation system, in order to help the developers of these systems enhance their effectiveness; furthermore, such studies help users choose the best system. The study involves the construction of an automatic machine translation evaluation system for translation from Arabic into English, assessing the accuracy of the translations produced by two well-known machine translators, Google Translate and Babylon, from Arabic into English. The BLEU and METEOR methods are used to evaluate MT quality and to identify which method is closer to human judgments. The authors conclude that BLEU is closer to human judgments than the METEOR method.
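    BLEU itself is easy to state: clipped n-gram precisions combined by a geometric mean and scaled by a brevity penalty. A single-reference sketch for illustration only (in practice a library implementation such as NLTK's or sacreBLEU would be used, which also handle multiple references and smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram precisions
    times a brevity penalty. Single reference, uniform weights, no smoothing."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngrams, r_ngrams = ngrams(cand, n), ngrams(ref, n)
        # Clip each n-gram count by its count in the reference
        clipped = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        if clipped == 0:
            return 0.0           # unsmoothed BLEU is zero if any precision is
        log_prec += math.log(clipped / total) / max_n
    # Brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec)
```

    A perfect match scores 1.0; a candidate sharing no unigrams with the reference scores 0.0.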

  16. Volcano Monitoring and Early Warning on MT Etna, Italy, Using Volcanic Tremor - Methods and Technical Aspects

    Science.gov (United States)

    D'Agostino, Marcello; Di Grazia, Giuseppe; Ferrari, Ferruccio; Langer, Horst; Messina, Alfio; Reitano, Danilo; Spampinato, Salvatore

    2013-04-01

    Recent activity on Mt Etna was characterized by 25 lava fountains that occurred in 2011 and the first half of 2012. In summer 2012, milder volcanic activity was noticed within the Bocca Nuova crater before coming to an essential halt in August 2012. Together with previous unrest (e.g., in 2007-08), these events offer rich material for testing automatic data processing and alert issuing in the context of volcano monitoring. Our presentation focuses on the seismic background radiation - volcanic tremor - which plays a key role in the surveillance of Mt Etna. Since 2006, a multi-station alert system exploiting STA/LTA ratios has been in operation at the INGV operations centre in Catania. The frequency content of the tremor has also been found to change with the type of volcanic activity and can thus be exploited for warning purposes. We apply Self-Organizing Maps and fuzzy clustering, which offer an efficient way to visualize signal characteristics and their development with time. These techniques allow early stages of eruptive events to be identified and a critical status to be flagged automatically before it becomes evident in conventional monitoring techniques. Changes in tremor characteristics are related to the position of the signal source. Given the dense seismic network, we can base the location of the sources on the distribution of amplitudes across the network. The locations proved extremely useful for warning throughout both the 2008 flank eruption and the 2011 lava fountains: during all these episodes a clear migration of tremor sources towards the eruptive centres was revealed in advance. The location of the sources completes the picture of an imminent volcanic unrest and corroborates the early warnings flagged by the changes in signal characteristics. Automatic real-time data processing places high demands on computational efficiency, robustness of the methods, and stability of data acquisition. The amplitude based multi
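    The STA/LTA trigger at the heart of such multi-station systems compares a short-term energy average against a long-term background average. A minimal offline sketch (operational systems compute this causally, sample by sample; the window lengths and threshold here are arbitrary):

```python
import numpy as np

def moving_avg(x, m):
    """Centered moving average of length m (offline convenience)."""
    return np.convolve(x, np.ones(m) / m, mode="same")

def sta_lta(signal, sta_len, lta_len):
    """Classic STA/LTA ratio on a rectified signal.

    STA (short-term average) reacts to emergent energy; LTA (long-term
    average) tracks the background level. A trigger is declared when
    the ratio exceeds a chosen threshold.
    """
    energy = np.abs(signal)
    sta = moving_avg(energy, sta_len)
    lta = moving_avg(energy, lta_len)
    return sta / np.maximum(lta, 1e-12)   # guard against division by zero
```

    On quiet background the ratio sits near 1; an amplitude burst pushes it well above a typical trigger threshold of 2-3.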

  17. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    Directory of Open Access Journals (Sweden)

    Seoung-Hyeon Lee

    2016-01-01

    Full Text Available Beacons using Bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, beacon performance is poor in terms of indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a Bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technology, which uses Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, it was assumed that the beacon signal is nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone's indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m from the average x- and y-coordinates, respectively, based solely on the beacon signal.
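    The recursion the paper builds on is easiest to see in the scalar linear case; the extended Kalman filter adds Jacobian linearization of the nonlinear measurement model around the current estimate. A toy sketch of the linear special case, smoothing a noisy beacon-derived quantity (the noise variances are chosen arbitrarily for illustration):

```python
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state model, noisy measurements.

    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance.
    """
    x, p, out = x0, p0, []
    for z in measurements:
        p += q                  # predict: the state is modeled as a random walk
        k = p / (p + r)         # Kalman gain: trust in the new measurement
        x += k * (z - x)        # update with the measurement residual
        p *= (1 - k)            # shrink the posterior variance
        out.append(x)
    return out
```

    Fed a constant true value corrupted by noise, the estimate converges towards it while the gain settles to a steady-state value balancing q against r.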

  18. Stability and Accuracy Analysis for Taylor Series Numerical Method

    Institute of Scientific and Technical Information of China (English)

    赵丽滨; 姚振汉; 王寿梅

    2004-01-01

    The Taylor series numerical method (TSNM) is a time integration method for solving problems in structural dynamics. In this paper, a detailed analysis of the stability behavior and accuracy characteristics of this method is given. It is proven by a spectral decomposition method that TSNM is conditionally stable and belongs to the category of explicit time integration methods. By a similar analysis, the characteristic indicators of time integration methods, the percentage period elongation and the amplitude decay of TSNM, are derived in a closed form. The analysis plays an important role in implementing a procedure for automatic searching and finding convergence radii of TSNM. Finally, a linear single degree of freedom undamped system is analyzed to test the properties of the method.
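    For a linear undamped oscillator, a Taylor-series time step amounts to a truncated expansion of the state over each interval. A second-order sketch on x'' + ω²x = 0, illustrating the explicit, conditionally stable character the abstract describes (the paper's TSNM may differ in order and detail):

```python
import math

def taylor_sdof(omega, x0, v0, h, steps):
    """Second-order Taylor time integration of x'' = -omega**2 * x
    (undamped single-degree-of-freedom system). Explicit and only
    conditionally stable: h must be small relative to the natural period."""
    x, v = x0, v0
    for _ in range(steps):
        a = -omega ** 2 * x        # acceleration from the equation of motion
        adot = -omega ** 2 * v     # its time derivative
        x, v = (x + h * v + 0.5 * h ** 2 * a,
                v + h * a + 0.5 * h ** 2 * adot)
    return x, v
```

    Integrating over exactly one natural period should return the state to its initial value; the residual error reflects the method's period elongation and amplitude decay.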

  19. Estimated Accuracy of Three Common Trajectory Statistical Methods

    Science.gov (United States)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h
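    The accuracy measure used here, Spearman's rank-order correlation, is simply the Pearson correlation of the ranks, with average ranks assigned to ties. A dependency-free sketch (scipy.stats.spearmanr would normally be used):

```python
def spearman(a, b):
    """Spearman's rank-order correlation: Pearson correlation of the ranks,
    using average ranks for tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend j over a run of tied values
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average 1-based rank for the run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)
```

    Because it compares ranks rather than raw values, the coefficient rewards a reconstruction that orders the source strengths correctly even if their magnitudes are off.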

  20. Improved Fast Fourier Transform Based Method for Code Accuracy Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Tae Wook; Jeong, Jae Jun [Pusan National University, Busan (Korea, Republic of); Choi, Ki Yong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Among the methods for evaluating code uncertainty or accuracy, the fast Fourier transform based method (FFTBM), introduced in 1990, has been widely used. Prosek et al. (2008) identified its drawbacks, the so-called 'edge effect', and to overcome them an improved FFTBM by signal mirroring (FFTBM-SM) was proposed, which has been used up to now. In spite of the improvement, the FFTBM-SM yields different accuracy depending on the frequency components of a parameter, such as pressure, temperature, and mass flow rate, so it is necessary to reduce the frequency dependence of the FFTBMs. In this study, the limitations of the FFTBM were analyzed: it produces quantitatively different results because of its frequency dependence, and the problem is intensified when many high-frequency components are included. A new method using a reduced cut-off frequency was therefore proposed, and its capability is discussed. The results show that the shortcomings of the FFTBM are considerably relieved by the proposed method.
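    The FFTBM figure of merit is an average amplitude: the spectral magnitudes of the calculation error summed over frequency, normalized by those of the experimental signal, here optionally restricted to a cut-off frequency in the spirit of the proposed method. A sketch (not the authors' code; the cut-off handling is an illustrative assumption):

```python
import numpy as np

def fftbm_aa(calc, meas, dt, f_cut=None):
    """FFT-based average amplitude AA = sum|F(err)| / sum|F(meas)| over
    frequencies up to f_cut (Hz); smaller AA means better code accuracy.

    calc: calculated time series, meas: experimental time series,
    dt: sampling interval in seconds.
    """
    err = np.asarray(calc) - np.asarray(meas)
    freqs = np.fft.rfftfreq(len(err), d=dt)
    # Restrict the sums to frequencies below the cut-off, if one is given
    mask = np.ones(len(freqs), dtype=bool) if f_cut is None else freqs <= f_cut
    num = np.abs(np.fft.rfft(err))[mask].sum()
    den = np.abs(np.fft.rfft(np.asarray(meas)))[mask].sum()
    return num / den
```

    A calculation that overshoots the experiment uniformly by 10% yields AA = 0.1 regardless of frequency content, which is the scale-free behaviour that makes AA useful for comparing parameters.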

  1. Modified Mixed Lagrangian-Eulerian Method Based on Numerical Framework of MT3DMS on Cauchy Boundary.

    Science.gov (United States)

    Suk, Heejun

    2016-07-01

    MT3DMS, a modular three-dimensional multispecies transport model, has long been a popular model in the groundwater field for simulating solute transport in the saturated zone. However, the method of characteristics (MOC), modified MOC (MMOC), and hybrid MOC (HMOC) included in MT3DMS do not treat Cauchy boundary conditions in a straightforward or rigorous manner from a mathematical point of view. The MOC, MMOC, and HMOC regard the Cauchy boundary as a source condition. For the source, MOC, MMOC, and HMOC calculate the Lagrangian concentration by setting it equal to the cell concentration at the old time level. However, this calculation is approximate because it does not involve backward tracking in MMOC and HMOC, or forward tracking at the source cell in MOC. To circumvent this problem, a new scheme is proposed that avoids direct calculation of the Lagrangian concentration on the Cauchy boundary. The proposed method combines the numerical formulations of two different schemes, the finite element method (FEM) and the Eulerian-Lagrangian method (ELM), into one global matrix equation. This study demonstrates the limitation of all MT3DMS schemes, including MOC, MMOC, HMOC, and a third-order total-variation-diminishing (TVD) scheme, under Cauchy boundary conditions. By contrast, the proposed method always shows good agreement with the exact solution, regardless of the flow conditions. Finally, the successful application of the proposed method sheds light on the possible flexibility and capability of MT3DMS to deal with mass transport problems in all flow regimes.

  2. Photogrammetric methods applied to Svalbard glaciers: accuracies and challenges

    Directory of Open Access Journals (Sweden)

    Trond Eiken

    2012-06-01

    Full Text Available Use of digital images is expanding as a tool for glacier monitoring, and small-format time-lapse cameras are increasingly being used to monitor fast-flowing glaciers. Stereoscopic imagery is preferable since it yields direct displacement results, but stereo photogrammetry imposes more requirements on set-up geometry and control points, as well as the additional cost of a second complete camera system. We investigate a combination of methods to achieve satisfactory accuracy control, resolving significant day-to-day velocity variations of 1.5 to 4 m/day measured at a distance of 2 km. Results were validated by comparing different methods, partly using the same image material, but also in combination with aerial and satellite images. Monoscopic results can also be used to maintain continuity in a stereo data set when geometry or visibility is poor. We also explore the use of ordinary photographs taken from airliners for compilation of orthoimages as a potential low-cost method for detecting sudden changes. The method, accurate to some tens of metres, was verified for monitoring velocities and front positions during a glacier surge and was also used to validate monoscopic time-lapse images.

  3. Method for improving accuracy in full evaporation headspace analysis.

    Science.gov (United States)

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-03-21

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. In conventional full evaporation headspace analysis, the pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy. The results (using ethanol solution as the model sample) showed that the present technique effectively minimizes this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis.

  4. Accuracy of multi-trait genomic selection using different methods

    Directory of Open Access Journals (Sweden)

    Veerkamp Roel F

    2011-07-01

    Full Text Available Abstract Background Genomic selection has become a very important tool in animal genetics and is rapidly emerging in plant genetics. It holds the promise to be particularly beneficial for selecting traits that are difficult or expensive to measure, such as traits that are measured in one environment and selected for in another environment. The objective of this paper was to develop three models that would permit multi-trait genomic selection by combining scarcely recorded traits with genetically correlated indicator traits, and to compare their performance to single-trait models, using simulated datasets. Methods Three SNP (single nucleotide polymorphism) based models were used. Models G and BCπ0 assumed that the (co)variances of all SNPs are equal. Model BSSVS sampled SNP effects from a distribution with large (or small) effects to model SNPs that are (or are not) associated with a quantitative trait locus. For comparison, model A, including pedigree but not SNP information, was fitted as well. Results In terms of accuracies for animals without phenotypes, the models generally ranked as follows: BSSVS > BCπ0 > G >> A. Using multi-trait SNP-based models, the accuracy for juvenile animals without any phenotypes increased up to 0.10. For animals with phenotypes on an indicator trait only, accuracy increased up to 0.03 and 0.14, for genetic correlations with the evaluated trait of 0.25 and 0.75, respectively. Conclusions When the indicator trait had a genetic correlation lower than 0.5 with the trait of interest in our simulated data, the accuracy was higher if genotypes rather than phenotypes were obtained for the indicator trait. However, when genetic correlations were higher than 0.5, using an indicator trait led to higher accuracies for selection candidates. For different combinations of traits, the level of genetic correlation below which genotyping selection candidates is more effective than obtaining phenotypes for an indicator trait
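For intuition only: the paper's G, BCπ0 and BSSVS are Bayesian multi-trait models, but a minimal single-trait SNP-BLUP (ridge regression) sketch on simulated data shows how "accuracy" is measured as the correlation between true and predicted breeding values for unphenotyped juveniles. All sizes, variances, and the shrinkage parameter below are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_juv, n_snp = 200, 50, 500    # invented sizes

# Simulated 0/1/2 genotypes; ten SNPs carry real effects (the situation the
# BSSVS prior is designed for), the rest are noise markers.
X = rng.integers(0, 3, size=(n_train + n_juv, n_snp)).astype(float)
beta = np.zeros(n_snp)
beta[rng.choice(n_snp, 10, replace=False)] = rng.normal(0.0, 1.0, 10)
tbv = X @ beta                               # true breeding values
y = tbv[:n_train] + rng.normal(0.0, tbv.std(), n_train)  # phenotypes, h2 ~ 0.5

# SNP-BLUP / ridge regression: every SNP effect shrunk equally, roughly the
# equal-(co)variance assumption of model G in the abstract.
lam = 50.0
Xt = X[:n_train]
beta_hat = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_snp), Xt.T @ y)

# Accuracy, as in the abstract: correlation between true and predicted
# breeding values for juveniles with genotypes but no phenotypes.
accuracy = np.corrcoef(tbv[n_train:], X[n_train:] @ beta_hat)[0, 1]
```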

  5. Evaluating MT systems with BEER

    Directory of Open Access Journals (Sweden)

    Stanojević Miloš

    2015-10-01

    Full Text Available We present BEER, an open source implementation of a machine translation evaluation metric. BEER is a metric trained for high correlation with human ranking using learning-to-rank training methods. For evaluation of lexical accuracy it uses sub-word units (character n-grams), while for measuring word order it uses hierarchical representations based on PETs (permutation trees). During the last WMT metrics tasks, BEER has shown high correlation with human judgments at both the sentence and the corpus levels. In this paper we show how BEER can be used for (i) full evaluation of MT output, (ii) isolated evaluation of word order, and (iii) tuning MT systems.
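As a toy illustration of the sub-word idea (BEER itself is a trained combination of many such features, not this formula), the sketch below scores lexical overlap with a character n-gram F1; the example sentences and the choice n=4 are assumptions.

```python
from collections import Counter

def char_ngrams(text, n):
    """Multiset of overlapping character n-grams of a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def char_ngram_f1(hypothesis, reference, n=4):
    """F1 over character n-grams: the kind of sub-word lexical feature that
    BEER combines (together with word-order features) in its trained metric."""
    hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
    if not hyp or not ref:
        return 0.0
    overlap = sum((hyp & ref).values())       # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

ref = "the cat sat on the mat"
good = "the cat sat on a mat"
bad = "completely unrelated output"
```

A hypothesis sharing most n-grams with the reference scores close to 1, an unrelated one close to 0, which is the behaviour a lexical-accuracy feature needs.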

  6. Researches on High Accuracy Prediction Methods of Earth Orientation Parameters

    Science.gov (United States)

    Xu, X. Q.

    2015-09-01

    The Earth rotation reflects the coupling process among the solid Earth, atmosphere, oceans, mantle, and core of the Earth on multiple spatial and temporal scales. The Earth rotation can be described by the Earth's orientation parameters, abbreviated as EOP (mainly including two polar motion components PM_X and PM_Y, and variation in the length of day ΔLOD). The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies generally lag by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy research topic. This thesis comprises the following three aspects, with the purpose of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolations. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required prediction span: for short-term prediction the basic data series should be shorter, while for long-term prediction it should be longer. The analysis also showed that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) in short-term EOP prediction. The equations of observation and state are established using the EOP series and the autoregressive coefficients
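The LS+AR combination can be sketched generically (this is not the thesis's implementation: a real EOP fit would add annual and Chandler harmonics to the LS part, and the AR order and trend model here are assumptions): fit a least-squares trend, model the residuals with an AR process, and extrapolate both.

```python
import numpy as np

def ls_ar_forecast(series, horizon, ar_order=5):
    """LS+AR sketch: least-squares linear trend plus an AR(p) model of the
    residuals, both extrapolated over the forecast horizon."""
    series = np.asarray(series, float)
    n = len(series)
    t = np.arange(n, dtype=float)
    # 1) least-squares trend fit
    A = np.column_stack([np.ones(n), t])
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    resid = series - A @ coef
    # 2) AR(p) on residuals, coefficients fitted by least squares
    p = ar_order
    Y = resid[p:]
    Z = np.column_stack([resid[p - k:n - k] for k in range(1, p + 1)])
    phi, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    # 3) extrapolate trend + AR recursion
    hist = list(resid)
    preds = []
    for h in range(1, horizon + 1):
        r = sum(phi[k] * hist[-k - 1] for k in range(p))
        hist.append(r)
        preds.append(coef[0] + coef[1] * (n - 1 + h) + r)
    return np.array(preds)

series = 2.0 + 0.5 * np.arange(40.0)     # synthetic series with a pure trend
preds = ls_ar_forecast(series, 3)
```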

  7. Novel method for high accuracy figure measurement of optical flat

    Science.gov (United States)

    E, Kewei; Li, Dahai; Yang, Lijie; Guo, Guangrao; Li, Mengyang; Wang, Xuemin; Zhang, Tao; Xiong, Zhao

    2017-01-01

    Phase Measuring Deflectometry (PMD) is a non-contact, high dynamic-range and full-field metrology which has become a serious competitor to interferometry. However, the accuracy of deflectometry metrology is strongly influenced by the quality of the calibrations, including the test geometry, the imaging pin-hole camera and the digital display. In this paper, we propose a novel method that can measure the optical flat surface figure to a high accuracy. We first calibrate the camera using a checker pattern shown on an LCD display at six different orientations, the last orientation being aligned at the same position as the test optical flat. By using this method, the lens distortions and the mapping relationship between the CCD pixels and the subaperture coordinates on the test optical flat can be determined at the same time. To further reduce the influence of calibration errors on the measurements, a reference optical flat with a high-quality surface is measured, and the systematic errors in our PMD setup are then eliminated by subtracting the figure of the reference flat from the figure of the test flat. Although no expensive coordinate measuring instrument, such as a laser tracker or a coordinate measuring machine, is used in our measurement, our experimental results for the optical flat figure, from low- to high-order aberrations, still show good agreement with those from a Fizeau interferometer.

  8. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  9. On the Accuracy of RAS Method in an Emergent Economy

    Directory of Open Access Journals (Sweden)

    Emilian Dobrescu

    2012-06-01

    Full Text Available The goal of this paper is to check the applicability of the RAS procedure (in its conventional definition) to the statistical series of an emergent economy, such as the Romanian one. As is known, during the transition from a centrally planned system to market mechanisms, society passes through deep restructuring, consisting of complex institutional changes, technological shifts, and sectoral reallocation of productive factors, which continuously affect the input-output technical coefficients. Testing the RAS algorithm on such a volatile framework is a notable research challenge. Our empirical experiment is based on annual input-output tables for two decades (1989-2008). In order to manipulate the available database more easily, the extended classification of economic activities containing 105 branches has been aggregated into 10 sectors. For each year, two 10x10 matrices were computed: aij (statistically recorded technical coefficients) and raij (the same coefficients estimated using the RAS method). The paper is organized in three sections. The first discusses several methodological issues of this algorithm. It also evaluates the differences between the matrices aij and raij, using both categories of accuracy measures - the "cell-by-cell" comparison as well as aggregated indicators. The second section extensively examines these measures, the presentation being systematized sectorally. Such an approach allows revealing the specificities of different branches in their inter-industry co-operation. The third section sketches an overview of the obtained results and draws some conclusions related to the problems that arise in the application of the RAS method.
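The classic RAS procedure itself is short enough to sketch: alternately scale rows and columns of a prior matrix until the prescribed margins are met (biproportional fitting). The matrix and margins below are invented; in the input-output setting the scaling is applied to flow matrices, and technical coefficients follow by dividing by sectoral output.

```python
import numpy as np

def ras(prior, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) updating: alternately rescale rows and columns
    of a prior matrix until both sets of margins are matched."""
    a = prior.astype(float).copy()
    for _ in range(max_iter):
        a *= (row_targets / a.sum(axis=1))[:, None]   # R step: scale rows
        a *= col_targets / a.sum(axis=0)              # S step: scale columns
        if np.allclose(a.sum(axis=1), row_targets, atol=tol):
            break
    return a

prior = np.array([[10.0, 5.0],
                  [3.0, 12.0]])
rows = np.array([18.0, 12.0])     # target row sums
cols = np.array([14.0, 16.0])     # target column sums (same total as rows)
updated = ras(prior, rows, cols)
```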

  10. Diagnostic methods I: sensitivity, specificity, and other measures of accuracy

    NARCIS (Netherlands)

    K.J. van Stralen; V.S. Stel; J.B. Reitsma; F.W. Dekker; C. Zoccali; K.J. Jager

    2009-01-01

    For most physicians, use of diagnostic tests is part of the daily routine. This paper focuses on their usefulness by explaining the different measures of accuracy, the interpretation of test results, and the implementation of a diagnostic strategy. Measures of accuracy include sensitivity and specificity.
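The standard measures reduce to arithmetic on a 2x2 table; the sketch below (with invented counts) also shows why a test with high sensitivity and specificity can still have a modest positive predictive value at low prevalence.

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Standard accuracy measures from a 2x2 table of a diagnostic test
    (tp/fp/fn/tn = true/false positives/negatives)."""
    return {
        "sensitivity": tp / (tp + fn),   # P(test positive | disease)
        "specificity": tn / (tn + fp),   # P(test negative | no disease)
        "ppv": tp / (tp + fp),           # P(disease | test positive)
        "npv": tn / (tn + fn),           # P(no disease | test negative)
    }

# Invented example: 1000 patients, 10% prevalence.
m = diagnostic_measures(tp=90, fp=30, fn=10, tn=870)
```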

  11. Accuracy of Wind Prediction Methods in the California Sea Breeze

    Science.gov (United States)

    Sumers, B. D.; Dvorak, M. J.; Ten Hoeve, J. E.; Jacobson, M. Z.

    2010-12-01

    In this study, we investigate the accuracy of measure-correlate-predict (MCP) algorithms and of log law/power law scaling using data from two tall towers in coastal environments. We find that MCP algorithms accurately predict sea breeze winds and that log law/power law scaling methods struggle to predict 50-meter wind speeds. MCP methods have received significant attention as the wind industry has grown and the ability to accurately characterize the wind resource has become valuable. These methods are used to produce longer-term wind speed records from short-term measurement campaigns. A correlation is developed between the "target site," where the developer is interested in building wind turbines, and a "reference site," where long-term wind data are available. Up to twenty years of prior wind speeds are then predicted. In this study, two existing MCP methods - linear regression and Mortimer's method - are applied to predict 50-meter wind speeds at sites in the Salinas Valley and Redwood City, CA. The predictions are then verified with tall tower data. It is found that linear regression is poorly suited to MCP applications because it produces inaccurate estimates of the cube of the wind speed at 50 meters. Meanwhile, Mortimer's method, which bins data by direction and speed, is found to accurately predict the cube of the wind speed in both sea breeze and non-sea breeze conditions. We also find that log law and power law scaling are unstable predictors of wind speeds. While these methods produced accurate estimates of the average 50-meter wind speed at both sites, they predicted an average cube of the wind speed between 1.18 and 1.3 times the observed value. Inspection of the time-series error reveals increased error in the mid-afternoon of the summer. This suggests that the cold sea breeze may disrupt the vertical temperature profile, creating a stable atmosphere and violating the assumptions that allow log law scaling to work.
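A binned MCP scheme in the spirit of Mortimer's method can be sketched on synthetic data (simplified to speed bins only, whereas the actual method also bins by direction; the site relation and noise level are invented). The point of the check is the one made in the abstract: a good MCP method should preserve the mean cube of the wind speed, which drives wind power.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic concurrent records at a long-term reference site and the target
# site (invented relation: mild nonlinearity plus noise).
ref = rng.weibull(2.0, 5000) * 8.0
target = 1.2 * ref ** 0.9 + rng.normal(0.0, 0.5, ref.size)

# Binned MCP: one mean speed-up ratio per 1 m/s reference-speed bin.
edges = np.arange(0.0, ref.max() + 1.0, 1.0)
which = np.digitize(ref, edges)
ratios = np.ones(len(edges) + 1)              # fallback ratio 1 for empty bins
for b in np.unique(which):
    ratios[b] = target[which == b].mean() / ref[which == b].mean()

def mcp_predict(r):
    """Scale reference speeds by the ratio of their speed bin."""
    return r * ratios[np.digitize(r, edges)]

# "Hindcast" period: does the method preserve the mean cube of wind speed?
new_ref = rng.weibull(2.0, 2000) * 8.0
new_target = 1.2 * new_ref ** 0.9 + rng.normal(0.0, 0.5, new_ref.size)
pred = mcp_predict(new_ref)
cube_ratio = (pred ** 3).mean() / (new_target ** 3).mean()
```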

  12. New magnetotelluric inversion scheme using generalized RRI method and case studies; GRRI ho ni yoru MT o nijigen inversion kaiseki to sono tekiyorei

    Energy Technology Data Exchange (ETDEWEB)

    Yamane, K.; Takasugi, S. [GERD Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Lee, K. [University of California, Berkeley, CA (United States); Ashida, Y. [Kyoto University, Kyoto (Japan)

    1998-04-01

    This paper describes a new two-dimensional (2-D) magnetotelluric (MT) inversion scheme. In the 2-D Frechet derivative scheme, the model correction values are calculated from the Jacobian matrix after a Taylor expansion of Maxwell's equation. Although numerical solutions with high calculation accuracy and reliability can be obtained, the scheme requires very large memory capacity and a heavy computational load. In contrast, the RRI (rapid relaxation inversion) approximation scheme proposed by Smith and Booker is highly efficient in both memory and computation time. However, since the horizontal changes in the electric or magnetic field are determined from only a single observation point when calculating the model correction values, its calculation accuracy is inferior to that of the Frechet scheme. In this study, the calculation was improved while keeping the efficiency of the RRI scheme. Maxwell's equation was recast in the form of a perturbation method using the magnetic or electric field and the ground conductivity. The perturbed equation was then multiplied by a test function to relate the boundary integral and the region integral. A modified equation with 2-D properties similar to the RRI scheme was obtained. Thus, results similar to those of the Frechet scheme could be obtained in a computation time similar to that of the RRI scheme. 11 refs., 17 figs.

  13. Accuracy of Multiple Pour Cast from Various Elastomer Impression Methods.

    Science.gov (United States)

    Haralur, Satheesh B; Saad Toman, Majed; Ali Al-Shahrani, Abdullah; Ali Al-Qarni, Abdullah

    2016-01-01

    An accurate duplicate cast obtained from a single impression reduces professional clinical time, patient inconvenience, and extra material cost. A stainless steel working cast model assembly consisting of two abutments and one pontic area was fabricated. Two sets of six custom aluminum trays each were fabricated, with five mm spacer and two mm spacer. The impression methods evaluated during the study were additional silicone putty reline (two steps), heavy-light body (one step), monophase (one step), and polyether (one step). Type IV gypsum casts were poured at intervals of one hour, 12 hours, 24 hours, and 48 hours. The resultant casts were measured with a traveling microscope for comparative dimensional accuracy. The data obtained were subjected to an Analysis of Variance test at a significance level of <0.05. The dies obtained from the two-step putty reline impression technique had a percentage variation in height of -0.36 to -0.97%, while the diameter increased by 0.40-0.90%. The values for one-step heavy-light body impression dies, additional silicone monophase impressions, and polyether were -0.73 to -1.21%, -1.34%, and -1.46% for the height and 0.50-0.80%, 1.20%, and -1.30% for the width, respectively.

  14. Accuracy of Multiple Pour Cast from Various Elastomer Impression Methods

    Directory of Open Access Journals (Sweden)

    Satheesh B. Haralur

    2016-01-01

    Full Text Available An accurate duplicate cast obtained from a single impression reduces professional clinical time, patient inconvenience, and extra material cost. A stainless steel working cast model assembly consisting of two abutments and one pontic area was fabricated. Two sets of six custom aluminum trays each were fabricated, with five mm spacer and two mm spacer. The impression methods evaluated during the study were additional silicone putty reline (two steps), heavy-light body (one step), monophase (one step), and polyether (one step). Type IV gypsum casts were poured at intervals of one hour, 12 hours, 24 hours, and 48 hours. The resultant casts were measured with a traveling microscope for comparative dimensional accuracy. The data obtained were subjected to an Analysis of Variance test at a significance level <0.05. The dies obtained from the two-step putty reline impression technique had a percentage variation in height of −0.36 to −0.97%, while the diameter increased by 0.40–0.90%. The values for one-step heavy-light body impression dies, additional silicone monophase impressions, and polyether were −0.73 to −1.21%, −1.34%, and −1.46% for the height and 0.50–0.80%, 1.20%, and −1.30% for the width, respectively.

  15. Accuracy of a new bedside method for estimation of circulating blood volume

    DEFF Research Database (Denmark)

    Christensen, P; Waever Rasmussen, J; Winther Henneberg, S

    1993-01-01

    To evaluate the accuracy of a modification of the carbon monoxide method of estimating the circulating blood volume.

  16. Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter

    National Research Council Canada - National Science Library

    Lee, Seoung-Hyeon; Lim, Il-Kwan; Lee, Jae-Kwang

    2016-01-01

    .... However, the beacon performance is poor in terms of the indoor positioning accuracy because of noise, motion, and fading, all of which are characteristics of a bluetooth signal and depend on the installation location...

  17. THE ACCURACY AND BIAS EVALUATION OF THE USA UNEMPLOYMENT RATE FORECASTS. METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU)

    2012-12-01

    Full Text Available In this study, alternative forecasts of the USA unemployment rate made by four institutions (the International Monetary Fund (IMF), the Organization for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO), and Blue Chips (BC)) are evaluated regarding accuracy and bias. The most accurate predictions over the forecasting horizon 2001-2011 were provided by the IMF, followed by the OECD, CBO, and BC. These results were obtained using Theil's U1 statistic and a new method that has not previously been used in the literature in this context. Multi-criteria ranking was applied to build a hierarchy of the institutions regarding accuracy, with five important accuracy measures taken into account at the same time: mean error, mean squared error, root mean squared error, and Theil's U1 and U2 statistics. The IMF, OECD and CBO predictions are unbiased. Combining the institutions' predictions is a suitable strategy for improving the accuracy of the IMF and OECD forecasts under all combination schemes, with the INV scheme being the best. Filtering and smoothing the original predictions with the Hodrick-Prescott filter and the Holt-Winters technique, respectively, is a good strategy for improving only the BC expectations. The proposed strategies for improving accuracy do not solve the problem of bias. The assessment and improvement of forecast accuracy make an important contribution to raising the quality of the decision-making process.
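Theil's statistics are easy to state concretely; the sketch below implements the textbook definitions of U1 and U2 on an invented unemployment-rate series (not the study's data): U1 is bounded between 0 (perfect forecast) and 1, and U2 below 1 means the forecast beats the naive no-change forecast.

```python
import numpy as np

def theil_u1(actual, predicted):
    """Theil's U1 inequality coefficient: RMSE scaled so that 0 is a perfect
    forecast and 1 is the worst case."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(predicted ** 2)))

def theil_u2(actual, predicted):
    """Theil's U2: forecast error relative to the naive no-change forecast;
    values below 1 mean the forecast beats 'no change'."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    num = np.sqrt(np.mean(((predicted[1:] - actual[1:]) / actual[:-1]) ** 2))
    den = np.sqrt(np.mean(((actual[1:] - actual[:-1]) / actual[:-1]) ** 2))
    return num / den

unemployment = [4.7, 4.6, 5.8, 9.3, 9.6, 8.9]   # invented actual rates, %
forecast = [4.8, 4.7, 5.5, 8.9, 9.8, 9.0]       # invented predictions
u1 = theil_u1(unemployment, forecast)
u2 = theil_u2(unemployment, forecast)
```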

  18. Evaluating clinical accuracy of continuous glucose monitoring devices: other methods.

    Science.gov (United States)

    Wentholt, Iris M E; Hart, August A; Hoekstra, Joost B L; DeVries, J Hans

    2008-08-01

    With more and more continuous glucose monitoring devices entering the market, the importance of adequate accuracy assessment grows. This review discusses the pros and cons of regression analysis and correlation coefficients, relative difference measures, the Bland-Altman plot, ISO criteria, combined curve fitting, and epidemiological analyses, the latter including sensitivity, specificity and positive predictive value for hypoglycaemia. Finally, recommendations for much-needed head-to-head studies are given. This paper is a revised and adapted version of 'How to assess and compare the accuracy of continuous glucose monitors?', Diabetes Technology and Therapeutics 2007, in press, published with permission of the editor.
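One of the measures listed, the Bland-Altman analysis, reduces to two numbers behind the plot: the bias (mean device-reference difference) and the 95% limits of agreement. The sketch below uses invented glucose readings, not data from any study in this review.

```python
import numpy as np

def bland_altman(reference, device):
    """Bland-Altman summary: bias (mean difference) and 95% limits of
    agreement, bias +/- 1.96 * SD of the differences."""
    reference = np.asarray(reference, float)
    device = np.asarray(device, float)
    diff = device - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Invented paired readings: laboratory reference vs. sensor, mg/dL.
ysi = np.array([90.0, 120.0, 150.0, 180.0, 210.0, 250.0])
cgm = np.array([95.0, 118.0, 158.0, 175.0, 220.0, 255.0])
bias, (lower, upper) = bland_altman(ysi, cgm)
```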

  19. Accuracy of genomic selection using different methods to define haplotypes

    NARCIS (Netherlands)

    Calus, M.P.L.; Meuwissen, T.H.E.; Roos, de S.; Veerkamp, R.F.

    2008-01-01

    Genomic selection uses total breeding values for juvenile animals, predicted from a large number of estimated marker haplotype effects across the whole genome. In this study the accuracy of predicting breeding values is compared for four different models including a large number of markers, at different definitions of haplotypes.

  20. Assessment of calibration methods on impedance pneumography accuracy.

    Science.gov (United States)

    Młyńczak, Marcel; Niewiadomski, Wiktor; Żyliński, Marek; Cybulski, Gerard

    2016-12-01

    The aim was to assess the accuracy of tidal volumes (TV) calculated by impedance pneumography (IP), the reproducibility of calibration coefficients (CC) between IP and pneumotachometry (PNT), and their relationship with body posture, breathing rate and depth. Fourteen students performed three sessions of 18 series: normal and deep breathing at rates of 6, 10 and 15 breaths/min, while supine, sitting and standing; 18 CC were calculated for every session. Session 2 was performed 2 months after session 1, and session 3 was performed 1-3 days after session 2. TV were calculated using the full or a limited set of CC from the current session, and in the case of sessions 2 and 3 also using CC from sessions 1 and 2, respectively. When using the full set of CC from the current session, IP underestimated TV by -3.2%. Using CC from session 2 for session 3 measurements changed the relative difference to -3.9%, and using CC from session 1 for session 2 to -5.3%; for the limited set of CC it was -5.0%. Body posture had a significant effect on CC. The highest accuracy was obtained when all factors influencing CC were considered. Applying CC related only to body posture may shorten calibration at a moderate loss of accuracy. Using CC from a previous session compromises accuracy moderately.

  1. Use of gravity potential field methods for defining a shallow magmatic intrusion: the Mt. Amiata case history (Tuscany, Central Italy)

    Science.gov (United States)

    Girolami, Chiara; Rinaldo Barchi, Massimiliano; Pauselli, Cristina; Heyde, Ingo

    2016-04-01

    We analyzed the Bouguer gravity anomaly signal beneath the Mt. Amiata area in order to reconstruct the subsurface setting. The study area is characterized by a pronounced gravity minimum, possibly correlated with the observed anomalous heat flow and hydrothermal activity. Using different approaches, previous authors defined a low-density body (generally interpreted as a magmatic intrusion) beneath this area, which could explain the observed gravity anomaly minimum. However, the proposed geologic models show different geometries and densities for the batholith. The gravity data used in this study (kindly provided by eni) were acquired by different institutions (eni, OGS, USDMA and Servizio Geologico d'Italia) and collected into a single dataset consisting of about 50000 randomly distributed stations that cover Central Italy with a spacing of less than 1 km. For each station the elevation and the Bouguer gravity anomaly are given. From this dataset, we created maps of the Bouguer gravity anomaly and the topography using the minimum curvature gridding method with a grid cell size of 500 m x 500 m. The Bouguer gravity anomaly was computed using a density of 2.67 g/cm3. From these maps we extracted a window of about 240 km2 (12 km x 20 km) for the study area, which includes the Mt. Amiata region and the adjacent Radicofani sedimentary basin. The first part of this study focused on calculating the first-order vertical derivative of the Bouguer gravity anomaly, to enhance the effect of shallow bodies, and on power spectrum analysis, to estimate the source depth. The second part focused on constructing a 3D geological density model of the subsurface setting of the studied area using a forward modelling approach. The stratigraphy of the study area's upper crust schematically consists of six litho-mechanical units, whose densities were derived from velocity data collected by active seismic surveys. A preliminary

  2. AN EVALUATION OF USA UNEMPLOYMENT RATE FORECASTS IN TERMS OF ACCURACY AND BIAS. EMPIRICAL METHODS TO IMPROVE THE FORECASTS ACCURACY

    Directory of Open Access Journals (Sweden)

    MIHAELA BRATU (SIMIONESCU)

    2013-02-01

    Full Text Available The most accurate forecasts of the USA unemployment rate over the horizon 2001-2012, according to Theil's U1 coefficient and to multi-criteria ranking methods, were provided by the International Monetary Fund (IMF), followed by other institutions: the Organization for Economic Co-operation and Development (OECD), the Congressional Budget Office (CBO), and Blue Chips (BC). The multi-criteria ranking methods were applied to resolve the divergence in assessing accuracy that was observed when computing five chosen accuracy measures: Theil's U1 and U2 statistics, mean error, mean squared error, and root mean squared error. Some strategies for improving the accuracy of the predictions provided by the four institutions, which are biased in all cases except BC, were proposed. However, these methods did not generate unbiased forecasts. The predictions made by the IMF and OECD for 2001-2012 can be improved by constructing combined forecasts, with the INV approach and the scheme proposed by the author providing the most accurate expectations. The BC forecasts can be improved by smoothing the predictions using the Holt-Winters method and the Hodrick-Prescott filter.

  3. Development of a control region-based mtDNA SNaPshot™ selection tool, integrated into a mini amplicon sequencing method.

    Science.gov (United States)

    Weiler, Natalie E C; de Vries, Gerda; Sijen, Titia

    2016-03-01

    Mitochondrial DNA (mtDNA) analysis is regularly applied to forensic DNA samples with limited amounts of nuclear DNA (nDNA), such as hair shafts and bones. Generally, this mtDNA analysis involves examination of the hypervariable control region by Sanger sequencing of amplified products. When samples are severely degraded, small-sized amplicons can be applied and an earlier described mini-mtDNA method by Eichmann et al. [1] that accommodates ten mini amplicons in two multiplexes is found to be a very robust approach. However, in cases with large numbers of samples, like when searching for hairs with an mtDNA profile deviant from that of the victim, the method is time (and cost) consuming. Previously, Chemale et al. [2] described a SNaPshot™-based screening tool for a Brazilian population that uses standard-size amplicons for HVS-I and HVS-II. Here, we describe a similar tool adapted to the full control region and compatible with mini-mtDNA amplicons. Eighteen single nucleotide polymorphisms (SNPs) were selected based on their relative frequencies in a European population. They showed a high discriminatory power in a Dutch population (97.2%). The 18 SNPs are assessed in two SNaPshot™ multiplexes that pair to the two mini-mtDNA amplification multiplexes. Degenerate bases are included to limit allele dropout due to SNPs at primer binding site positions. Three SNPs provide haplogroup information. Reliability testing showed no differences with Sanger sequencing results. Since mini-mtSNaPshot screening uses only a small portion of the same PCR products used for Sanger sequencing, no additional DNA extract is consumed, which is forensically advantageous.

  4. Forecasting method in multilateration accuracy based on laser tracker measurement

    Science.gov (United States)

    Aguado, Sergio; Santolaria, Jorge; Samper, David; José Aguilar, Juan

    2017-02-01

    Multilateration based on a laser tracker (LT) requires the measurement of a set of points from three or more positions. Although the LT's angular information is not used, multilateration produces a volume of measurement uncertainty. This paper presents two new coefficients for determining, before the necessary measurements are performed, whether the measurement of a set of points will improve or worsen the accuracy of the multilateration results, avoiding unnecessary measurements and reducing the time and economic cost required. The first, the specific measurement coefficient MCLT, is unique to each laser tracker: it characterizes the relationship between the radial and angular measurement noise of the laser tracker. The second coefficient, β, is related to the specific measurement conditions: the spatial angle α between the laser tracker positions and its effect on error reduction. Both parameters, MCLT and β, are linked to the error-reduction limits. Besides these, a new methodology is presented for determining the multilateration reduction limit for an ideal laser tracker distribution and for a random one. It provides general rules and advice derived from synthetic tests, validated through a real test carried out on a coordinate measuring machine.
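The core multilateration computation can be sketched generically (the paper does not give its solver; this is a standard Gauss-Newton fit, and the station layout, target point, and noise-free distances are invented): find the point whose distances to the known tracker positions best match the measured radial distances.

```python
import numpy as np

def multilaterate(stations, distances, guess, iters=20):
    """Gauss-Newton solution of the multilateration problem: the point whose
    distances to the known station positions best match the measured
    (radial-only) distances, in the least-squares sense."""
    x = np.asarray(guess, float)
    stations = np.asarray(stations, float)
    for _ in range(iters):
        d = np.linalg.norm(stations - x, axis=1)
        J = (x - stations) / d[:, None]     # Jacobian of distances w.r.t. x
        r = d - distances                   # distance residuals
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Four invented, non-coplanar laser tracker positions and a target point.
stations = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0],
                     [0.0, 5.0, 0.0], [0.0, 0.0, 5.0]])
point = np.array([1.0, 2.0, 1.5])
dist = np.linalg.norm(stations - point, axis=1)   # noise-free measurements
est = multilaterate(stations, dist, guess=[2.0, 2.0, 2.0])
```

With noisy distances the same solver returns the least-squares point, and the residual spread gives a feel for the "volume of measurement uncertainty" the abstract refers to.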

  5. ErrorCheck: A New Method for Controlling the Accuracy of Pose Estimates

    DEFF Research Database (Denmark)

    Holm, Preben Hagh Strunge; Petersen, Henrik Gordon

    2011-01-01

    In this paper, we present ErrorCheck, a new method for controlling the accuracy of a computer-vision-based pose refinement method. ErrorCheck consists of a way of validating the robustness of a pose refinement method towards false correspondences and a way of controlling the accuracy of a validated pose refinement method. ErrorCheck uses a theoretical estimate of the pose error covariance both for validating robustness and for controlling the accuracy. We illustrate the first usage of ErrorCheck by applying it to state-of-the-art methods for pose refinement and some variations of these methods...

  6. Fundamentals of modern statistical methods substantially improving power and accuracy

    CERN Document Server

    Wilcox, Rand R

    2001-01-01

    Conventional statistical methods have a very serious flaw: they routinely miss differences among groups or associations among variables that are detected by more modern techniques, even under very small departures from normality. Hundreds of journal articles have described the reasons standard techniques can be unsatisfactory, but simple, intuitive explanations are generally unavailable. Improved methods have been derived, but they are far from obvious or intuitive given the training most researchers receive. Situations arise where even highly nonsignificant results become significant when analyzed with more modern methods. Without assuming any prior training in statistics, Part I of this book describes basic statistical principles from a point of view that makes their shortcomings intuitive and easy to understand. The emphasis is on verbal and graphical descriptions of concepts. Part II describes modern methods that address the problems covered in Part I. Using data from actual studies, many examples are included...

  7. On the accuracy of low-order projection methods

    OpenAIRE

    Paul Pichler

    2007-01-01

    We use low-order projection methods to compute numerical solutions of the basic neoclassical stochastic growth model. We assess the quality of the obtained solutions and compare them to numerical approximations derived with first- and second-order perturbation techniques. We show that projection methods perform surprisingly poorly when the degree of approximation is very low, and we provide some intuition behind this finding.

  8. Study of accuracy of precipitation measurements using simulation method

    Science.gov (United States)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    of wind shields improve the accuracy of precipitation measurements? · Can we find the source of the error that is detected at tipping-bucket rain gauges in winter due to the use of heating power? On our poster we would like to present the answers to the questions listed above.

  9. Subsurface modeling of geothermal manifestation in Mt. Endut based on vertical electrical sounding (VES) method

    Science.gov (United States)

    Permadi, A. N.; Akbar, A. M.; Wildan, D.; Sobirin, R.; Supriyanto

    2017-07-01

    The Endut geothermal prospect area is located in Lebak district, Banten province, about 40 km south of Rangkasbitung city. This area has been surveyed by PSDG (Pusat Sumber Daya Geologi) since 2006. In this survey, data acquisition was performed using the resistivity method with a Schlumberger configuration from southwest to northeast. The area around the local Cikawah (CKW) hot spring manifestation is dominated by Quaternary volcanic rocks of the Mount Endut product that intrude the Tertiary sedimentary bedrock. A rejuvenated normal fault trending northeast-southwest is expected to control the hot spring manifestation in Cikawah. The Cikawah hot water has the highest temperature (88 °C), a discharge of 5 L/s, neutral pH, and chloride type; it is in partial equilibrium and plots within the Cl-Li-B balance. Resistivity data show a conductive layer at a depth of approximately 500 meters below the Cikawah hot spring, which is suspected to be associated with argillic alteration of the intrusive rocks. The high resistivity anomaly is suspected to be associated with thick igneous intrusive rocks.

  10. Ensemble Methods in Data Mining Improving Accuracy Through Combining Predictions

    CERN Document Server

    Seni, Giovanni

    2010-01-01

    This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques. The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Although e

  11. An accuracy measurement method for star trackers based on direct astronomic observation

    Science.gov (United States)

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such an accuracy remains a crucial but unsolved issue until now. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted on account of the precision movements of the Earth, and the error curves of directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion has been proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and can satisfy the stringent requirement for high-accuracy star trackers. PMID:26948412

  13. Landslide prediction using combined deterministic and probabilistic methods in hilly area of Mt. Medvednica in Zagreb City, Croatia

    Science.gov (United States)

    Wang, Chunxiang; Watanabe, Naoki; Marui, Hideaki

    2013-04-01

    The hilly slopes of Mt. Medvednica stretch across the northwestern part of Zagreb City, Croatia, and cover approximately 180 km². In this area, landslides, e.g. the Kostanjek landslide and the Črešnjevec landslide, have damaged many houses, roads, farmlands and grasslands. Therefore, it is necessary to predict potential landslides and to enhance the landslide inventory for hazard mitigation and the security management of local society in this area. We combined a deterministic method and a probabilistic method to assess potential landslides, including their locations, sizes and sliding surfaces. Firstly, the study area is divided into several slope units that have similar topographic and geological characteristics, using the hydrology analysis tool in ArcGIS. Secondly, a GIS-based, modified three-dimensional Hovland's method for slope stability analysis is developed to identify the sliding surface and the corresponding three-dimensional safety factor for each slope unit. Each sliding surface is assumed to be the lower part of an ellipsoid; the direction of inclination of the ellipsoid is taken to be the same as the main dip direction of the slope unit, and the center point of the ellipsoid is randomly set to the center point of a grid cell in the slope unit. The minimum three-dimensional safety factor and the corresponding critical sliding surface are thus obtained for each slope unit. Thirdly, since a single value of the safety factor is insufficient to evaluate the slope stability of a slope unit, the ratio of the number of calculation cases in which the three-dimensional safety factor is less than 1.0 to the total number of trial calculations is defined as the failure probability of the slope unit. If the failure probability is more than 80%, the slope unit is distinguished as 'unstable' from the other slope units, and the landslide hazard can be mapped for the whole study area.
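
The failure-probability criterion in this record (the share of trial calculations whose 3-D safety factor falls below 1.0, with an 80% threshold for flagging a slope unit as unstable) reduces to a simple Monte Carlo count. The safety-factor distribution below is invented for illustration:

```python
import random

def failure_probability(fs_samples):
    """Failure probability of a slope unit: the fraction of trial
    calculations whose 3-D safety factor is less than 1.0."""
    return sum(1 for fs in fs_samples if fs < 1.0) / len(fs_samples)

random.seed(42)
# Hypothetical slope unit: trial safety factors scattered around 0.9
trials = [random.gauss(0.9, 0.15) for _ in range(10_000)]
p_f = failure_probability(trials)
unstable = p_f > 0.80  # threshold used in the abstract
```

In the study each trial corresponds to one randomly placed ellipsoidal sliding surface; here the trials are drawn from a made-up normal distribution instead.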

  14. Integrating TDEM and MT methods for characterization and delineation of the Santa Catarina aquifer (Chalco Sub-Basin, Mexico)

    Science.gov (United States)

    Krivochieva, Stefi; Chouteau, Michel

    2003-01-01

    Magnetotelluric (MT) and time domain electromagnetic (TDEM) surveys were undertaken in the region of Santa Catarina, located in the Chalco Sub-Basin of the Mexico Basin. The objective was to constrain the geometry of the fresh water aquifer and confirm the continuity of the basaltic flows between the volcano and the sedimentary basin. In order to define the stratification at depth with an emphasis on the geometry of the main aquifer, 11 MT and 5 TDEM soundings were recorded along a north-south profile. Interpretation of the MT soundings shows that the bedrock is located at a depth of at least 800-1000 m. Using TDEM apparent resistivity curves to constrain the high frequency MT data, three main layers were defined overlying the bedrock. These layers are, from surface to bottom, a 20- to 40-m-thick layer of sands, ash and clay, followed by a very conductive 200-m-thick layer of sand and ash, saturated with highly mineralized water and, finally, a zone with gradually increasing resistivities, corresponding to the main aquifer. The TDEM data, the magnetic transfer functions and the 2D MT model also indicate that a shallow resistive structure is dipping, from the northwest, into the lacustrine deposits of the basin. This feature is likely to be highly permeable fractured basaltic flows, evidenced also in one of the water wells. To verify the presence of fractured basalts below the volcano ranges, 38 TDEM soundings were collected on the flanks of the Santa Catarina range. Layered models obtained from the TDEM soundings enabled an assessment of a major conductive zone (1-10 Ω m) at depth. Two hypotheses are envisaged: the nature of this zone is attributed either to a clayey layer or to fractured basaltic flows. If the latter possibility is confirmed, this continuous zone could provide a channel by which the water contaminated by the Santa Catarina landfill may leak into the basin.

  15. Potential of accuracy profile for method validation in inductively coupled plasma spectrochemistry

    Science.gov (United States)

    Mermet, J. M.; Granier, G.

    2012-10-01

    Method validation is usually performed over a range of concentrations for which analytical criteria must be verified. One important criterion in quantitative analysis is accuracy, i.e. the contribution of both trueness and precision. The study of accuracy over this range is called an accuracy profile and provides experimental tolerance intervals. Comparison with acceptability limits fixed by the end user defines a validity domain. This work describes the computation involved in the building of the tolerance intervals, particularly for the intermediate precision with within-laboratory experiments and for the reproducibility with interlaboratory studies. Computation is based on ISO 5725-4 and on previously published work. Moreover, the bias uncertainty is also computed to verify the bias contribution to accuracy. The various types of accuracy profile behavior are exemplified with results obtained by using ICP-MS and ICP-AES. This procedure allows the analyst to define unambiguously a validity domain for a given accuracy. However, because the experiments are time-consuming, the accuracy profile method is mainly dedicated to method validation.
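
The tolerance-interval computation behind an accuracy profile can be sketched at one concentration level. The sketch below uses a plain normal approximation rather than the full ISO 5725-4 calculation (which also propagates between-series variance), with made-up replicate data and a hypothetical ±10% acceptability limit:

```python
from statistics import NormalDist, mean, stdev

def tolerance_interval(measured, nominal, beta=0.90):
    """Approximate beta-expectation tolerance interval, in % relative error,
    at one concentration level. Normal approximation only; the published
    procedure is more elaborate (ISO 5725-4-based)."""
    rel_err = [100.0 * (m - nominal) / nominal for m in measured]
    k = NormalDist().inv_cdf(0.5 + beta / 2.0)  # two-sided coverage factor
    m, s = mean(rel_err), stdev(rel_err)
    return m - k * s, m + k * s

def within_acceptability(interval, limit_pct=10.0):
    """A level belongs to the validity domain when the whole tolerance
    interval lies inside the acceptability limits fixed by the end user."""
    lo, hi = interval
    return -limit_pct <= lo and hi <= limit_pct

# Made-up ICP replicates at a nominal 50 ug/L level
reps = [49.1, 50.4, 50.9, 49.8, 50.2, 49.5]
lo, hi = tolerance_interval(reps, 50.0)
valid = within_acceptability((lo, hi))
```

Repeating this per concentration level and joining the interval endpoints gives the accuracy profile; the validity domain is the concentration range where the profile stays inside the acceptability limits.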

  16. [Influence of interpolation method and sampling number on spatial prediction accuracy of soil Olsen-P].

    Science.gov (United States)

    Sun, Yi-xiang; Wu, Chuan-zhou; Zhu, Ke-bao; Cui, Zhen-ling; Chen, Xin-ping; Zhang, Fu-suo

    2009-03-01

    Different from the large-scale farm management in Europe and America, the scattered farmland management in China makes the evaluation of the spatial variability of soil nutrients at the county scale more challenging. Taking soil Olsen-P in Wuhu County as an example, the influence of interpolation method and sampling number on the spatial prediction accuracy of soil nutrients was evaluated systematically. The results showed that the local polynomial method, ordinary kriging, simple kriging, and disjunctive kriging had higher spatial prediction accuracy than the other interpolation methods. Considering its simplicity, ordinary kriging is recommended for evaluating the spatial variability of soil Olsen-P within a county. The spatial prediction accuracy increased with increasing soil sampling number. Taking both the spatial prediction accuracy and the soil sampling cost into consideration, the optimal sampling number for evaluating the spatial variability of soil Olsen-P at the county scale is between 500 and 1000.
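
The effect of sampling number on spatial prediction accuracy can be illustrated with leave-one-out cross-validation. In this sketch, inverse distance weighting stands in for the kriging interpolators compared in the study, and the "Olsen-P" surface is a synthetic smooth function, so the numbers are illustrative only:

```python
import math
import random

def idw_predict(known, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from sampled points."""
    num = den = 0.0
    for px, py, v in known:
        d = math.hypot(px - x, py - y)
        if d < 1e-12:
            return v  # exactly on a sample point
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(samples):
    """Leave-one-out RMSE: each sample is predicted from all the others."""
    errs = []
    for i, (x, y, v) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        errs.append(idw_predict(rest, x, y) - v)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def field(x, y):
    # synthetic smooth "Olsen-P" surface, mg/kg (not real Wuhu County data)
    return 20 + 10 * math.sin(x / 20) + 5 * math.cos(y / 30)

def draw(n):
    pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)]
    return [(x, y, field(x, y)) for x, y in pts]

random.seed(1)
rmse_50, rmse_500 = loo_rmse(draw(50)), loo_rmse(draw(500))
```

On this smooth synthetic surface the denser sampling yields a lower cross-validation error, mirroring the study's finding that prediction accuracy increases with sampling number.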

  17. Comparison of whole genome prediction accuracy across generations using parametric and semi parametric methods

    Directory of Open Access Journals (Sweden)

    Abbas Atefi

    2016-11-01

    Full Text Available The accuracy of genomic prediction was compared using three parametric and semi-parametric methods, namely BayesA, Bayesian LASSO and reproducing kernel Hilbert spaces regression, under various levels of heritability (0.15, 0.3 and 0.45), different numbers of markers (500, 750 and 1000) and different generation intervals of the validation set. A historical population of 1000 individuals with an equal sex ratio was simulated for 100 generations at constant size, followed by 100 extra generations of gradually decreasing size, down to 500 individuals in generation 200. Individuals of generation 200 were mated randomly for 10 more generations with a litter size of 5 to expand the historical population. Finally, 50 males and 500 females chosen from generation 210 were randomly mated to generate 10 more generations of the recent population. Individuals born in generation 211 were considered the training set, while the validation set was composed of individuals from generation 213, 215 or 217. The genome comprised one chromosome of 100 cM length carrying 50 QTLs. There was no significant difference between the accuracies of the investigated methods (p > 0.05), but among the three methods the highest mean accuracy (0.659) was observed for BayesA. With increasing heritability, the average genomic accuracy increased from 0.53 to 0.75 (p < 0.05). The number of SNPs affected the accuracy: accuracies increased as the number of SNPs increased, so the highest accuracy was obtained with 1000 SNPs. As the validation set moved further, in generations, from the training set, the accuracies decreased, and the most severe decay was observed in the case of low heritability. The decrease in accuracy across generations was affected by marker density but was independent of the investigated methods.

  18. Detection of the heterogeneous O-glycosylation profile of MT1-MMP expressed in cancer cells by a simple MALDI-MS method.

    Directory of Open Access Journals (Sweden)

    Takuya Shuo

    Full Text Available BACKGROUND: Glycosylation is an important and universal post-translational modification for many proteins, and regulates protein functions. However, simple and rapid methods to analyze glycans on individual proteins have not been available until recently. METHODS/PRINCIPAL FINDINGS: A new technique to analyze glycopeptides in a highly sensitive manner by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS using the liquid matrix 3AQ/CHCA was developed recently and we optimized this technique to analyze a small amount of transmembrane protein separated by SDS-PAGE. We used the MALDI-MS method to evaluate glycosylation status of membrane-type 1 matrix metalloproteinase (MT1-MMP. O-glycosylation of MT1-MMP is reported to modulate its protease activity and thereby to affect cancer cell invasion. MT1-MMP expressed in human fibrosarcoma HT1080 cells was immunoprecipitated and resolved by SDS-PAGE. After in-gel tryptic digestion of the protein, a single droplet of the digest was applied directly to the liquid matrix on a MALDI target plate. Concentration of hydrophilic glycopeptides within the central area occurred due to gradual evaporation of the sample solution, whereas nonglycosylated hydrophobic peptides remained at the periphery. This specific separation and concentration of the glycopeptides enabled comprehensive analysis of the MT1-MMP O-glycosylation. CONCLUSIONS/SIGNIFICANCE: We demonstrate, for the first time, heterogeneous O-glycosylation profile of a protein by a whole protein analysis using MALDI-MS. Since cancer cells are reported to have altered glycosylation of proteins, this easy-to-use method for glycopeptide analysis opens up the possibility to identify specific glycosylation patterns of proteins that can be used as new biomarkers for malignant tumors.

  19. Characterization of Deep Geothermal Energy Resources in Low enthalpy sedimentary basins in Belgium using Electro-Magnetic Methods – CSEM and MT results

    OpenAIRE

    Coppo, Nicolas; DARNET, Mathieu; Harcouet-Menou, Virginie; Wawrzyniak, Pierre; Manzella, Adele; Bretaudeau, François; G. Romano; Lagrou, D.; Girard, Jean-Francois

    2016-01-01

    International audience; Sedimentary basins in Northwest Europe have significant potential for low to medium enthalpy, deep geothermal energy resources. These resources are generally assessed using standard seismic exploration techniques to resolve geological structures. The ElectroMagnetic campaign carried-out in Mol area (Belgium) has shown that despite the presence of high level of industrialization, the resistivity of deep formations (>3km) can be recovered from MT and CSEM methods and hen...

  20. Optimized Protocol for Ramie mtDNA Extraction Based on Sucrose-mediated Sedimentation Method

    Institute of Scientific and Technical Information of China (English)

    侯思名; 翟书华; 岑晓江; 杨晓虹; 窦玉敏; 曾黎琼; 刘飞虎

    2010-01-01

    Ramie tissues contain relatively high levels of secondary metabolites such as phenolics, mucilage and flavonoids, which make the extraction of ramie mtDNA difficult. In this study, a sucrose-mediated sedimentation method was used to extract ramie mtDNA. During extraction, antioxidants were added to the buffer to reduce browning of the DNA, and a high-salt solution was added during the phenol-chloroform/isoamyl alcohol extraction step to remove polysaccharides; filtering through lens paper instead of gauze improved the mtDNA yield. ISSR analysis of the extracted mtDNA showed that supplementing the sucrose-mediated sedimentation method with antioxidants and a high-salt solution gave good results.

  1. Multiple Staggered Mesh Ewald: Boosting the Accuracy of the Smooth Particle Mesh Ewald Method

    CERN Document Server

    Wang, Han; Fang, Jun

    2016-01-01

    The smooth particle mesh Ewald (SPME) method is the standard method for computing electrostatic interactions in molecular simulations. In this work, the multiple staggered mesh Ewald (MSME) method is proposed to boost the accuracy of the SPME method. Unlike the SPME, which achieves higher accuracy by refining the mesh, the MSME improves the accuracy by averaging the standard SPME forces computed on, e.g., $M$ staggered meshes. We prove, from a theoretical perspective, that the MSME is as accurate as the SPME, but uses $M^2$ times fewer mesh points in a certain parameter range. In the complementary parameter range, the MSME is as accurate as the SPME with twice the interpolation order. The theoretical conclusions are numerically validated both by a uniform and uncorrelated charge system, and by a three-point-charge water model that is widely used as a solvent for bio-macromolecules.

  2. Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states

    Directory of Open Access Journals (Sweden)

    Grünewald Stefan

    2011-01-01

    Full Text Available Abstract Background As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Owing to the extensive usage of this method, studying its reconstruction accuracy has become a scientific endeavor in recent years. However, most studies are restricted to 2-state evolutionary models, and a study for higher-state models is needed, since DNA sequences have 4 states and protein sequences even have 20. Results In this paper, the ambiguous and unambiguous reconstruction accuracies of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate the two accuracies iteratively. As the complete binary tree and the comb-shaped tree are the two extremal evolutionary tree topologies with respect to balance, we focus on the reconstruction accuracies on these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that more taxa do not necessarily increase the reconstruction accuracies under 2-state models; this result is also tested under N-state models. Conclusions In a large tree with many leaves, the reconstruction accuracies of using all taxa are sometimes less than those of using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of conservation probability, in which the limiting ambiguous reconstruction accuracy equals the probability of randomly picking a state. The value b decreases with an increasing number of states, and it seems to converge. When the conservation probability is greater than b, the reconstruction accuracies of the Fitch method increase rapidly. The reconstruction
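
The bottom-up pass of the Fitch method discussed in this record is short enough to sketch directly. A minimal implementation for one character on a small rooted binary tree follows; the tree shape, node labels and states are hypothetical:

```python
def fitch(tree, states):
    """Bottom-up pass of the Fitch parsimony method. Returns the state set
    assigned to every node and the minimum number of substitutions.
    `tree` maps each internal node (including 'root') to its two children;
    `states` maps each leaf to its observed character state."""
    count = 0
    sets = {leaf: {s} for leaf, s in states.items()}

    def visit(node):
        nonlocal count
        if node in sets:          # leaf: its singleton state set
            return sets[node]
        left, right = (visit(c) for c in tree[node])
        inter = left & right
        if inter:                  # children agree: keep the intersection
            sets[node] = inter
        else:                      # disagreement: union, one substitution
            sets[node] = left | right
            count += 1
        return sets[node]

    visit('root')
    return sets, count

# Hypothetical 4-leaf comb-shaped tree with a 4-state (DNA) character
tree = {'root': ('n1', 'd'), 'n1': ('n2', 'c'), 'n2': ('a', 'b')}
states = {'a': 'A', 'b': 'A', 'c': 'G', 'd': 'G'}
sets, subs = fitch(tree, states)
```

A reconstruction is ambiguous at a node exactly when its state set contains more than one state; the accuracies studied in the paper measure how often these sets recover the true ancestral state.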

  3. Tank Fire Control Systems Accuracy Assignment Based on Fuzzy Comprehensive Judgment Method

    Institute of Scientific and Technical Information of China (English)

    刘文果; 陈杰; 窦丽华

    2003-01-01

    A method of accuracy assignment based on the fuzzy comprehensive judgment method (FCJM) for tank fire control systems is proposed. From the flow route of the error sources and their respective correlated signals, the transfer functions of several sources are analysed by means of mathematical simulation, and the FCJM is applied to obtain the comprehensive cost factor for each part of the system; combining this with its error sensitivity factor, a mathematical model is built to solve the accuracy assignment problem. Simulation results show that the proposed method can help the designer of a tank fire control system work out an optimal system more efficiently and more economically.

  4. The Matrix Element Method at next-to-leading order accuracy

    CERN Document Server

    Martini, Till

    2015-01-01

    The Matrix Element Method (MEM) has proven beneficial for making maximal use of the information available in experimental data. However, so far it has mostly been used in the Born approximation only. In this paper we discuss an extension to NLO accuracy. As a prerequisite, we present an efficient method to calculate event weights for jet events at NLO accuracy. As an illustration and proof of concept, we apply the method to the extraction of the top-quark mass in e+e- annihilation. We observe significant differences when moving from LO to NLO, which may be relevant for the interpretation of top-quark mass measurements at hadron colliders relying on the MEM.

  5. Relative accuracy of three common methods of parentage analysis in natural populations.

    Science.gov (United States)

    Harrison, Hugo B; Saenz-Agudelo, Pablo; Planes, Serge; Jones, Geoffrey P; Berumen, Michael L

    2013-02-01

    Parentage studies and family reconstructions have become increasingly popular for investigating a range of evolutionary, ecological and behavioural processes in natural populations. However, a number of different assignment methods have emerged in common use and the accuracy of each may differ in relation to the number of loci examined, allelic diversity, incomplete sampling of all candidate parents and the presence of genotyping errors. Here, we examine how these factors affect the accuracy of three popular parentage inference methods (colony, famoz and an exclusion-Bayes' theorem approach by Christie (Molecular Ecology Resources, 2010a, 10, 115)) to resolve true parent-offspring pairs using simulated data. Our findings demonstrate that accuracy increases with the number and diversity of loci. These were clearly the most important factors in obtaining accurate assignments explaining 75-90% of variance in overall accuracy across 60 simulated scenarios. Furthermore, the proportion of candidate parents sampled had a small but significant impact on the susceptibility of each method to either false-positive or false-negative assignments. Within the range of values simulated, colony outperformed FaMoz, which outperformed the exclusion-Bayes' theorem method. However, with 20 or more highly polymorphic loci, all methods could be applied with confidence. Our results show that for parentage inference in natural populations, careful consideration of the number and quality of markers will increase the accuracy of assignments and mitigate the effects of incomplete sampling of parental populations.
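
The exclusion logic underlying the compared parentage methods (a true parent must share at least one allele with the offspring at every locus, with a small allowance for genotyping error) can be sketched as follows; the genotypes are invented:

```python
def compatible(parent, offspring, max_mismatch=0):
    """Exclusion-based parentage check. Each genotype is a list of
    (allele, allele) pairs, one per locus. A candidate is compatible when
    it shares at least one allele with the offspring at every locus;
    `max_mismatch` loci may fail to absorb genotyping error."""
    mismatches = sum(1 for p, o in zip(parent, offspring)
                     if not set(p) & set(o))
    return mismatches <= max_mismatch

# Hypothetical three-locus genotypes
offspring   = [(1, 2), (3, 3), (2, 4)]
candidate_a = [(2, 2), (3, 4), (4, 5)]  # shares an allele at every locus
candidate_b = [(3, 4), (3, 4), (1, 3)]  # excluded at loci 1 and 3
```

With more, and more polymorphic, loci, the chance that an unrelated candidate passes this check by luck shrinks rapidly, which is why marker number and diversity dominate accuracy in the study.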

  6. Relative accuracy of three common methods of parentage analysis in natural populations

    KAUST Repository

    Harrison, Hugo B.

    2012-12-27

    Parentage studies and family reconstructions have become increasingly popular for investigating a range of evolutionary, ecological and behavioural processes in natural populations. However, a number of different assignment methods have emerged in common use and the accuracy of each may differ in relation to the number of loci examined, allelic diversity, incomplete sampling of all candidate parents and the presence of genotyping errors. Here, we examine how these factors affect the accuracy of three popular parentage inference methods (colony, famoz and an exclusion-Bayes' theorem approach by Christie (Molecular Ecology Resources, 2010a, 10, 115)) to resolve true parent-offspring pairs using simulated data. Our findings demonstrate that accuracy increases with the number and diversity of loci. These were clearly the most important factors in obtaining accurate assignments explaining 75-90% of variance in overall accuracy across 60 simulated scenarios. Furthermore, the proportion of candidate parents sampled had a small but significant impact on the susceptibility of each method to either false-positive or false-negative assignments. Within the range of values simulated, colony outperformed FaMoz, which outperformed the exclusion-Bayes' theorem method. However, with 20 or more highly polymorphic loci, all methods could be applied with confidence. Our results show that for parentage inference in natural populations, careful consideration of the number and quality of markers will increase the accuracy of assignments and mitigate the effects of incomplete sampling of parental populations. © 2012 Blackwell Publishing Ltd.

  7. Accuracy of two geocoding methods for geographic information system-based exposure assessment in epidemiological studies.

    Science.gov (United States)

    Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice

    2017-02-24

    Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free-online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi Square tests. Kappa statistics were performed to assess agreement of positional accuracy of both methods A and B with the reference method, overall, by time-periods and by urban/rural status of addresses. Respectively 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%) with methods A and B. In the reference layer, geocoding accuracy was higher in urban areas compared to rural areas (74.4% vs. 10.5% addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of method A and method B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61 for methods A and B, respectively. Our study demonstrates the feasibility of geocoding
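
The positional-error statistics reported in this record (median error, share of addresses within 100 m) can be computed from geocoded and reference coordinates with a haversine distance. The coordinate pairs below are invented, not from the E3N cohort:

```python
import math
from statistics import median

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points
    (spherical Earth approximation, radius 6371 km)."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def positional_errors(auto, reference):
    """Per-address positional error of an automatic geocoder against
    manually relocated reference coordinates."""
    return [haversine_m(a[0], a[1], r[0], r[1]) for a, r in zip(auto, reference)]

# Hypothetical automatic vs reference coordinates (Rhone-Alpes area)
auto = [(45.7640, 4.8357), (45.7500, 4.8500), (45.7800, 4.8000)]
ref  = [(45.7640, 4.8357), (45.7504, 4.8502), (45.7795, 4.8010)]
errs = positional_errors(auto, ref)
med = median(errs)
share_lt_100 = sum(e < 100 for e in errs) / len(errs)
```

Tabulating `med` and `share_lt_100` separately for urban and rural addresses reproduces the kind of comparison the study reports.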

  8. Diagnostic accuracy of existing methods for identifying diabetic foot ulcers from inpatient and outpatient datasets

    Directory of Open Access Journals (Sweden)

    Budiman-Mak Elly

    2010-11-01

    Full Text Available Abstract Background As the number of persons with diabetes is projected to double in the next 25 years in the US, an accurate method of identifying diabetic foot ulcers in population-based data sources is ever more important for disease surveillance and public health purposes. The objectives of this study are to evaluate the accuracy of existing methods and to propose a new method. Methods Four existing methods were used to identify all patients diagnosed with a foot ulcer in a Department of Veterans Affairs (VA) hospital from the inpatient and outpatient datasets for 2003. Their electronic medical records were reviewed to verify whether the medical records positively indicate the presence of a diabetic foot ulcer in diagnoses, medical assessments, or consults. For each method, five measures of accuracy and agreement were evaluated using data from medical records as the gold standard. Results Our medical record reviews show that all methods had sensitivity > 92%, but their specificity varied substantially, between 74% and 91%. A method used in Harrington et al. (2004) was the most accurate, with 94% sensitivity and 91% specificity, and produced an annual prevalence of 3.3% among VA users with diabetes nationwide. A new and simpler method consisting of two codes (707.1x and 707.9) shows equally good accuracy, with 93% sensitivity and 91% specificity, and 3.1% prevalence. Conclusions Our results indicate that the Harrington and New methods are highly comparable and accurate. We recommend the Harrington method for its accuracy and the New method for its simplicity and comparable accuracy.
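
The sensitivity and specificity evaluated in this study reduce to set arithmetic once chart review provides the gold standard. The patient identifiers and miss/false-flag counts below are synthetic, not VA data:

```python
def diagnostic_accuracy(flagged, gold, population):
    """Sensitivity and specificity of a code-based case definition against
    chart review. `flagged` = patients matched by the code set (e.g. the
    two-code 707.1x / 707.9 definition), `gold` = patients whose records
    confirm a diabetic foot ulcer, `population` = all patients reviewed."""
    tp = len(flagged & gold)                 # correctly flagged cases
    fn = len(gold - flagged)                 # missed true cases
    fp = len(flagged - gold)                 # falsely flagged non-cases
    tn = len(population - flagged - gold)    # correctly unflagged
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic cohort: 100 patients, 20 true ulcer cases;
# the code set misses 2 cases and falsely flags 4 non-cases
population = set(range(100))
gold = set(range(20))
flagged = set(range(2, 24))
sens, spec = diagnostic_accuracy(flagged, gold, population)
```

Here the synthetic code set reaches 90% sensitivity and 95% specificity, the same two measures the study compares across its four candidate definitions.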

  9. Study on the accuracy of comprehensive evaluating method based on fuzzy set theory

    Institute of Scientific and Technical Information of China (English)

    Xu Weixiang; Liu Xumin

    2005-01-01

    The evaluation of complex systems and the accuracy of evaluation methods are considered. To evaluate complex systems accurately, existing evaluation methods are briefly analyzed and a new comprehensive evaluation method is developed. The new method, DHGF, integrates the Delphi approach, the analytic hierarchy process, grey interconnect degree and fuzzy evaluation. Its theoretical foundation is the meta-synthesis methodology from qualitative to quantitative analysis. Following the fuzzy set approach, the accuracy of the DHGF method is estimated using concordance of evaluations, redundancy verification and dual-model redundancy, taking the method's limitations into account, and a practical example is given. The result shows that using the method to evaluate complex system projects is feasible and credible.
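The abstract names the DHGF components without detail. As a hedged illustration of just the final fuzzy-evaluation step (not the Delphi, AHP or grey-relation stages that would produce the weights), a weighted composition of an invented weight vector with an invented membership matrix might look like:

```python
import numpy as np

# Minimal sketch of a fuzzy comprehensive evaluation step.
# Weights w over 3 criteria and a fuzzy relation matrix R mapping each
# criterion to 4 judgment grades (e.g. poor/fair/good/excellent) are invented.

w = np.array([0.5, 0.3, 0.2])            # criterion weights (from AHP, say)
R = np.array([[0.1, 0.2, 0.5, 0.2],      # criterion 1 membership over grades
              [0.0, 0.3, 0.4, 0.3],
              [0.2, 0.2, 0.4, 0.2]])

# Weighted-average operator M(*, +): b_j = sum_i w_i * R_ij
b = w @ R
b = b / b.sum()                          # normalize to a fuzzy grade vector
best_grade = int(np.argmax(b))
print("grade memberships:", np.round(b, 3), "-> grade", best_grade)
```

The max-membership grade is then reported as the overall evaluation result.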

  10. Multi-grid finite element method used for enhancing the reconstruction accuracy in Cerenkov luminescence tomography

    Science.gov (United States)

    Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie

    2017-03-01

    Cerenkov luminescence tomography (CLT), a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid remains a vital factor restricting the accuracy of the CLT reconstruction result. In this paper, we propose a multi-grid finite element method framework that is able to improve the accuracy of reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, was applied to improve the reconstruction accuracy. In numerical simulation experiments, the feasibility of our proposed method was evaluated. Results showed that the multi-grid strategy could obtain 3D spatial information of the Cerenkov source more accurately compared with the traditional single-grid FEM.

  11. INCREASING MEASUREMENT ACCURACY IN ELECTRO-OPTICAL METHOD FOR MEASURING VELOCITY OF DETONATION

    Directory of Open Access Journals (Sweden)

    Mario Dobrilović

    2014-12-01

    Full Text Available In addition to other detonation parameters, detonation velocity is a value that provides indirect information on the strength, i.e. brisance, of an explosive and on explosive performance. Moreover, detonation velocity is a value that can be measured in a relatively simple and precise manner by developed and accessible methods, compared to other detonation parameters. Due to its simple use, compact instruments and satisfactory accuracy, the electro-optical method of detonation velocity measurement is widely used. The paper describes the electro-optical measurement method and points out the factors that affect its accuracy. The accuracy of measurement is increased and measurement uncertainty reduced by analysis of the measurement results obtained with different measurement setups.
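The core of such a point-probe measurement reduces to distance over time difference, with the measurement uncertainty following from first-order propagation. A sketch with invented numbers (probe spacing, timing, and their standard uncertainties are illustrative, not values from the paper):

```python
import math

# Detonation velocity from two optical probes a known distance apart:
# v = d / dt, with propagated relative uncertainty
# u(v)/v = sqrt((u(d)/d)^2 + (u(dt)/dt)^2).

def detonation_velocity(d_m, dt_s, sigma_d_m=0.0, sigma_t_s=0.0):
    """Velocity d/dt plus first-order propagated standard uncertainty."""
    v = d_m / dt_s
    rel = math.sqrt((sigma_d_m / d_m) ** 2 + (sigma_t_s / dt_s) ** 2)
    return v, v * rel

v, u = detonation_velocity(d_m=0.100, dt_s=16.7e-6,
                           sigma_d_m=0.2e-3, sigma_t_s=0.05e-6)
print(f"v = {v:.0f} m/s ± {u:.0f} m/s")
```

The example shows why timing resolution dominates at short probe spacings: the relative timing term grows as dt shrinks.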

  12. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    Science.gov (United States)

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" (3dMD) and a "structured light" facial scanner (FaceScan) separately. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of face) PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), they all met the requirement for oral clinic use.
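A simplified sketch of what a "3D error" figure measures after registration: average each test point's distance to its nearest reference point. Geomagic's metric is point-to-surface, so nearest-point distance is only a rough stand-in, and the point sets below are synthetic:

```python
import numpy as np

# Mean nearest-point distance between a test scan and a reference scan,
# brute force (fine for a few hundred points; real scans use k-d trees).

def mean_nearest_distance(test_pts, ref_pts):
    # Pairwise distances: shape (n_test, n_ref)
    diff = test_pts[:, None, :] - ref_pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
ref = rng.uniform(0, 100, size=(500, 3))          # reference scan (mm)
test = ref + rng.normal(0, 0.3, size=ref.shape)   # test scan, ~0.3 mm noise

err = mean_nearest_distance(test, ref)
print(f"mean 3D error ≈ {err:.2f} mm")
```

With isotropic 0.3 mm noise per axis, the mean point-to-point distance comes out near 0.5 mm, the same order as the accuracies reported in the abstract.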

  14. Scale effect and methods for accuracy evaluation of attribute information loss in rasterization

    Institute of Scientific and Technical Information of China (English)

    BAI Yan; LIAO Shunbao; SUN Jiulin

    2011-01-01

    Rasterization is a conversion process accompanied by information loss, including loss of features' shape, structure, position, attribute and so on. Two chief factors that affect the estimation of attribute accuracy loss in rasterization are grid cell size and evaluation method. That is, attribute accuracy loss in rasterization is closely related to grid cell size and is also influenced by the evaluation method, so it is significant to analyze these two influencing factors comprehensively. Taking land cover data of Sichuan at the scale of 1:250,000 in 2005 as a case, and in view of the data volume and processing time for the study region, this study selects 16 spatial scales from 600 m to 30 km, uses the rasterization method based on the Rule of Maximum Area (RMA) in ArcGIS and two evaluation methods for attribute accuracy loss, the Normal Analysis Method (NAM) and a new Method Based on Grid Cell (MBGC), and comparatively analyzes the scale effect of attribute (here, area) accuracy loss at the 16 different scales with these two evaluation methods. The results show that: (1) At the same scale, the average area accuracy loss of the entire study region evaluated by MBGC is significantly larger than the one estimated using NAM. Moreover, this discrepancy between the two is obvious in the range of 1 km to 10 km. When the grid cell is larger than 10 km, the average area accuracy losses calculated by the two evaluation methods are stable, and even tend to be parallel. (2) MBGC can not only estimate RMA rasterization attribute accuracy loss accurately, but also express the spatial distribution of the loss objectively. (3) The suitable scale domain for RMA rasterization of the land cover data of Sichuan at the scale of 1:250,000 in 2005 is 800 m or less, at which the data volume is favorable, the processing time is not too long, and the area accuracy loss is less than 2.5%.
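The Rule of Maximum Area and the resulting per-class area loss can be sketched on toy data. Here a fine categorical grid stands in for the vector land-cover map, coarse cells take the majority class, and the per-class area-fraction change is the loss (all data synthetic):

```python
import numpy as np

# Toy sketch of area loss under majority-rule (RMA-style) rasterization.
rng = np.random.default_rng(42)
fine = rng.integers(0, 3, size=(120, 120))   # 3 land-cover classes
block = 6                                    # coarse cell = 6x6 fine cells

h, w = fine.shape
coarse = np.empty((h // block, w // block), dtype=int)
for i in range(h // block):
    for j in range(w // block):
        cell = fine[i*block:(i+1)*block, j*block:(j+1)*block]
        coarse[i, j] = np.bincount(cell.ravel(), minlength=3).argmax()

fine_area = np.bincount(fine.ravel(), minlength=3) / fine.size
coarse_area = np.bincount(coarse.ravel(), minlength=3) / coarse.size
loss = np.abs(coarse_area - fine_area)
print("fine area fractions:  ", np.round(fine_area, 3))
print("coarse area fractions:", np.round(coarse_area, 3))
print("abs. area loss/class: ", np.round(loss, 3))
```

Repeating this for a range of block sizes reproduces the kind of scale-effect curve the study analyzes; a cell-by-cell comparison instead of region totals corresponds to the MBGC idea.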

  15. An Universal Modeling Method for Enhancing the Volumetric Accuracy of CNC Machine Tools

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Volumetric error modeling is an important technique for enhancing the accuracy of CNC machine tools by error compensation. In this research field, the main question is how to find a universal kinematic modeling method for different kinds of NC machine tools. Multi-body system theory is often used to solve the dynamics problem of complex physical systems, but until now the error factors that always exist in practical systems have not been considered. In this paper, the accuracy kinematics of MB...

  16. Estimation of the Accuracy of Method for Quantitative Determination of Volatile Compounds in Alcohol Products

    CERN Document Server

    Charepitsa, S V; Zadreyko, Y V; Sytova, S N

    2016-01-01

    Results of the estimation of the precision of a method for the determination of volatile compounds in alcohol-containing products by gas chromatography (acetaldehyde, methyl acetate, ethyl acetate, methanol, isopropyl alcohol, propyl alcohol, isobutyl alcohol, butyl alcohol, isoamyl alcohol) are presented. To determine the accuracy, measurements were planned in accordance with ISO 5725 and performed on the Crystal-5000 gas chromatograph. The standard deviations of repeatability and intermediate precision, and their limits, were derived from the obtained experimental data. The uncertainty of the measurements was calculated on the basis of an "empirical" method. The obtained values indicate that the developed method yields an expanded measurement uncertainty of 2 to 20%, depending on the analyzed compound and the measured concentration.
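The ISO 5725-style precision figures can be sketched on invented replicate data (say, repeated GC injections of one compound on three days, in mg/L). The total-scatter figure below is a simple stand-in for the ANOVA-based intermediate precision of the standard:

```python
import numpy as np

# Repeatability sd = pooled within-series scatter;
# intermediate precision ≈ total scatter across series (simplified).
runs = np.array([[101.2, 100.8, 101.5, 100.9],   # day 1 replicates
                 [ 99.7, 100.1,  99.9, 100.4],   # day 2
                 [100.9, 101.3, 100.6, 101.1]])  # day 3

within_var = runs.var(axis=1, ddof=1)            # per-day variance
s_r = np.sqrt(within_var.mean())                 # repeatability sd (pooled)
s_I = runs.std(ddof=1)                           # intermediate precision sd
print(f"repeatability sd = {s_r:.3f}, intermediate precision sd = {s_I:.3f}")
```

As expected, the intermediate precision exceeds the repeatability, since it also absorbs day-to-day variation.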

  17. Studies of the accuracy of time integration methods for reaction-diffusion equations

    Science.gov (United States)

    Ropp, David L.; Shadid, John N.; Ober, Curtis C.

    2004-03-01

    In this study we present numerical experiments of time integration methods applied to systems of reaction-diffusion equations. Our main interest is in evaluating the relative accuracy and asymptotic order of accuracy of the methods on problems which exhibit an approximate balance between the competing component time scales. Nearly balanced systems can produce a significant coupling of the physical mechanisms and introduce a slow dynamical time scale of interest. These problems provide a challenging test for this evaluation and tend to reveal subtle differences between the various methods. The methods we consider include first- and second-order semi-implicit, fully implicit, and operator-splitting techniques. The test problems include a prototype propagating nonlinear reaction-diffusion wave, a non-equilibrium radiation-diffusion system, a Brusselator chemical dynamics system and a blow-up example. In this evaluation we demonstrate a "split personality" for the operator-splitting methods that we consider. While operator-splitting methods often obtain very good accuracy, they can also manifest a serious degradation in accuracy due to stability problems.
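As a minimal illustration of how the order of accuracy of a splitting scheme is measured (this is not one of the paper's test problems), consider first-order Lie splitting of a scalar equation with a linear "diffusion-like" term and a nonlinear "reaction" term. Both substeps are solved exactly, so all error is splitting error, and a closed-form reference solution exists via the Bernoulli substitution v = 1/u. All coefficients are illustrative:

```python
import math

# u' = -a*u - b*u**2, split into the linear part and the reaction part.
a, b, u0, T = 1.0, 2.0, 1.0, 1.0

def exact(t):
    # Bernoulli substitution: v = 1/u solves v' = a*v + b.
    v = (1.0 / u0 + b / a) * math.exp(a * t) - b / a
    return 1.0 / v

def lie_split(n_steps):
    h, u = T / n_steps, u0
    for _ in range(n_steps):
        u = u * math.exp(-a * h)       # exact linear substep
        u = u / (1.0 + b * h * u)      # exact substep of u' = -b*u**2
    return u

errs = [abs(lie_split(n) - exact(T)) for n in (20, 40, 80)]
orders = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
print("errors:", errs, "observed orders:", orders)
```

Halving the step size roughly halves the error, confirming first order; the degradation the abstract mentions shows up when the substep operators interact badly near stability limits, not in this benign regime.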

  18. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy

    OpenAIRE

    Zhao, Yi-jiao; Xiong, Yu-xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in oral clinic was evaluated. Ten patients with a variety of facial deformities from oral clinical were included in the study. For each patient, a three-dimensional (3D) face model was acquired, via a high-accuracy industrial “line-laser” scanner (Faro), as the reference model and two test models were obtained, via a “stereophotography” (3dMD) and a “structured light” facial scanner (FaceScan) ...

  19. A SVD-based method to assess the uniqueness and accuracy of SPECT geometrical calibration.

    Science.gov (United States)

    Ma, Tianyu; Yao, Rutao; Shao, Yiping; Zhou, Rong

    2009-12-01

    Geometrical calibration is critical to obtaining high resolution and artifact-free reconstructed image for SPECT and CT systems. Most published calibration methods use analytical approach to determine the uniqueness condition for a specific calibration problem, and the calibration accuracy is often evaluated through empirical studies. In this work, we present a general method to assess the characteristics of both the uniqueness and the quantitative accuracy of the calibration. The method uses a singular value decomposition (SVD) based approach to analyze the Jacobian matrix from a least-square cost function for the calibration. With this method, the uniqueness of the calibration can be identified by assessing the nonsingularity of the Jacobian matrix, and the estimation accuracy of the calibration parameters can be quantified by analyzing the SVD components. A direct application of this method is that the efficacy of a calibration configuration can be quantitatively evaluated by choosing a figure-of-merit, e.g., the minimum required number of projection samplings to achieve desired calibration accuracy. The proposed method was validated with a slit-slat SPECT system through numerical simulation studies and experimental measurements with point sources and an ultra-micro hot-rod phantom. The predicted calibration accuracy from the numerical studies was confirmed by the experimental point source calibrations at approximately 0.1 mm for both the center of rotation (COR) estimation of a rotation stage and the slit aperture position (SAP) estimation of a slit-slat collimator by an optimized system calibration protocol. The reconstructed images of a hot rod phantom showed satisfactory spatial resolution with a proper calibration and showed visible resolution degradation with artificially introduced 0.3 mm COR estimation error. The proposed method can be applied to other SPECT and CT imaging systems to analyze calibration method assessment and calibration protocol
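The SVD diagnostic described above can be sketched generically. Below, a toy two-parameter linear model stands in for the SPECT geometry model (the paper's Jacobian comes from its least-squares calibration cost function); uniqueness is read off the smallest singular value, and parameter uncertainty is propagated through the SVD components:

```python
import numpy as np

# Jacobian of residuals w.r.t. parameters for a stand-in model y = p0 + p1*x.
x = np.linspace(0, 1, 30)
J = np.column_stack([np.ones_like(x), x])

U, s, Vt = np.linalg.svd(J, full_matrices=False)
identifiable = s.min() > 1e-8 * s.max()     # nonsingular -> unique solution

# For unit measurement noise, cov(p) = V diag(1/s^2) V^T.
cov = (Vt.T * (1.0 / s**2)) @ Vt
param_sd = np.sqrt(np.diag(cov))
print("singular values:", np.round(s, 3),
      "| identifiable:", bool(identifiable),
      "| param sd:", np.round(param_sd, 3))
```

A near-zero singular value would flag a non-unique calibration (e.g. too few projection samplings), and the 1/s² scaling shows directly how weakly constrained parameter combinations inflate estimation error.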

  20. Metallothionein (MT)-III

    DEFF Research Database (Denmark)

    Carrasco, J; Giralt, M; Molinero, A

    1999-01-01

    Metallothionein-III is a low molecular weight, heavy-metal binding protein expressed mainly in the central nervous system. First identified as a growth inhibitory factor (GIF) of rat cortical neurons in vitro, it has subsequently been shown to be a member of the metallothionein (MT) gene family...... and renamed as MT-III. In this study we have raised polyclonal antibodies in rabbits against recombinant rat MT-III (rMT-III). The sera obtained reacted specifically against recombinant zinc-and cadmium-saturated rMT-III, and did not cross-react with native rat MT-I and MT-II purified from the liver of zinc...... injected rats. The specificity of the antibody was also demonstrated in immunocytochemical studies by the elimination of the immunostaining by preincubation of the antibody with brain (but not liver) extracts, and by the results obtained in MT-III null mice. The antibody was used to characterize...

  1. Height Accuracy Based on Different Rtk GPS Method for Ultralight Aircraft Images

    Science.gov (United States)

    Tahar, K. N.

    2015-08-01

    Height accuracy is one of the important elements in surveying work, especially for control point establishment, which requires accurate measurement. Many methods can be used to acquire height values, such as tacheometry, leveling and the Global Positioning System (GPS). This study investigated the effect on height accuracy of two different observation modes: single-based and network-based GPS methods. The GPS network corrections are acquired from a local network, namely the Iskandar network. This network has been set up to provide real-time correction data to the rover GPS station, while the single-based mode relies on a known GPS station. Nine ground control points were established evenly across the study area. Each ground control point was observed for about two and ten minutes. It was found that the height accuracy differs between the observation modes.
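The comparison reduces to summarizing height residuals at the nine control points for each mode, typically as an RMSE. A sketch with invented residuals (in metres; these are not the study's values):

```python
import math

def rmse(residuals):
    """Root-mean-square error of height differences vs. known heights."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical height residuals at nine ground control points.
single_based  = [0.042, -0.035, 0.051, -0.048, 0.030, -0.044, 0.039, -0.052, 0.046]
network_based = [0.021, -0.018, 0.025, -0.022, 0.016, -0.024, 0.019, -0.026, 0.020]

print(f"single-based RMSE  = {rmse(single_based):.3f} m")
print(f"network-based RMSE = {rmse(network_based):.3f} m")
```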

  2. Effects of CT image segmentation methods on the accuracy of long bone 3D reconstructions.

    Science.gov (United States)

    Rathnayaka, Kanchana; Sahama, Tony; Schuetz, Michael A; Schmutz, Beat

    2011-03-01

    An accurate and accessible image segmentation method is in high demand for generating 3D bone models from CT scan data, as such models are required in many areas of medical research. Even though numerous sophisticated segmentation methods have been published over the years, most of them are not readily available to the general research community. Therefore, this study aimed to quantify the accuracy of three popular image segmentation methods, two implementations of intensity thresholding and Canny edge detection, for generating 3D models of long bones. In order to reduce user dependent errors associated with visually selecting a threshold value, we present a new approach of selecting an appropriate threshold value based on the Canny filter. A mechanical contact scanner in conjunction with a microCT scanner was utilised to generate the reference models for validating the 3D bone models generated from CT data of five intact ovine hind limbs. When the overall accuracy of the bone model is considered, the three investigated segmentation methods generated comparable results with mean errors in the range of 0.18-0.24 mm. However, for the bone diaphysis, Canny edge detection and Canny filter based thresholding generated 3D models with a significantly higher accuracy compared to those generated through visually selected thresholds. This study demonstrates that 3D models with sub-voxel accuracy can be generated utilising relatively simple segmentation methods that are available to the general research community.
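The paper's Canny-based threshold selection is not spelled out in the abstract. As a rough stand-in for the idea, one can take the strongest-gradient pixels (a crude proxy for Canny edge pixels) and use the mean intensity there, i.e. the intensity at the bone boundary, as the threshold. The synthetic "CT slice" below is a bright disc on a dark background:

```python
import numpy as np

# Edge-based threshold choice on a synthetic slice (values are arbitrary
# CT-like units; real Canny adds smoothing, non-max suppression, hysteresis).
yy, xx = np.mgrid[0:128, 0:128]
img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2, 1200.0, 100.0)
img += np.random.default_rng(0).normal(0, 20, img.shape)   # noise

gy, gx = np.gradient(img)
gmag = np.hypot(gx, gy)
edge_mask = gmag > np.percentile(gmag, 99)   # ~1% strongest gradients
threshold = img[edge_mask].mean()            # mean intensity at the boundary
segmented = img >= threshold
print(f"selected threshold ≈ {threshold:.0f}")
```

Because edge pixels straddle the bone/soft-tissue step, their mean intensity lands between the two plateaus, removing the user-dependence of a visually chosen threshold.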

  3. Factors affecting the accuracy of endoscopic transpapillary sampling methods for bile duct cancer.

    Science.gov (United States)

    Nishikawa, Takao; Tsuyuguchi, Toshio; Sakai, Yuji; Sugiyama, Harutoshi; Tawada, Katsunobu; Mikata, Rintaro; Tada, Motohisa; Ishihara, Takeshi; Miyazaki, Masaru; Yokosuka, Osamu

    2014-03-01

    Various methods for endoscopic transpapillary sampling have been developed. However, the factors affecting the accuracy of these methods for bile duct cancer are unknown. The aim of the present study was to determine the factors affecting the accuracy of endoscopic transpapillary sampling methods. We reviewed the results from 101 patients with bile duct cancer who underwent transpapillary sampling by aspiration bile cytology, brushing cytology, and fluoroscopic forceps biopsy. The final diagnosis of bile duct cancer was made on the basis of pathological evaluation of specimens obtained at surgery and the clinical course over at least 1 year in patients not operated on. We carried out subgroup analyses of the factors affecting the accuracy of each transpapillary sampling method. Aspiration bile cytology was carried out 238 times in 77 patients, brushing cytology 67 times in 60 patients, and fluoroscopic forceps biopsy 64 times in 53 patients. Accuracy of aspiration bile cytology was significantly higher for longer (≥15 mm) biliary cancerous lesions than for shorter ones. Overall, the sampling methods are more accurate for longer or elevated (non-flat) biliary cancerous lesions than for shorter or flat lesions. © 2013 The Authors. Digestive Endoscopy © 2013 Japan Gastroenterological Endoscopy Society.

  4. The Second—Order Imaginary Plane Method and Its Calculation Accuracy

    Institute of Scientific and Technical Information of China (English)

    DongPeng; ChenXiaodong; 等

    1997-01-01

    In this paper, the basic principles and mathematical process of the second-order Imaginary Plane Method (IPM) for modeling radiative heat transfer are analysed and proved in detail. Through numerical computation of a real radiative heat transfer process using the second-order IPM, the previous (first-order) IPM, the Analytic Method and the Zone Method, it was shown that the calculation accuracy of the second-order IPM is much higher than that of the first-order IPM, while its demands on computer capacity and computation time are much lower than those of the Zone Method. It is verified that the method is more effective and has a higher accuracy for modeling radiative heat transfer in engineering.

  5. The effect of atmospheric and topographic correction methods on land cover classification accuracy

    Science.gov (United States)

    Vanonckelen, Steven; Lhermitte, Stefaan; Van Rompaey, Anton

    2013-10-01

    Mapping of vegetation in mountain areas based on remote sensing is obstructed by atmospheric and topographic distortions. A variety of atmospheric and topographic correction methods has been proposed to minimize these effects and should in principle lead to better land cover classification. Only a limited number of atmospheric and topographic combinations has been tested, and the effect on class accuracy and on different illumination conditions has not yet been researched extensively. The purpose of this study was to evaluate the effect of coupled correction methods on land cover classification accuracy. Therefore, all combinations of three atmospheric corrections (no atmospheric correction, dark object subtraction and correction based on transmittance functions) and five topographic corrections (no topographic correction, band ratioing, cosine correction, pixel-based Minnaert and pixel-based C-correction) were applied to two acquisitions (2009 and 2010) of a Landsat image in the Romanian Carpathian mountains. The accuracies of the fifteen resulting land cover maps were evaluated statistically based on two validation sets: a random validation set and a validation subset containing pixels present in the difference area between the uncorrected classification and one of the fourteen corrected classifications. New insights into the differences in classification accuracy were obtained. First, results showed that all corrected images resulted in higher overall classification accuracies than the uncorrected images. The highest accuracy for the full validation set was achieved after combination of an atmospheric correction based on transmittance functions and a pixel-based Minnaert topographic correction. Secondly, class accuracies of especially the coniferous and mixed forest classes were enhanced after correction. There was only a minor improvement for the other land cover classes (broadleaved forest, bare soil, grass and water). This was explained by the position
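Two of the topographic corrections named above can be written down directly for a single pixel. A minimal sketch with an invented reflectance value and angles, using the common simplified (non-slope-aware) form of the Minnaert correction:

```python
import math

L = 0.28                   # observed reflectance (invented)
sz = math.radians(40.0)    # solar zenith angle
i  = math.radians(60.0)    # local illumination angle (slope-dependent)
k  = 0.5                   # Minnaert constant (normally fitted per band)

# Cosine correction: L_h = L * cos(sz) / cos(i)
L_cosine = L * math.cos(sz) / math.cos(i)

# Simplified Minnaert correction: L_h = L * (cos(sz) / cos(i)) ** k
L_minnaert = L * (math.cos(sz) / math.cos(i)) ** k

print(f"cosine: {L_cosine:.3f}, Minnaert: {L_minnaert:.3f}")
```

For a poorly lit slope (i > sz) both corrections brighten the pixel; the Minnaert exponent k < 1 damps the correction, which is why it tends to over-correct less than the pure cosine form.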

  6. Interlaboratory diagnostic accuracy of a Salmonella specific PCR-based method

    DEFF Research Database (Denmark)

    Malorny, B.; Hoorfar, Jeffrey; Hugas, M.;

    2003-01-01

    A collaborative study involving four European laboratories was conducted to investigate the diagnostic accuracy of a Salmonella specific PCR-based method, which was evaluated within the European FOOD-PCR project (http://www.pcr.dk). Each laboratory analysed by the PCR a set of independent obtaine...

  7. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    Science.gov (United States)

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  8. On the accuracy of the finite difference method for applications in beam propagating techniques

    NARCIS (Netherlands)

    Hoekstra, Hugo; Krijnen, Gijsbertus J.M.; Lambeck, Paul

    1992-01-01

    In this paper it is shown that the inaccuracy in the beam propagation method based on the finite difference scheme, introduced by the use of the slowly varying envelope approximation, can be overcome in an effective way. By the introduction of a perturbation expansion the accuracy can be improved as

  9. Evaluation of the accuracy of the Multiple Support Response Spectrum (MSRS) method

    DEFF Research Database (Denmark)

    Konakli, Katerina; Der Kiureghian, A.

    2012-01-01

    The MSRS rule is a response spectrum method for analysis of multiply supported structures subjected to spatially varying ground motions. This paper evaluates the accuracy of the MSRS rule by comparing MSRS estimates of mean peak responses with corresponding “exact” mean values obtained by time-hi...

  10. On the Teaching Methods to Balance Fluency & Accuracy for Chinese University English Learners

    Institute of Scientific and Technical Information of China (English)

    CHEN Jia

    2016-01-01

    By dividing English learners in the Chinese university context into intermediate and advanced level learners, the essay argues that fluency rather than accuracy is needed for most university students. It further discusses suitable teaching methods in each context to balance this pair of objectives.

  11. ESTIMATE ACCURACY OF NONLINEAR COEFFICIENTS OF SQUEEZEFILM DAMPER USING STATE VARIABLE FILTER METHOD

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    The estimation model for a nonlinear squeeze-film damper (SFD) system is described. The state variable filter (SVF) method is used to estimate the coefficients of the SFD. The factors critical to the estimation accuracy are discussed.

  12. Accuracy of methods to measure femoral head penetration within metal-backed acetabular components.

    Science.gov (United States)

    Callary, Stuart A; Solomon, Lucian B; Holubowycz, Oksana T; Campbell, David G; Howie, Donald W

    2016-06-30

    A number of different software programs are used to investigate the in vivo wear of polyethylene bearings in total hip arthroplasty. With wear rates below 0.1 mm/year now commonly being reported for highly cross-linked polyethylene (XLPE) components, it is important to identify the accuracy of the methods used to measure such small movements. The aims of this study were to compare the accuracy of current software programs used to measure two-dimensional (2D) femoral head penetration (FHP) and to determine whether the accuracy is influenced by larger femoral heads or by different methods of representing the acetabular component within radiostereometric analysis (RSA). A hip phantom was used to compare known movements of the femoral head within a metal-backed acetabular component to FHP measured radiographically using RSA, Hip Analysis Suite (HAS), PolyWare, Ein Bild Roentgen Analyse (EBRA), and Roentgen Monographic Analysis Tool (ROMAN). RSA was significantly more accurate than the HAS, PolyWare, and ROMAN methods when measuring 2D FHP with a 28 mm femoral head. Femoral head size influenced the accuracy of HAS and ROMAN 2D FHP measurements, EBRA proximal measurements, and RSA measurements in the proximal and anterior direction. The use of different acetabular reference segments did not influence accuracy of RSA measurements. The superior accuracy and reduced variability of RSA wear measurements allow much smaller cohorts to be used in RSA clinical wear studies than those utilizing other software programs. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res.
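None of the named programs' algorithms are reproduced in the abstract. As a generic, simplified illustration of what a 2D femoral head penetration number means, one can track the change in the head-centre-to-cup-centre offset between two films (circle centres in mm are invented; real software must first fit the circles/ellipses and correct for magnification and rotation):

```python
import math

def fhp_2d(head0, cup0, head1, cup1):
    """2D penetration = change of the head-to-cup offset vector."""
    dx = (head1[0] - cup1[0]) - (head0[0] - cup0[0])
    dy = (head1[1] - cup1[1]) - (head0[1] - cup0[1])
    return dx, dy, math.hypot(dx, dy)

dx, dy, total = fhp_2d(head0=(100.0, 80.0), cup0=(100.2, 80.3),
                       head1=(100.7, 80.1), cup1=(100.4, 80.9))
print(f"x: {dx:.2f} mm, y: {dy:.2f} mm, 2D FHP: {total:.2f} mm")
```

With wear rates below 0.1 mm/year, errors of a few tenths of a millimetre in locating either centre dominate the signal, which is why the accuracy comparison in the abstract matters.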

  13. Accuracy Analysis of Geopotential Coefficients Recovered from In-situ Disturbing Potential by Energy Conservation Method

    Institute of Scientific and Technical Information of China (English)

    ZOU Xiancai; LI Jiancheng; LUO Jia; XU Xinyu

    2007-01-01

    The characteristics of the normal equation created in recovering the Earth gravity model (EGM) by least-squares (LS) adjustment from the in-situ disturbing potential are discussed in detail. It can be concluded that the normal equation depends only on the orbit, and the choice of a priori gravity model has no effect on the LS solution. Therefore, the accuracy of the recovered gravity model can be accurately simulated. Starting from this point, four sets of disturbing potential along the orbit with different levels of noise were simulated and used to recover the EGM. The results show that at the current accuracy level of accelerometer calibration, the accuracy of the EGM is not sufficient to reflect the time variability of the Earth's gravity field, as revealed by the dynamic method.

  14. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    Science.gov (United States)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using PhotoModeler software. The accuracy of the PhotoModeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.
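The geometric core that all of these packages share is triangulating a target's 3D position from its image coordinates in two or more calibrated cameras. A toy sketch using standard linear (DLT-style) triangulation with synthetic camera matrices (real systems also model lens distortion and bundle-adjust many targets and cameras jointly):

```python
import numpy as np

def project(P, X):
    """Project 3D point X through 3x4 camera matrix P to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear triangulation from two projection matrices and image points."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector of A
    return X[:3] / X[3]

# Two synthetic cameras 1 m apart, both looking down the z axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print("recovered:", np.round(X_hat, 6))
```

Perturbing the image coordinates here shows directly how camera resolution and placement (baseline, target distance) propagate into 3D accuracy, which is what the Australis simulation study quantifies.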

  15. Diagnostic test accuracy: methods for systematic review and meta-analysis.

    Science.gov (United States)

    Campbell, Jared M; Klugar, Miloslav; Ding, Sandrine; Carmody, Dennis P; Hakonsen, Sasja J; Jadotte, Yuri T; White, Sarahlouise; Munn, Zachary

    2015-09-01

    Systematic reviews are carried out to provide an answer to a clinical question based on all available evidence (published and unpublished), to critically appraise the quality of studies, and account for and explain variations between the results of studies. The Joanna Briggs Institute specializes in providing methodological guidance for the conduct of systematic reviews and has developed methods and guidance for reviewers conducting systematic reviews of studies of diagnostic test accuracy. Diagnostic tests are used to identify the presence or absence of a condition for the purpose of developing an appropriate treatment plan. Owing to demands for improvements in speed, cost, ease of performance, patient safety, and accuracy, new diagnostic tests are continuously developed, and there are often several tests available for the diagnosis of a particular condition. In order to provide the evidence necessary for clinicians and other healthcare professionals to make informed decisions regarding the optimum test to use, primary studies need to be carried out on the accuracy of diagnostic tests and the results of these studies synthesized through systematic review. The Joanna Briggs Institute and its international collaboration have updated, revised, and developed new guidance for systematic reviews, including systematic reviews of diagnostic test accuracy. This methodological article summarizes that guidance and provides detailed advice on the effective conduct of systematic reviews of diagnostic test accuracy.

  16. A method for improving the accuracy of automatic indexing of Chinese-English mixed documents

    Institute of Scientific and Technical Information of China (English)

    Yan; ZHAO; Hui; SHI

    2012-01-01

    Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents. Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and cybernetics theory, we proposed an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of our proposed method. Findings: This method distinguishes Chinese and English documents by grammatical structures and word formation rules. Through the implementation of this method in the three phases of automatic indexing of Chinese-English mixed documents, the results were encouraging: precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%. Research limitations: The indexing method is relatively complicated and the whole indexing process requires substantial human intervention. Because pattern matching is based on a brute-force (BF) approach, the indexing efficiency is reduced to some extent. Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit not only the indexing of life science documents but also the indexing of documents in other subject areas. Originality/value: So far, few studies have been published on methods for increasing the accuracy of multilingual automatic indexing. This study provides insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.

  17. The Comparison of Three Methods of Detecting mt-DNA Mutations in Leber's Hereditary Optic Neuropathy

    Institute of Scientific and Technical Information of China (English)

    贾小云; 郭莉; 肖学珊; 郭向明; 申煌煊; 黎仕强; 张清炯

    2000-01-01

    The advantages and disadvantages of MSP-PCR, SSCP, and RFLP for detecting the mitochondrial DNA (mt-DNA) 11778 mutation in Leber's hereditary optic neuropathy (LHON) were compared. Seventy-seven subjects were tested for the mt-DNA 11778 mutation by MSP-PCR, SSCP, and RFLP. The 48 positive cases detected by MSP-PCR were also detected by SSCP, and SSCP additionally detected 6 heteroplasmic mutations; for the 9 patients positive by both MSP-PCR and SSCP, RFLP (MaeIII) testing gave consistent results. The results indicate that multi-condition SSCP is advantageous for screening unknown mt-DNA mutations, while MSP-PCR helps to determine the nature of a mutation.

  18. Integrating Knowledge Bases and Statistics in MT

    CERN Document Server

    Knight, K; Haines, M G; Hatzivassiloglou, V; Hovy, E; Iida, M; Luk, S K; Okumura, A; Whitney, R; Yamada, K; Knight, Kevin; Chander, Ishwar; Haines, Matthew; Hatzivassiloglou, Vasileios; Hovy, Eduard; Iida, Masayo; Luk, Steve K.; Okumura, Akitoshi; Whitney, Richard; Yamada, Kenji

    1994-01-01

    We summarize recent machine translation (MT) research at the Information Sciences Institute of USC, and we describe its application to the development of a Japanese-English newspaper MT system. Our work aims at scaling up grammar-based, knowledge-based MT techniques. This scale-up involves the use of statistical methods, both in acquiring effective knowledge resources and in making reasonable linguistic choices in the face of knowledge gaps.

  19. Accuracy assessment of the UT1 prediction method based on 100-year series analysis

    CERN Document Server

    Malkin, Z; Tolstikov, A

    2013-01-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and Pole coordinates. The method is based on the construction of a general polyharmonic model of the variations of the Earth rotation parameters using all the data available for the last 80-100 years, together with a modified autoregression technique. In this presentation, a detailed comparison is made of real-time UT1 predictions computed with this method in 2006-2010 against simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS). The results show that the proposed method provides better accuracy at different prediction lengths.
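
    The polyharmonic part of such a model amounts to a least-squares fit of a trend plus harmonic terms at chosen periods. The sketch below fits a synthetic UT1-like series; the periods (annual plus a Chandler-like 435-day term) and the coefficients are illustrative assumptions, not the actual SNIIM model.

```python
import numpy as np

def fit_polyharmonic(t, y, periods, poly_deg=1):
    """Least-squares fit of a polynomial trend plus sin/cos pairs at the
    given periods; returns a prediction function."""
    def design(tq):
        cols = [tq ** d for d in range(poly_deg + 1)]
        for P in periods:
            w = 2 * np.pi / P
            cols += [np.sin(w * tq), np.cos(w * tq)]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    return lambda tq: design(tq) @ coef

# synthetic series: linear trend + annual term + "Chandler-like" term
t = np.arange(0.0, 3650.0)                       # days
y = (0.001 * t
     + 0.02 * np.sin(2 * np.pi * t / 365.25)
     + 0.01 * np.cos(2 * np.pi * t / 435.0))
pred = fit_polyharmonic(t, y, periods=[365.25, 435.0])
```

    Extrapolating `pred` beyond the fitted span gives the harmonic part of a forecast; the autoregression step of the SNIIM method would then model the residuals.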

  20. A new method for measuring the rotational accuracy of rolling element bearings

    Science.gov (United States)

    Chen, Ye; Zhao, Xiangsong; Gao, Weiguo; Hu, Gaofeng; Zhang, Shizhen; Zhang, Dawei

    2016-12-01

    The rotational accuracy of a machine tool spindle has a critical influence on the geometric shape and surface roughness of the finished workpiece. The rotational performance of the rolling element bearings is a main factor affecting spindle accuracy, especially in ultra-precision machining. In this paper, a new method is developed to measure the rotational accuracy of rolling element bearings of machine tool spindles. A variable and measurable axial preload is applied to seat the rolling elements in the bearing races, simulating operating conditions. A high-precision (radial error less than 300 nm), high-stiffness (radial stiffness 600 N/μm) hydrostatic reference spindle is adopted to rotate the inner race of the test bearing. To prevent the outer race from rotating, a 2-degree-of-freedom flexure hinge mechanism (2-DOF FHM) is designed. Correction factors derived from a stiffness analysis are adopted to eliminate the influence of the 2-DOF FHM in the radial direction. Two capacitive displacement sensors with nanometre resolution (the highest resolution is 9 nm) are used to measure the radial error motion of the rolling element bearing, without the profile-error separation required by traditional spindle rotational accuracy metrology. Finally, experimental measurements are performed at different spindle speeds (100-4000 rpm) and axial preloads (75-780 N). Synchronous and asynchronous error motion values are evaluated to demonstrate the feasibility and repeatability of the developed method and instrument.

  1. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. The method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted to evaluate the effectiveness of the proposed method. The experimental results indicate that this method significantly improves the accuracy of the MLS in GNSS-hostile environments and is also effective for the calibration of misalignment.
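
    The LSC estimator itself is compact: with covariance C_cc between control-point discrepancies, noise covariance C_nn, and cross-covariance C_qc to the query points, the predicted correction is s = C_qc (C_cc + C_nn)^(-1) l. Below is a sketch with an assumed Gaussian covariance; the length scale, variance, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

def gauss_cov(t1, t2, var, L):
    """Assumed Gaussian covariance between the smooth error signal
    sampled at trajectory 'times' t1 and t2."""
    return var * np.exp(-((t1[:, None] - t2[None, :]) ** 2) / (2.0 * L ** 2))

def lsc_correct(t_ctrl, l_obs, t_query, var=1.0, L=3.0, noise=1e-3):
    """Least squares collocation: predict the smooth POS-like error at
    query points from discrepancies l_obs observed at control points."""
    Ccc = gauss_cov(t_ctrl, t_ctrl, var, L) + noise * np.eye(len(t_ctrl))
    Cqc = gauss_cov(t_query, t_ctrl, var, L)
    return Cqc @ np.linalg.solve(Ccc, l_obs)

t_ctrl = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # LiDAR control-point times
err = np.sin(0.3 * t_ctrl)                        # synthetic smooth POS error
t_query = np.linspace(0.0, 20.0, 41)
correction = lsc_correct(t_ctrl, err, t_query)    # dense correction profile
```

    The dense `correction` profile would then be subtracted from the trajectory (or applied to the point cloud) between control points.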

  2. Fragmentation of DNA affects the accuracy of the DNA quantitation by the commonly used methods

    Directory of Open Access Journals (Sweden)

    Sedlackova Tatiana

    2013-02-01

    Full Text Available Abstract Background Specific applications and modern technologies, like non-invasive prenatal testing, non-invasive cancer diagnostics and next generation sequencing, are currently in the focus of researchers worldwide. These have in common the use of highly fragmented DNA molecules for analysis. Because DNA concentration is a crucial parameter for the performance of molecular methods, we compared the influence of different levels of DNA fragmentation on the accuracy of DNA concentration measurements. Results In our comparison, the performance of the currently most commonly used methods for DNA concentration measurement (spectrophotometric, fluorometric and qPCR-based) was tested on artificially fragmented DNA samples: one unfragmented and three specifically fragmented DNA samples were used. According to our results, the level of fragmentation did not influence the accuracy of spectrophotometric measurements of DNA concentration, while the other methods, fluorometric as well as qPCR-based, were significantly influenced, and a decrease in measured concentration was observed with more intensive DNA fragmentation. Conclusions Our study confirmed that the level of DNA fragmentation has a significant impact on the accuracy of DNA concentration measurement with two of the three most commonly used methods (PicoGreen and qPCR). Only the spectrophotometric measurement was not influenced by the level of fragmentation, but the sensitivity of this method was the lowest of the three tested. Therefore, where possible, DNA quantification should be performed using equally fragmented control DNA.

  3. On the accuracy of Whitham's method. [for steady ideal gas flow past cones

    Science.gov (United States)

    Zahalak, G. I.; Myers, M. K.

    1974-01-01

    The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.

  4. Improving the accuracy of multiple integral evaluation by applying Romberg's method

    Science.gov (United States)

    Zhidkov, E. P.; Lobanov, Yu. Yu.; Rushai, V. D.

    2009-02-01

    Romberg’s method, which is used to improve the accuracy of one-dimensional integral evaluation, is extended to multiple integrals if they are evaluated using the product of composite quadrature formulas. Under certain conditions, the coefficients of the Romberg formula are independent of the integral’s multiplicity, which makes it possible to use a simple evaluation algorithm developed for one-dimensional integrals. As examples, integrals of multiplicity two to six are evaluated by Romberg’s method and the results are compared with other methods.
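
    The extension described above can be sketched directly: a one-dimensional Romberg routine, then a double integral evaluated by applying it as a product rule (an outer pass in x with an inner pass in y). The routine below is a generic textbook Romberg implementation, not the authors' code.

```python
import numpy as np

def romberg(f, a, b, levels=6):
    """Romberg integration: composite trapezoid estimates R[i][0]
    refined by Richardson extrapolation R[i][j]."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2.0
        # add only the new midpoints of the halved intervals
        s = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i, 0] = 0.5 * R[i - 1, 0] + h * s
        for j in range(1, i + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4 ** j - 1)
    return R[levels - 1, levels - 1]

def romberg2d(f, ax, bx, ay, by, levels=6):
    """Double integral via the product rule: Romberg in y for each
    fixed x, inside an outer Romberg pass in x."""
    return romberg(lambda x: romberg(lambda y: f(x, y), ay, by, levels), ax, bx, levels)

print(romberg(np.sin, 0.0, np.pi))                  # ≈ 2.0
print(romberg2d(lambda x, y: x * y, 0, 1, 0, 1))    # ≈ 0.25
```

    The same nesting applies to higher multiplicities, at the cost of one extra factor of function evaluations per dimension, which is the efficiency concern the paper addresses.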

  5. Method of Improving the Navigation Accuracy of SINS by Continuous Rotation

    Institute of Scientific and Technical Information of China (English)

    YANG Yong; MIAO Ling-juan; SHEN Jun

    2005-01-01

    A method of improving the navigation accuracy of strapdown inertial navigation system (SINS) is studied. The particular technique discussed involves the continuous rotation of gyros and accelerometers cluster about the vertical axis of the vehicle. Then the errors of these sensors will have periodic variation corresponding to components along the body frame. Under this condition, the modulated sensor errors produce reduced system errors. Theoretical analysis based on a new coordinate system defined as sensing frame and test results are presented, and they indicate the method attenuates the navigation errors brought by the gyros' random constant drift and the accelerometer's bias and their white noise compared to the conventional method.
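
    The modulation effect can be demonstrated numerically: a constant horizontal gyro bias, expressed in the navigation frame, becomes sinusoidal when the cluster rotates about the vertical axis, so its contribution integrates to nearly zero over whole revolutions. A toy sketch follows; the bias magnitudes and spin rate are invented for illustration.

```python
import numpy as np

def horizontal_drift(bias, spin_rate, t_end, dt=0.01):
    """Accumulated attitude error from a constant horizontal gyro bias
    (bx, by in rad/s) when the sensor cluster spins about the vertical
    axis at spin_rate rad/s; spin_rate = 0 is the conventional case."""
    t = np.arange(0.0, t_end, dt)
    # the body-frame bias vector rotates with the cluster as seen
    # from the navigation frame
    bx = bias[0] * np.cos(spin_rate * t) - bias[1] * np.sin(spin_rate * t)
    by = bias[0] * np.sin(spin_rate * t) + bias[1] * np.cos(spin_rate * t)
    return bx.sum() * dt, by.sum() * dt   # rectangle-rule integration

bias = (1e-6, 2e-6)                                          # rad/s gyro drifts
static = horizontal_drift(bias, 0.0, 3600.0)                 # non-rotating cluster
rotated = horizontal_drift(bias, 2 * np.pi / 60.0, 3600.0)   # 1 rev/min
```

    Over an hour the static cluster accumulates the full bias-times-time error, while the rotating cluster's error averages out over each revolution.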

  6. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    Science.gov (United States)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and Pole motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  7. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    Science.gov (United States)

    Pradipto; Purqon, Acep

    2017-07-01

    The Lattice Boltzmann Method (LBM) is a novel method for simulating fluid dynamics. Nowadays, the applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is BGK with a constant single relaxation time τ. However, BGK suffers from numerical instabilities, which can be eliminated by implementing LBM with multiple relaxation times (MRT). Both schemes were implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for converged simulations. The accuracy analysis was done by comparing the velocity profiles with the benchmark results of Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK, with similar accuracy. The maximum converged Reynolds number is 3200 for BGK and 7500 for MRT.
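
    For reference, the single-relaxation-time BGK scheme that the abstract benchmarks against MRT can be sketched as a D2Q9 collide-and-stream step. This minimal version uses periodic boundaries and a uniform initial flow rather than the lid-driven cavity walls, so it illustrates only the update rule; the grid size and τ are arbitrary choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium distribution."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux ** 2 + uy ** 2
    return w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def bgk_step(f, tau):
    """One collision (single relaxation time tau) and streaming step
    on a fully periodic grid."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau          # collide
    for i in range(9):                                    # stream
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

nx = ny = 16
f = equilibrium(np.ones((nx, ny)), 0.05 * np.ones((nx, ny)), np.zeros((nx, ny)))
for _ in range(10):
    f = bgk_step(f, tau=0.8)
```

    Mass is conserved exactly by both collision and streaming, and a uniform flow is a fixed point of the update, which makes the sketch easy to sanity-check.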

  8. About accuracy of the discrimination parameter estimation for the dual high-energy method

    Science.gov (United States)

    Osipov, S. P.; Chakhlov, S. V.; Osipov, O. S.; Shtein, A. M.; Strugovtsev, D. V.

    2015-04-01

    A set of mathematical formulas to estimate the accuracy of discrimination parameters for two implementations of the dual high-energy method - by effective atomic number and by level lines - is given. The hardware parameters that influence the accuracy of the discrimination parameters are stated. Recommendations for forming the structure of the high-energy X-ray radiation pulses are formulated. To prove the applicability of the proposed procedure, the statistical errors of the discrimination parameters were calculated for the cargo inspection system of Tomsk Polytechnic University, based on the portable betatron MIB-9. A comparison of the experimental and theoretical estimates of the discrimination parameter errors was carried out; it confirmed the practical applicability of the algorithm for estimating the discrimination parameter errors for the dual high-energy method.

  9. Link Prediction Methods and Their Accuracy for Different Social Networks and Network Metrics

    Directory of Open Access Journals (Sweden)

    Fei Gao

    2015-01-01

    Full Text Available Currently, we are experiencing rapid growth in the number of social-based online systems. The availability of the vast amounts of data gathered in those systems brings new challenges when we try to analyse it. One of the intensively researched topics is the prediction of social connections between users. Although a lot of effort has been made to develop new prediction approaches, the existing methods have not been comprehensively analysed. In this paper we investigate the correlation between network metrics and the accuracy of different prediction methods. We selected six time-stamped real-world social networks and the ten most widely used link prediction methods. The results of the experiments show that the performance of some methods has a strong correlation with certain network metrics. We managed to distinguish "prediction friendly" networks, for which most of the prediction methods give good performance, as well as "prediction unfriendly" networks, for which most of the methods result in high prediction error. Correlation analysis between network metrics and the prediction accuracy of prediction methods may form the basis of a metalearning system which, based on network characteristics, recommends the right prediction method for a given network.
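
    Two of the classic local-similarity scores used by such link prediction methods, common neighbours and the Jaccard coefficient, can be sketched as follows. The toy network is invented for illustration; the paper's ten methods and six networks are not reproduced here.

```python
import itertools

def common_neighbors(adj, u, v):
    """Number of nodes adjacent to both u and v."""
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    """Shared-neighbour fraction: |N(u) & N(v)| / |N(u) | N(v)|."""
    inter = len(adj[u] & adj[v])
    union = len(adj[u] | adj[v])
    return inter / union if union else 0.0

def rank_candidate_links(adj, score):
    """Score every non-adjacent node pair and rank descending:
    the top pairs are the predicted future links."""
    nodes = sorted(adj)
    cands = [(u, v) for u, v in itertools.combinations(nodes, 2)
             if v not in adj[u]]
    return sorted(cands, key=lambda p: score(adj, *p), reverse=True)

# toy undirected network as adjacency sets
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

ranked = rank_candidate_links(adj, common_neighbors)   # top pair: ("a", "d")
```

    Evaluating such a ranking against links that actually appear in a later snapshot of the network yields the prediction accuracy that the paper correlates with network metrics.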

  10. [Assessment of overall spatial accuracy in image guided stereotactic body radiotherapy using a spine registration method].

    Science.gov (United States)

    Nakazawa, Hisato; Uchiyama, Yukio; Komori, Masataka; Hayashi, Naoki

    2014-06-01

    Stereotactic body radiotherapy (SBRT) for lung and liver tumors is always performed under image guidance, a technique used to confirm the accuracy of setup positioning by fusing planning digitally reconstructed radiographs with X-ray, fluoroscopic, or computed tomography (CT) images, using bony structures, tumor shadows, or metallic markers as landmarks. The Japanese SBRT guidelines state that bony spinal structures should be used as the main landmarks for patient setup. In this study, we used the Novalis system as a linear accelerator for SBRT of lung and liver tumors. The current study compared the differences between spine registration and target registration and calculated total spatial accuracy including setup uncertainty derived from our image registration results and the geometric uncertainty of the Novalis system. We were able to evaluate clearly whether overall spatial accuracy is achieved within a setup margin (SM) for planning target volume (PTV) in treatment planning. After being granted approval by the Hospital and University Ethics Committee, we retrospectively analyzed eleven patients with lung tumor and seven patients with liver tumor. The results showed the total spatial accuracy to be within a tolerable range for SM of treatment planning. We therefore regard our method to be suitable for image fusion involving 2-dimensional X-ray images during the treatment planning stage of SBRT for lung and liver tumors.

  11. Stability, accuracy and numerical diffusion analysis of nodal expansion method for steady convection diffusion equation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xiafeng, E-mail: zhou-xf11@mails.tsinghua.edu.cn; Guo, Jiong, E-mail: guojiong12@tsinghua.edu.cn; Li, Fu, E-mail: lifu@tsinghua.edu.cn

    2015-12-15

    Highlights:
    • NEMs are innovatively applied to solve the convection diffusion equation.
    • Stability, accuracy and numerical diffusion for NEM are analyzed for the first time.
    • Stability and numerical diffusion depend on the NEM expansion order and its parity.
    • NEMs have higher accuracy than both the second order upwind and the QUICK scheme.
    • NEMs with different expansion orders are integrated into a unified discrete form.
    Abstract: The traditional finite difference method or finite volume method (FDM or FVM) is used for HTGR thermal-hydraulic calculation at present. However, both FDM and FVM require fine mesh sizes to achieve the desired precision, and thus have limited efficiency, so a more efficient and accurate numerical method needs to be developed. The nodal expansion method (NEM) can achieve high accuracy even on coarse meshes in reactor physics analysis, so the number of spatial meshes and the computational cost can be largely decreased. Because of its higher efficiency and accuracy, NEM can be innovatively applied to thermal-hydraulic calculation. In this paper, NEMs with different orders of basis functions are developed and applied to the multi-dimensional steady convection diffusion equation. Numerical results show that NEMs with third or higher order basis functions track the reference solutions very well and are superior to the second order upwind scheme and the QUICK scheme. However, false diffusion and unphysical oscillation behavior are discovered for NEMs. To explain these behaviors, the stability, accuracy and numerical diffusion properties of NEM are analyzed by Fourier analysis and by comparison with exact solutions of the difference and differential equations. The theoretical analysis shows that the accuracy of NEM increases with the expansion order. However, the stability and numerical diffusion properties depend not only on the order of basis functions but also on the parity of

  12. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    Directory of Open Access Journals (Sweden)

    Asadian S

    2016-09-01

    Full Text Available Simin Asadian,1 Alireza Khatony,1 Gholamreza Moradi,2 Alireza Abdi,1 Mansour Rezaei,3 1Nursing and Midwifery School, Kermanshah University of Medical Sciences, 2Department of Anesthesiology, 3Biostatistics & Epidemiology Department, Kermanshah University of Medical Sciences, Kermanshah, Iran Introduction: An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods: In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients' body temperatures were measured by four peripheral methods: oral, axillary, tympanic, and forehead, along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using Statistical Package for the Social Sciences, version 19, software. Results: There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). The paired t-test demonstrated acceptable precision with the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. The sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion: The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left for

  13. K{sub 0}-INAA method accuracy using Zn as comparator

    Energy Technology Data Exchange (ETDEWEB)

    Bedregal, P., E-mail: pbedregal@ipen.gob.p [Instituto Peruano de Energia Nuclear (IPEN), Av. Canada 1470, Sn. Borja 1470, Lima 41 (Peru); Mendoza, P.; Ubillus, M. [Instituto Peruano de Energia Nuclear (IPEN), Av. Canada 1470, Sn. Borja 1470, Lima 41 (Peru); Montoya, E., E-mail: emontoya@ipen.gob.p [Instituto Peruano de Energia Nuclear (IPEN), Av. Canada 1470, Sn. Borja 1470, Lima 41 (Peru)

    2010-10-11

    An evaluation of the accuracy of the k{sub 0}-INAA method using Zn foil as comparator is presented. Good agreement was found in precision both within and between analysts, as well as in the assessment of trueness for most elements. The determination of important experimental parameters such as gamma peak counting efficiency, {gamma}-{gamma} true coincidence and comparator preparation, together with quality assurance/quality control, is also described and discussed.

  14. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    Science.gov (United States)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images, from which the lunar terrain can be reconstructed by photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied, and the observation program and the specific solution methods for the installation parameters are introduced. The parametric solution accuracy is analyzed using observations obtained in the PCAM scientific validation experiment, which was used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. The analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  15. DEVELOPMENT OF A METHOD STATISTICAL ANALYSIS ACCURACY AND PROCESS STABILITY PRODUCTION OF EPOXY RESIN ED-20

    Directory of Open Access Journals (Sweden)

    N. V. Zhelninskaya

    2015-01-01

    Full Text Available Statistical methods play an important role in the objective evaluation of the quantitative and qualitative characteristics of a process and are among the most important elements of a quality assurance system and of total quality management. To produce a quality product, one must know the real accuracy of the existing equipment, determine whether the accuracy of the selected technological process meets the specified product accuracy, and assess process stability. Most random events, particularly in manufacturing and scientific research, are characterized by the presence of a large number of random factors and are described by the normal distribution, which is the principal distribution in many practical studies. Modern statistical methods are quite difficult to master, and their wide practical use requires in-depth mathematical training of all participants in the process. When the distribution of a random variable is known, all the characteristics of a batch of products can be obtained, including the mean value and the variance. Statistical control and quality control methods were used in the analysis of the accuracy and stability of the technological process for the production of epoxy resin ED-20. The numerical characteristics of the distribution law of the controlled parameters were estimated, and the percentage of defects in the investigated products was determined. To assess the stability of the manufacturing process of epoxy resin ED-20, Shewhart control charts using quantitative data were selected: charts of individual values X and moving range R. Pareto charts were used to identify the causes that affect low dynamic viscosity to the largest extent. The causes of low dynamic viscosity values were analysed with Ishikawa diagrams, which show the most typical factors behind the variability of the process results.
    To resolve the problem, it is recommended to modify the polymer composition with carbon fullerenes and to use the developed method for the production of
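
    The individual-value X and moving range R charts mentioned above use standard control-chart constants (d2 = 1.128 and D4 = 3.267 for subgroups of size two). A sketch with invented viscosity readings, not data from the study:

```python
import statistics

def individuals_chart_limits(x):
    """Control limits for a Shewhart individuals (X) chart with a
    moving-range (MR) chart; sigma is estimated as MRbar / d2."""
    mr = [abs(b - a) for a, b in zip(x, x[1:])]   # moving ranges, n = 2
    xbar = statistics.fmean(x)
    mrbar = statistics.fmean(mr)
    sigma = mrbar / 1.128                          # d2 constant for n = 2
    return {
        "X": (xbar - 3 * sigma, xbar, xbar + 3 * sigma),  # (LCL, CL, UCL)
        "MR": (0.0, mrbar, 3.267 * mrbar),                # D4 constant for n = 2
    }

viscosity = [18.2, 18.5, 18.1, 18.7, 18.4, 18.3, 18.6, 18.2]  # hypothetical
limits = individuals_chart_limits(viscosity)
```

    Points falling outside these limits on either chart would signal that the process is out of statistical control and trigger the Pareto/Ishikawa root-cause analysis described above.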

  16. Predicting the accuracy of ligand overlay methods with Random Forest models.

    Science.gov (United States)

    Nandigam, Ravi K; Evans, David A; Erickson, Jon A; Kim, Sangtae; Sutherland, Jeffrey J

    2008-12-01

    The accuracy of binding mode prediction using standard molecular overlay methods (ROCS, FlexS, Phase, and FieldCompare) is studied. Previous work has shown that simple decision tree modeling can be used to improve accuracy by selecting the best overlay template. This concept is extended to the use of Random Forest (RF) modeling for template and algorithm selection. An extensive data set of 815 ligand-bound X-ray structures representing 5 gene families was used to generate ca. 70,000 overlays with the four programs. RF models, trained using standard measures of ligand and protein similarity and Lipinski-related descriptors, are used for automatically selecting the reference ligand and overlay method that maximize the probability of reproducing the overlay deduced from X-ray structures (i.e., using rmsd) overlay accuracy, and their use in template and method selection produces correct overlays in 57% of cases for 349 overlay ligands not used for training the RF models. The inclusion of protein sequence similarity in the models enables the use of templates bound to related protein structures, yielding useful results even for proteins having no available X-ray structures.

  17. Improving the diagnostic accuracy of dysplastic and melanoma lesions using the decision template combination method.

    Science.gov (United States)

    Faal, Maryam; Miran Baygi, Mohammad Hossein; Kabir, Ehsanollah

    2013-02-01

    Melanoma is the most dangerous type of skin cancer, and early detection of suspicious lesions can decrease the mortality rate of this cancer. In this article, we present a multi-classifier system for improving the diagnostic accuracy of melanoma and dysplastic lesions based on the decision template combination rule. First, the lesion is differentiated from the surrounding healthy skin in an image. Next, shape, colour and texture features are extracted from the lesion image. Different subsets of these features are fed to three different classifiers: k-nearest neighbour (k-NN), support vector machine (SVM) and linear discriminant analysis (LDA). The decision template method is used to combine the outputs of these classifiers. The proposed method has been evaluated on a set of 436 dermatoscopic images of benign, dysplastic and melanoma lesions. The final classifier ensemble delivers a total classification accuracy of 80.46%, with 67.73% of dysplastic lesions correctly classified and 83.53% of melanoma lesions correctly classified. The results show that the proposed method significantly increases the diagnostic accuracy of dysplastic and melanoma lesions compared with a single classifier. The total classification rate is also improved. © 2012 John Wiley & Sons A/S.
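
    The decision template rule itself is simple: average the stacked soft outputs (the "decision profile") of the base classifiers over each class's training samples, then label a new sample by the nearest template. A numpy sketch follows, with invented profiles standing in for the k-NN/SVM/LDA outputs used in the paper:

```python
import numpy as np

def build_templates(profiles, labels):
    """Decision template per class: the mean decision profile of that
    class's training samples (rows of `profiles`)."""
    classes = sorted(set(labels))
    mask = np.array(labels)
    return {c: profiles[mask == c].mean(axis=0) for c in classes}

def classify(templates, profile):
    """Assign the class whose template is nearest (Euclidean distance)
    to the sample's decision profile."""
    return min(templates, key=lambda c: np.linalg.norm(templates[c] - profile))

# toy decision profiles: rows = samples, columns = 3 classifiers x 2 classes
profiles = np.array([
    [0.9, 0.1, 0.8, 0.2, 0.7, 0.3],   # class 0 training examples
    [0.8, 0.2, 0.9, 0.1, 0.6, 0.4],
    [0.2, 0.8, 0.1, 0.9, 0.3, 0.7],   # class 1 training examples
    [0.1, 0.9, 0.2, 0.8, 0.4, 0.6],
])
labels = [0, 0, 1, 1]
templates = build_templates(profiles, labels)
print(classify(templates, np.array([0.85, 0.15, 0.7, 0.3, 0.65, 0.35])))  # → 0
```

    Because the templates summarize how the whole ensemble behaves per class, this combiner can outperform simple majority voting when the base classifiers make correlated soft errors.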

  18. Accuracy, precision, usability, and cost of portable silver test methods for ceramic filter factories.

    Science.gov (United States)

    Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S

    2017-02-01

    Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.

  19. Accuracy of the estimates of ammonia concentration in rumen fluid using different analytical methods

    Directory of Open Access Journals (Sweden)

    N.K.P. Souza

    2013-12-01

    Full Text Available The accuracy of two different methods for measuring the ammonia nitrogen (N-NH3) concentration in rumen fluid was evaluated: a catalyzed indophenol colorimetric reaction (CICR) and the Kjeldahl distillation (KD). Five buffered standard solutions containing volatile fatty acids, true protein, and known ammonia concentrations (0, 3, 6, 12, and 24 mg N-NH3/dL) were used to simulate rumen fluid. Different ratios (10:1, 7.5:1, 5:1, 2.5:1, 1:1, 1:2.5, 1:5, 1:7.5, and 1:10) of a potassium hydroxide solution (KOH, 2 mol/L) to standard solutions were evaluated by the KD method. The accuracy of each method was evaluated by adjusting a simple linear regression model of the estimated N-NH3 concentrations on the N-NH3 concentrations in the standard solutions. When the KD method was used, N-NH3 was observed to be released from the deamination of true protein (P<0.05). The estimates of the N-NH3 concentration obtained by the CICR method were found to be accurate (P>0.05). After the accuracy evaluation, ninety-three samples of rumen fluid were evaluated by the CICR and KD methods (using the 5:1 ratio of KOH solution to rumen fluid sample), assuming that the CICR estimates would be accurate. The N-NH3 concentrations obtained by the two methods were observed to be different (P<0.05) but strongly correlated (r = 0.9701). Thus, it was concluded that the estimates obtained by the Kjeldahl distillation using a 5:1 ratio of KOH solution to rumen fluid sample can be adjusted to avoid biases. Furthermore, a model to adjust the N-NH3 concentration is suggested.
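The accuracy test used above, regressing estimated concentrations on the known standard concentrations and checking whether the intercept is near 0 and the slope near 1, can be sketched as follows. The readings are hypothetical, not the study's data.

```python
import numpy as np

known = np.array([0.0, 3.0, 6.0, 12.0, 24.0])      # standard N-NH3, mg/dL
measured = np.array([0.2, 3.1, 6.3, 12.4, 24.9])   # hypothetical method readings

# Fit measured = a + b * known by ordinary least squares
# (np.polyfit returns coefficients highest degree first).
b, a = np.polyfit(known, measured, 1)

# A method is judged accurate when the joint hypothesis
# (intercept = 0, slope = 1) cannot be rejected; a biased method can
# still be salvaged by inverting the fitted line, as the abstract suggests.
corrected = (measured - a) / b
print(round(float(b), 3), round(float(a), 3))
```

Inverting the fitted calibration line is the kind of bias adjustment the abstract proposes for the Kjeldahl estimates.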

  20. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in…
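The backward elimination loop discussed above (refit, rank predictors by importance, drop the least important, keep the best-scoring set) can be sketched generically. The fitting function below is a mock linear stand-in with adjusted R² and correlation-based importances, not an actual random forest with out-of-bag scoring; the data are simulated.

```python
import numpy as np

def backward_eliminate(X, y, fit_score_importance, min_vars=1, drop_frac=0.2):
    """Generic backward elimination: repeatedly refit, score, and drop the
    least important fraction of the remaining predictors; keep the best set.

    fit_score_importance(X, y) -> (score, importances) is supplied by the
    caller (for the paper's method this would be an RF fit with OOB score).
    """
    keep = list(range(X.shape[1]))
    best_score, best_keep = -np.inf, keep[:]
    while len(keep) > min_vars:
        score, imp = fit_score_importance(X[:, keep], y)
        if score > best_score:
            best_score, best_keep = score, keep[:]
        n_drop = max(1, int(drop_frac * len(keep)))
        order = np.argsort(imp)                  # least important first
        keep = [keep[i] for i in sorted(order[n_drop:])]
    return best_keep, best_score

# Simulated data: 12 predictors, only columns 0 and 3 carry signal.
rng = np.random.default_rng(1)
n, p = 300, 12
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=n)

def mock_fit(Xs, ys):
    """Stand-in for an RF fit: adjusted R^2 as score, |corr| as importance."""
    imp = np.abs([np.corrcoef(Xs[:, j], ys)[0, 1] for j in range(Xs.shape[1])])
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    resid = ys - Xs @ beta
    ns, ks = Xs.shape
    r2 = 1 - resid.var() / ys.var()
    return 1 - (1 - r2) * (ns - 1) / (ns - ks - 1), imp

selected, score = backward_eliminate(X, y, mock_fit)
print(sorted(selected))
```

The loop reliably retains the two informative columns here; the abstract's caution is that with an RF and OOB scoring inside the loop, the selected sets and their accuracy estimates can be optimistically biased and unstable.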

  1. Impact of consensus contours from multiple PET segmentation methods on the accuracy of functional volume delineation

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, A. [Saarland University Medical Centre, Department of Nuclear Medicine, Homburg (Germany); Vermandel, M. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); CHU Lille, Nuclear Medicine Department, Lille (France); Baillet, C. [CHU Lille, Nuclear Medicine Department, Lille (France); Dewalle-Vignion, A.S. [U1189 - ONCO-THAI - Image Assisted Laser Therapy for Oncology, University of Lille, Inserm, CHU Lille, Lille (France); Modzelewski, R.; Vera, P.; Gardin, I. [Centre Henri-Becquerel and LITIS EA4108, Rouen (France); Massoptier, L.; Parcq, C.; Gibon, D. [AQUILAB, Research and Innovation Department, Loos Les Lille (France); Fechter, T.; Nestle, U. [University Medical Center Freiburg, Department for Radiation Oncology, Freiburg (Germany); German Cancer Consortium (DKTK) Freiburg and German Cancer Research Center (DKFZ), Heidelberg (Germany); Nemer, U. [University Medical Center Freiburg, Department of Nuclear Medicine, Freiburg (Germany)

    2016-05-15

    The aim of this study was to evaluate the impact of consensus algorithms on segmentation results when applied to clinical PET images. In particular, whether the use of the majority vote or STAPLE algorithm could improve the accuracy and reproducibility of the segmentation provided by the combination of three semiautomatic segmentation algorithms was investigated. Three published segmentation methods (contrast-oriented, possibility theory and adaptive thresholding) and two consensus algorithms (majority vote and STAPLE) were implemented in a single software platform (Artiview®). Four clinical datasets including different locations (thorax, breast, abdomen) or pathologies (primary NSCLC tumours, metastasis, lymphoma) were used to evaluate accuracy and reproducibility of the consensus approach in comparison with pathology as the ground truth or CT as a ground truth surrogate. Variability in the performance of the individual segmentation algorithms for lesions of different tumour entities reflected the variability in PET images in terms of resolution, contrast and noise. Independent of location and pathology of the lesion, however, the consensus method resulted in improved accuracy in volume segmentation compared with the worst-performing individual method in the majority of cases and was close to the best-performing method in many cases. In addition, the implementation revealed high reproducibility in the segmentation results with small changes in the respective starting conditions. There were no significant differences in the results with the STAPLE algorithm and the majority vote algorithm. This study showed that combining different PET segmentation methods by the use of a consensus algorithm offers robustness against the variable performance of individual segmentation methods and this approach would therefore be useful in radiation oncology. It might also be relevant for other scenarios such as the merging of expert recommendations in clinical routine and…
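The majority-vote consensus used above is simple to state on binary masks: a voxel belongs to the consensus contour when more than half of the input segmentations label it foreground. A minimal sketch on hypothetical 1-D masks (STAPLE, which weights raters by estimated performance, is deliberately not shown):

```python
import numpy as np

def majority_vote(masks):
    """Consensus of binary segmentation masks: a voxel is foreground when
    more than half of the input methods label it foreground."""
    masks = np.asarray(masks, dtype=int)
    return (masks.sum(axis=0) * 2 > masks.shape[0]).astype(int)

# Three hypothetical segmentations of the same 1-D profile.
m1 = np.array([0, 1, 1, 1, 0, 0])
m2 = np.array([0, 1, 1, 0, 0, 0])
m3 = np.array([1, 1, 1, 1, 1, 0])
consensus = majority_vote([m1, m2, m3])
print(consensus.tolist())  # → [0, 1, 1, 1, 0, 0]
```

Because an outlier method is outvoted at every voxel, the consensus tracks the majority opinion, which is the robustness property the study reports.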

  2. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  3. Development of a method of ICP algorithm accuracy improvement during shaped profiles and surfaces control

    Directory of Open Access Journals (Sweden)

    V.A. Pechenin

    2014-10-01

    Full Text Available In this paper we propose a method for improving the operating accuracy of the iterative closest point (ICP) algorithm used for solving metrology problems when determining a location deviation. Compressor blade profiles of a gas turbine engine (GTE) were used as an object for application of the method of deviation determining. It is proposed to formulate the problem of the best alignment in the developed method as a multiobjective problem including criteria of minimum squared distances, normal vector differences and depth-of-camber differences at corresponding points of the aligned profiles. Variants of resolving the task using an integral criterion including the above-mentioned were considered. Optimization problems were solved using a quasi-Newton method of sequential quadratic programming. The proposed new method of improvement of the registration algorithm based on geometric features showed greater accuracy in comparison with the discussed methods of optimization of a distance between fitting points, especially if a small quantity of measurement points on the profiles was used.

  4. Accuracy and repeatability of a new method for measuring facet loads in the lumbar spine.

    Science.gov (United States)

    Wilson, Derek C; Niosi, Christina A; Zhu, Qingan A; Oxland, Thomas R; Wilson, David R

    2006-01-01

    We assessed the repeatability and accuracy of a relatively new, resistance-based sensor (Tekscan 6900) for measuring lumbar spine facet loads, pressures, and contact areas in cadaver specimens. Repeatability of measurements in the natural facet joint was determined for five trials of four specimens loaded in pure moment (±7.5 N m) flexibility tests in axial rotation and flexion-extension. Accuracy of load measurements in four joints was assessed by applying known compressive loads of 25, 50, and 100 N to the natural facet joint in a materials testing machine and comparing the known applied load to the measured load. Measurements of load were obtained using two different calibration approaches: linear and two-point calibrations. Repeatability for force, pressure, and area (average of standard deviation as a percentage of the mean for all trials over all specimens) was 4-6% for axial rotation and 7-10% for extension. Peak resultant force in axial rotation was 30% smaller when calculated using the linear calibration method. The Tekscan sensor overestimated the applied force by 18 ± 9% (mean ± standard deviation), 35 ± 7% and 50 ± 9% for compressive loads of 100, 50, and 25 N, respectively. The two-point method overestimated the loads by 35 ± 16%, 45 ± 7%, and 56 ± 10% for the same three loads. Our results show that the Tekscan sensor is repeatable. However, the sensor measurement range is not optimal for the small loads transmitted by the facets and measurement accuracy is highly dependent on calibration protocol.
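The two calibration schemes compared above differ only in how the line from raw sensor output to load is fitted: a linear calibration least-squares fits all calibration points, while a two-point calibration draws the line through the lowest and highest points only. A sketch with hypothetical raw readings (not the study's data):

```python
import numpy as np

applied = np.array([25.0, 50.0, 100.0])        # known loads, N
measured = np.array([39.0, 72.5, 118.0])       # hypothetical raw sensor output

# Linear calibration: least-squares line through all calibration points.
slope, intercept = np.polyfit(measured, applied, 1)
lin_cal = slope * measured + intercept

# Two-point calibration: line through the lowest and highest points only.
s2 = (applied[-1] - applied[0]) / (measured[-1] - measured[0])
i2 = applied[0] - s2 * measured[0]
two_pt = s2 * measured + i2

err_lin = np.abs(lin_cal - applied)
err_two = np.abs(two_pt - applied)
print(err_lin.round(2), err_two.round(2))
```

The two-point line is exact at its two anchor points but can err badly in between when the sensor response is nonlinear, which is consistent with the larger overestimates the study reports for the two-point method.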

  5. Complex shape product tolerance and accuracy control method for virtual assembly

    Science.gov (United States)

    Ma, Huiping; Jin, Yuanqiang; Zhang, Xiaoguang; Zhou, Hai

    2015-02-01

    The simulation of the virtual assembly process for engineering design currently lacks accuracy in three-dimensional CAD software. Product modeling technology with tolerance, an assembly precision pre-analysis technique and a precision control method are developed. To solve the problem of the lack of precision information transmission in CAD, a tolerance mathematical model based on the Small Displacement Torsor (SDT) is presented, which enables technology transfer and the establishment of a digital control function for geometric elements from definition, description and specification through to the actual inspection and evaluation process. Tolerance optimization design methods for complex shape products are proposed for the optimization of machining technology, effective cost control and the assembly quality of the products.

  6. The Accuracy of Diagnostic Methods for Diabetic Retinopathy: A Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Vicente Martínez-Vizcaíno

    Full Text Available The objective of this study was to evaluate the accuracy of the recommended glycemic measures for diagnosing diabetic retinopathy. We systematically searched MEDLINE, EMBASE, the Cochrane Library, and the Web of Science databases from inception to July 2015 for observational studies comparing the diagnostic accuracy of glycated hemoglobin (HbA1c), fasting plasma glucose (FPG), and 2-hour plasma glucose (2h-PG). Random effects models for the diagnostic odds ratio (dOR) value computed by Moses' constant for a linear model and 95% CIs were used to calculate the accuracy of the test. Hierarchical summary receiver operating characteristic curves (HSROC) were used to summarize the overall test performance. Eleven published studies were included in the meta-analysis. The pooled dOR values for the diagnosis of retinopathy were 16.32 (95% CI 13.86-19.22) for HbA1c and 4.87 (95% CI 4.39-5.40) for FPG. The area under the HSROC was 0.837 (95% CI 0.781-0.892) for HbA1c and 0.735 (95% CI 0.657-0.813) for FPG. The 95% confidence region for the point that summarizes the overall test performance of the included studies occurs where the cut-offs ranged from 6.1% (43.2 mmol/mol) to 7.8% (61.7 mmol/mol) for HbA1c and from 7.8 to 9.3 mmol/L for FPG. In the four studies that provided information regarding 2h-PG, the pooled accuracy estimates for HbA1c were similar to those of 2h-PG; the overall performance for HbA1c was superior to that for FPG. The three recommended tests for the diagnosis of type 2 diabetes in nonpregnant adults showed sufficient accuracy for their use in clinical settings, although the overall accuracy for the diagnosis of retinopathy was similar for HbA1c and 2h-PG, which were both more accurate than FPG. Due to the variability and inconveniences of the glucose level-based methods, HbA1c appears to be the most appropriate method for the diagnosis of diabetic retinopathy.
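The per-study quantity being pooled above, the diagnostic odds ratio, is computed from a 2x2 table as the odds of a positive test among the diseased over the odds among the non-diseased. A sketch on a hypothetical table (the meta-analytic pooling itself, via Moses' linear model and HSROC curves, is not shown):

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """dOR = (TP/FN) / (FP/TN): odds of a positive test in the diseased
    over the odds of a positive test in the non-diseased."""
    return (tp / fn) / (fp / tn)

# Hypothetical 2x2 table for one HbA1c cut-off against retinopathy status.
tp, fp, fn, tn = 80, 40, 20, 160
dor = diagnostic_odds_ratio(tp, fp, fn, tn)

# Approximate 95% CI via the standard error of log(dOR).
se = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
lo = math.exp(math.log(dor) - 1.96 * se)
hi = math.exp(math.log(dor) + 1.96 * se)
print(round(dor, 1))  # → 16.0
```

A dOR of 16 for a single study is on the order of the pooled HbA1c value reported above; higher dOR means better discrimination between diseased and non-diseased.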

  7. Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.

    Science.gov (United States)

    Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung

    2010-08-01

    This study examined the adaptation accuracy of acrylic denture bases processed using a fluid-resin technique (PERform), injection-molding techniques (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was measured primarily by the posterior border gap at the mid-palatal area using a microscope, and subsequently by weighing the impression material placed between the denture base and the master cast, using hand-mixed and automixed silicone. The correlation between the data measured using these two test methods was examined. The PERform and Mak Press groups produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05), and retained less silicone material than the other groups (p<0.05). The weight of the silicone impression material was affected by either the material or the mixing variables.

  8. The Leech method for diagnosing constipation: intra- and interobserver variability and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Lorijn, Fleur de; Voskuijl, Wieger P.; Taminiau, Jan A.; Benninga, Marc A. [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Rijn, Rick R. van; Henneman, Onno D.F. [Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Heijmans, Jarom [Emma Children's Hospital, Department of Paediatric Gastroenterology and Nutrition, Amsterdam (Netherlands); Academic Medical Centre, Department of Radiology, Amsterdam (Netherlands); Reitsma, Johannes B. [Academic Medical Centre, Department of Clinical Epidemiology and Biostatistics, Amsterdam (Netherlands)

    2006-01-01

    The data concerning the value of a plain abdominal radiograph in childhood constipation are inconsistent. Recently, positive results have been reported for a new radiographic scoring system, "the Leech method", for assessing faecal loading. To assess intra- and interobserver variability and determine the diagnostic accuracy of the Leech method in identifying children with functional constipation (FC). A total of 89 children (median age 9.8 years) with functional gastrointestinal disorders were included in the study. Based on clinical parameters, 52 fulfilled the criteria for FC, six fulfilled the criteria for functional abdominal pain (FAP), and 31 for functional non-retentive faecal incontinence (FNRFI); the latter two groups provided the controls. To assess intra- and interobserver variability of the Leech method, three scorers scored the same abdominal radiographs twice. A Leech score of 9 or more was considered suggestive of constipation. ROC analysis was used to determine the diagnostic accuracy of the Leech method in separating patients with FC from control patients. Significant intraobserver variability was found for two of the scorers (P=0.005 and P<0.0001), whereas there was no systematic difference between the two scores of the third scorer (P=0.89). The scores between scorers differed systematically and displayed large variability. The area under the ROC curve was 0.68 (95% CI 0.58-0.80), indicating poor diagnostic accuracy. The Leech scoring method for assessing faecal loading on a plain abdominal radiograph is of limited value in the diagnosis of FC in children. (orig.)

  9. Determining Accuracy of Thermal Dissipation Methods-based Sap Flux in Japanese Cedar Trees

    Science.gov (United States)

    Su, Man-Ping; Shinohara, Yoshinori; Laplace, Sophie; Lin, Song-Jin; Kume, Tomonori

    2017-04-01

    The thermal dissipation method, a sap flux measurement technique that can estimate individual tree transpiration, has been widely used because of its low cost and uncomplicated operation. Although the thermal dissipation method is widespread, its accuracy has recently been questioned because the wood characteristics of some tree species examined in previous studies did not suit Granier's empirical formula. In Taiwan, Cryptomeria japonica (Japanese cedar) is one of the dominant species in mountainous areas, and quantifying the transpiration of Japanese cedar trees is indispensable for understanding water cycling there. However, the accuracy of thermal dissipation-based sap flux measurements has not been tested for Japanese cedar trees in Taiwan. Thus, in this study we conducted a calibration experiment using twelve Japanese cedar stem segments from six trees. By pumping water from the bottom to the top of each segment and inserting probes into the segments to collect data simultaneously, we compared sap flux densities calculated from real water uptake (Fd_actual) and from the empirical formula (Fd_Granier). The exact sapwood area and sapwood depth of each sample were obtained by dyeing the segment with safranin stain solution. Our results showed that Fd_Granier underestimated Fd_actual by 39% across sap flux densities ranging from 10 to 150 cm3 m-2 s-1; when the sapwood-depth-corrected formula from Clearwater was applied, Fd_Granier became accurate, underestimating Fd_actual by only 0.01%. However, for sap flux densities ranging from 10 to 50 cm3 m-2 s-1, which is similar to field data for Japanese cedar trees in a mountainous area of Taiwan, Fd_Granier underestimated Fd_actual by 51%, and by 26% with the Clearwater sapwood depth correction. These results suggested that sapwood depth significantly impacts the accuracy of thermal dissipation…
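The two formulas compared above are short enough to sketch: Granier's published calibration converts the probe temperature difference into a sap flux density, and Clearwater's correction accounts for the part of the probe that sits outside the conducting sapwood. The readings and the sapwood fraction below are hypothetical.

```python
def granier_fd(dT, dT_max):
    """Granier's empirical calibration: Fd = 118.99 * K**1.231
    (in cm3 m-2 s-1), with K = (dT_max - dT) / dT, where dT_max is the
    temperature difference at zero flow."""
    K = (dT_max - dT) / dT
    return 118.99 * K ** 1.231

def clearwater_dT(dT, dT_max, a):
    """Clearwater correction when only a fraction `a` of the probe is in
    conducting sapwood: the inactive part reads dT_max, so the sapwood
    signal is recovered as (dT - (1 - a) * dT_max) / a."""
    return (dT - (1 - a) * dT_max) / a

dT_max, dT, a = 10.0, 8.5, 0.6      # hypothetical readings; 60% of probe in sapwood
fd_raw = granier_fd(dT, dT_max)
fd_corr = granier_fd(clearwater_dT(dT, dT_max, a), dT_max)
print(fd_raw < fd_corr)
```

Because the uncorrected dT is diluted toward dT_max by the inactive probe section, the raw formula underestimates the flux, and the correction raises the estimate, matching the direction of the errors the study reports.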

  10. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembly and repair tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics together to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, the traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as the given external measurement device.
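The Differential Evolution step above treats parameter identification as black-box minimization of a residual. A minimal DE/rand/1/bin sketch follows; the residual is a toy stand-in for the POE-based kinematic error model, which is far more involved, and all settings (population, generations, F, CR) are illustrative defaults.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    binomial crossover, greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    X = rng.uniform(lo, hi, (pop, d))
    fx = np.apply_along_axis(f, 1, X)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True        # guarantee at least one gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fx[i]:                      # greedy selection
                X[i], fx[i] = trial, ft
    best = int(np.argmin(fx))
    return X[best], fx[best]

# Identify two "parameter errors" by minimizing a residual, as a toy
# stand-in for fitting the kinematic error model to measured poses.
target = np.array([0.3, -1.2])
residual = lambda p: float(np.sum((p - target) ** 2))
p_hat, err = differential_evolution(residual, [(-2, 2), (-2, 2)])
```

The appeal for calibration problems is that DE needs only residual evaluations, so the nonlinearity of the error model and the number of parameters do not require linearization, which is the point made in the abstract.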

  11. Subpixel Accuracy Analysis of Phase Correlation Shift Measurement Methods Applied to Satellite Imagery

    Directory of Open Access Journals (Sweden)

    S.M. Badwai

    2013-01-01

    Full Text Available The key point of the super-resolution process is the accurate measurement of the sub-pixel shift: any tiny error in measuring such a shift leads to incorrect image focusing. In this paper, a methodology for measuring sub-pixel shift using phase correlation (PC) is evaluated using different window functions, and then a modified version of the PC method using a high-pass filter (HPF) is introduced. Comprehensive analysis and assessment of PC methods shows that different natural features yield different shift measurements. It is concluded that there is no universal window function for measuring shift; it mainly depends on the features in the satellite images. Even the question of which window is optimal for a particular feature generally remains open. This paper presents the design of a method for obtaining high-accuracy sub-pixel shift phase correlation using an HPF. The proposed method eases shift measurement in locations that lack edges.
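The core of phase correlation is short: whiten the cross-power spectrum of the two images so only phase remains, and the inverse FFT then peaks at the translation. The sketch below recovers an integer-pixel shift on a synthetic image; sub-pixel refinement (e.g. interpolating around the peak, windowing, or the HPF variant discussed in the paper) would build on this and is not shown.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer shift of `a` relative to `b` via the normalized cross-power
    spectrum; the location of the correlation peak gives the translation."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                 # whiten: keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped (circular) indices to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_correlation_shift(shifted, img))  # → (3, -5)
```

Because whitening discards magnitude, the peak sharpness depends on how much phase structure (edges, texture) the scene provides, which is why feature content and windowing matter so much in the study's evaluation.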

  12. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    Science.gov (United States)

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite and is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  13. Multivariate regional frequency analysis: Two new methods to increase the accuracy of measures

    Science.gov (United States)

    Abdi, Amin; Hassanzadeh, Yousef; Talatahari, Siamak; Fakheri-Fard, Ahmad; Mirabbasi, Rasoul; Ouarda, Taha B. M. J.

    2017-09-01

    The accurate detection of discordant sites in a heterogeneous region and the estimation of the regional parameters of a statistical distribution are two important issues in multivariate regional frequency analysis. In this study, two new methods are proposed for increasing the accuracy of the multivariate L-moment approach. The first one, the optimization-based method (OBM) is utilized to estimate the best distribution parameters. The second one is the rank-based method (RBM), which is used in the robust discordancy measure for identifying discordant sites. In order to assess the performance of the proposed approaches on the heterogeneity measure, real and simulated regions of drought characteristics are considered. The results confirm the usefulness of the new methods in comparison with some well-established techniques.

  14. A sampling method for estimating the accuracy of predicted breeding values in genetic evaluation

    Directory of Open Access Journals (Sweden)

    Laloë Denis

    2001-09-01

    Full Text Available A sampling-based method for estimating the accuracy of estimated breeding values using an animal model is presented. Empirical variances of true and estimated breeding values were estimated from a simulated n-sample. The method was validated using a small data set from the Parthenaise breed, with the estimated coefficient of determination converging to the true values. It was applied to the French Salers data file used for the 2000 on-farm evaluation (IBOVAL) of muscle development score. A drawback of the method is its computational demand; consequently, convergence cannot be achieved in a reasonable time for very large data files. Two advantages of the method are that (a) it is applicable to any model (animal, sire, multivariate, maternal effects...) and (b) it supplies off-diagonal coefficients of the inverse of the mixed model equations and can therefore be the basis of connectedness studies.
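The idea of estimating reliability from empirical variances of simulated true and estimated breeding values can be illustrated on a deliberately tiny model. The sketch below uses a one-record shrinkage estimator rather than a full animal model with pedigree, so it only shows the sampling principle: for a BLUP-type predictor, Cov(u, û) = Var(û), so the coefficient of determination can be estimated as Var(û)/Var(u) from a simulated sample.

```python
import numpy as np

rng = np.random.default_rng(3)
n, var_u, var_e = 20000, 1.0, 3.0

# Simulate true breeding values and single phenotype records.
u = rng.normal(0, np.sqrt(var_u), n)
y = u + rng.normal(0, np.sqrt(var_e), n)

# In this toy one-record model, the BLUP simply shrinks the phenotype;
# the shrinkage factor equals the theoretical reliability.
b = var_u / (var_u + var_e)
u_hat = b * y

# Sampling-based estimate: empirical Var(u_hat) over the known Var(u).
cd_empirical = u_hat.var() / var_u
print(round(b, 2), round(float(cd_empirical), 2))
```

With a large simulated sample the empirical coefficient of determination converges to the theoretical value of 0.25, mirroring the convergence behaviour the abstract reports (at the cost, in real evaluations, of repeatedly solving the mixed model equations).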

  15. Novel Resistance Measurement Method: Analysis of Accuracy and Thermal Dependence with Applications in Fiber Materials

    Directory of Open Access Journals (Sweden)

    Silvia Casans

    2016-12-01

    Full Text Available Material resistance is important since different physicochemical properties can be extracted from it. This work describes a novel resistance measurement method valid for a wide range of resistance values up to 100 GΩ at a low powered, small sized, digitally controlled and wireless communicated device. The analog and digital circuits of the design are described, analysing the main error sources affecting the accuracy. Accuracy and extended uncertainty are obtained for a pattern decade box, showing a maximum of 1% accuracy for temperatures below 30 °C in the range from 1 MΩ to 100 GΩ. Thermal analysis showed stability up to 50 °C for values below 10 GΩ and systematic deviations for higher values. Power supply Vi applied to the measurement probes is also analysed, showing no differences in case of the pattern decade box, except for resistance values above 10 GΩ and temperatures above 35 °C. To evaluate the circuit behaviour under fiber materials, an 11-day drying process in timber from four species (Oregon pine-Pseudotsuga menziesii, cedar-Cedrus atlantica, ash-Fraxinus excelsior, chestnut-Castanea sativa) was monitored. Results show that the circuit, as expected, provides different resistance values (they need individual conversion curves) for different species and the same ambient conditions. Additionally, it was found that, contrary to the decade box analysis, Vi affects the resistance value due to material properties. In summary, the proposed circuit is able to accurately measure material resistance that can be further related to material properties.

  16. Novel Resistance Measurement Method: Analysis of Accuracy and Thermal Dependence with Applications in Fiber Materials.

    Science.gov (United States)

    Casans, Silvia; Rosado-Muñoz, Alfredo; Iakymchuk, Taras

    2016-12-14

    Material resistance is important since different physicochemical properties can be extracted from it. This work describes a novel resistance measurement method valid for a wide range of resistance values up to 100 GΩ at a low powered, small sized, digitally controlled and wireless communicated device. The analog and digital circuits of the design are described, analysing the main error sources affecting the accuracy. Accuracy and extended uncertainty are obtained for a pattern decade box, showing a maximum of 1% accuracy for temperatures below 30 °C in the range from 1 MΩ to 100 GΩ. Thermal analysis showed stability up to 50 °C for values below 10 GΩ and systematic deviations for higher values. Power supply Vi applied to the measurement probes is also analysed, showing no differences in case of the pattern decade box, except for resistance values above 10 GΩ and temperatures above 35 °C. To evaluate the circuit behaviour under fiber materials, an 11-day drying process in timber from four species (Oregon pine-Pseudotsuga menziesii, cedar-Cedrus atlantica, ash-Fraxinus excelsior, chestnut-Castanea sativa) was monitored. Results show that the circuit, as expected, provides different resistance values (they need individual conversion curves) for different species and the same ambient conditions. Additionally, it was found that, contrary to the decade box analysis, Vi affects the resistance value due to material properties. In summary, the proposed circuit is able to accurately measure material resistance that can be further related to material properties.

  17. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  18. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time needed to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence time.

  19. Accuracy Analysis for 6-DOF PKM with Sobol Sequence Based Quasi Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    Jianguang Li; Jian Ding; Lijie Guo; Yingxue Yao; Zhaohong Yi; Huaijing Jing; Honggen Fang

    2015-01-01

    To improve the precision of pose error analysis for a 6-DOF parallel kinematic mechanism (PKM) during assembly quality control, a Sobol sequence based Quasi Monte Carlo (QMC) method is introduced and implemented in pose accuracy analysis for the PKM in this paper. The Sobol sequence based Quasi Monte Carlo method, with the regularity and uniformity of its samples in high dimensions, can outperform the traditional Monte Carlo method with up to 98.59% and 98.25% enhancement in the computational precision of pose error statistics. A PKM tolerance design system integrating this method is then developed, and with it the pose error distributions of the PKM within a prescribed workspace are finally obtained and analyzed.
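
The abstract's point is that low-discrepancy samples estimate error statistics more precisely than pseudo-random ones. The paper uses Sobol sequences; as a dependency-free sketch, the toy below uses the related Halton/van der Corput construction (in practice `scipy.stats.qmc.Sobol` would be the direct tool). The "pose error" integrand is a stand-in, not the paper's model.

```python
import random

def van_der_corput(n, base=2):
    """n-th element of the van der Corput low-discrepancy sequence in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton(n_points, bases=(2, 3)):
    """2-D Halton sequence, a simple relative of the Sobol sequence."""
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n_points + 1)]

# Toy "pose error" statistic: mean of f over two uniform tolerance variables.
f = lambda x, y: x * y          # exact mean over [0,1]^2 is 0.25

n = 1024
qmc_est = sum(f(x, y) for x, y in halton(n)) / n     # low-discrepancy estimate

rng = random.Random(0)
mc_est = sum(f(rng.random(), rng.random()) for _ in range(n)) / n  # plain MC
```

For smooth integrands the QMC estimate converges roughly like O(log^d n / n) versus O(1/sqrt(n)) for plain Monte Carlo, which is the source of the precision gains the paper reports.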

  20. The Impact of the Implementation of Edge Detection Methods on the Accuracy of Automatic Voltage Reading

    Science.gov (United States)

    Sidor, Kamil; Szlachta, Anna

    2017-04-01

    The article presents the impact of the edge detection method in the image analysis on the reading accuracy of the measured value. In order to ensure the automatic reading of the measured value by an analog meter, a standard webcam and the LabVIEW programme were applied. NI Vision Development tools were used. The Hough transform was used to detect the indicator. The programme output was compared during the application of several methods of edge detection. Those included: the Prewitt operator, the Roberts cross, the Sobel operator and the Canny edge detector. The image analysis was made for an analog meter indicator with the above-mentioned methods, and the results of that analysis were compared with each other and presented.

  1. The Impact of the Implementation of Edge Detection Methods on the Accuracy of Automatic Voltage Reading

    Directory of Open Access Journals (Sweden)

    Sidor Kamil

    2017-04-01

    Full Text Available The article presents the impact of the edge detection method in the image analysis on the reading accuracy of the measured value. In order to ensure the automatic reading of the measured value by an analog meter, a standard webcam and the LabVIEW programme were applied. NI Vision Development tools were used. The Hough transform was used to detect the indicator. The programme output was compared during the application of several methods of edge detection. Those included: the Prewitt operator, the Roberts cross, the Sobel operator and the Canny edge detector. The image analysis was made for an analog meter indicator with the above-mentioned methods, and the results of that analysis were compared with each other and presented.
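
The two records above compare several edge detectors; as a concrete illustration, here is a minimal pure-Python sketch of one of them, the Sobel operator, applied to a tiny grayscale image held as a list of lists (any real pipeline would use an image library instead).

```python
# Minimal sketch of the Sobel operator, one of the edge detectors compared above.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def convolve3(img, kernel, r, c):
    """Apply a 3x3 kernel centred on pixel (r, c)."""
    return sum(img[r + i - 1][c + j - 1] * kernel[i][j]
               for i in range(3) for j in range(3))

def sobel_magnitude(img):
    """Gradient magnitude for interior pixels; borders are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = convolve3(img, SOBEL_X, r, c)
            gy = convolve3(img, SOBEL_Y, r, c)
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the gradient magnitude peaks along the transition.
img = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(img)
```

The indicator-detection step in the article would then feed such an edge map into the Hough transform to locate the meter's needle.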

  2. The accuracy of 15-25 years age estimation using panoramic radiographs with the Thevissen method in Indonesia

    Science.gov (United States)

    Hidayati, D. S.; Suryonegoro, H.; Makes, B. N.

    2017-08-01

    Age estimation is important for individual identification. Root development of the third molars occurs at ages 15-25 years. This study was conducted to determine the accuracy of age estimation using the Thevissen method in Indonesia. The Thevissen method was applied to 100 panoramic radiographs of both male and female subjects. Reliability was tested by the Dahlberg formula and Cohen’s Kappa test, and significance was tested by the paired t-test and the Wilcoxon test. The deviation of the estimated age was then calculated. The deviation of age estimation was ±3.050 years and ±2.067 years for male and female subjects, respectively; the deviation for female subjects was less than that for male subjects. Age estimation with the Thevissen method is preferred for ages 15-22 years.

  3. Accuracy of the Water Vapour Content Measurements in the Atmosphere Using Optical Methods

    CERN Document Server

    Galkin, V D; Alekseeva, G A; Novikov, V V; Pakhomov, V P

    2010-01-01

    This paper describes the accuracy and the errors of water vapour content measurements in the atmosphere using optical methods, especially the star photometer. After general explanations of the expressions used for star-magnitude observations of water vapour absorption, section 3 discusses the absorption model for the water vapour band. Sections 4 and 5 give an overview of the techniques for determining the model parameters from both spectroscopic laboratory data and radiosonde observation data. Finally, sections 6 and 7 deal with the details of the errors; that is, errors of the observed magnitude, of the instrumental extraterrestrial magnitude, of the atmospheric extinction determination, and of the water vapour content determination by radiosonde humidity measurements. The main conclusion is: because of the high precision of the results, the optical methods for water vapour observation are suited to validate and calibrate alternative methods (GPS, LIDAR, MICROWAVE) which are making constant progress wo...

  4. Subglacial bedform orientation, one-dimensional size, and directional shape measurement method accuracy

    Science.gov (United States)

    Jorge, Marco G.; Brennand, Tracy A.

    2016-04-01

    This study is an assessment of previously reported automated methods and of a new method for measuring longitudinal subglacial bedform (LSB) morphometry. It evaluates the adequacy (accuracy and precision) of orientation, length and longitudinal asymmetry data derived from the longest straight line (LSL) enclosed by the LSB's footprint, the footprint's minimum bounding rectangle longitudinal axis (RLA) and the footprint's standard deviational ellipse (SDE) longitudinal axis (LA) (new method), and the adequacy of length based on an ellipse fitted to the area and perimeter of the footprint (elliptical length). Tests are based on 100 manually mapped drumlins and mega-scale glacial lineations representing the size and shape range of LSBs in the Puget Lowland drumlin field, WA, USA. Data from manually drawn LAs are used as reference for method evaluation. With the exception of elliptical length, errors decrease rapidly with increasing footprint elongation (decreasing potential angular divergence between LAs). For LSBs with elongation methods had very small mean absolute error (MAE) in all measures (e.g., MAE method should be avoided for orientation (36% of the errors were larger than 5°). 3) Elliptical length was the least accurate of all methods (MAE of 56.1 m and 15% of the errors larger than 5%); its use should be discontinued. 4) The relative adequacy of the LSL and RLA depends on footprint shape; SDE computed with the footprint's structural vertices is relatively shape-independent and is the preferred method. This study is significant also for negative-relief, and fluvial and aeolian bedforms.
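
The SDE longitudinal-axis method favoured above can be sketched compactly: the major-axis orientation of a standard deviational ellipse is the principal axis of the footprint vertices' coordinate covariance. The helper below is an illustrative reconstruction, not the authors' code, and the vertex list is fabricated.

```python
import math

def sde_orientation_deg(points):
    """Orientation (degrees, in [0, 180)) of the standard deviational ellipse's
    major axis for a set of footprint vertices, via the principal axis of the
    coordinate covariance matrix."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)   # principal-axis angle
    return math.degrees(theta) % 180.0

# Fabricated footprint vertices stretched along roughly the 45-degree direction:
pts = [(0, 0), (1, 1), (2, 2), (3, 3), (1.2, 0.8), (1.8, 2.2)]
print(sde_orientation_deg(pts))
```

Because the covariance uses all vertices rather than any single chord, the result is far less sensitive to footprint shape than the longest-straight-line approach, which matches the study's conclusion.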

  5. Photogrammetric determination of coordinates and deformations: analysis of the accuracy of this method

    Energy Technology Data Exchange (ETDEWEB)

    Korablev, D.P.; Fomichev, L.V.; Trunin, A.P.

    1979-01-01

    The photogrammetric method for determining coordinates and deformation, developed at the VNIMI, is based on the analytic determination of the coordinates of points of the sample from measurements of a single stereogram or photograph. The measurements are closely controlled. Calculations are done on a computer. In addition to calculating the point coordinates and various deformation values (vertical and horizontal displacements, slopes, deflections, etc.), the accuracy of the results is evaluated: the standard deviation of unit weight, and the rms errors of the adjusted values of photo orientation on real photographs and analytical models. The following conclusions and assumptions were obtained on the basis of these studies: 1. When finding the deformation of a flat object, if the points of the last deformation show practically no displacement in the direction normal to the flat surface, then the photos should be taken separately and processed by analytical transformation, or by the parallax method measured from stereograms with a ''time basis''. 2. When using convergent photography, there is a significant increase in the accuracy of the coordinate determination in the direction perpendicular to the photographic reference line, while there is almost no change in accuracy along the two other axes when compared to normal photography. Optimal symmetric-convergent exposure has a convergence angle of 60 to 120° and a 1.5 to 2 ratio of the photographic reference line to the average distance to the object (along a normal to the reference). The stereogram of symmetric-convergent photography at 100% coverage of the photographs encompasses an area two to three times larger than the usual stereogram. 3. The distribution of the reference points should be considered optimal when they bound the working area of the photograph (stereogram). When photographing volumes, they should bound the object in the plan and side views.

  6. Methods for estimating the evapotranspiration of reference for Santo Antônio do Leverger-MT

    Directory of Open Access Journals (Sweden)

    Alessandro Ferronato

    2016-08-01

    Full Text Available In order to compare different methods of calculating reference evapotranspiration (ET0) with the standard FAO-56 Penman-Monteith method, daily weather data from the Pe. Ricardo Remetter Weather Station, installed at the Experimental Farm of the Federal University of Mato Grosso, were used for the seasons of 2006. The Penman and Hargreaves-Samani methods proved adequate for estimating ET0 in all seasons of the year when compared to the standard FAO-56 Penman-Monteith method. The Class A pan method did not provide satisfactory adjustment for estimating ET0 compared to the FAO-56 Penman-Monteith method. It was in the humid seasons (spring and summer) that the largest number of methods achieved "excellent", "very good" and "good" performance on the index "c", compared with the dry seasons (autumn and winter).
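
One of the methods compared above, Hargreaves-Samani, is simple enough to sketch directly. With temperatures in °C and extraterrestrial radiation Ra expressed in mm/day of equivalent evaporation, the standard form is ET0 = 0.0023 (Tmean + 17.8) sqrt(Tmax - Tmin) Ra. The input values below are illustrative, not from the paper.

```python
import math

def et0_hargreaves_samani(t_mean, t_max, t_min, ra_mm_day):
    """Hargreaves-Samani reference evapotranspiration estimate (mm/day).
    ra_mm_day is extraterrestrial radiation in equivalent-evaporation units."""
    return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra_mm_day

# Illustrative warm-day values (not the paper's data):
print(round(et0_hargreaves_samani(25.0, 32.0, 18.0, 15.0), 2))  # mm/day
```

Its appeal, and presumably why it performed well here, is that it needs only temperature extremes plus a tabulated Ra, whereas FAO-56 Penman-Monteith also requires humidity, wind and radiation measurements.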

  7. Reduced conductivity dependence method for increase of dipole localization accuracy in the EEG inverse problem.

    Science.gov (United States)

    Yitembe, Bertrand Russel; Crevecoeur, Guillaume; Van Keer, Roger; Dupre, Luc

    2011-05-01

    The EEG is a neurological diagnostic tool with high temporal resolution. However, when solving the EEG inverse problem, its localization accuracy is limited because of noise in the measurements and uncertainties in the conductivity values used in the forward model evaluations. This paper proposes the reduced conductivity dependence (RCD) method for decreasing the localization error in EEG source analysis by limiting the propagation of the uncertain conductivity values to the solutions of the inverse problem. We redefine the traditional EEG cost function and, in contrast to previous approaches, introduce a selection procedure for the EEG potentials. The selected potentials are those least affected by the conductivity uncertainties when solving the inverse problem. We validate the methodology on the widely used three-shell spherical head model with a single electrical dipole and with multiple dipoles as the source model. The proposed RCD method enhances the source localization accuracy by a factor ranging between 2 and 4, depending on the dipole location and the measurement noise. © 2011 IEEE

  8. Evaluation of Different Estimation Methods for Accuracy and Precision in Biological Assay Validation.

    Science.gov (United States)

    Yu, Binbing; Yang, Harry

    2017-01-01

    Biological assays (bioassays) are procedures to estimate the potency of a substance by studying its effects on living organisms, tissues, and cells. Bioassays are essential tools for gaining insight into biologic systems and processes including, for example, the development of new drugs and monitoring environmental pollutants. Two of the most important parameters of bioassay performance are relative accuracy (bias) and precision. Although general strategies and formulas are provided in USP 〈1033〉, a comprehensive understanding of the definitions of bias and precision remains elusive. Additionally, whether there is a beneficial use of data transformation in estimating intermediate precision remains unclear. Finally, there are various statistical estimation methods available that often pose a dilemma for the analyst who must choose the most appropriate method. To address these issues, we provide both a rigorous definition of bias and precision as well as three alternative methods for calculating relative standard deviation (RSD). All methods perform similarly when the RSD ≤10%. However, the USP estimates result in larger bias and root-mean-square error (RMSE) compared to the three proposed methods when the actual variation is large. Therefore, the USP method should not be used for routine analysis. For data with moderate skewness and deviation from normality, the estimates based on the original scale perform well. The original scale method is preferred, and the method based on log-transformation may be used for noticeably skewed data. LAY ABSTRACT: Biological assays, or bioassays, are essential in the development and manufacture of biopharmaceutical products for potency testing and quality monitoring. Two important parameters of assay performance are relative accuracy (bias) and precision. The definitions of bias and precision in USP 〈1033〉 are elusive and confusing. Another complicating issue is whether log-transformation should be used for calculating the
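
Two of the contrasted estimators can be sketched concretely: the plain original-scale RSD (SD divided by mean) and a log-scale, geometric RSD derived from the SD of log-potencies, %GCV = 100·sqrt(exp(s_ln²) − 1). The exact formulas the paper compares are not given in the abstract, so treat this as a generic illustration; the potency values are fabricated.

```python
import math
from statistics import mean, stdev

def rsd_original(xs):
    """Original-scale relative standard deviation (as a fraction)."""
    return stdev(xs) / mean(xs)

def rsd_log(xs):
    """Geometric RSD from the SD of log-potencies: sqrt(exp(s_ln^2) - 1)."""
    s_ln = stdev([math.log(x) for x in xs])
    return math.sqrt(math.exp(s_ln ** 2) - 1.0)

potencies = [90.0, 100.0, 110.0]   # fabricated relative potencies (%)
print(rsd_original(potencies), rsd_log(potencies))
```

For small spreads the two agree closely (both near 10% here); they diverge as variation grows, which is the regime where the paper finds the choice of estimator matters.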

  9. Measurement of glomerular filtration rate in adults: accuracy of five single-sample plasma clearance methods

    DEFF Research Database (Denmark)

    Rehling, M; Rabøl, A

    1989-01-01

    After an intravenous injection of a tracer that is removed from the body solely by filtration in the kidneys, the glomerular filtration rate (GFR) can be determined from its plasma clearance. The method requires a great number of blood samples but collection of urine is not needed. In the present......-acetate) was determined simultaneously. Using these clearance values as reference, the accuracy of six simplified methods was studied: five single-sample methods and one five-sample method. The standard error of estimate (SEE) of the single-sample methods ranged from 4.2 to 7.5 ml min-1 using EDTA, and from 3.8 to 6.3 ml...... min-1 using DTPA. SEE of the five-sample method was 3.0 ml min-1 (EDTA) and 3.1 ml min-1 (DTPA). The single-sample methods given by Christensen & Groth (1986) and by Tauxe (1986) are recommended for daily use, as SEE was small even at low GFR values. In patients with GFR less than 80 ml min-1...
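
The multi-sample reference idea can be sketched: fit a mono-exponential C(t) = C0·exp(−kt) to late plasma samples by log-linear least squares and take clearance = dose/AUC = dose·k/C0. The single-sample formulas of Christensen & Groth and Tauxe are empirical and are not reproduced here; the function and sample values below are an illustrative sketch only.

```python
import math

def plasma_clearance(dose, times, concs):
    """Clearance = dose / AUC from a mono-exponential fit C(t) = C0*exp(-k*t),
    fitted by least squares on log-concentrations."""
    n = len(times)
    ys = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    k = -slope                       # elimination rate constant
    c0 = math.exp(ybar - slope * tbar)
    return dose * k / c0             # AUC of C0*exp(-k*t) over [0, inf) is C0/k

# Synthetic check: C0 = 100 kBq/L, k = 0.01 min^-1, dose = 4000 kBq
ts = [120, 150, 180, 210, 240]
cs = [100 * math.exp(-0.01 * t) for t in ts]
```

With these synthetic values the recovered clearance is dose·k/C0 = 0.4 L/min; the single-sample methods evaluated in the study approximate this quantity from one late concentration instead of a full curve.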

  10. VIKOR method with enhanced accuracy for multiple criteria decision making in healthcare management.

    Science.gov (United States)

    Zeng, Qiang-Lin; Li, Dan-Dan; Yang, Yi-Bin

    2013-04-01

    Višekriterijumsko kompromisno rangiranje (VIKOR) is one of the commonly used multi-criteria decision making (MCDM) methods for improving the quality of decision making. VIKOR has the advantage of providing a ranking procedure for positive and negative attributes when it is used and examined in decision support. However, we noticed that this method may fail to support an objective result in the medical field, because most medical data have normal reference ranges (e.g., for normally distributed data: NRR ∈ [μ ± 1.96σ]); this limitation has a negative effect on its acceptance as an effective decision-supporting method in medical decision making. This paper proposes an improved VIKOR method with enhanced accuracy (ea-VIKOR) to make it suitable for such data in the medical field by introducing a new data normalization method that takes the original distance to the normal reference range (ODNRR) into account. In addition, an experimental example is presented to demonstrate the efficiency and feasibility of the ea-VIKOR method; the results demonstrate the ability of ea-VIKOR to deal with such data and support decision making in healthcare management. For this reason, the ea-VIKOR should be considered for use as a decision support tool in future studies.
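
The abstract does not give the exact ea-VIKOR formulas, so the following is only a hedged sketch of the core ODNRR idea: values inside the normal reference range [low, high] are ideal (distance 0), and values outside are penalized by how far they fall outside, scaled to the worst observed deviation. Function names and the glucose example are hypothetical.

```python
def odnrr(value, low, high):
    """Distance from a measurement to its normal reference range [low, high]."""
    if value < low:
        return low - value
    if value > high:
        return value - high
    return 0.0

def normalize_column(values, low, high):
    """ODNRR-style normalization of one criterion: 0 = ideal, 1 = worst."""
    dists = [odnrr(v, low, high) for v in values]
    worst = max(dists)
    return [d / worst if worst > 0 else 0.0 for d in dists]

# Fasting glucose (mmol/L) with an illustrative reference range of 3.9-6.1:
scores = normalize_column([5.0, 7.1, 3.4, 6.1], 3.9, 6.1)
```

Conventional VIKOR normalization would rank 3.4 as "better" than 5.0 on a smaller-is-better criterion; the range-based distance correctly treats both in-range values as ideal, which is the limitation the paper addresses.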

  11. Welcome to Mt. Huangshan

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    The newly developed tourist city of Huangshan possesses the extremely beautiful natural scenery of Mt. Huangshan, as well as the splendid culture of its 2,300-year history, providing marvelous enjoyment for both domestic and foreign tourists. The scenery in Mt. Huangshan is lovely all year round, with its "four unique views"—the rocks, the pine trees, the sea of clouds and the hot springs. In the areas around Mt. Huangshan, people can see folk customs as well as ancient architecture in the form of bridges, residences, arch towers, streets and ancestral halls.

  12. Dimensional accuracy of acrylic resin maxillary denture base polymerized by a new injection pressing method.

    Science.gov (United States)

    Ono, Takahiro; Kita, Seiichi; Nokubi, Takashi

    2004-09-01

    The purpose of this study was to confirm the dimensional accuracy of a newly developed injection pressing method for resin polymerization by making use of the internal electric resistance of resin to determine the optimal timing for resin injection. A new injection pressing polymerization pot with a built-in system to measure the internal electric resistance of resin was used for resin polymerization. Fluid-type resin was injected into the mold of a maxillary complete denture base under nine different conditions: three different timings for resin injection according to the electric resistance of resin dough (early stage: 11 Mohms; intermediate stage: 16 Mohms; final stage: 21 Mohms) and three different motor powers for resin injection (2000 N, 4000 N, and 6000 N). In the best polymerization condition (injected during the early stage of resin dough under a motor power of 6000 N), the adaptation of the denture base showed a statistically significant improvement compared with the conventional pouring method.
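
The timing rule above keys resin injection to the dough's internal electric resistance (11, 16 and 21 Mohm for the early, intermediate and final stages). One plausible reading of those thresholds as a stage classifier is sketched below; the boundaries chosen here are an assumption, since the abstract gives only the three set points.

```python
# Hypothetical stage classifier from the resin's internal electric resistance,
# using the abstract's set points (11, 16, 21 Mohm) as stage boundaries.

def dough_stage(resistance_mohm):
    if resistance_mohm < 11:
        return "too early"
    if resistance_mohm < 16:
        return "early"          # 11 Mohm set point
    if resistance_mohm < 21:
        return "intermediate"   # 16 Mohm set point
    return "final"              # 21 Mohm set point

print(dough_stage(11))
```

In the study, injecting at the "early" stage (11 Mohm) under the highest injection power gave the best denture base adaptation.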

  13. Methods used by Elsam for monitoring precision and accuracy of analytical results

    Energy Technology Data Exchange (ETDEWEB)

    Hinnerskov Jensen, J. [Soenderjyllands Hoejspaendingsvaerk, Faelleskemikerne, Aabenraa (Denmark)

    1996-12-01

    Performing round robins at regular intervals is the primary method used by Elsam for monitoring the precision and accuracy of analytical results. The first round robin was started in 1974, and today 5 round robins are running. These are focused on: boiler water and steam, lubricating oils, coal, ion chromatography and dissolved gases in transformer oils. Besides the power plant laboratories in Elsam, the participants are power plant laboratories from the rest of Denmark, industrial and commercial laboratories in Denmark, and finally foreign laboratories. The calculated standard deviations or reproducibilities are compared with acceptable values. These values originate from ISO, ASTM and the like, or from our own experience. Besides providing the laboratories with a tool to check their momentary performance, the round robins are very suitable for evaluating systematic developments on a long-term basis. By splitting up the uncertainty according to methods, sample preparation/analysis, etc., knowledge can be extracted from the round robins for use in many other situations. (au)
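
A common way to score participants in such round robins (though not necessarily Elsam's exact procedure, which the abstract does not detail) is the proficiency-testing z-score: each laboratory's deviation from an assigned value, in units of a target standard deviation. The laboratory results below are fabricated.

```python
from statistics import mean, stdev

def z_scores(results, assigned=None, sigma=None):
    """Per-laboratory z-scores for one round-robin measurand. If no assigned
    value or target sigma is supplied, use the participants' consensus mean
    and standard deviation (a common fallback choice)."""
    assigned = mean(results) if assigned is None else assigned
    sigma = stdev(results) if sigma is None else sigma
    return [(x - assigned) / sigma for x in results]

# Fabricated coal ash contents (%) reported by five laboratories:
labs = [12.1, 12.3, 11.9, 12.2, 13.4]
zs = z_scores(labs, assigned=12.2, sigma=0.3)
# Conventionally |z| <= 2 is "satisfactory" and |z| > 3 "unsatisfactory",
# so the last laboratory (z = 4) would be flagged for follow-up.
```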

  14. Confusion assessment method: a systematic review and meta-analysis of diagnostic accuracy

    Directory of Open Access Journals (Sweden)

    Shi Q

    2013-09-01

    Full Text Available Qiyun Shi,1,2 Laura Warren,3 Gustavo Saposnik,2 Joy C MacDermid1 1Health and Rehabilitation Sciences, Western University, London, Ontario, Canada; 2Stroke Outcomes Research Center, Department of Medicine, St Michael's Hospital, University of Toronto, Toronto, Ontario, Canada; 3Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada Background: Delirium is common in the early stages of hospitalization for a variety of acute and chronic diseases. Objectives: To evaluate the diagnostic accuracy of two delirium screening tools, the Confusion Assessment Method (CAM) and the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU). Methods: We searched MEDLINE, EMBASE, and PsychInfo for relevant articles published in English up to March 2013. We compared the two screening tools to Diagnostic and Statistical Manual of Mental Disorders IV criteria. Two reviewers independently assessed studies to determine their eligibility, validity, and quality. Sensitivity and specificity were calculated using a bivariate model. Results: Twenty-two studies (n = 2,442 patients) met the inclusion criteria. All studies demonstrated that these two scales can be administered within ten minutes by trained clinical or research staff. The pooled sensitivity and specificity for CAM were 82% (95% confidence interval [CI]: 69%–91%) and 99% (95% CI: 87%–100%), and 81% (95% CI: 57%–93%) and 98% (95% CI: 86%–100%) for CAM-ICU, respectively. Conclusion: Both CAM and CAM-ICU are validated instruments for the diagnosis of delirium in a variety of medical settings. However, CAM and CAM-ICU both present higher specificity than sensitivity. Therefore, the use of these tools should not replace clinical judgment. Keywords: confusion assessment method, diagnostic accuracy, delirium, systematic review, meta-analysis

  15. High-accuracy infra-red thermography method using reflective marker arrays

    Science.gov (United States)

    Kirollos, Benjamin; Povey, Thomas

    2017-09-01

    In this paper, we describe a new method for high-accuracy infra-red (IR) thermography measurements in situations with significant spatial variation in reflected radiation from the surroundings, or significant spatial variation in surface emissivity due to viewing angle non-uniformity across the field of view. The method employs a reflective marker array (RMA) on the target surface—typically, high emissivity circular dots—and an integrated image analysis algorithm designed to require minimal human input. The new technique has two particular advantages which make it suited to high-accuracy measurements in demanding environments: (i) it allows the reflected radiation component to be calculated directly, in situ, and as a function of position, overcoming a key problem in measurement environments with non-uniform and unsteady stray radiation from the surroundings; (ii) using image analysis of the marker array (via the apparent aspect ratio of the circular reflective markers), the local viewing angle of the target surface can be estimated, allowing corrections for angular variation of local emissivity to be performed without prior knowledge of the geometry. A third advantage of the technique is that it allows for simple focus-stacking algorithms due to increased image entropy. The reflective marker array method is demonstrated for an isothermal, hemispherical object exposed to an external IR source arranged to give a significant non-uniform reflected radiation term. This is an example of a challenging environment, both because of the significant non-uniform reflected radiation term, and also the significant variation in target emissivity due to surface angle variation. We demonstrate that the new RMA IR technique leads to significantly lower error in evaluated surface temperature than conventional IR techniques. The method is applicable to any complex radiative environment.
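
The viewing-angle idea in (ii) reduces to simple geometry: a circular marker of diameter d viewed at angle θ from its surface normal projects to an ellipse with minor axis d·cos θ, so θ can be recovered from the apparent aspect ratio. The helper below is an illustrative sketch, not the paper's algorithm.

```python
import math

def viewing_angle_deg(major_px, minor_px):
    """Local surface viewing angle inferred from a circular marker's image:
    the projected ellipse has minor/major = cos(theta)."""
    ratio = min(minor_px / major_px, 1.0)   # guard against noisy ratios > 1
    return math.degrees(math.acos(ratio))

# A marker imaged with a 2:1 aspect ratio is being viewed at 60 degrees.
print(viewing_angle_deg(40.0, 20.0))
```

The estimated angle can then index an emissivity-versus-angle curve for the surface coating, giving the per-pixel emissivity correction described above.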

  16. Accuracy assessment of blind and semi-blind restoration methods for hyperspectral images

    Science.gov (United States)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2016-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present or the original image, blind restoration methods must be considered; otherwise, when partial information is available, semi-blind restoration methods can be considered. Numerous semi-blind and quite advanced methods are available in the literature. To get better insight into and feedback on the applicability and potential efficiency of a representative set of four recently proposed semi-blind methods, we have performed a comparative study of these methods in objective terms of blur filter and original image error estimation accuracy. In particular, we have paid special attention to the accurate recovery in the spectral dimension of the original spectral signatures. We have analyzed peculiarities and factors restricting the applicability of these methods. Our tests are performed on a synthetic hyperspectral image, degraded with various synthetic blurs (out-of-focus, Gaussian, motion) and with signal-independent noise of typical levels such as those encountered in real hyperspectral images. This synthetic image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from realistic reference spectral signatures to recover after synthetic degradation. Conclusions, practical recommendations and perspectives are drawn from the results experimentally obtained.

  17. Spatial distribution of soil heavy metal pollution estimated by different interpolation methods: accuracy and uncertainty analysis.

    Science.gov (United States)

    Xie, Yunfeng; Chen, Tong-bin; Lei, Mei; Yang, Jun; Guo, Qing-jun; Song, Bo; Zhou, Xiao-yong

    2011-01-01

    Mapping the spatial distribution of contaminants in soils is the basis of pollution evaluation and risk control. Interpolation methods are extensively applied in the mapping processes to estimate the heavy metal concentrations at unsampled sites. The performances of interpolation methods (inverse distance weighting, local polynomial, ordinary kriging and radial basis functions) were assessed and compared using the root mean square error for cross validation. The results indicated that all interpolation methods provided a high prediction accuracy of the mean concentration of soil heavy metals. However, the classic method based on percentages of polluted samples, gave a pollution area 23.54-41.92% larger than that estimated by interpolation methods. The difference in contaminated area estimation among the four methods reached 6.14%. According to the interpolation results, the spatial uncertainty of polluted areas was mainly located in three types of region: (a) the local maxima concentration region surrounded by low concentration (clean) sites, (b) the local minima concentration region surrounded with highly polluted samples; and (c) the boundaries of the contaminated areas.
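
One of the compared interpolators, inverse distance weighting, and the leave-one-out cross-validation RMSE used to score the methods are both small enough to sketch. This is a generic illustration with fabricated sample points, not the study's data or software.

```python
import math

def idw(x, y, samples, power=2.0):
    """Inverse distance weighted estimate at (x, y).
    samples: list of (xs, ys, value) tuples."""
    num = den = 0.0
    for xs, ys, v in samples:
        d2 = (x - xs) ** 2 + (y - ys) ** 2
        if d2 == 0.0:
            return v                      # exact hit on a sample point
        w = d2 ** (-power / 2.0)          # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

def loo_rmse(samples):
    """Leave-one-out cross-validation RMSE, as used to compare interpolators."""
    errs = []
    for i, (x, y, v) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        errs.append(idw(x, y, rest) - v)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Fabricated heavy metal concentrations (mg/kg) at four sampling sites:
pts = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 11.0), (1, 1, 13.0)]
```

Running `loo_rmse` for each candidate method (IDW, kriging, etc.) on the same samples gives exactly the comparison metric the abstract describes.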

  18. A method for improved accuracy in three dimensions for determining wheel/rail contact points

    Science.gov (United States)

    Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang

    2015-11-01

    Searching for the contact points between wheels and rails is important because these points represent the points of exerted contact forces. In order to obtain an accurate contact point and an in-depth description of wheel/rail contact behaviour on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail, considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, with no need for curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve the computation efficiency and reduce the calculation required. The method is applied to the analysis of contact between CHN 75 kg/m rails and the wearing-type-tread wheel sets of China's freight cars. In addition, the results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, with a maximum deviation in wheel/rail contact patch area between the two methods of approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, for both non-worn and worn wheel and rail profiles.
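
The core search can be illustrated in one dimension (this is a toy sketch, not the paper's three-dimensional range-iteration algorithm): with both profiles sampled on a common lateral grid, a first guess at the contact point is the lateral position minimizing the vertical wheel-rail gap. The profile samples are fabricated.

```python
def contact_index(wheel_z, rail_z):
    """Index of the lateral grid position with the smallest vertical gap.
    wheel_z, rail_z: vertical profile samples on the same lateral grid."""
    gaps = [w - r for w, r in zip(wheel_z, rail_z)]
    return min(range(len(gaps)), key=gaps.__getitem__)

wheel = [3.0, 2.0, 1.2, 2.0, 3.0]   # fabricated wheel profile heights
rail  = [0.0, 0.5, 1.0, 0.5, 0.0]   # fabricated rail profile heights
print(contact_index(wheel, rail))   # position with gap 0.2
```

The paper's method generalizes this minimum-distance search to full 3-D profile surfaces under yaw and roll of the wheel set, iterating over successively narrowed ranges instead of scanning a fixed grid.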

  19. Accuracy of Nearly Extinct Cohort Methods for Estimating Very Elderly Subnational Populations

    Directory of Open Access Journals (Sweden)

    Wilma Terblanche

    2015-01-01

    Full Text Available Increasing very elderly populations (ages 85+) have potentially major implications for the cost of income support, aged care, and healthcare. The availability of accurate estimates for this population age group, not only at a national level but also at a state or regional scale, is vital for policy development, budgeting, and planning for services. At the highest ages, census-based population estimates are well known to be problematic, and previous studies have demonstrated that more accurate estimates can be obtained indirectly from death data. This paper assesses indirect estimation methods for estimating state-level very elderly populations from death counts. A method for incorporating internal migration is also proposed. The results confirm that the accuracy of official estimates deteriorates rapidly with increasing age from 95, that the survivor ratio method can be successfully applied at the subnational level, and that internal migration is minor. It is shown that the simpler alternative of applying the survivor ratio method at a national level and apportioning the estimates between the states produces very accurate estimates for most states and years; this is the recommended method. While the methods are applied at the state level in Australia, the principles are generic and applicable to other subnational geographies.
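The extinct-cohort idea underlying these methods is simple enough to state in code: a cohort's population at the start of year t equals all deaths that cohort will experience from year t onward (ignoring migration, which the study found to be minor at these ages). The death counts below are invented, and this is a sketch of the principle, not the paper's full survivor ratio procedure.

```python
# deaths[k] = deaths in the cohort during year t + k (invented numbers).
deaths = [120, 95, 70, 48, 30, 17, 8, 3, 1, 0]

def extinct_cohort_population(deaths):
    """Population at the start of year t = all subsequent deaths of the cohort."""
    return sum(deaths)

def survivor_ratio(deaths, k):
    """Fraction of the cohort still alive after k further years -- the quantity
    the survivor ratio method models for cohorts that are not yet extinct."""
    total = sum(deaths)
    return sum(deaths[k:]) / total

print(extinct_cohort_population(deaths))       # 392
print(round(survivor_ratio(deaths, 1), 3))
```

For nearly (rather than fully) extinct cohorts, the method extrapolates the remaining deaths using survivor ratios estimated from older, completed cohorts.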

  20. Toward an MT System without Pre-Editing - Effects of New Methods in ALT-J/E

    CERN Document Server

    Ikehara, S; Yokoo, A; Nakaiwa, H; Ikehara, Satoru; Shirai, Satoshi; Yokoo, Akio; Nakaiwa, Hiromi

    1991-01-01

    Recently, several types of Japanese-to-English machine translation systems have been developed, but all of them require an initial process of rewriting the original text into easily translatable Japanese. Therefore these systems are unsuitable for translating information that needs to be speedily disseminated. To overcome this limitation, a Multi-Level Translation Method based on the Constructive Process Theory has been proposed. This paper describes the benefits of using this method in the Japanese-to-English machine translation system ALT-J/E. In comparison with conventional compositional methods, the Multi-Level Translation Method emphasizes the importance of the meaning contained in expression structures as a whole. It is shown to be capable of translating typical written Japanese based on the meaning of the text in its context, with comparative ease. We are now hopeful of carrying out useful machine translation with no manual pre-editing.

  1. Statistical downscaling of precipitation using local regression and high accuracy surface modeling method

    Science.gov (United States)

    Zhao, Na; Yue, Tianxiang; Zhou, Xun; Zhao, Mingwei; Liu, Yu; Du, Zhengping; Zhang, Lili

    2017-07-01

    Downscaling precipitation is required in local scale climate impact studies. In this paper, a statistical downscaling scheme was presented with a combination of geographically weighted regression (GWR) model and a recently developed method, high accuracy surface modeling method (HASM). This proposed method was compared with another downscaling method using the Coupled Model Intercomparison Project Phase 5 (CMIP5) database and ground-based data from 732 stations across China for the period 1976-2005. The residual which was produced by GWR was modified by comparing different interpolators including HASM, Kriging, inverse distance weighted method (IDW), and Spline. The spatial downscaling from 1° to 1-km grids for period 1976-2005 and future scenarios was achieved by using the proposed downscaling method. The prediction accuracy was assessed at two separate validation sites throughout China and Jiangxi Province on both annual and seasonal scales, with the root mean square error (RMSE), mean relative error (MRE), and mean absolute error (MAE). The results indicate that the developed model in this study outperforms the method that builds transfer function using the gauge values. There is a large improvement in the results when using a residual correction with meteorological station observations. In comparison with the other three classical interpolators, HASM shows better performance in modifying the residual produced by the local regression method. The success of the developed technique lies in the effective use of the datasets and the modification process of the residual by using HASM. The results from the future climate scenarios show that precipitation exhibits an overall increasing trend from T1 (2011-2040) to T2 (2041-2070) and T2 to T3 (2071-2100) in RCP2.6, RCP4.5, and RCP8.5 emission scenarios. The most significant increase occurs in RCP8.5 from T2 to T3, while the lowest increase is found in RCP2.6 from T2 to T3, increased by 47.11 and 2.12 mm, respectively.
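The two-step structure of the scheme (regression trend plus interpolated residual correction) can be sketched with simple stand-ins: a global linear regression replaces GWR, and IDW replaces HASM for the residual surface. All station data below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented station data: precipitation loosely tied to elevation (1-D coords).
x = rng.uniform(0, 100, 50)                   # station coordinate
elev = rng.uniform(0, 2000, 50)               # station elevation (m)
precip = 800 - 0.2 * elev + 30 * np.sin(x / 15) + rng.normal(0, 5, 50)

# Step 1: regression trend (a global stand-in for GWR in the paper).
A = np.column_stack([np.ones_like(elev), elev])
coef, *_ = np.linalg.lstsq(A, precip, rcond=None)
residual = precip - A @ coef

# Step 2: interpolate the residuals to the fine grid (IDW here; the paper
# found HASM did this step best) and add them back onto the trend.
grid_x = np.linspace(0, 100, 500)
order = np.argsort(x)
grid_elev = np.interp(grid_x, x[order], elev[order])
d = np.abs(grid_x[:, None] - x[None, :])
w = 1.0 / np.maximum(d, 1e-9)**2
resid_grid = (w @ residual) / w.sum(axis=1)
downscaled = coef[0] + coef[1] * grid_elev + resid_grid
print(downscaled.shape)
```

The residual-correction step is what the paper credits for the large accuracy improvement; the choice of residual interpolator (HASM vs. Kriging, IDW, Spline) is the variable being compared.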

  3. Dimensional accuracy optimization of the micro-plastic injection molding process using the Taguchi design method

    Directory of Open Access Journals (Sweden)

    Chil-Chyuan KUO

    2015-06-01

    Full Text Available Plastic injection molding is an important field in the manufacturing industry because many plastic products are produced by injection molding. However, the time and cost required to produce a precision mold are the most troublesome problems limiting application at the development stage of a new product in the precision machinery industry. This study presents an approach to manufacturing a hard mold with microfeatures for micro-plastic injection molding. It also applies the Taguchi design method to investigate the effect of injection parameters on the dimensional accuracy of a Fresnel lens during plastic injection molding. It was found that the dominant factor affecting the microgroove depth of the Fresnel lens is packing pressure. The optimum processing parameters are a packing pressure of 80 MPa, melt temperature of 240 °C, mold temperature of 90 °C, and injection speed of 50 m/s. The dimensional accuracy of the Fresnel lens can be controlled within ±3 µm using the optimum levels of the process parameters, as verified by the confirmation test. The results of this study have industrial application value because electro-optical industries can significantly reduce the development cycle time of a new optical element. DOI: http://dx.doi.org/10.5755/j01.ms.21.2.5864
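Taguchi analysis of this kind ranks factor levels by a signal-to-noise ratio; for dimensional deviation, the smaller-the-better form S/N = -10·log10(mean(y²)) applies. The sketch below uses an invented two-level L4 array and invented deviation data, not the study's actual design or measurements.

```python
import numpy as np

def sn_smaller_better(y):
    """Smaller-the-better S/N ratio used in Taguchi analysis (dB)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Invented L4 orthogonal array: 3 two-level factors, e.g.
# (packing pressure, melt temperature, mold temperature).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
deviation = [[5.1, 4.9], [4.2, 4.4], [3.1, 3.3], [2.8, 3.0]]  # µm, 2 replicates

sn = [sn_smaller_better(d) for d in deviation]
for factor in range(3):
    # Main effect: average S/N over the runs at each level; higher is better.
    effect = [np.mean([s for row, s in zip(L4, sn) if row[factor] == lvl])
              for lvl in (0, 1)]
    print(f"factor {factor}: best level = {int(np.argmax(effect))}")
```

The study's actual conclusion (packing pressure dominant, optimum at 80 MPa etc.) would come from the same kind of main-effects comparison over its own orthogonal array.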

  4. Using Vicon system and optical method to evaluate inertial and magnetic system accuracy

    Directory of Open Access Journals (Sweden)

    Mansour F.

    2010-06-01

    Full Text Available MEMS sensors are becoming more accessible thanks to falling prices and energy consumption, which explains the democratisation of attitude estimation systems. The sensors found in such systems are accelerometers, magnetometers, and gyrometers. Commercialised solutions are expensive and not optimised in terms of sensor count and energy consumption. Movea's system is based on low-cost sensors, an optimised sensor count, and low energy consumption. It consists of wireless attitude modules named MotionPOD. The system has two modes, a simulation mode and a reconstruction mode, even for kinematic motions. Movea's system is treated as a black box whose inputs are measurements in reconstruction mode and body-segment orientations in simulation mode. For body motion reconstruction the Vicon system is usually used because of its accuracy: from the marker positions it is easy to compute the orientation of each body segment, provided enough markers have been placed. The Vicon system can therefore serve as a reference for Movea's system. We present a practical approach to the characterisation of Movea's system and its validation, together with criteria useful for evaluating reconstruction accuracy. We also detail the optical methods used to extract the relevant data from Vicon.

  5. Shape Optimization of the Turbomachine Channel by a Gradient Method -Accuracy Improvement

    Institute of Scientific and Technical Information of China (English)

    Marek Rabiega

    2003-01-01

    An algorithm for gradient-based optimization of the channel shape has been built on the basis of the 3D equations of mass, momentum, and energy conservation in the fluid flow. The gradient of the functional posed for minimization has been calculated by two methods: via sensitivities and, for comparison, by finite difference approximation. The equations for the sensitivities have been generated through a differentiate-then-discretize approach. An exemplary optimization of the blade shape of a centrifugal compressor wheel has been carried out for inviscid gas flow governed by the Euler equations, with a non-uniform mass flow distribution as the inlet boundary condition. Mixing losses downstream of the outlet of the centrifugal wheel were minimized in this exemplary optimization. The results of the optimization problem accomplished by the two above-mentioned methods are presented. When sparse grids were used, the method with the gradient approximated by finite differences was found to be more consistent. The discretization accuracy turned out to be crucial for the consistency of the gradient method via sensitivities.
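The comparison at the heart of the abstract (analytic sensitivities vs. finite-difference gradients) can be demonstrated on a toy objective standing in for the mixing-loss functional. The function and point below are invented; the point is that the two gradient routes should agree when the discretization is adequate.

```python
import numpy as np

# Toy objective J(p): its analytic gradient (the "sensitivities") is known,
# so the central finite-difference gradient can be checked against it.
def J(p):
    return np.sum(p**2) + np.prod(np.sin(p))

def grad_analytic(p):
    g = 2.0 * p
    for i in range(len(p)):
        g[i] += np.cos(p[i]) * np.prod(np.sin(np.delete(p, i)))
    return g

def grad_fd(p, h=1e-6):
    g = np.empty_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (J(p + e) - J(p - e)) / (2 * h)   # central difference, O(h^2)
    return g

p = np.array([0.3, 1.1, -0.7])
print(np.max(np.abs(grad_fd(p) - grad_analytic(p))))   # should be tiny
```

In shape optimization the analogue of `grad_fd` requires one flow solve per perturbed design variable, which is why consistent sensitivity equations are usually preferred despite being harder to derive.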

  6. METHOD OF ACHIEVING ACCURACY OF THERMO-MECHANICAL TREATMENT OF LOW-RIGIDITY SHAFTS

    Directory of Open Access Journals (Sweden)

    Antoni Świć

    2016-03-01

    Full Text Available The paper presents a method combining the processes of straightening and thermal treatment. Technological processes with axial strain were considered, both with and without heating of the material. The essence of the process in the case of heated material is that, under tension, all longitudinal forces are in the first approximation uniform, so the same strains are generated. The presented technological approach, aimed at reducing the curvature of axially symmetric parts, is acceptable as a rough, preliminary machining process for shafts with the ratio L/D ≤ 100 (L – shaft length, D – shaft diameter) and without a tendency to strengthening. To improve the accuracy and stability of the geometric form of low-rigidity parts, a method was developed that combines the processes of straightening and heat treatment. In this method, axial strain (tension) is applied to the shaft during heating; during cooling the product is fixed in a fixture, the cooling rate of the shaft being several-fold greater than that of the fixture. A device for realising this method of controlling the plastic deformation of low-rigidity shafts is presented. For the presented device and the adopted calculation scheme, a method was developed that permits determination of the length of the shaft section and of its cooling time.

  7. Morphometric measurements of dragonfly wings: the accuracy of pinned, scanned and detached measurement methods

    Directory of Open Access Journals (Sweden)

    Laura Johnson

    2013-03-01

    Full Text Available Large-scale digitization of museum specimens, particularly of insect collections, is becoming commonplace. Imaging increases the accessibility of collections and decreases the need to handle individual, often fragile, specimens. Another potential advantage of digitization is to make it easier to conduct morphometric analyses, but the accuracy of such methods needs to be tested. Here we compare morphometric measurements of scanned images of dragonfly wings to those obtained using other, more traditional, methods. We assume that the destructive method of removing and slide-mounting wings provides the most accurate method of measurement because it eliminates error due to wing curvature. We show that, for dragonfly wings, hand measurements of pinned specimens and digital measurements of scanned images are equally accurate relative to slide-mounted hand measurements. Since destructive slide-mounting is unsuitable for museum collections, and there is a risk of damage when hand measuring fragile pinned specimens, we suggest that the use of scanned images may also be an appropriate method to collect morphometric data from other collected insect species.

  8. Rapid radiation in spiny lobsters (Palinurus spp as revealed by classic and ABC methods using mtDNA and microsatellite data

    Directory of Open Access Journals (Sweden)

    Macpherson Enrique

    2009-11-01

    Full Text Available Abstract Background Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is a certain controversy on the phylogeography and speciation modes of species-groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows for testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 My. Conclusion The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species.
Our
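The hypothesis test performed with ABC can be sketched with rejection sampling: simulate a summary statistic under each divergence-time prior and count how often it matches the observed data. Everything below (mutation rate, site count, observed difference count, the idealized clock model) is invented for illustration and is far simpler than the coalescent machinery used in the study.

```python
import random

random.seed(42)
MU = 1e-2          # substitutions per site per My (hypothetical rate)
SITES = 500
OBSERVED = 18      # observed pairwise mtDNA differences (invented)

def simulate(t_div):
    """Count of pairwise differences after t_div My under a toy clock."""
    p = 2 * MU * t_div                       # per-site substitution probability
    return sum(random.random() < p for _ in range(SITES))

def abc_posterior(prior_sampler, n=4000, tol=3):
    """ABC rejection: keep prior draws whose simulation matches the data."""
    accepted = []
    for _ in range(n):
        t = prior_sampler()
        if abs(simulate(t) - OBSERVED) <= tol:
            accepted.append(t)
    return accepted

# Hypothesis A: old divergence (Miocene, 10-16 My);
# Hypothesis B: recent divergence (Pleistocene, < 2 My).
old = abc_posterior(lambda: random.uniform(10.0, 16.0))
recent = abc_posterior(lambda: random.uniform(0.1, 2.0))
print(len(recent), len(old))   # acceptance counts favour one hypothesis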

  9. ADFE METHOD WITH HIGH ACCURACY FOR NONLINEAR PARABOLIC INTEGRO-DIFFERENTIAL SYSTEM WITH NONLINEAR BOUNDARY CONDITIONS

    Institute of Scientific and Technical Information of China (English)

    崔霞

    2002-01-01

    Alternating direction finite element (ADFE) schemes for d-dimensional nonlinear systems of parabolic integro-differential equations are studied. By using a local approximation based on patches of finite elements to treat the capacity term q_i(u), decomposition of the coefficient matrix is realized; by using alternating directions, the multi-dimensional problem is reduced to a family of single-space-variable problems and the calculation work is simplified; by using the finite element method, high accuracy in the space variable is kept; by using inductive hypothesis reasoning, the difficulty coming from the nonlinearity of the coefficients and boundary conditions is treated; by introducing the Ritz-Volterra projection, the difficulty coming from the memory term is solved. Finally, by using various techniques for a priori estimates for differential equations, the unique resolvability and convergence properties of both the FE and ADFE schemes are rigorously demonstrated, and optimal H¹- and L²-norm estimates in space and an O((Δt)²) estimate in time are obtained.

  10. How could the replica method improve accuracy of performance assessment of channel coding?

    Science.gov (United States)

    Kabashima, Yoshiyuki

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.

  11. The modified equation approach to the stability and accuracy analysis of finite-difference methods

    Science.gov (United States)

    Warming, R. F.; Hyett, B. J.

    1974-01-01

    The stability and accuracy of finite-difference approximations to simple linear partial differential equations are analyzed by studying the modified partial differential equation. Aside from round-off error, the modified equation represents the actual partial differential equation solved when a numerical solution is computed using a finite-difference equation. The modified equation is derived by first expanding each term of a difference scheme in a Taylor series and then eliminating time derivatives higher than first order by certain algebraic manipulations. The connection between 'heuristic' stability theory based on the modified equation approach and the von Neumann (Fourier) method is established. In addition to the determination of necessary and sufficient conditions for computational stability, a truncated version of the modified equation can be used to gain insight into the nature of both dissipative and dispersive errors.
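As a concrete illustration of the approach (a standard textbook example, not reproduced from the paper): for the first-order upwind scheme applied to linear advection, expanding each term in a Taylor series and eliminating higher time derivatives yields the modified equation below.

```latex
% Upwind scheme for u_t + a u_x = 0 (a > 0):
%   (u_j^{n+1} - u_j^n)/\Delta t + a\,(u_j^n - u_{j-1}^n)/\Delta x = 0.
% With \nu = a\,\Delta t/\Delta x (the Courant number), the modified
% equation actually being solved is
u_t + a\,u_x = \frac{a\,\Delta x}{2}\,(1 - \nu)\,u_{xx} + O(\Delta x^2)
```

The leading right-hand-side term is dissipative; requiring its coefficient to be non-negative recovers the stability condition 0 ≤ ν ≤ 1, illustrating the "heuristic" stability theory the abstract refers to.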

  12. An investigation of the accuracy of finite difference methods in the solution of linear elasticity problems

    Science.gov (United States)

    Bauld, N. R., Jr.; Goree, J. G.

    1983-01-01

    The accuracy of the finite difference method in the solution of linear elasticity problems that involve either a stress discontinuity or a stress singularity is considered. Solutions to three elasticity problems are discussed in detail: a semi-infinite plane subjected to a uniform load over a portion of its boundary; a bimetallic plate under uniform tensile stress; and a long, midplane symmetric, fiber reinforced laminate subjected to uniform axial strain. Finite difference solutions to the three problems are compared with finite element solutions to corresponding problems. For the first problem a comparison with the exact solution is also made. The finite difference formulations for the three problems are based on second order finite difference formulas that provide for variable spacings in two perpendicular directions. Forward and backward difference formulas are used near boundaries where their use eliminates the need for fictitious grid points.

  13. Advantages of multigrid methods for certifying the accuracy of PDE modeling

    Science.gov (United States)

    Forester, C. K.

    1981-01-01

    Numerical techniques for assessing and certifying the accuracy of the modeling of partial differential equations (PDE) to the user's specifications are analyzed. Examples of the certification process with conventional techniques are summarized for the three-dimensional steady state full potential and the two-dimensional steady Navier-Stokes equations using fixed grid (FG) methods. The advantages of the Full Approximation Storage (FAS) scheme of the multigrid technique of A. Brandt compared with the conventional certification process of modeling PDE are illustrated in one dimension with the transformed potential equation. Inferences are drawn as to how MG will improve the certification process of the numerical modeling of two- and three-dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.

  14. An indirect accuracy calibration and uncertainty evaluation method for large scale inner dimensional measurement system

    Science.gov (United States)

    Liu, Bai-Ling; Qu, Xing-Hua

    2013-10-01

    In view of the problems of low accuracy, limited range, and low automation of existing large-scale diameter inspection instruments, a precise measuring system (robot) based on a laser displacement sensor was designed for large-scale inner diameters. Since the traditional calibration tooling for such a robot is expensive and hard to manufacture, an indirect calibration method is proposed. In this study, the system's eccentricity error is calibrated with a laboratory ring gauge. An experiment that changes the installation order of the locating rods, thereby introducing the rods' eccentricity errors, is designed to test whether the spindle eccentricity error remains unchanged. The results show that the variation of the spindle's eccentricity after changing rods is within 0.02 mm. Because the spindle is an unchanged part of the robot, the calibration of the Φ584-series robot by ring gauge can be transferred to other series by combining it with the length of the extended arm.

  15. Accuracy improvement of T-history method for measuring heat of fusion of various materials

    Energy Technology Data Exchange (ETDEWEB)

    Hiki Hong [KyungHee University (Korea). School of Mechanical and Industrial Systems Engineering; Sun Kuk Kim [KyungHee University (Korea). School of Architecture and Civil Engineering; Yong-Shik Kim [University of Incheon (Korea). Dept. of Architectural Engineering

    2004-06-01

    The T-history method, developed for measuring the heat of fusion of phase change materials (PCM) in sealed tubes, has the advantages of a simple experimental device and convenience, with no sampling process. However, some improper assumptions in the original method, such as using the degree of supercooling as the end of the latent heat period and neglecting sensible heat during phase change, can cause significant errors in determining the heat of fusion. We have addressed these problems in order to obtain better predictions. The present study shows that the modified T-history method is successfully applied to a variety of PCMs, such as paraffin and lauric acid, having no or a low degree of supercooling. It also turned out that the periods selected for sensible and latent heat do not significantly affect the accuracy of the heat of fusion. As a result, the method can provide an appropriate means to assess a newly developed PCM by a cycle test, even if a very accurate value cannot be obtained. (author)

  16. Comparison of the accuracy of clinical methods for estimation of fetal weight

    Directory of Open Access Journals (Sweden)

    Haji Esmaeilou M

    2016-01-01

    Full Text Available Estimation of fetal weight (EFW) is one of the essential measures for labor and delivery. The use of ultrasonography for EFW is costly and may not be available in some centers, so a non-ultrasound method for EFW is important. This study aimed to compare the accuracy of various clinical methods for EFW against the actual birth weight in term pregnancy. A cross-sectional study was conducted on 98 pregnant women admitted for delivery with a gestational age of 37-42 weeks, a singleton pregnancy, and cephalic presentation in Imam Khomeini hospital, Ahvaz. The fetal weight was estimated by a midwife through abdominal palpation (using Leopold's maneuvers), measurements of symphysis-fundal height and abdominal girth, and three formulas. The results were analyzed using SPSS version 20 and STATA 11. The actual average birth weight was 3242.85 ± 43.37 g. A significant positive correlation was observed between actual birth weight and clinically estimated weight. The kappa coefficient was > 0.8 when all studied methods were compared with the actual weight; this agreement was greatest for abdominal palpation and Risanto's formula, respectively. In the present study, abdominal palpation and Risanto's formula are the most accurate predictors of fetal weight. Since these methods are quick, simple, and cost-effective, they can be a useful alternative to routine ultrasonography for EFW.
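Formula-based EFW methods of the kind compared here map bedside measurements to a weight estimate. As an example, here is Johnson's symphysis-fundal-height formula, a widely used clinical rule; note the abstract does not name the three formulas actually used, so this is illustrative rather than a reproduction of the study's methods.

```python
def johnson_efw(sfh_cm, engaged):
    """Johnson's formula: estimated fetal weight (grams) from
    symphysis-fundal height (cm). The constant n is 11 when the fetal
    head is engaged (station 0 or below), 12 when it is not."""
    n = 11 if engaged else 12
    return (sfh_cm - n) * 155

print(johnson_efw(34, engaged=True))   # (34 - 11) * 155 = 3565 g
```

Comparing such formula outputs against actual birth weights (via correlation and kappa agreement, as in the study) is how their clinical accuracy is assessed.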

  17. Accuracy of the FAMACHA© method in ewes fed different levels of crude protein

    Directory of Open Access Journals (Sweden)

    Francisco de Assis Fonseca de Macedo

    2014-05-01

    Full Text Available The accuracy of the FAMACHA© method was evaluated for the identification of female sheep fed two levels of crude protein and naturally infected with Haemonchus contortus, by means of the corresponding hematocrit values. Forty-seven female sheep of the breeds Santa Inês (n = 16), Texel (n = 16), and Ile de France (n = 15), aged between eight and twelve months, were assigned to two treatments: 12 or 16% crude protein in the diet. All animals were wormed thirty days before the first data collection, and collections were made fortnightly between July 2005 and March 2006. The color of the ocular conjunctiva was evaluated individually according to the precepts of the FAMACHA© method, and the hematocrit value of each animal was obtained in the laboratory. A correlation of 0.7991 was found between the hematocrit values and the classification given by the FAMACHA© method for identifying animals with different degrees of anemia. The method was efficient in identifying animals that needed worming, thus supporting the identification of animals susceptible to Haemonchus contortus.

  18. [Accuracy of the oscillometric method to measure blood pressure in children]

    Science.gov (United States)

    Rego Filho, E A; Mello, S F; Silva, C R; Vituri, D W; Bazoni, E; Gordan, L N

    1999-01-01

    OBJECTIVE: The aim of this study is to analyze the substitution of the standard auscultatory method by an oscillometric blood pressure monitor, independently of the validity of intraarterial blood pressure measurement. The accuracy of the automatic oscillometric monitor was compared with auscultatory mercury manometer blood pressure measurement in apparently healthy school-age children. METHODS: A device able to perform three simultaneous readings was used: one reading by the monitor and the others by two "blind" observers. We studied 72 school-age children with the following characteristics: mean age 9.5 years (range 6.1-16.1) and 39 males (54.2%). RESULTS: The differences for systolic and diastolic blood pressure obtained by the monitor were on average +6.2 mmHg and +10.0 mmHg, respectively, compared with the observers' readings. There was neither a good correlation nor a good agreement between the two observers and the monitor in blood pressure determination. CONCLUSIONS: We conclude that the substitution of the standard auscultatory method by the non-invasive oscillometric method to measure blood pressure in school-age children cannot be generally recommended.
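Agreement between two blood pressure measurement methods is conventionally quantified with the Bland-Altman bias and 95% limits of agreement. The paired readings below are invented; the abstract reports mean monitor-observer differences of +6.2 (systolic) and +10.0 (diastolic) mmHg, which is the "bias" this analysis would produce on the real data.

```python
import numpy as np

# Invented paired systolic readings (mmHg) for illustration.
monitor = np.array([104, 98, 110, 95, 102, 108, 99, 101])
ausc    = np.array([ 98, 93, 102, 90,  95, 101, 94,  96])

diff = monitor - ausc
bias = diff.mean()                               # systematic offset
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # 95% limits of agreement
print(f"bias = {bias:.1f} mmHg, limits of agreement = "
      f"({loa[0]:.1f}, {loa[1]:.1f}) mmHg")
```

A monitor is considered interchangeable with the reference method only when both the bias and the width of the limits of agreement are clinically acceptable, which the study concludes was not the case here.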

  19. Improving accuracy in the MPM method using a null space filter

    Science.gov (United States)

    Gritton, Chris; Berzins, Martin

    2017-01-01

    The material point method (MPM) has been very successful in providing solutions to many challenging problems involving large deformations. Nevertheless there are some important issues that remain to be resolved with regard to its analysis. One key challenge applies to both MPM and particle-in-cell (PIC) methods and arises from the difference between the number of particles and the number of the nodal grid points to which the particles are mapped. This difference between the number of particles and the number of grid points gives rise to a non-trivial null space of the linear operator that maps particle values onto nodal grid point values. In other words, there are non-zero particle values that when mapped to the grid point nodes result in a zero value there. Moreover, when the nodal values at the grid points are mapped back to particles, part of those particle values may be in that same null space. Given positive mapping weights from particles to nodes such null space values are oscillatory in nature. While this problem has been observed almost since the beginning of PIC methods there are still elements of it that are problematical today as well as methods that transcend it. The null space may be viewed as being connected to the ringing instability identified by Brackbill for PIC methods. It will be shown that it is possible to remove these null space values from the solution using a null space filter. This filter improves the accuracy of the MPM methods using an approach that is based upon a local singular value decomposition (SVD) calculation. This local SVD approach is compared against the global SVD approach previously considered by the authors and to a recent MPM method by Zhang and colleagues.
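The null-space projection at the core of the filter can be demonstrated directly on a small random particle-to-grid mapping. This is a global-SVD sketch of the idea (the paper's filter works with local SVDs for efficiency); the matrix sizes and weights are invented.

```python
import numpy as np

# S maps particle values to grid nodes. With more particles than nodes,
# S has a non-trivial null space: particle fields in it map to zero on the
# grid and are oscillatory. Projecting them out removes those modes.
rng = np.random.default_rng(3)
n_nodes, n_particles = 4, 9
S = rng.random((n_nodes, n_particles))        # positive mapping weights
S /= S.sum(axis=0)                            # normalize columns

U, s, Vt = np.linalg.svd(S)
rank = int(np.sum(s > 1e-10))
V_row = Vt[:rank]                             # basis of the row space of S

def null_space_filter(v):
    """Keep only the part of v that the grid can 'see' (row-space component)."""
    return V_row.T @ (V_row @ v)

v = rng.random(n_particles)
v_f = null_space_filter(v)
print(np.allclose(S @ v, S @ v_f))            # grid values unchanged: True
print(float(np.linalg.norm(v - v_f)))         # removed null-space component
```

Because the grid values are unchanged while the invisible oscillatory component is removed, the filter suppresses the ringing modes without altering what the grid solver computes.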

  20. HIGH ACCURACY FINITE VOLUME ELEMENT METHOD FOR TWO-POINT BOUNDARY VALUE PROBLEM OF SECOND ORDER ORDINARY DIFFERENTIAL EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    王同科

    2002-01-01

    In this paper, a high accuracy finite volume element method is presented for the two-point boundary value problem of second order ordinary differential equations, which differs from the high-order generalized difference methods. It is proved that the method has an optimal-order error estimate O(h³) in the H¹ norm. Finally, two examples show that the method is effective.

  1. Developing and testing a low cost method for high resolution measurements of volcanic water vapour emissions at Vulcano and Mt. Etna

    Science.gov (United States)

    Pering, Tom D.; McGonigle, Andrew J. S.; Tamburello, Giancarlo; Aiuppa, Alessandro; Bitetto, Marcello; Rubino, Cosimo

    2015-04-01

    Water vapour (H2O) is the most voluminous of volcanic emissions (Carroll and Holloway, 1994); however, measurements of this species receive little attention because of the difficulty of independent measurement, largely a result of high atmospheric background concentrations that often undergo rapid fluctuations. A feasible method of measuring H2O emissions at high temporal and spatial resolution would therefore be highly valuable. We describe a new, low-cost method combining modified web cameras (i.e. with infrared filters removed) with measurements of temperature and relative humidity to produce high resolution measurements (≈ 0.25 Hz) of H2O emissions. The cameras are fitted with near-infrared filters at wavelengths where water vapour absorbs (940 nm) and does not absorb (850 nm) incident light. Absorption by H2O is then determined by applying the Lambert-Beer law on a pixel-by-pixel basis, producing a high spatial resolution image. The system is then calibrated by placing a Multi-GAS unit within the gas source and camera field-of-view, which measures SO2, CO2, H2S and relative humidity. By combining the point measurements of the Multi-GAS unit with pixel values for absorption, first correcting for the width of the gas source (generally a Gaussian distribution), a calibration curve is produced which allows the conversion of absorption values to the mass of water within a pixel. In combination with relative humidity measurements made outside of the plume, it is then possible to subtract the non-volcanic background H2O concentration to produce a high resolution calibrated volcanic H2O flux. This technique is demonstrated in detail at the active fumarolic system on Vulcano (Aeolian Islands, Italy). Data processing and image acquisition were completed in Matlab® using purpose-built code. The technique is also demonstrated for the plume of the North-East Crater of Mt. Etna (Sicily, Italy). Here, contemporaneously acquired measurements of SO2 using a UV camera, combined
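The pixel-by-pixel Lambert-Beer step described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' Matlab code: the 850 nm band (no H2O absorption) normalises out broadband effects, and absorbance is taken relative to a clear-sky reference.

```python
import numpy as np

def h2o_absorbance(img_940, img_850, clear_940, clear_850):
    """Per-pixel apparent absorbance in the 940 nm water-vapour band.

    img_*:   images of the plume in the two bands
    clear_*: clear-sky reference images in the same bands
    The 850 nm band, where H2O does not absorb, normalises out broadband
    effects such as aerosol scattering and vignetting."""
    ratio_plume = img_940 / img_850
    ratio_clear = clear_940 / clear_850
    # Beer-Lambert: A = -ln(I / I0), with I0 taken from the clear-sky ratio
    return -np.log(ratio_plume / ratio_clear)
```

Each absorbance image would then be converted to water mass per pixel via the Multi-GAS calibration curve described in the abstract.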

  2. Analysis of Robot Accuracy Assessed via its Joint Clearance by Virtual Prototyping Method

    Institute of Scientific and Technical Information of China (English)

    GUO Ling; WANG Li-hong; ZHANG Yu-ru

    2004-01-01

    This paper discusses how joint clearance influences robot end effector positioning accuracy, and a robot accuracy analysis approach based on a virtual prototype is proposed. First, a 5-DOF (degree of freedom) neurosurgery robot was introduced. Then we built its virtual prototype, performed movement planning and measured the manipulator tip accuracy, through which a portrait of this robot's accuracy was obtained. Finally, in order to validate the virtual-prototype-based robot accuracy analysis approach, the result was compared with that from a model built with robot forward kinematics and robot differential kinematics. The robot accuracy analysis approach presented in this paper gives a new way to enhance robot design quality and helps to optimize the control and programming of the robot.

  3. Accuracy of Conventional Diagnostic Methods for Identifying Structural Changes in Patients with Focal Epilepsy

    Science.gov (United States)

    Dakaj, Nazim; Kruja, Jera; Jashari, Fisnik; Boshnjaku, Dren; Shatri, Nexhat; Zeqiraj, Kamber

    2016-01-01

    Background: Epilepsy is a neurological disorder characterized by abnormal firing of nerve impulses in the brain. Aim: This study aims to investigate the frequency of pathological changes appearing in conventional examination methods (electroencephalography, EEG; brain computerized tomography, CT; or brain magnetic resonance imaging, MRI) in patients with epilepsy, and the relationship between clinical manifestations and the localization of changes in CT or MRI. Methods: In this study we included 110 patients with focal epilepsy who fulfilled the inclusion criteria, out of 557 initially diagnosed patients. Detailed clinical examination together with brain imaging (CT and MRI) and electroencephalography was performed. We evaluated the accuracy of each diagnostic method in localizing the epileptic focus. Diagnosis of epilepsy was determined by the ILAE (International League Against Epilepsy) criteria of 1989, and classification of epileptic seizures was made according to the ILAE classification of 2010. Results: Electroencephalography presented changes in 60.9% of patients, brain CT in 42.1%, and MRI in 78% of the patients. The results of our study showed that clinical manifestations were not always accompanied by pathological changes in the conventional examination methods performed. Of the total of 79 patients with changes in imaging (8 with changes in CT and 71 in MRI), 79.7% presented a clinical picture compatible with the region in which morphological changes were found, while in 20.3% of patients the morphological changes were not aligned with the clinical picture. Conclusion: In patients with epilepsy, conventional examination methods do not always find pathological changes, and clinical manifestations of epilepsy do not always coincide with the location of changes in imaging. Further studies are needed to see whether there is a clear border between focal and generalized epilepsy. PMID:28077892

  4. Effect of PEG additive on anode microstructure and cell performance of anode-supported MT-SOFCs fabricated by phase inversion method

    Science.gov (United States)

    Ren, Cong; Liu, Tong; Maturavongsadit, Panita; Luckanagul, Jittima Amie; Chen, Fanglin

    2015-04-01

    Anode-supported micro-tubular solid oxide fuel cells (MT-SOFCs) have been fabricated by the phase inversion method. For the anode support preparation, N-methyl-2-pyrrolidone (NMP), polyethersulfone (PESf) and polyethylene glycol (PEG) were applied as solvent, polymer binder and additive, respectively. The effect of the molecular weight and amount of PEG additive on the thermodynamics of the casting solutions was characterized by measuring the coagulation value. The viscosity of the casting slurries was also measured, and the influence of the PEG additive on viscosity was studied and discussed. The presence of PEG in the casting slurry can significantly influence the final anode support microstructure. Based on the microstructure results and the measured gas permeation values, two anode supports were selected for cell fabrication. For the cell with the anode support fabricated using slurry with PEG additive, a maximum cell power density of 704 mW cm-2 is obtained at 750 °C with humidified hydrogen as fuel and ambient air as oxidant; the cell fabricated without any PEG additive shows a peak cell power density of 331 mW cm-2. The relationship between anode microstructure and cell performance is discussed.

  5. Application of unconventional geoelectrical methods to the hydrogeological examination of the Mt. S. Croce rock formations (Umbria, Italy) involved in a railway tunnel project

    Directory of Open Access Journals (Sweden)

    D. Patella

    1994-06-01

    Full Text Available The project of doubling and developing the railway line Orte-Falconara, committed by the Italian State Railway Company to the COMAVI Consortium (Rome, Italy), envisaged building the Mt. S. Croce tunnel, about 3200 m long, between the stations of Narni and Nera Montoro (Umbria, Italy). During the last phase of the feasibility project, a geophysical study based on geoelectrical prospecting methods was carried out to complement other geognostic investigations with the following goals: a) to outline the complex geotectonic model of the rock system, which will be affected by the new railway layout; b) to gain information on the hydrogeologic features of the survey area, in relation to the existing geologic situation and the consequent effects on the digging conditions of the tunnel and on the operation conditions of the railway layout. The geophysical work was thus organized according to the following scheme: a) execution of dipole electrical sounding profiles, to depict a series of significant tomographic pseudosections, both across and along the new railway layout; b) execution of self-potential measurements, to draw an anomaly map over the whole hydrogeological network system in the survey area. The research provided information which has helped to improve the geological-structural model of the area and disclosed the hydrogeologic network, conforming to the classified field surface manifestations. At present, further detailed field investigations are being carried out, which confirm all the results obtained by the geoelectrical survey.

  6. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    Science.gov (United States)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  7. The accuracy of a method for printing three-dimensional spinal models.

    Directory of Open Access Journals (Sweden)

    Ai-Min Wu

    Full Text Available To study the morphology of the human spine and new spinal fixation methods, scientists require cadaveric specimens, which are dependent on donation. However, in most countries, the number of people willing to donate their body is low. A 3D printed model could be an alternative for morphology research, but the accuracy of the morphology of a 3D printed model has not been determined. Forty-five computed tomography (CT) scans of cervical, thoracic and lumbar spines were obtained, and 44 parameters of the cervical spine, 120 parameters of the thoracic spine, and 50 parameters of the lumbar spine were measured. The CT scan data in DICOM format were imported into Mimics software v10.01 for 3D reconstruction, and the data were saved in .STL format and imported into Cura software. After a 3D digital model was formed, it was saved in Gcode format and exported to a 3D printer for printing. After the 3D printed models were obtained, the above-referenced parameters were measured again. Paired t-tests were used to determine significance (set to P < 0.05); most intraclass correlation coefficient (ICC) values were above 0.800, the remaining ICC values were between 0.600 and 0.800, and none were below 0.600. In this study, we provide a protocol for printing accurate 3D spinal models for surgeons and researchers. The resulting 3D printed model is inexpensive and easily obtained for spinal fixation research.

  8. Multispectral image compression methods for improvement of both colorimetric and spectral accuracy

    Science.gov (United States)

    Liang, Wei; Zeng, Ping; Xiao, Zhaolin; Xie, Kun

    2016-07-01

    We propose that both colorimetric and spectral distortion in compressed multispectral images can be reduced by a composite model, named OLCP(W)-X (OptimalLeaders_Color clustering-PCA-W weighted-X coding). In the model, first the spectral-colorimetric clustering is designed for sparse equivalent representation by generating spatial basis. Principal component analysis (PCA) is subsequently used in the manipulation of spatial basis for spectral redundancy removal. Then error compensation mechanism is presented to produce predicted difference image, and finally combined with visual characteristic matrix W, and the created image is compressed by traditional multispectral image coding schemes. We introduce four model-based algorithms to explain their validity. The first two algorithms are OLCPWKWS (OLC-PCA-W-KLT-WT-SPIHT) and OLCPKWS, in which Karhunen-Loeve transform, wavelet transform, and set partitioning in hierarchical trees coding are applied for the created image compression. And the latter two methods are OLCPW-JPEG2000-MCT and OLCP-JPEG2000-MCT. Experimental results show that, compared with the corresponding traditional coding, the proposed OLCPW-X schemes can significantly improve the colorimetric accuracy of rebuilding images under various illumination conditions and generally achieve satisfactory peak signal-to-noise ratio under the same compression ratio. And OLCP-X methods could always ensure superior spectrum reconstruction. Furthermore, our model has excellent performance on user interaction.
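The PCA stage used above for spectral redundancy removal can be illustrated with a minimal sketch (hypothetical names, not the OLCP(W)-X implementation): pixel spectra are centred, projected onto the leading principal components, and reconstructed from the reduced representation.

```python
import numpy as np

def pca_compress(spectra, k):
    """Project N pixel spectra (N x B array, B bands) onto the top-k
    principal components and reconstruct them from the reduced form."""
    mean = spectra.mean(axis=0)
    X = spectra - mean
    # SVD of the centred data: rows of Vt are the principal spectral axes
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                  # k x B spectral basis
    coeffs = X @ basis.T            # N x k reduced coefficients
    return coeffs @ basis + mean, basis
```

Only `coeffs`, `basis` and `mean` need to be coded downstream; the discarded components carry the spectrally redundant part of the data.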

  9. Discretization Techniques of Height Function Method for Greater Increased Accuracy of Mass Conservation

    Directory of Open Access Journals (Sweden)

    Boonchai LERTNUWAT

    2014-01-01

    Full Text Available A height function method has been used to solve the shape of free surfaces in incompressible viscous flows for hydrodynamics. Three proposed discretization techniques for the height function method are developed with particular attention to the law of mass conservation. The concept of the proposed techniques is to place a control volume on the most appropriate location in any staggered grid system. First, the proposed techniques and the conventional technique are verified with a simple problem whose exact solution is known. Then, all numerical techniques are examined with a more complicated problem to investigate their accuracy. The simulated results of the proposed techniques are compared to those of the conventional technique. Finally, it is concluded that (1) the proposed techniques will give better results than the conventional technique if the grid resolution is sufficiently fine, (2) the first proposed technique gives poorer results than the other proposed techniques, and (3) the second proposed technique gives better results than the third proposed technique, but the third proposed technique is easier to apply due to its explicit form of the equation.

  10. Efficacy and pharmacokinetics of OZ78 and MT04 against a natural infection with Fasciola hepatica in sheep.

    Science.gov (United States)

    Meister, Isabel; Duthaler, Urs; Huwyler, Jörg; Rinaldi, Laura; Bosco, Antonio; Cringoli, Giuseppe; Keiser, Jennifer

    2013-11-15

    Fasciolosis is a parasitosis caused by the food-borne trematode Fasciola spp. of major veterinary significance. Triclabendazole is the first-line treatment in humans and animals, but cases of resistance are spreading worldwide. The synthetic peroxides OZ78 and MT04 are lead compounds for the treatment of fasciolosis. In the present study we investigated the efficacy and drug disposition following a single intramuscular dose of 100 mg/kg OZ78 and MT04 in sheep harbouring a natural Fasciola hepatica infection. A liquid chromatography and tandem mass spectrometry (LC-MS/MS) method was developed and validated to quantify plasma and bile concentrations of both compounds. Plasma samples were analysed with an accuracy for OZ78 and MT04 from 91 to 115% and a precision lower than 8.9%. Bile samples displayed an accuracy between 92 and 101% and a precision up to 12.7%. Bile samples were collected at 0 and 6 h post-administration. The mean plasma peak concentration was 11.1 μg/ml at 1.5 h for OZ78 and 4.8 μg/ml at 4.2 h for MT04. The mean AUC of OZ78 and MT04 was 6698 and 3567 μg min/ml, respectively. The bile concentration at 6 h post-treatment was 1.0 μg/ml for OZ78 and 1.4 μg/ml for MT04. Treatment with OZ78 showed no effect on egg burden and adult worm counts in vivo, whereas MT04 displayed a significant egg count reduction of 98.5% and a worm burden reduction of 92%. In conclusion, our study reveals an excellent activity of MT04 against F. hepatica in naturally infected sheep and a first insight into its pharmacokinetic (PK) behaviour.
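The reported PK quantities (peak concentration, time of peak, and AUC) can be computed from concentration-time data with the linear trapezoidal rule. A minimal sketch with hypothetical sample values, not the study's data:

```python
import numpy as np

def pk_summary(t_min, conc):
    """Cmax, Tmax and AUC(0-last) by the linear trapezoidal rule.

    t_min: sampling times in minutes; conc: concentrations in ug/ml.
    Returns (Cmax [ug/ml], Tmax [min], AUC [ug*min/ml])."""
    i = int(np.argmax(conc))
    auc = float(np.sum((conc[1:] + conc[:-1]) * np.diff(t_min)) / 2.0)
    return float(conc[i]), float(t_min[i]), auc

# Hypothetical concentration-time profile (illustration only)
t = np.array([0.0, 30.0, 90.0, 180.0, 360.0])
c = np.array([0.0, 6.0, 11.0, 7.0, 2.0])
cmax, tmax, auc = pk_summary(t, c)
```

Reporting AUC in μg·min/ml, as in the abstract, follows directly from sampling times expressed in minutes.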

  11. FROM ENERGY IMPROVEMENT TO ACCURACY ENHANCEMENT: IMPROVEMENT OF PLATE BENDING ELEMENTS BY THE COMBINED HYBRID METHOD

    Institute of Scientific and Technical Information of China (English)

    Xiao-ping Xie

    2004-01-01

    By following the geometric point of view in mechanics, a novel expression of the combined hybrid method for plate bending problems is introduced to clarify its intrinsic mechanism of enhancing coarse-mesh accuracy of conforming or nonconforming plate elements. By adjusting the combination parameter α ∈ (0, 1) and adopting appropriate bending moment modes, reduction of the energy error for the discretized displacement model leads to enhanced numerical accuracy. As an application, improvement of Adini's rectangle is discussed. Numerical experiments show that the combined hybrid counterpart of Adini's element is capable of attaining high accuracy at coarse meshes.

  12. Application of a solvable model to test the accuracy of the time-dependent Hartree-Fock method

    Energy Technology Data Exchange (ETDEWEB)

    Bouayad, N. [Blida Univ. (Algeria). Inst. de Phys.; Zettili, N. [Blida Univ. (Algeria). Inst. de Phys.]|[Department of Physics, King Fahd University of Petroleum and Minerals, Dhahran 31261 (Saudi Arabia)

    1996-11-11

    This work deals with the application of a solvable model to study the accuracy of a nuclear many-body approximation method. Using a new exactly solvable model, we carry out here a quantitative test of the accuracy of the time-dependent Hartree-Fock (TDHF) method. The application of the TDHF method to the model reveals that the model is suitable for describing various forms of collective motion: harmonic and anharmonic oscillations as well as rotations. We then show that, by properly quantizing the TDHF results, the TDHF approximation method yields energy spectra that are in very good agreement with their exact counterparts. This work shows that the model offers a rich and comprehensive framework for studying the various aspects of the TDHF method and also for assessing quantitatively its accuracy. (orig.).

  13. Application of a solvable model to test the accuracy of the time-dependent Hartree-Fock method

    Science.gov (United States)

    Bouayad, Nouredine; Zettili, Nouredine

    1996-02-01

    This work deals with the application of a solvable model to study the accuracy of a nuclear many-body approximation method. Using a new exactly solvable model, we carry out here a quantitative test of the accuracy of the time-dependent Hartree-Fock (TDHF) method. The application of the TDHF method to the model reveals that the model is suitable for describing various forms of collective motion: harmonic and anharmonic oscillations as well as rotations. We then show that, by properly quantizing the TDHF results, the TDHF approximation method yields energy spectra that are in very good agreement with their exact counterparts. This work shows that the model offers a rich and comprehensive framework for studying the various aspects of the TDHF method and also for assessing quantitatively its accuracy.

  14. A new method for the accuracy evaluation of a manufactured piece

    Science.gov (United States)

    Oniga, E. V.; Cardei, M.

    2015-11-01

    To evaluate the accuracy of a manufactured piece, it must be measured and compared with a reference model, namely the designed 3D model, based on geometrical elements. In this paper a new method for the precision evaluation of a manufactured piece is proposed, which implies the creation of the piece's digital 3D model based on digital images and its transformation into a 3D mesh surface. The differences between the two models, the designed model and the newly created one, are calculated using the Hausdorff distance. The aim of this research is to determine the differences between two 3D models, especially CAD models, with high precision, in a completely automated way. To obtain the results, a small piece was photographed with a digital camera that was calibrated using a 3D calibration object: a target consisting of 42 points, 36 placed in the corners of 9 wood cubes with different heights and 6 placed at the middle of the distance between the cubes, on a board. This target was previously tested, the tests showing that using this calibration target instead of a 2D calibration grid improves the precision of the final 3D model by approximately 50%. The 3D model of the manufactured piece was created using two methods. First, based on digital images, a point cloud was automatically generated, and after the filtering process the remaining points were interpolated, obtaining the piece's 3D model as a mesh surface. Second, the piece's 3D model was created, also using the digital images, based on its characteristic points, resulting in a CAD model that was transformed into a mesh surface. Finally, the two 3D models were compared with the designed model using the CloudCompare software, thus revealing the imperfections of the manufactured piece. The proposed method highlights the differences between the two models using a color palette, offering at the same time a global comparison.
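The comparison step relies on the Hausdorff distance between two surface models. A brute-force sketch for point clouds follows (illustrative only; tools such as CloudCompare use more efficient spatial data structures):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point clouds A (N x 3), B (M x 3)."""
    # Full pairwise distance matrix (fine for small clouds; use a k-d tree
    # for dense scan data)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    d_ab = d.min(axis=1).max()   # farthest A point from its nearest B point
    d_ba = d.min(axis=0).max()   # farthest B point from its nearest A point
    return float(max(d_ab, d_ba))
```

Taking the maximum over both directions makes the measure symmetric, so a deviation in either model (designed or manufactured) is reported.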

  15. The Influence of Internal Structures in Fused Deposition Modeling Method on Dimensional Accuracy of Components

    Directory of Open Access Journals (Sweden)

    Milde Ján

    2016-09-01

    Full Text Available The paper investigates the influence of infill (internal structures) of components in the Fused Deposition Modeling (FDM) method on the dimensional and geometrical accuracy of components. The components in this case were real models of a human mandible, obtained by Computed Tomography (CT), which is mostly used in medical applications. In the production phase, the device used for manufacturing was a 3D printer Zortrax M200 based on FDM technology. In the second phase, the mandibles made by the printer were digitized using the GOM ATOS Triple Scan II optical scanning device. They were subsequently evaluated in the final phase. The practical part of this article describes the procedure of jaw model modification, the production of components using a 3D printer, the procedure of digitization of printed parts by the optical scanning device, and the procedure of comparison. The outcome of this article is a comparative analysis of the individual printed parts, containing tables with mean deviations for individual printed parts, as well as tables for groups of printed parts with the same infill parameter.

  16. Accuracy of the lattice-Boltzmann method using the Cell processor

    Science.gov (United States)

    Harvey, M. J.; de Fabritiis, G.; Giupponi, G.

    2008-11-01

    Accelerator processors like the new Cell processor are extending the traditional platforms for scientific computation, allowing orders of magnitude more floating-point operations per second (flops) compared to standard central processing units. However, they currently lack double-precision support and support for some IEEE 754 capabilities. In this work, we develop a lattice-Boltzmann (LB) code to run on the Cell processor and test the accuracy of this lattice method on this platform. We run tests for different flow topologies, boundary conditions, and Reynolds numbers in the range Re = 6-350. In one case, simulation results show reduced mass and momentum conservation compared to an equivalent double-precision LB implementation. All other cases demonstrate the utility of the Cell processor for fluid dynamics simulations. Benchmarks on two Cell-based platforms are performed, the Sony Playstation3 and the QS20/QS21 IBM blade, obtaining speed-up factors of 7 and 21, respectively, compared to the original PC version of the code, and a conservative sustained performance of 28 gigaflops per single Cell processor. Our results suggest that the choice of IEEE 754 rounding mode is possibly as important as double-precision support for this specific scientific application.

  17. Accuracy analysis of the Null-Screen method for the evaluation of flat heliostats

    Science.gov (United States)

    Cebrian-Xochihuila, P.; Huerta-Carranza, O.; Díaz-Uribe, R.

    2016-04-01

    In this work we develop an algorithm to determine the accuracy of the Null-Screen Method, used for the testing of flat heliostats employed as solar concentrators in a central tower configuration. We simulate the image obtained on a CCD camera when an orderly distribution of points is displayed on a Null-Screen perpendicular to the heliostat under test. The deformations present in the heliostat are represented as a cosine function of position with different periods and amplitudes. As a resolution criterion, a deformation of the mirror can be detected when the differences in position between the spots on the image plane for the deformed surface, as compared with those obtained for an ideally flat heliostat, are equal to one pixel. For a 6.4 μm pixel size and an 18 mm focal length, the minimum deformation we can measure in the heliostat corresponds to an amplitude of 122 μm for a period of 1 m; this is equivalent to 0.8 mrad in slope. This result depends on the particular configuration used during the test and the size of the heliostat.

  18. Overdetermined Shooting Methods for Computing Standing Water Waves with Spectral Accuracy

    CERN Document Server

    Wilkening, Jon

    2012-01-01

    A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wave amplitude and fluid depth. In t...

  19. Spatial Component Position in Total Hip Arthroplasty. Accuracy and repeatability with a new CT method

    Energy Technology Data Exchange (ETDEWEB)

    Olivecrona, H. [Soedersjukhuset, Stockholm (Sweden). Dept. of Hand Surgery; Weidenhielm, L. [Karolinska Hospital, Stockholm (Sweden). Dept. of Orthopedics; Olivecrona, L. [Karolinska Hospital, Stockholm (Sweden). Dept. of Radiology; Noz, M.E. [New York Univ. School of Medicine, NY (United States). Dept. of Radiology; Maguire, G.Q. [Royal Inst. of Tech., Kista (Sweden). Inst. for Microelectronics and Information Technology; Zeleznik, M. P. [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Radiation Oncology; Svensson, L. [Royal Inst. of Tech., Stockholm (Sweden). Dept. of Mathematics; Jonson, T. [Eskadern Foeretagsutveckling AB, Goeteborg (Sweden)

    2003-03-01

    Purpose: 3D detection of centerpoints of prosthetic cup and head after total hip arthroplasty (THA) using CT. Material and Methods: Two CT examinations, 10 min apart, were obtained from each of 10 patients after THA. Two independent examiners placed landmarks in images of the prosthetic cup and head. All landmarking was repeated after 1 week. Centerpoints were calculated and compared. Results: Within volumes, all measurements of centerpoints of cup and head fell, with a 95% confidence, within one CT-voxel of any other measurement of the same object. Across two volumes, the mean error of distance between center of cup and prosthetic head was 1.4 mm (SD 0.73). Intra- and interobserver 95% accuracy limit was below 2 mm within and below 3 mm across volumes. No difference between intra- and interobserver measurements occurred. A formula for converting finite sets of point landmarks in the radiolucent tread of the cup to a centerpoint was stable. The percent difference of the landmark distances from a calculated spherical surface was within one CT-voxel. This data was normally distributed and not dependent on observer or trial. Conclusion: The true 3D position of the centers of cup and prosthetic head can be detected using CT. Spatial relationship between the components can be analyzed visually and numerically.
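Converting a finite set of landmarks placed on a spherical component (cup or prosthetic head) to a centerpoint amounts, in essence, to a least-squares sphere fit. The sketch below uses a standard algebraic formulation; it is an illustration, not the authors' formula:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to an (N x 3) array of surface landmarks.

    Uses the algebraic form |p|^2 = 2 p.c + (r^2 - |c|^2), which is linear
    in the centre c and the scalar d = r^2 - |c|^2."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = float(np.sqrt(d + center @ center))
    return center, radius
```

With centers fitted independently in two CT volumes, the cup-head distance and its repeatability can then be compared directly, as in the study.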

  20. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
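The reweighting idea can be illustrated in its simplest, single-histogram form: energies sampled at one inverse temperature are reweighted to estimate averages at a nearby one. This is a minimal sketch, not the full multihistogram machinery of Ferrenberg and Swendsen:

```python
import numpy as np

def reweight_mean_energy(E, beta0, beta):
    """Estimate the mean energy at inverse temperature beta from energy
    samples E drawn in a canonical simulation at beta0."""
    # Subtract the sample mean inside the exponent for numerical stability
    w = np.exp(-(beta - beta0) * (E - E.mean()))
    return float((E * w).sum() / w.sum())
```

The multihistogram method combines such weights across runs at several temperatures, which is what allows resolving small features like the latent heat discussed above.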

  1. An automated method for the evaluation of the pointing accuracy of Sun-tracking devices

    Science.gov (United States)

    Baumgartner, Dietmar J.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz; Veronig, Astrid M.; Rieder, Harald E.

    2017-03-01

    The accuracy of solar radiation measurements, for direct (DIR) and diffuse (DIF) radiation, depends significantly on the precision of the operational Sun-tracking device. Thus, rigid targets for instrument performance and operation have been specified for international monitoring networks, e.g., the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices that fulfill these accuracy requirements are available from various instrument manufacturers; however, none of the commercially available systems comprise an automatic accuracy control system allowing platform operators to independently validate the pointing accuracy of Sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent, and cost-effective system for evaluating the pointing accuracy of Sun-tracking devices. We detail the monitoring system setup, its design and specifications, and the results from its application to the Sun-tracking system operated at the Kanzelhöhe Observatory (KSO) Austrian radiation monitoring network (ARAD) site. The results from an evaluation campaign from March to June 2015 show that the tracking accuracy of the device operated at KSO lies within BSRN specifications (i.e., 0.1° tracking accuracy) for the vast majority of observations (99.8 %). The evaluation of manufacturer-specified active-tracking accuracies (0.02°), during periods with direct solar radiation exceeding 300 W m-2, shows that these are satisfied in 72.9 % of observations. Tracking accuracies are highest during clear-sky conditions and on days where prevailing clear-sky conditions are interrupted by frontal movement; in these cases, we obtain the complete fulfillment of BSRN requirements and 76.4 % of observations within manufacturer-specified active-tracking accuracies. Limitations to tracking surveillance arise during overcast conditions and
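Evaluating pointing accuracy ultimately reduces to measuring the angular separation between the tracker's pointing direction and the true solar position. A minimal sketch (hypothetical function, not part of KSO-STREAMS):

```python
import numpy as np

def pointing_error_deg(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two directions given as
    azimuth/elevation pairs in degrees."""
    a1, e1, a2, e2 = np.radians([az1, el1, az2, el2])
    cosang = (np.sin(e1) * np.sin(e2)
              + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2))
    # Clip guards against round-off pushing |cosang| slightly above 1
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

A tracker sample would then meet the BSRN criterion cited above when the returned angle is below 0.1°.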

  2. Cyclic voltammetric current functions determined with a prescribed accuracy by the adaptive Huber method for Abel integral equations.

    Science.gov (United States)

    Bieniasz, Lesław K

    2008-12-15

    Modern electroanalytical applications of cyclic voltammetry require that theoretical current functions be obtainable automatically, efficiently, and with a prescribed accuracy, by computer simulation algorithms. One of the classical simulation approaches relies on formulating and solving relevant integral equations. Numerical solution methods, used for this purpose so far, are nonautomatic and do not provide information about accuracy of the results. The adaptive variant of the Huber method, developed by the present author, can generate theoretical cyclic voltammograms automatically, with a given target accuracy, and more efficiently than the formerly studied patch-adaptive direct simulation method. This is demonstrated using examples of cyclic voltammograms described by the first-kind Abel integral equations. The method is therefore a candidate for automatic integral equation solvers, needed for building a new generation of problem solving environments for electroanalytical chemistry and for widely understood automation of electroanalytical investigations.
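The Huber method belongs to the product-integration family for first-kind Abel equations: the weakly singular kernel 1/√(t−τ) is integrated exactly over each step while the unknown is approximated locally. A minimal, non-adaptive sketch of that idea (uniform grid, piecewise-constant unknown; the adaptive error control described in the paper is not reproduced):

```python
import numpy as np

def solve_abel_first_kind(y, h):
    """Solve the first-kind Abel equation  y(t) = integral_0^t u(tau)/sqrt(t - tau) dtau
    for u on a uniform grid of step h, assuming u is piecewise constant on each
    subinterval (a non-adaptive product-integration scheme in the spirit of the
    Huber method)."""
    n = len(y) - 1                      # y[0] = y(0) = 0 is not used
    t = h * np.arange(n + 1)
    u = np.zeros(n)                     # u[j] approximates u on (t_j, t_{j+1})
    for i in range(1, n + 1):
        # exact integral of 1/sqrt(t_i - tau) over each subinterval
        w = 2.0 * (np.sqrt(t[i] - t[:i]) - np.sqrt(t[i] - t[1:i + 1]))
        # march forward: solve for the newest piecewise-constant value
        u[i - 1] = (y[i] - np.dot(w[:-1], u[:i - 1])) / w[-1]
    return u

# Check against a known pair: u(tau) = 1 gives y(t) = 2*sqrt(t).
h = 0.01
t = h * np.arange(101)
u = solve_abel_first_kind(2.0 * np.sqrt(t), h)
print(np.max(np.abs(u - 1.0)))   # near machine precision for constant u
```

For a constant unknown the scheme is exact, which makes this a convenient self-test before applying it to voltammetric current functions.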

  3. Spatial and temporal analysis on the distribution of active radio-frequency identification (RFID) tracking accuracy with the Kriging method.

    Science.gov (United States)

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-10-29

Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impede its wider application. Previous studies have focused on examining RFID accuracy at discrete points, leaving the tracking accuracy of the areas between the observed points unpredicted. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial was conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that weak-signal areas can be easily identified by the developed approach. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also examined by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process and thus increase tracking accuracy.
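Ordinary kriging of the kind applied here predicts the field at an unsampled location as a weighted sum of the observations, with weights obtained from a variogram model under an unbiasedness constraint. A minimal sketch with a made-up exponential variogram and hypothetical accuracy readings (the paper's fitted variogram parameters and field data are not reproduced):

```python
import numpy as np

def variogram(h, sill=1.0, rng=10.0, nugget=0.0):
    # exponential variogram model (parameters are assumptions, normally fitted to data)
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, grid_xy, **vp):
    """Ordinary kriging: interpolate z observed at points xy onto grid_xy."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d, **vp)
    A[n, n] = 0.0                       # Lagrange-multiplier row/column
    out = np.empty(len(grid_xy))
    for k, p in enumerate(grid_xy):
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(xy - p, axis=1), **vp)
        w = np.linalg.solve(A, b)
        out[k] = np.dot(w[:n], z)       # weights sum to 1 by construction
    return out

# toy tracking-accuracy field sampled at 5 test points (hypothetical data)
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
acc = np.array([0.9, 0.8, 0.7, 0.6, 0.95])
print(ordinary_kriging(pts, acc, np.array([[5.0, 5.0], [2.0, 2.0]])))
```

With a zero nugget, kriging is an exact interpolator, so the prediction at the sampled point (5, 5) reproduces the observed 0.95.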

  4. Spatial and Temporal Analysis on the Distribution of Active Radio-Frequency Identification (RFID) Tracking Accuracy with the Kriging Method

    Directory of Open Access Journals (Sweden)

    Xin Liu

    2014-10-01

Full Text Available Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impede its wider application. Previous studies have focused on examining RFID accuracy at discrete points, leaving the tracking accuracy of the areas between the observed points unpredicted. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial was conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that weak-signal areas can be easily identified by the developed approach. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also examined by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process and thus increase tracking accuracy.

  5. Enhancing the accuracy of the Fowler method for monitoring non-constant work functions

    Science.gov (United States)

    Friedl, R.

    2016-04-01

The Fowler method is a prominent non-invasive technique to determine the absolute work function of a surface based on the photoelectric effect. The evaluation procedure relies on the correlation of the photocurrent with the incident photon energy hν, which is mainly dependent on the surface work function χ. Applying Fowler's theory of the photocurrent, the measurements can be fitted by the theoretical curve near the threshold hν⪆χ, yielding the work function χ and a parameter A. The straightforward experimental implementation of the Fowler method is to use several particular photon energies, e.g. via interference filters. However, with such a realization the restriction hν ≈ χ can easily be violated, especially when the work function of the material decreases during the measurements as, for instance, with coating or adsorption processes. This can lead to an overestimation of the evaluated work function value of typically some 0.1 eV, reaching more than 0.5 eV in an unfavorable case. A detailed analysis of the Fowler theory reveals the background of this effect and shows that the fit parameter A can be used to assess the accuracy of the determined value of χ conveniently during the measurements. Moreover, a scheme is introduced to quantify a potential overestimation and to correct χ to a certain extent. These issues are demonstrated using the example of monitoring the work-function reduction of a stainless steel sample surface due to caesiation.
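Sufficiently far above threshold, Fowler's theory reduces to the classic "Fowler plot": the square root of the photo-yield is approximately linear in photon energy and crosses zero at the work function. A toy sketch of that simplified extrapolation on idealized noiseless data (this is not the full Fowler-function fit with the parameter A discussed in the paper, and the values are assumptions):

```python
import numpy as np

chi_true = 2.1                              # eV, assumed work function
hv = np.array([2.2, 2.4, 2.6, 2.8, 3.0])    # filter photon energies, eV
Y = (hv - chi_true) ** 2                    # idealized photo-yield, arbitrary units

# linear fit of sqrt(yield) vs photon energy; zero crossing estimates chi
slope, intercept = np.polyfit(hv, np.sqrt(Y), 1)
chi_est = -intercept / slope
print(chi_est)                              # recovers ~2.1 eV for this noiseless data
```

The abstract's point is visible in this picture: if the true χ drifts downward during a measurement series, the fixed filter energies hν move far above threshold and the linear extrapolation overestimates χ.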

  6. Domain Tuning of Bilingual Lexicons for MT

    Science.gov (United States)

    2003-02-01

vocabulary—a set of words or terms from a document that indicate the topic or primary content of the text—is necessary for many NLP tasks. In monolingual ...specificity impacts the accuracy of text classification (Sakurai, 1999). In multilingual processing, appropriate translation choices cannot be made... A statistical MT system has 3 basic components: a language model, a translation model, and a decoder. The language model is a monolingual

  7. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    Science.gov (United States)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  8. THE USE OF THE EXPERIMENT PLANNING METHOD TO EVALUATE THE ACCURACY OF FLEXIBLE UNITS IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    O. Y. Yehorov

    2015-10-01

Full Text Available Purpose. The identification of rolling stock on the railroads is an integral part of many automation systems, both for trains as a whole and for individual cars. Various information management systems at sorting yards require operational information about the object while manufacturing operations are performed. Improving the accuracy with which the various parameters characterizing the rolling stock are determined leads directly to better management of traffic volumes. The aim of the paper is to develop a method for estimating the errors in determining the interaxle distance of flexible units in the control section using a point path-control transducer, for subsequent identification of cars and locomotives. Methodology. To achieve this goal, simulation and experiment-planning methods were used. A simulation model was developed that determines the time intervals between the passages of wheelsets of movable units over the point path-control transducer in a control section with variable characteristics of the identification devices. The time-interval values obtained with the simulation model were then used in the experiment-planning method. Findings. The analytically calculated values of the interaxle-distance errors do not differ significantly from the values obtained using the simulation model, which makes it possible to use the derived functional dependence to estimate the possible errors in the identification of rolling stock. The results of this work can be used to identify separate flexible units as well as trains in general. Originality. The functional dependence of the interaxle-distance error on the fixing point of the wheel by the path-control transducer, the distance between the sensors, and the measured distance was derived using previously conducted research on the factors influencing the error in determining the interaxle distance of movable units, and developed

  9. ACCURACY ASSESSMENT OF CROWN DELINEATION METHODS FOR THE INDIVIDUAL TREES USING LIDAR DATA

    Directory of Open Access Journals (Sweden)

    K. T. Chang

    2016-06-01

Full Text Available Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs. Accuracy assessment for the extraction of volumetric parameters of a single tree is also performed via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one based on the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used in this study for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that severely affect the subsequent analysis for individual tree delineation. The experimental results indicated that more individual trees can be extracted and tree crown shapes become more complete in the CHM data after the pit-free process.

  10. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    Science.gov (United States)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs. Accuracy assessment for the extraction of volumetric parameters of a single tree is also performed via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one based on the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used in this study for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that severely affect the subsequent analysis for individual tree delineation. The experimental results indicated that more individual trees can be extracted and tree crown shapes become more complete in the CHM data after the pit-free process.
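The traditional CHM is a cell-wise subtraction, which makes the "pit" problem easy to see. A sketch with made-up elevation values and a naive neighbourhood-median repair (an assumption for illustration only, not the pit-free algorithm evaluated in the paper):

```python
import numpy as np

dsm = np.array([[12.0, 13.0, 12.5],
                [13.5,  2.0, 13.0],     # centre cell: a data pit in the DSM
                [12.0, 12.5, 12.0]])
dem = np.full_like(dsm, 10.0)           # flat terrain for simplicity

chm = dsm - dem                         # traditional CHM: DSM minus DEM
pit = chm <= 0.5                        # pit threshold is an assumption
med = np.median(chm[~pit])
chm[pit] = med                          # replace pits with the median of valid cells
print(chm[1, 1])
```

Without the repair, the pit would propagate as a spurious zero-height cell into any crown-delineation step run on the CHM.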

  11. Characterization of Libby, MT amphibole (LA) elongated particles for toxicology studies: Field Collection, sample preparation, dose characterization, and particle counting methods using SEM/EDS

    Science.gov (United States)

    Since 1999, the US EPA and USGS have been studying the chemistry, mineralogy, and morphology of the amphiboles from the Rainy Creek Complex of Libby, MT (LA), following an increased incidence of lung and pleural diseases. LA material collected in 2000 (LA2000) was described in M...

  12. MT In Business English Translation

    Institute of Scientific and Technical Information of China (English)

    张志新

    2009-01-01

In this article, the operational principles of MT in business English translation are briefly introduced, with the aim of pointing out that machine study is a key factor to work on in improving MT quality.

  13. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    Science.gov (United States)

    Biondo, M.; Bartholomä, A.

    2014-12-01

High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of i) quantifying the ability of backscatter to resolve grain-size distributions, ii) understanding complex patterns influenced by factors other than grain-size variations, and iii) designing innovative, repeatable statistical procedures for spatially assessing classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal Jade Bay inlet (German North Sea), made it possible to map 100 km2 of surficial sediment with a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain-size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  14. System Accuracy Evaluation of Different Blood Glucose Monitoring Systems Following ISO 15197:2013 by Using Two Different Comparison Methods.

    Science.gov (United States)

    Freckmann, Guido; Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Haug, Cornelia

    2015-09-01

Adherence to established standards (e.g., International Organization for Standardization [ISO] 15197) is important to ensure comparable and sufficient accuracy of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for different SMBG systems available in Europe, with three reagent lots each. Test procedures followed the recently published revision ISO 15197:2013. Comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus™ glucose analyzer; YSI Inc., Yellow Springs, OH) and a hexokinase (cobas Integra® 400 Plus analyzer; Roche Instrument Center, Rotkreuz, Switzerland) method. Compliance with ISO 15197:2013 accuracy criteria was determined by calculating the percentage of results within ±15% or within ±0.83 mmol/L of the comparison measurement results for glucose concentrations at and above or below 5.55 mmol/L, respectively, and by calculating the percentage of results within consensus error grid Zones A and B. For seven systems, all three tested lots showed 95-100% of results within the accuracy limits of ISO 15197:2013 and 100% of results within consensus error grid Zones A and B, irrespective of the comparison method used. Regarding results of individual lots, slight differences between the glucose oxidase method and the hexokinase method were found. Accuracy criteria of ISO 15197:2003 (±20% for concentrations ≥4.2 mmol/L and ±0.83 mmol/L for concentrations <4.2 mmol/L) are less stringent than those of ISO 15197:2013. The results also indicate that the choice of comparison measurement method/system is important, as it may have a considerable impact on the accuracy data obtained for a system.
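The ISO 15197:2013 accuracy count described above is a straightforward computation: a relative limit above the 5.55 mmol/L cut-off and an absolute limit below it. A minimal sketch with made-up meter and reference readings:

```python
import numpy as np

def iso15197_2013_within_limits(meter, reference):
    """Fraction of SMBG readings meeting ISO 15197:2013 accuracy limits:
    within +/-0.83 mmol/L of the comparison result below 5.55 mmol/L,
    within +/-15 % at or above 5.55 mmol/L."""
    meter, reference = np.asarray(meter), np.asarray(reference)
    low = reference < 5.55
    ok = np.where(low,
                  np.abs(meter - reference) <= 0.83,
                  np.abs(meter - reference) <= 0.15 * reference)
    return ok.mean()

ref   = np.array([3.0, 4.0, 6.0, 10.0])    # comparison method, mmol/L
meter = np.array([3.5, 4.9, 6.8, 12.0])    # SMBG system, mmol/L
print(iso15197_2013_within_limits(meter, ref))   # 0.5 in this toy example
```

The standard requires at least 95% of results within these limits; the consensus-error-grid criterion used alongside it is a separate, clinically weighted check not sketched here.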

  15. Accuracy assessment of the Precise Point Positioning method applied for surveys and tracking moving objects in GIS environment

    Science.gov (United States)

    Ilieva, Tamara; Gekov, Svetoslav

    2017-04-01

The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections that are developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP method applied for surveys and for tracking moving objects in a GIS environment. The PPP data are collected using a software application we developed previously, which allows different sets of attribute data to be used for the measurements and their accuracy. The results of the PPP measurements are compared directly, within the geospatial database, to various other sets of terrestrial data - measurements obtained by total stations and by real-time kinematic and static GNSS.

  16. COARSE-MESH-ACCURACY IMPROVEMENT OF BILINEAR Q4-PLANE ELEMENT BY THE COMBINED HYBRID FINITE ELEMENT METHOD

    Institute of Scientific and Technical Information of China (English)

    谢小平; 周天孝

    2003-01-01

The combined hybrid finite element method has an intrinsic mechanism for enhancing the coarse-mesh accuracy of lower-order displacement schemes. It was confirmed that the combined hybrid scheme without energy error enhances accuracy at coarse meshes, and that the combination parameter plays an important role in this enhancement. As an improvement of the conforming bilinear Q4 plane element, the combined hybrid method adopts the most convenient quadrilateral displacement-stress mode, i.e., the mode of compatible isoparametric bilinear displacements and pure constant stresses. By adjusting the combination parameter, an optimized version of the combined hybrid element was obtained, and numerical tests indicated that this parameter-adjusted version behaves much better than the Q4 element and is of high accuracy at coarse meshes. Owing to the elimination of stress parameters at the element level, this combined hybrid version has the same computational cost as the Q4 element.

  17. Evaluating the accuracy of the colorimetric method of determining the concentration of nonionogenic surfactant, with use of thiocyanocobaltammonium

    Energy Technology Data Exchange (ETDEWEB)

    Safiullina, L.A.; Konyukhova, T.Z.; Shtangeyev, A.L.; Tarasova, N.I.

    1981-01-01

An examination is made of the results of evaluating the accuracy of the colorimetric method for determining the concentration of nonionogenic surfactants (NPAV), and of the influence on the analysis results of a number of factors (time elapsed since the start of the analysis, extraction method, admixtures of iron ions, etc.). The analysis technique is refined, and the sensitivity of the method for determining NPAV with the use of TCCA is established.

  18. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    Science.gov (United States)

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
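In the slope method, the logarithm of the range-corrected signal, ln(P·r²), is linear in range with slope −2σ, so the extinction coefficient follows from a linear fit; weighting by the inverse noise variance makes the fit equivalent to the MLE for Gaussian noise, as the paper shows. A sketch on noiseless synthetic data with assumed values:

```python
import numpy as np

def slope_method(r, P, var_lnS):
    """Lidar slope method: fit ln(P*r^2) = ln(S0) - 2*sigma*r by least squares
    weighted with the inverse noise variance of the log signal."""
    y = np.log(P * r**2)
    w = 1.0 / np.asarray(var_lnS)
    A = np.vstack([np.ones_like(r), -2.0 * r]).T
    AtW = A.T * w                        # weighted normal equations A^T W A x = A^T W y
    c, sigma = np.linalg.solve(AtW @ A, AtW @ y)
    return sigma, np.exp(c)

# synthetic homogeneous atmosphere: sigma = 0.1 km^-1 (assumed values)
r = np.linspace(1.0, 5.0, 20)                    # range, km
P = 50.0 * np.exp(-2 * 0.1 * r) / r**2           # idealized lidar return
sigma, S0 = slope_method(r, P, np.ones_like(r))
print(sigma)                                     # recovers ~0.1
```

With real, noisy signals the variance of the log signal grows rapidly at far ranges, which is exactly why the choice of weights governs the retrieval accuracy compared in the paper.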

  19. Presentation accuracy of the web revisited: animation methods in the HTML5 era.

    Science.gov (United States)

    Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego

    2014-01-01

Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable over more obsolete technologies.

  20. Accuracy of plant specimen disease severity estimates: concepts, history, methods, ramifications and challenges for the future

    Science.gov (United States)

    Knowledge of the extent of the symptoms of a plant disease, generally referred to as severity, is key to both fundamental and applied aspects of plant pathology. Most commonly, severity is obtained visually and the accuracy of each estimate (closeness to the actual value) by individual raters is par...

  1. Presentation accuracy of the web revisited: animation methods in the HTML5 era.

    Directory of Open Access Journals (Sweden)

    Pablo Garaizar

Full Text Available Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable over more obsolete technologies.

  2. Presentation Accuracy of the Web Revisited: Animation Methods in the HTML5 Era

    Science.gov (United States)

    Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego

    2014-01-01

Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable over more obsolete technologies. PMID:25302791

  3. A method to estimate the enthalpy of formation of organic compounds with chemical accuracy

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Meier, Robert J.; Sin, Gürkan

    2013-01-01

The model, which is group-contribution (GC) based, estimates the gas-phase standard enthalpy of formation (ΔfH°gas) of organic compounds. To achieve chemical accuracy, a systematic property-data-model analysis, which allows efficient use of knowledge of the experimental ΔfH°gas data and the molecular
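A group-contribution estimate of this kind is simply a sum of group occurrence counts times fitted contribution values. A sketch with entirely hypothetical groups and contributions (the paper's regressed parameter table is not reproduced):

```python
# Hypothetical first-order group contributions in kJ/mol, for illustration only;
# a real GC model regresses these against experimental enthalpy data.
contrib = {"CH3": -42.0, "CH2": -20.5, "OH": -160.0}

def enthalpy_of_formation(groups):
    """GC estimate: sum of (occurrence count) * (group contribution)."""
    return sum(n * contrib[g] for g, n in groups.items())

# e.g. a short 1-alcohol sketched as CH3 + 2*CH2 + OH
print(enthalpy_of_formation({"CH3": 1, "CH2": 2, "OH": 1}))  # -243.0
```

Real GC models of the kind the paper improves also add second- and third-order group corrections on top of this linear sum to reach chemical accuracy.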

  4. Patterns in Seismicity at Mt St Helens and Mt Unzen

    Science.gov (United States)

    Lamb, Oliver; De Angelis, Silvio; Lavallee, Yan

    2014-05-01

Cyclic behaviour on a range of timescales is a well-documented feature of many dome-forming volcanoes. Previous work on Soufrière Hills volcano (Montserrat) and Volcán de Colima (Mexico) revealed broad-scale similarities in behaviour, implying the potential to develop general physical models of sub-surface processes [1]. Using volcano-seismic data from Mt St Helens (USA) and Mt Unzen (Japan), this study explores parallels in the long-term behaviour of seismicity at two dome-forming systems. Within the last twenty years both systems underwent extended dome-forming episodes accompanied by large Vulcanian explosions or dome collapses. This study uses a suite of quantitative and analytical techniques which can highlight differences or similarities in volcano-seismic behaviour, and compares this behaviour to changes in activity during the eruptive episodes. Seismic events were automatically detected and characterized on a single short-period seismometer station located 1.5 km from the 2004-2008 vent at Mt St Helens. A total of 714 826 individual events were identified from continuous recordings of seismic data from 22 October 2004 to 28 February 2006 (on average 60.2 events per hour) using a short-term/long-term average algorithm. An equivalent count will be produced from seismometer recordings over the later stages of the 1991-1995 eruption at Mt Unzen. The event-count time series from Mt St Helens is then analysed using the Multi-taper Method and the Short-Time Fourier Transform to explore temporal variations in activity. Preliminary analysis of seismicity from Mt St Helens suggests cyclic behaviour on a subannual timescale, similar to that described at Volcán de Colima and Soufrière Hills volcano [1]. Frequency Index and waveform correlation tools will be implemented to analyse changes in the frequency content of the seismicity and to explore their relations to different phases of activity at the volcano. A single-station approach is used to gain a fine-scale view of variations in

  5. The effect of missing marker genotypes on the accuracy of gene-assisted breeding value estimation: a comparison of methods

    NARCIS (Netherlands)

    Mulder, H.A.; Meuwissen, T.H.E.; Calus, M.P.L.; Veerkamp, R.F.

    2010-01-01

In livestock populations, missing genotypes on a large proportion of the animals are a major problem when implementing gene-assisted breeding value estimation for genes with known effect. The objective of this study was to compare different methods of dealing with missing genotypes in terms of their effect on the accuracy of gene-assisted breeding value estimation.

  6. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study

    NARCIS (Netherlands)

    Barsingerhorn, A.D.; Boonstra, F.N.; Goossens, H.H.L.M.

    2017-01-01

Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods.

  7. Uncertainty in real-time voltage stability assessment methods based on Thevenin equivalent due to PMU’s accuracy

    DEFF Research Database (Denmark)

    Perez, Angel; Møller, Jakob Glarbo; Jóhannsson, Hjörtur;

    2014-01-01

This article studies the influence of PMU accuracy on voltage stability assessment, considering the specific case of Thévenin-equivalent-based methods that include wide-area information in their calculations. The objective was achieved by producing a set of synthesized PMU measurements from

  8. Optimization of process parameters for dimensional accuracy in an area-forming rapid prototyping system using the Taguchi method

    Science.gov (United States)

    Chiu, Shih-Hsuan; Chen, Cheng-Chin; Chen, Kun-Ting; Su, Chun-Hao

    2015-03-01

    Rapid prototyping (RP) technologies have been extensively applied to build products in recent decades. Area-forming rapid prototyping is an emerging RP technology with the advantage of a simple procedure and a short processing time. As its fields of application have expanded, the demands on product quality have also increased. The dimensional accuracy of a product is one of its most critical quality characteristics. In order to improve the dimensional accuracy of products from an area-forming RP system, this study optimizes seven process factors via the Taguchi method, and the result is verified with an additional sample.
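In Taguchi analysis of dimensional accuracy, factor settings are typically compared via a smaller-the-better signal-to-noise ratio. A minimal sketch, with hypothetical error measurements (the paper's actual factors and data are not reproduced here):

```python
import math

# Smaller-the-better S/N ratio used in Taguchi analysis:
#   S/N = -10 * log10( mean(y_i^2) )
# where y_i are the measured dimensional errors for one factor setting.
def sn_smaller_is_better(values):
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Two hypothetical factor settings; the setting with the higher S/N
# ratio (i.e. smaller errors) is preferred.
setting_a = [0.12, 0.10, 0.15]   # dimensional errors (mm), runs 1-3
setting_b = [0.25, 0.30, 0.22]
print(sn_smaller_is_better(setting_a) > sn_smaller_is_better(setting_b))  # → True
```

Each of the seven factors would be assigned columns of an orthogonal array, and the level maximizing the S/N ratio chosen per factor.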

  9. Accuracy of optical scanning methods of the Cerec®3D system in the process of making ceramic inlays

    Directory of Open Access Journals (Sweden)

    Trifković Branka

    2010-01-01

    Background/Aim. One of the results of many years of technological development of the Cerec®3D CAD/CAM system is the implementation of one intraoral and two extraoral optical scanning methods which, depending on the current indications, are applied in making fixed restorations. The aim of this study was to determine the degree of precision of the optical scanning methods of the Cerec®3D CAD/CAM system in the process of making ceramic inlays. Methods. The study was conducted in three experimental groups of inlays prepared using the three scanning methods of the Cerec®3D system. Ceramic inlays made by conventional methodology served as the control group. The accuracy of the optical scanning methods of the Cerec®3D computer-aided design/computer-aided manufacturing (CAD/CAM) system was examined indirectly, by measuring the marginal gap between inlays and the preparation demarcation with a scanning electron microscope (SEM). Results. The results of the study showed a difference in the accuracy of the existing scanning methods of dental CAD/CAM systems. The highest level of accuracy was achieved by the extraoral optical superficial scanning technique. The marginal gap of inlays made with the extraoral optical superficial scanning technique was 32.97 ± 13.17 μm. The techniques of intraoral optical superficial and extraoral point laser scanning showed a lower level of accuracy (40.29 ± 21.46 μm for inlays of intraoral optical superficial scanning and 99.67 ± 37.25 μm for inlays of extraoral point laser scanning). Conclusion. Optical scanning methods in dental CAD/CAM technologies are precise methods of digitizing spatial models; application of extraoral optical superficial scanning methods provides the highest precision.

  10. MtDNA COI-COII marker and drone congregation area: an efficient method to establish and monitor honeybee (Apis mellifera L.) conservation centres.

    Science.gov (United States)

    Bertrand, Bénédicte; Alburaki, Mohamed; Legout, Hélène; Moulin, Sibyle; Mougel, Florence; Garnery, Lionel

    2015-05-01

    Honeybee subspecies have been affected by human activities in Europe over the past few decades. One such example is the importation of nonlocal subspecies of bees, which has had an adverse impact on the geographical distribution and subsequently on the genetic diversity of the black honeybee Apis mellifera mellifera. To restore the original diversity of this local honeybee subspecies, different conservation centres were set up in Europe. In this study, we established a black honeybee conservation centre, Conservatoire de l'Abeille Noire d'Ile de France (CANIF), in the region of Ile-de-France, France. CANIF's honeybee colonies were intensively studied over a 3-year period. This study included a drone congregation area (DCA) located in the conservation centre. The mtDNA COI-COII marker was used to evaluate the genetic diversity of CANIF's honeybee populations and of the drones collected from the DCA. The same marker was used to estimate the interactions and the haplotype frequencies between CANIF's honeybee populations and 10 surrounding honeybee apiaries located outside of the CANIF. Our results indicate that the colonies of the conservation centre and the drones of the DCA show similarly stable profiles compared to the surrounding populations, with a lower level of introgression. The mtDNA marker used on both the DCA and the colonies of the conservation centre appears to be an efficient approach to monitor and maintain the genetic diversity of the protected honeybee populations.

  11. HIGH-ACCURACY BAND TO BAND REGISTRATION METHOD FOR MULTI-SPECTRAL IMAGES OF HJ-1A/B

    Institute of Scientific and Technical Information of China (English)

    Lu Hao; Liu Tuanjie; Zhao Haiqing

    2012-01-01

    Band-to-band registration accuracy is an important parameter of multispectral data. A novel band-to-band registration approach with high precision is proposed for the multi-spectral images of HJ-1A/B. Firstly, the main causes of misregistration are analyzed, and a high-order polynomial model is proposed. Secondly, a phase fringe filtering technique is applied to the Phase Correlation Method based on Singular Value Decomposition (SVD-PCM) to reduce the noise in the phase difference matrix. Then, experiments are carried out to build nonlinear registration models, and the images of the green and red bands are aligned to the blue band with an accuracy of 0.1 pixels, while the near-infrared band is aligned with an accuracy of 0.2 pixels.
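The core idea of phase correlation can be illustrated in one dimension: the normalized cross-power spectrum of two shifted signals is a pure phase ramp whose inverse transform peaks at the shift. This is a basic integer-pixel sketch only; the SVD-PCM method in the paper adds SVD-based fringe filtering and sub-pixel estimation on top of it.

```python
import cmath

# Naive O(N^2) DFT/IDFT, enough for a small demonstration.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def phase_correlation_shift(a, b):
    """Estimate the circular shift d such that b(n) ≈ a(n - d)."""
    A, B = dft(a), dft(b)
    R = []
    for Ak, Bk in zip(A, B):
        cross = Ak.conjugate() * Bk          # cross-power spectrum
        R.append(cross / max(abs(cross), 1e-12))  # keep phase only
    corr = idft(R)                           # impulse at the shift
    return max(range(len(corr)), key=lambda n: corr[n].real)

a = [0, 0, 1, 4, 1, 0, 0, 0]
b = [0, 0, 0, 0, 1, 4, 1, 0]   # a shifted right by 2 samples
print(phase_correlation_shift(a, b))  # → 2
```

For band registration this is applied to 2-D image tiles, and the residual shifts feed the polynomial registration model.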

  12. Methods specification for diagnostic test accuracy studies in fine-needle aspiration cytology: a survey of reporting practice.

    Science.gov (United States)

    Schmidt, Robert L; Factor, Rachel E; Affolter, Kajsa E; Cook, Joshua B; Hall, Brian J; Narra, Krishna K; Witt, Benjamin L; Wilson, Andrew R; Layfield, Lester J

    2012-01-01

    Diagnostic test accuracy (DTA) studies on fine-needle aspiration cytology (FNAC) often show considerable variability in diagnostic accuracy between study centers. Many factors affect the accuracy of FNAC. A complete description of the testing parameters would help make valid comparisons between studies and determine causes of performance variation. We investigated the manner in which test conditions are specified in FNAC DTA studies to determine which parameters are most commonly specified, how frequently they are specified, and whether there is significant variability in reporting practice. We identified 17 frequently reported test parameters and found significant variation in the reporting of these test specifications across studies. On average, studies reported 5 of the 17 items that would be required to specify the test conditions completely. More complete and standardized reporting of methods, perhaps by means of a checklist, would improve the interpretation of FNAC DTA studies.

  13. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar-looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with differing confidence of correct ID, we evaluate the accuracy of the Bayesian-modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
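The Bayesian update described above amounts to weighting photo-match likelihoods by site-based sighting priors. A minimal sketch, with all probabilities hypothetical (not taken from the study):

```python
# Bayes update combining photo-match likelihoods with site-based
# sighting priors; all numbers below are hypothetical.
def posterior_id(priors, likelihoods):
    """priors[i]: P(individual i at this site);
    likelihoods[i]: P(photo | individual i)."""
    joint = {k: priors[k] * likelihoods[k] for k in priors}
    total = sum(joint.values())
    return {k: v / total for k, v in joint.items()}

# Two similar-looking individuals: the photo weakly favours 'B',
# but 'A' has been sighted at this site far more often.
priors = {"A": 0.9, "B": 0.1}
likelihoods = {"A": 0.4, "B": 0.6}
post = posterior_id(priors, likelihoods)
print(max(post, key=post.get))  # → A
```

The study's finding is that such priors can also mislead: when an individual genuinely appears at an unusual site, the strong prior pulls the posterior towards the wrong ID.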

  14. Accuracy of ELISA detection methods for gluten and reference materials: a realistic assessment.

    Science.gov (United States)

    Diaz-Amigo, Carmen; Popping, Bert

    2013-06-19

    The determination of prolamins by ELISA and subsequent conversion of the resulting concentration to gluten content in food appears to be a comparatively simple and straightforward process with which many laboratories have years-long experience. At the end of the process, a value of gluten, expressed in mg/kg or ppm, is obtained. This value is often the basis for the decision whether a product can be labeled gluten-free or not. On the basis of currently available scientific information, the accuracy of the values obtained with commonly used commercial ELISA kits has to be questioned. Although several multilaboratory studies have recently been conducted in an attempt to emphasize and ensure the accuracy of the results, the data suggest that it was the precision of these assays, not the accuracy, that was confirmed, because some of the underlying assumptions for calculating the gluten content lack scientific data support as well as appropriate reference materials for comparison. This paper discusses the issues of gluten determination and quantification with respect to antibody specificity, extraction procedures, reference materials, and their commutability.

  15. Analysis of reliability, accuracy, sensitivity and predictive value of a subjective method to classify facial pattern in adults

    Science.gov (United States)

    Queiroz, Gilberto Vilanova; Rino, José; de Paiva, João Batista; Capelozza, Leopoldino

    2016-01-01

    ABSTRACT Introduction: Craniofacial pattern diagnosis is vital in Orthodontics, as it influences decision-making regarding treatment options and prognosis. Capelozza Filho proposed a subjective method for facial classification comprising five patterns: I, II, III, Long Face and Short Face. Objective: To investigate the accuracy of a subjective classification method of facial patterns applied to adults. Methods: A sample consisting of 52 adults was used for this study. Frontal and lateral view photographs were taken with subjects at rest position, including frontal smile. Lateral cephalometric radiographs were organized in a PowerPoint® presentation and submitted to 20 raters. Method performance was assessed by examining reproducibility with Kappa test and calculating accuracy, sensitivity and positive predictive values, for which 70% was set as critical value. The gold standard of the classification was personally set by the author of the method. Results: Reproducibility was considered moderate (Kappa = 0.501); while accuracy, sensitivity and positive predictive values yielded similar results, but below 70%. Conclusions: The subjective method of facial classification employed in the present study still needs to have its morphological criteria improved in order to be used to discriminate the five facial patterns. PMID:28125141
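Reproducibility in this study was assessed with the Kappa statistic. As a simplified illustration of the metric (the study itself pooled 20 raters; a two-rater Cohen's kappa with hypothetical ratings is shown here):

```python
# Cohen's kappa for two raters, pure-python sketch. The labels are the
# five facial patterns from the paper; the ratings are hypothetical.
def cohens_kappa(r1, r2):
    n = len(r1)
    labels = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    p_exp = sum((r1.count(l) / n) * (r2.count(l) / n)        # chance agreement
                for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = ["I", "II", "II", "III", "Long", "Short", "I", "II"]
rater2 = ["I", "II", "III", "III", "Long", "Short", "II", "II"]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.673
```

Values around 0.4-0.6 are conventionally read as "moderate" agreement, which matches the Kappa = 0.501 reported in the study.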

  16. Ensemble learning algorithms for classification of mtDNA into haplogroups.

    Science.gov (United States)

    Wong, Carol; Li, Yuran; Lee, Chih; Huang, Chun-Hsi

    2011-01-01

    Classification of mitochondrial DNA (mtDNA) into its respective haplogroups allows various anthropologic and forensic issues to be addressed. Unique to mtDNA are its abundance and its non-recombining, uni-parental mode of inheritance; consequently, mutations are the only changes observed in the genetic material. These individual mutations are classified into their cladistic haplogroups, allowing the tracing of different genetic branch points in human (and other organisms') evolution. Due to the large number of samples, it becomes necessary to automate the classification process. Using 5-fold cross-validation, we investigated two classification techniques on the consented database of 21,141 samples published by the Genographic project. The support vector machines (SVM) algorithm achieved a macro-accuracy of 88.06% and a micro-accuracy of 96.59%, while the random forest (RF) algorithm achieved a macro-accuracy of 87.35% and a micro-accuracy of 96.19%. In addition to being faster and more memory-economical in making predictions, SVM and RF are better than or comparable to the nearest-neighbor method employed by the Genographic project in terms of prediction accuracy.
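The distinction between the macro- and micro-accuracy figures quoted above is worth making concrete: micro-accuracy averages over samples (so large haplogroups dominate), while macro-accuracy averages the per-class accuracies. A toy sketch with made-up labels, not Genographic data:

```python
# Micro- vs macro-averaged accuracy, the two measures reported for the
# SVM and RF haplogroup classifiers (toy labels, not Genographic data).
def micro_macro_accuracy(true, pred):
    # micro: one pooled accuracy over all samples
    micro = sum(t == p for t, p in zip(true, pred)) / len(true)
    # macro: unweighted mean of per-class accuracies
    per_class = {}
    for cls in set(true):
        idx = [i for i, t in enumerate(true) if t == cls]
        per_class[cls] = sum(true[i] == pred[i] for i in idx) / len(idx)
    macro = sum(per_class.values()) / len(per_class)
    return micro, macro

true = ["H", "H", "H", "H", "L3", "L3"]
pred = ["H", "H", "H", "H", "L3", "H"]   # one error, in the small class
micro, macro = micro_macro_accuracy(true, pred)
print(round(micro, 3), round(macro, 3))  # → 0.833 0.75
```

Because haplogroup sizes are highly unbalanced, micro-accuracy (96%+) can sit well above macro-accuracy (87-88%), as in the study.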

  17. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker.

    Science.gov (United States)

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse of the root sum square of the Cramér-Rao bound and the focal-length drift error of each tracked star as the weight, to enhance the pointing accuracy of a star tracker. In this technique, stars that are brighter, at a lower angular rate, or located towards the center of the star field are given a higher weight in the attitude determination process, and thus the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, which is confirmed particularly for sky zones with a non-uniform distribution of stars. Moreover, by using the iteratively weighted center of gravity algorithm as the new centroiding method for the QUEST algorithm, the attitude uncertainty can be further reduced to 44% with a negligible additional computing load.

  18. Accuracy of imaging methods for detection of bone tissue invasion in patients with oral squamous cell carcinoma

    Science.gov (United States)

    Uribe, S; Rojas, LA; Rosas, CF

    2013-01-01

    The objective of this review is to evaluate the diagnostic accuracy of imaging methods for detection of mandibular bone tissue invasion by squamous cell carcinoma (SCC). A systematic review was carried out of studies in MEDLINE, SciELO and ScienceDirect, published between 1960 and 2012, in English, Spanish or German, which compared detection of mandibular bone tissue invasion via different imaging tests against a histopathology reference standard. Sensitivity and specificity data were extracted from each study. The outcome measure was diagnostic accuracy. We found 338 articles, of which 5 fulfilled the inclusion criteria. Tests included were: CT (four articles), MRI (four articles), panoramic radiography (one article), positron emission tomography (PET)/CT (one article) and cone beam CT (CBCT) (one article). The quality of articles was low to moderate and the evidence showed that all tests have a high diagnostic accuracy for detection of mandibular bone tissue invasion by SCC, with sensitivity values of 94% (MRI), 91% (CBCT), 83% (CT) and 55% (panoramic radiography), and specificity values of 100% (CT, MRI, CBCT), 97% (PET/CT) and 91.7% (panoramic radiography). Available evidence is scarce and of only low to moderate quality. However, it is consistently shown that current imaging methods give a moderate to high diagnostic accuracy for the detection of mandibular bone tissue invasion by SCC. Recommendations are given for improving the quality of future reports, in particular provision of a detailed description of the patients' conditions, the imaging instrument and both imaging and histopathological invasion criteria. PMID:23420854
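The sensitivity and specificity values compared in this review come directly from each study's 2×2 table against the histopathology reference standard. A minimal sketch of that calculation, with hypothetical counts (chosen only so the result matches the MRI figures quoted above):

```python
# Sensitivity and specificity from a 2x2 diagnostic table; the counts
# below are hypothetical, not extracted from the reviewed studies.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives / all with invasion
    specificity = tn / (tn + fp)   # true negatives / all without invasion
    return sensitivity, specificity

# Hypothetical MRI results against the histopathology reference standard:
sens, spec = sens_spec(tp=47, fn=3, tn=60, fp=0)
print(f"sensitivity={sens:.0%} specificity={spec:.0%}")  # → sensitivity=94% specificity=100%
```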

  19. Analysis of reliability, accuracy, sensitivity and predictive value of a subjective method to classify facial pattern in adults.

    Science.gov (United States)

    Queiroz, Gilberto Vilanova; Rino, José; Paiva, João Batista de; Capelozza, Leopoldino

    2016-01-01

    Craniofacial pattern diagnosis is vital in Orthodontics, as it influences decision-making regarding treatment options and prognosis. Capelozza Filho proposed a subjective method for facial classification comprising five patterns: I, II, III, Long Face and Short Face. To investigate the accuracy of a subjective classification method of facial patterns applied to adults. A sample consisting of 52 adults was used for this study. Frontal and lateral view photographs were taken with subjects at rest position, including frontal smile. Lateral cephalometric radiographs were organized in a PowerPoint® presentation and submitted to 20 raters. Method performance was assessed by examining reproducibility with Kappa test and calculating accuracy, sensitivity and positive predictive values, for which 70% was set as critical value. The gold standard of the classification was personally set by the author of the method. Reproducibility was considered moderate (Kappa = 0.501); while accuracy, sensitivity and positive predictive values yielded similar results, but below 70%. The subjective method of facial classification employed in the present study still needs to have its morphological criteria improved in order to be used to discriminate the five facial patterns.

  20. Accuracy Validation of an Automated Method for Prostate Segmentation in Magnetic Resonance Imaging.

    Science.gov (United States)

    Shahedi, Maysam; Cool, Derek W; Bauman, Glenn S; Bastian-Jordan, Matthew; Fenster, Aaron; Ward, Aaron D

    2017-03-24

    Three-dimensional (3D) manual segmentation of the prostate on magnetic resonance imaging (MRI) is a laborious and time-consuming task that is subject to inter-observer variability. In this study, we developed a fully automatic segmentation algorithm for T2-weighted endorectal prostate MRI and evaluated its accuracy within different regions of interest using a set of complementary error metrics. Our dataset contained 42 T2-weighted endorectal MRI scans from prostate cancer patients. The prostate was manually segmented by one observer on all of the images and by two other observers on a subset of 10 images. The algorithm first coarsely localizes the prostate in the image using a template matching technique. Then, it defines the prostate surface using learned shape and appearance information from a set of training images. To evaluate the algorithm, we assessed the error metric values in the context of measured inter-observer variability and compared performance to that of our previously published semi-automatic approach. The automatic algorithm needed an average execution time of ∼60 s to segment the prostate in 3D. When compared to a single-observer reference standard, the automatic algorithm has an average mean absolute distance of 2.8 mm, Dice similarity coefficient of 82%, recall of 82%, precision of 84%, and volume difference of 0.5 cm³ in the mid-gland. Concordant with other studies, accuracy was highest in the mid-gland and lower in the apex and base. The loss of accuracy with respect to the semi-automatic algorithm was less than the measured inter-observer variability in manual segmentation for the same task.
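One of the error metrics named above, the Dice similarity coefficient, is simply twice the overlap of two binary masks divided by their total size. A minimal sketch on toy 1-D masks (real use would apply it voxel-wise to the 3-D segmentations):

```python
# Dice similarity coefficient between two binary masks, one of the
# metrics used to compare automatic and manual prostate segmentations.
def dice(mask_a, mask_b):
    """Masks are flat sequences of 0/1 voxels of equal length."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * intersection / (sum(mask_a) + sum(mask_b))

auto   = [1, 1, 1, 0, 0, 1, 0, 0]   # toy automatic segmentation
manual = [1, 1, 0, 0, 0, 1, 1, 0]   # toy manual reference
print(dice(auto, manual))  # → 0.75
```

A Dice of 1.0 means perfect overlap; the 82% reported for the mid-gland indicates substantial but imperfect agreement with the manual reference.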

  1. Therapy evaluation and diagnostic accuracy in neuroendocrine tumours: assessment of radiological methods

    Energy Technology Data Exchange (ETDEWEB)

    Elvin, A.

    1993-01-01

    The diagnostic accuracy of ultrasonically guided biopsy-gun biopsies was assessed in a group of 47 patients with suspected pancreatic carcinoma. A correct diagnosis was obtained in 44 of the 47 patients (94%). Twenty-five patients with known neuroendocrine tumour disease were biopsied with 1.2 mm and 0.9 mm biopsy-gun needles. The influence of treatment-related fibrosis was also evaluated. The overall diagnostic accuracy with the 0.9 mm needle was 69%, as compared to 92% with the 1.2 mm needle. In order to assess the diagnostic accuracy rate for radiologists with different experience of biopsy procedures, 175 cases of renal biopsy-gun biopsies were evaluated. No statistically significant difference was found between the different operators. The role of duplex Doppler ultrasound in monitoring interferon treatment-related changes in carcinoid metastases was evaluated. At present, duplex Doppler ultrasound does not seem to play a role in the evaluation of tumour therapy in carcinoid patients. Therapy response evaluation was performed with MR imaging in a group of 17 patients with neuroendocrine liver metastases. A significant difference was found between patients responding to and patients with failure of treatment in terms of tumour T1, contrast enhancement and signal intensity ratio. This indicates that MR investigation may be used in therapy monitoring of patients with neuroendocrine metastases. The neuroendocrine-differentiated colonic carcinoma cell line (LCC-18) was transplanted into 29 mice to establish a tumour/animal model that would allow the monitoring with MR imaging of changes induced by interferon therapy, and to evaluate whether the therapeutic response could be modulated by different interferon dosages. Interferon does not seem to have any prolonged anti-proliferative effect on the LCC-18 tumour cell line when transplanted to nude mice.

  2. Improving accuracy and capabilities of X-ray fluorescence method using intensity ratios

    Science.gov (United States)

    Garmay, Andrey V.; Oskolok, Kirill V.

    2017-04-01

    An X-ray fluorescence analysis algorithm is proposed which is based on the use of ratios of X-ray fluorescence line intensities. Such an analytical signal is more stable and leads to improved accuracy. Novel calibration equations are proposed which are suitable for analysis over a broad range of matrix compositions. To apply the algorithm to the analysis of samples containing a significant amount of undetectable elements, the use of the dependence of the Rayleigh-to-Compton intensity ratio on the total content of these elements is suggested. The technique's validity is shown by the analysis of standard steel samples, a model metal-oxide mixture and iron ore samples.

  3. Improving accuracy and capabilities of X-ray fluorescence method using intensity ratios

    Energy Technology Data Exchange (ETDEWEB)

    Garmay, Andrey V., E-mail: andrew-garmay@yandex.ru; Oskolok, Kirill V.

    2017-04-15

    An X-ray fluorescence analysis algorithm is proposed which is based on the use of ratios of X-ray fluorescence line intensities. Such an analytical signal is more stable and leads to improved accuracy. Novel calibration equations are proposed which are suitable for analysis over a broad range of matrix compositions. To apply the algorithm to the analysis of samples containing a significant amount of undetectable elements, the use of the dependence of the Rayleigh-to-Compton intensity ratio on the total content of these elements is suggested. The technique's validity is shown by the analysis of standard steel samples, a model metal-oxide mixture and iron ore samples.
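The idea of calibrating on an intensity ratio rather than a raw intensity can be sketched with ordinary least squares. This is only an illustration of the principle; the data, the element pair, and the simple linear form are all assumptions, not the paper's calibration equations.

```python
# Linear calibration on a line-intensity ratio (synthetic data; the
# paper proposes more elaborate calibration equations).
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    a = num / den
    return a, my - a * mx

# Synthetic standards: a hypothetical Fe/Ni line-intensity ratio
# versus known Fe content (%).
ratios   = [0.50, 1.00, 1.50, 2.00]
contents = [10.0, 20.0, 30.0, 40.0]
a, b = fit_line(ratios, contents)
print(round(a * 1.25 + b, 2))  # predicted content for ratio 1.25 → 25.0
```

The ratio cancels much of the instrumental drift that affects both lines equally, which is why it is the more stable analytical signal.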

  4. Application of the PM6 semi-empirical method to modeling proteins enhances docking accuracy of AutoDock.

    Science.gov (United States)

    Bikadi, Zsolt; Hazai, Eszter

    2009-09-11

    Molecular docking methods are commonly used for predicting binding modes and energies of ligands to proteins. For accurate complex geometry and binding energy estimation, an appropriate method for calculating partial charges is essential. The AutoDockTools software, the interface for preparing input files for AutoDock 4, one of the most widely used docking programs, utilizes the Gasteiger partial charge calculation method for both protein and ligand charge calculation. However, it has already been shown that more accurate partial charge calculation (and, as a consequence, more accurate docking) can be achieved by using quantum chemical methods. So far, quantum chemical partial charge calculation has been used routinely only for ligands in docking calculations. The newly developed Mozyme function of MOPAC2009 allows fast partial charge calculation of proteins by quantum mechanical semi-empirical methods. Thus, in the current study, the effect of semi-empirical quantum-mechanical partial charge calculation on docking accuracy could be investigated. The docking accuracy of AutoDock 4 using the original AutoDock scoring function was investigated on a set of 53 protein-ligand complexes using the Gasteiger and PM6 partial charge calculation methods. This has enabled us to compare the effect of the partial charge calculation method on docking accuracy utilizing the AutoDock 4 software. Our results showed that the docking accuracy in regard to complex geometry (a docking result is defined as accurate when the RMSD of the first-rank docking result is within 2 Å of the experimentally determined X-ray structure) significantly increased when partial charges of the ligands and proteins were calculated with the semi-empirical PM6 method. Out of the 53 complexes analyzed in the course of our study, the geometries of 42 complexes were accurately calculated using PM6 partial charges, while the use of Gasteiger charges resulted in only 28 accurate geometries.
The binding affinity estimation was

  5. Accuracy of Rhenium-188 SPECT/CT activity quantification for applications in radionuclide therapy using clinical reconstruction methods

    Science.gov (United States)

    Esquinas, Pedro L.; Uribe, Carlos F.; Gonzalez, M.; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O.; Celler, Anna

    2017-08-01

    The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (cold water) and water with activity (hot water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors …) was obtained for … activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte

  6. High-accuracy CFD prediction methods for fluid and structure temperature fluctuations at T-junction for thermal fatigue evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-07-15

    Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. Conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected to be a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of the proposed numerical methods for simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used as the large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of the convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near-wall resolution (NWR), i.e. by allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of the fluid temperature fluctuation intensity and the range of the structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of the power spectral density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are

  7. A two-zone method with an enhanced accuracy for a numerical solution of the diffusion equation

    Science.gov (United States)

    Cheon, Jin-Sik; Koo, Yang-Hyun; Lee, Byung-Ho; Oh, Je-Yong; Sohn, Dong-Seong

    2006-12-01

    A variational principle is applied to the diffusion equation to numerically obtain the fission gas release from a spherical grain. The two-zone method, originally proposed by Matthews and Wood, is modified to overcome its insufficient accuracy at low release fractions. The results of the variational approaches are examined by observing the gas concentration along the grain radius. At the early stage, the concentration near the grain boundary is higher than at the inner points of the grain for the two-zone method as well as for a finite element analysis with as many as 10 elements. The accuracy of the two-zone method is considerably enhanced by relocating the nodal points of the two zones. The trial functions are derived as a function of the released fraction. During the calculations, the number of degrees of freedom needs to be reduced to guarantee physically admissible concentration profiles. Numerical verifications are performed extensively. With a computational time comparable to that of the algorithm of Forsberg and Massih, the present method provides a solution with reasonable accuracy over the whole range of the released fraction.
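The underlying problem is diffusion in a sphere with the grain boundary acting as a perfect sink, and the released fraction is one minus the volume-weighted retained gas. The coarse explicit finite-difference sketch below is not the paper's two-zone variational scheme; it is the brute-force grid solution such reduced-order methods aim to replace, shown for the same equation (all parameters nondimensional and illustrative).

```python
# Explicit finite-difference solution of dc/dt = D * (c'' + (2/r) c')
# in a spherical grain with a perfect-sink boundary (illustrative only;
# the paper's two-zone method is a reduced-order alternative to this).
def released_fraction(D=1.0, radius=1.0, n=50, dt=1e-5, steps=2000):
    dr = radius / n
    c = [1.0] * (n + 1)          # uniform initial gas concentration
    c[n] = 0.0                   # grain boundary acts as a perfect sink
    for _ in range(steps):
        new = c[:]
        for i in range(1, n):
            r = i * dr
            d2 = (c[i + 1] - 2 * c[i] + c[i - 1]) / dr ** 2
            d1 = (c[i + 1] - c[i - 1]) / (2 * dr)
            new[i] = c[i] + dt * D * (d2 + 2.0 / r * d1)
        new[0] = new[1]          # symmetry condition at the grain centre
        c = new
    # volume-weighted retained fraction: sum c_i r_i^2 / sum r_i^2
    retained = sum(ci * (i * dr) ** 2 for i, ci in enumerate(c))
    total = sum((i * dr) ** 2 for i in range(n + 1))
    return 1.0 - retained / total

print(round(released_fraction(), 3))
```

The explicit scheme requires dt below dr²/(2D) for stability, which is why reduced-order methods like the two-zone approach are attractive inside fuel performance codes that must solve this at every time step for many grains.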

  8. Reconstructing Winter North Pacific Sea-Level Pressure Anomalies Over the Past Three Centuries Using a New Calibration Method with the Eclipse and Mt. Logan Ice Cores

    Science.gov (United States)

    Kelsey, E. P.; Wake, C. P.; Osterberg, E. C.

    2012-12-01

    A deeper understanding of the behavior of North Pacific extratropical cyclones and anticyclones prior to the instrumental era is needed to advance our understanding of North Pacific climate variability. To help achieve this objective, we develop and use a new nonlinear ice core calibration procedure with the Eclipse (3017 m a.s.l.) and Mt. Logan (5400 m a.s.l.) ice core records from Yukon, Canada to isolate the ranges of ice core values that are consistently associated with North Pacific wintertime sea-level pressure (SLP) anomalies. Over the calibration period (1872-2001), each ice core record is ranked and divided into 10 groups of 13 years. Then for each group, the frequency of positive and negative SLP anomalies at each grid point is contoured and the composite mean SLP anomaly values are shaded. These plots elucidate areas where statistically significant SLP anomalies occur frequently in association with groups of ice core values. This new calibration procedure shows that the lowest and the two highest groups of Mt. Logan annual [Na+] are sensitive to SLP anomalies in the central and eastern Pacific and the second lowest [Na+] group is sensitive to western Pacific SLP anomalies. The highest and lowest Eclipse cold-season accumulation groups are most sensitive to SLP anomalies more distant in the western and central Pacific. This result is surprising in light of stable isotope studies suggesting a more distant moisture source for Mt. Logan. A reconstruction using these calibrated records indicates the Aleutian Low was predominantly weaker than average between 1699-1871. Our results highlight that having these geographically close ice core records is important to developing a deeper understanding of North Pacific climate variability.

  9. Energy levels of interacting curved nanomagnets in a frustrated geometry: increasing accuracy when using finite difference methods.

    Science.gov (United States)

    Riahi, H; Montaigne, F; Rougemaille, N; Canals, B; Lacour, D; Hehn, M

    2013-07-24

    The accuracy of finite difference methods is related to the mesh choice and cell size. Concerning the micromagnetism of nano-objects, we show here that discretization issues can drastically affect the symmetry of the problem and therefore the resulting computed properties of lattices of interacting curved nanomagnets. In this paper, we detail these effects for the multi-axis kagome lattice. Using the OOMMF finite difference method, we propose an alternative way of discretizing the nanomagnet shape via a variable moment per cell scheme. This method is shown to be efficient in reducing discretization effects.

  10. The analysis of the accuracy of the wheel alignment inspection method on the side-slip plate stand

    Science.gov (United States)

    Gajek, A.; Strzępek, P.

    2016-09-01

    The article presents the theoretical basis and the results of the examination of the wheel alignment inspection method on the side-slip plate stand. This is an obligatory test during the periodic technical inspection of a vehicle. The measurement is executed in dynamic conditions. The dependence between the lateral displacement of the plate and the toe-in of the tested wheels is shown: if the diameter of the wheel rim is known, the value of the toe-in can be calculated. A comparison of toe-in measurements on the plate stand and on a four-head wheel alignment device has been carried out. The accuracy of the measurements and the influence of the test conditions on the plate stand (the way of passing over the plate) were estimated. Conclusions about the accuracy of this method are presented.
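    The plate-stand geometry described above can be sketched numerically. This is a hedged illustration: the function names, the m/km reading convention, and the conversion of the slip angle to a rim-diameter toe-in are assumptions, not the authors' formulas.

    ```python
    import math

    def slip_angle(sideslip_m_per_km):
        # Side-slip stands report the lateral plate displacement per unit
        # distance rolled, conventionally in m/km; the slip (toe) angle
        # in radians follows from that ratio.
        return math.atan(sideslip_m_per_km / 1000.0)

    def toe_in_mm(angle_rad, rim_diameter_mm):
        # Linear toe-in measured across a rim of known diameter, which is
        # why the rim diameter is needed to convert the angular reading.
        return rim_diameter_mm * math.sin(angle_rad)

    theta = slip_angle(5.0)          # a 5 m/km stand reading
    toe = toe_in_mm(theta, 380.0)    # a 15-inch rim is roughly 380 mm
    ```

    At these small angles the result is essentially linear in the stand reading, which is why rim diameter dominates the conversion.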

  11. IMPLEMENTATION OF SEMI-EMPIRICAL MODELS TO ENHANCE THE ACCURACY OF PANEL METHODS FOR DRAG PREDICTION AT SUPERSONIC SPEEDS

    Directory of Open Access Journals (Sweden)

    Abdulkareem Shafiq Mahdi Al-Obaidi

    2011-12-01

    Full Text Available This paper introduces an attempt to enhance the accuracy of panel methods. A low-order panel method is selected and coupled with semi-empirical methods to enhance the accuracy of drag prediction for flying bodies at supersonic speeds. The semi-empirical methods improve the accuracy of drag prediction by mathematically modelling viscosity, base drag, and drag due to wing-body interference. Both methods were implemented in a computer program and validated against experimental and analytical results. The comparisons show that a considerable improvement has been achieved for the selected panel method in predicting drag coefficients. In general, accuracy within an average value of -4.4% was obtained for the enhanced panel method. Such accuracy can be considered acceptable for the preliminary design stages of supersonic flying bodies such as projectiles and missiles. The developed computer program gives satisfactory results as long as the considered configurations are slender and the angles of attack are small (below the stall angle).

  12. Accuracy of binding mode prediction with a cascadic stochastic tunneling method.

    Science.gov (United States)

    Fischer, Bernhard; Basili, Serena; Merlitz, Holger; Wenzel, Wolfgang

    2007-07-01

    We investigate the accuracy of the binding modes predicted for 83 complexes of the high-resolution subset of the ASTEX/CCDC receptor-ligand database using the atomistic FlexScreen approach with a simple forcefield-based scoring function. The median RMS deviation between the experimental and predicted binding modes was just 0.83 Å. Over 80% of the ligands dock within 2 Å of the experimental binding mode, and for 60 complexes the docking protocol locates the correct binding mode in all of ten independent simulations. Most docking failures arise because (a) the experimental structure clashes in our forcefield and is thus unattainable in the docking process, or (b) the ligand is stabilized by crystal water.

  13. Hyperbolic Method for Dispersive PDEs: Same High-Order of Accuracy for Solution, Gradient, and Hessian

    Science.gov (United States)

    Mazaheri, Alireza; Ricchiuto, Mario; Nishikawa, Hiroaki

    2016-01-01

    In this paper, we introduce a new hyperbolic first-order system for general dispersive partial differential equations (PDEs). We then extend the proposed system to general advection-diffusion-dispersion PDEs. We apply the fourth-order RD scheme of Ref. 1 to the proposed hyperbolic system, and solve time-dependent dispersive equations, including the classical two-soliton KdV and a dispersive shock case. We demonstrate that the predicted results, including the gradient and Hessian (second derivative), are in very good agreement with the exact solutions. We then show that the RD scheme applied to the proposed system accurately captures dispersive shocks without numerical oscillations. We also verify that the solution, gradient, and Hessian are predicted with equal order of accuracy.

  14. Mt. Vesuvius, Italy

    Science.gov (United States)

    2001-01-01

    This ASTER image of Mt. Vesuvius, Italy was acquired September 26, 2000, and covers an area of 36 by 45 km. Vesuvius overlooks the city of Naples and the Bay of Naples in central Italy. In 79 AD, Vesuvius erupted cataclysmically, burying all of the surrounding cities with up to 30 m of ash. The towns of Pompeii and Herculaneum were rediscovered in the 18th century, and excavated in the 20th century. They provide a snapshot of Roman life from 2000 years ago: perfectly preserved are wooden objects, food items, and the casts of hundreds of victims. Vesuvius is intensively monitored for potential signs of unrest that could signal the beginning of another eruption. The image is centered at 40.8 degrees north latitude, 14.4 degrees east longitude. The U.S. science team is located at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The Terra mission is part of NASA's Science Mission Directorate.

  15. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  16. On the efficiency and accuracy of interpolation methods for spectral codes

    NARCIS (Netherlands)

    Hinsberg, van M.A.T.; Thije Boonkkamp, ten J.H.M.; Toschi, F.; Clercx, H.J.H.

    2012-01-01

    In this paper a general theory for interpolation methods on a rectangular grid is introduced. By the use of this theory an efficient B-spline-based interpolation method for spectral codes is presented. The theory links the order of the interpolation method with its spectral properties. In this way m
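    For context, evaluating a uniform cubic B-spline on a regular grid looks like the following. This is a sketch, not the paper's method: using the raw samples directly as control points gives a quasi-interpolation (exact interpolation requires a prefiltering step, which is where the spectral analysis in such papers comes in), but cubic B-splines do reproduce linear data exactly, which the usage below exploits.

    ```python
    def cubic_bspline_eval(samples, x):
        # Evaluate a uniform cubic B-spline whose control points are
        # samples[0..n-1] placed at integer positions; valid for
        # 1 <= x <= len(samples) - 2.
        k = int(x)          # left index of the central interval
        t = x - k           # local coordinate in [0, 1)
        w = ((1 - t) ** 3 / 6.0,
             (3 * t**3 - 6 * t**2 + 4) / 6.0,
             (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
             t ** 3 / 6.0)  # the four cubic B-spline basis weights
        return sum(wi * samples[k - 1 + i] for i, wi in enumerate(w))

    # Linear data is reproduced exactly (linear precision of the basis).
    samples = [2.0 * k + 1.0 for k in range(10)]
    value = cubic_bspline_eval(samples, 4.3)   # f(4.3) = 9.6
    ```

    The four-point stencil and smooth weights are what give B-spline schemes their favorable spectral properties compared with Lagrange interpolation of the same width.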

  17. FDG-PET/CT characterization of adrenal nodules: diagnostic accuracy and interreader agreement using quantitative and qualitative methods.

    Science.gov (United States)

    Evans, Paul D; Miller, Chad M; Marin, Daniele; Stinnett, Sandra S; Wong, Terence Z; Paulson, Erik K; Ho, Lisa M

    2013-08-01

    To determine interreader agreement and diagnostic accuracy across varying levels of reader experience using qualitative and quantitative methods of evaluating adrenal nodules on (18)F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT). 132 adrenal nodules (96 adenomas, 36 metastases) were retrospectively identified in 105 patients (49 men and 56 women; mean age 66 years; age range 45-85 years) with a history of lung cancer who underwent FDG-PET/CT. For each nodule, three readers independently performed one qualitative and two quantitative measurements: visual assessment, maximum standardized uptake value (SUVmax), and standardized uptake ratio (SUVratio). Interreader agreement was calculated using percent agreement with the κ statistic for the qualitative analysis and the intraclass correlation coefficient (ICC) for the quantitative analyses. Accuracy, sensitivity, and specificity for distinguishing benign from malignant adrenal nodules were calculated for each method. Percent agreement between readers for visual (qualitative) assessment was 92% to 96%, and the κ statistic was 0.79 to 0.90 (95% confidence limits 0.66-0.99). The ICC for SUVmax was 92% to 99% (95% CL 0.8-1.0), and the ICC for SUVratio was 89% to 99% (95% CL 0.74-0.99). For the diagnosis of malignancy, mean sensitivity and specificity for visual assessment were 80% and 97%, respectively. Mean sensitivity and specificity for SUVmax were 91% and 81%, respectively; for SUVratio, 90% and 80%. Mean diagnostic accuracy was 93%, 83%, and 84% for visual assessment, SUVmax, and SUVratio, respectively. Excellent interreader agreement is seen for quantitative and qualitative methods of distinguishing benign from malignant adrenal nodules. Qualitative analysis demonstrated higher accuracy but lower sensitivity compared with quantitative analysis. Published by Elsevier Inc.
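    The agreement statistics quoted above can be made concrete. A minimal Cohen's-kappa computation for a two-reader, two-category (benign/malignant) table might look like this (an illustration with made-up counts, not the paper's data):

    ```python
    def percent_agreement(confusion):
        # confusion[i][j]: cases reader A rated category i and reader B rated j.
        total = sum(sum(row) for row in confusion)
        return sum(confusion[i][i] for i in range(len(confusion))) / total

    def cohens_kappa(confusion):
        # Chance-corrected agreement: (po - pe) / (1 - pe), where pe is
        # the agreement expected from the readers' marginal rates alone.
        n = len(confusion)
        total = sum(sum(row) for row in confusion)
        po = percent_agreement(confusion)
        pe = sum((sum(confusion[i]) / total) *
                 (sum(confusion[j][i] for j in range(n)) / total)
                 for i in range(n))
        return (po - pe) / (1 - pe)

    table = [[80, 10],    # hypothetical benign/malignant cross-counts
             [10, 100]]   # for two readers
    ```

    Here 90% raw agreement corrects down to kappa near 0.80, which is why papers such as this one report both numbers.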

  18. Accuracy of Demirjian′s 8 teeth method for age prediction in South Indian children: A comparative study

    Directory of Open Access Journals (Sweden)

    Rezwana Begum Mohammed

    2015-01-01

    Full Text Available Introduction: Demirjian's method of tooth development is the most commonly used method to assess age in individuals with emerging teeth. However, its application to numerous populations has resulted in wide variations in age estimates and consequent suggestions for the method's adaptation to the local sample. The original Demirjian's method utilized seven mandibular teeth, to which the third molar was recently added so that the method can be applied to a wider age group. Furthermore, the revised method developed regression formulas for assessing age. As these formulas resulted in underestimation in Indians, India-specific regression formulas were developed recently. The purpose of this cross-sectional study was to evaluate the accuracy and applicability of the original regression formulas (Chaillet and Demirjian 2004) and the India-specific regression formulas (Acharya 2010) using Demirjian's 8 teeth method in South Indian children aged 9-20 years. Methods: The study consisted of 660 randomly selected subjects (330 males and 330 females) aged from 9 to 20 years, divided into 11 groups according to age. Demirjian's 8 teeth method was used for staging of teeth. Results: Demirjian's method underestimated the dental age (DA) by 1.66 years for boys, 1.55 years for girls, and 1.61 years in total. Acharya's method overestimated DA by 0.21 years for boys, 0.85 years for girls, and 0.53 years in total. The absolute accuracy was better for Acharya's method than for Demirjian's method. Conclusion: This study concluded that both the Demirjian and Indian regression formulas were reliable in assessing age, making Demirjian's 8 teeth method applicable to South Indians.

  19. Accuracy verification of a simple local three-dimensional displacement measurement method of DIC with two images coordinates

    Indian Academy of Sciences (India)

    MING-HSIANG SHIH; SHIH-HENG TUNG; HAN-WEI HSIAO; WEN-PEI SUNG

    2016-04-01

    There are two methods for measuring three-dimensional displacement with the three-dimensional digital image correlation method. One is to measure the spatial coordinates of measuring points by analyzing the images; the displacement vectors of these points can then be calculated from the spatial coordinates obtained at different stages. The other is to calibrate the parameters for individual measuring points locally, so that the local displacements of these points can be measured directly. This study proposes a simple local three-dimensional displacement measurement method. Without any complicated distortion correction process, this method can be used to measure small displacements in three-dimensional space through a simple calibration process. A laboratory experiment and a field experiment were carried out to verify the accuracy of the proposed method. Laboratory test errors of the one-dimensional experiment are similar to the accuracy of the XYZ table; the error in the Z-direction is only 0.0025% of the object distance. The measurement error of the laboratory test is about 0.0033% of the object distance for the local three-dimensional displacement measurement test. Field test results show that the in-plane displacement error is only 0.12 mm and the out-of-plane error is 1.1 mm for a 20 m × 30 m measuring range; the out-of-plane error is only about 10 ppm of the object distance. These results show that the proposed method can achieve very high accuracy for small displacements in both laboratory and field tests.

  20. ACCURACY AND PRECISION OF A METHOD TO STUDY KINEMATICS OF THE TEMPOROMANDIBULAR JOINT: COMBINATION OF MOTION DATA AND CT IMAGING

    OpenAIRE

    Baltali, Evre; Zhao, Kristin D.; Koff, Matthew F.; Keller, Eugene E.; An, Kai-Nan

    2008-01-01

    The purpose of the study was to test the precision and accuracy of a method used to track selected landmarks during motion of the temporomandibular joint (TMJ). A precision phantom device was constructed and relative motions between two rigid bodies on the phantom device were measured using optoelectronic (OE) and electromagnetic (EM) motion tracking devices. The motion recordings were also combined with a 3D CT image for each type of motion tracking system (EM+CT and OE+CT) to mimic methods ...

  1. Cluster model for the ionic product of water: accuracy and limitations of common density functional methods.

    Science.gov (United States)

    Svozil, Daniel; Jungwirth, Pavel

    2006-07-27

    In the present study, the performance of six popular density functionals (B3LYP, PBE0, BLYP, BP86, PBE, and SVWN) for the description of the autoionization process in the water octamer was studied. As a benchmark, MP2 energies with complete basis set limit extrapolation and a CCSD(T) correction were used. At this level, the autoionized structure lies 28.5 kcal.mol(-1) above the neutral water octamer. Accounting for zero-point energy lowers this value by 3.0 kcal.mol(-1). The transition state of the proton transfer reaction, lying only 0.7 kcal.mol(-1) above the energy of the ionized system, was identified at the MP2/aug-cc-pVDZ level of theory. Different density functionals describe the reactant and product with varying accuracy, while they all fail to characterize the transition state. We find improved results with hybrid functionals compared to the gradient-corrected ones. In particular, B3LYP describes the reaction energetics within 2.5 kcal.mol(-1) of the benchmark value. Therefore, this functional is suggested to be used preferably both for Car-Parrinello molecular dynamics and for quantum mechanics/molecular mechanics (QM/MM) simulations of the autoionization of water.

  2. High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Jong-Seok Kim

    2016-01-01

    Full Text Available The zero-potential scanning circuit is widely used as the read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type uses digital buffers as the row driver. It can be easily implemented because of its simple structure, but we found that it can cause a large read-out error originating from the on-resistance of the digital buffers. The other type is a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array, which severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high-complexity problems found in those previous circuits, we propose a new row driver that uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show that the maximum error is only 0.1%, remarkably reduced from the 30.7% of the previous counterpart.

  3. The effect of missing marker genotypes on the accuracy of gene-assisted breeding value estimation: a comparison of methods.

    Science.gov (United States)

    Mulder, H A; Meuwissen, T H E; Calus, M P L; Veerkamp, R F

    2010-01-01

    In livestock populations, missing genotypes on a large proportion of the animals are a major problem when implementing gene-assisted breeding value estimation for genes with known effects. The objective of this study was to compare, by Monte Carlo simulation, different methods of dealing with missing genotypes in terms of the accuracy of gene-assisted breeding value estimation for identified bi-allelic genes. A nested full-sib half-sib structure was simulated with a mixed inheritance model comprising one bi-allelic quantitative trait locus (QTL) and a polygenic effect due to an infinite number of polygenes. The effect of the QTL was included in gene-assisted BLUP either by random regression on predicted gene content, i.e. the number of positive alleles, or by including haplotype effects in the model with an inverse IBD matrix to account for identity-by-descent relationships between haplotypes using linkage analysis information (IBD-LA). The inverse IBD matrix was constructed using segregation indicator probabilities obtained from multiple-marker iterative peeling. Gene contents for unknown genotypes were predicted using either multiple-marker iterative peeling or mixed model methodology. For both methods, gene-assisted breeding value estimation increased the accuracy of the total estimated breeding value (EBV) by 0% to 22% for genotyped animals in comparison with conventional breeding value estimation. For animals that were not genotyped, the increase in accuracy was much lower (0% to 5%), but still substantial when the heritability was 0.1 and the QTL explained at least 15% of the genetic variance. Regression on predicted gene content yielded higher accuracies than IBD-LA. Allele substitution effects were, however, overestimated, especially when only sires and males in the last generation were genotyped. For juveniles without phenotypic records and for traits measured only on females, the superiority of regression on gene content over IBD-LA was larger than when all animals had phenotypes.

  4. A simple method for measuring the superhydrophobic contact angle with high accuracy

    Science.gov (United States)

    Hung, Yi-Lin; Chang, Yao-Yuan; Wang, Meng-Jiy; Lin, Shi-Yow

    2010-06-01

    A modified selected-plane method for contact angle (θ) measurement is proposed in this study that avoids both the difficulty of finding the real contact point and the image-distortion effects adjacent to the contact point. This method is particularly suitable for superhydrophobic surfaces. The sessile-drop method coupled with a tangent line is the most popular method in the literature for finding the contact angle, but it entails unavoidable errors in determining the air-solid base line due to surface smoothness and substrate tilting. In addition, the tangent-line technique requires finding the actual contact point. The measurement error due to the base-line problem becomes more pronounced for superhydrophobic surfaces: for a fixed base-line error, a more superhydrophobic surface produces a larger deviation in θ. The proposed modified selected-plane method requires only four data points (the droplet apex, the droplet height, and two interfacial loci close to the air-solid interface), avoiding the contact-point problem of the sessile-drop-tangent method and the trouble of the sessile-drop-fitting method, which must best-fit numerous edge points to the theoretical profile. A careful error analysis was performed, and a user-friendly program is provided in this work. The method yields an accurate θ measurement and is much improved over the classical selected-plane and sessile-drop-tangent methods. The θ difference between this method and the sessile-drop-fitting method was found to be less than three degrees.
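    For a sense of scale, the simplest geometric estimate from a drop's height and base radius is the spherical-cap formula θ = 2·arctan(h/r). This ignores gravitational flattening and is not the authors' selected-plane construction, but it shows why base-line (and hence height) errors matter more at high contact angles:

    ```python
    import math

    def cap_angle_deg(height, base_radius):
        # Spherical-cap contact angle from drop apex height h and
        # contact (base) radius r: theta = 2 * atan(h / r).
        return math.degrees(2.0 * math.atan2(height, base_radius))

    # A hemispherical drop (h == r) sits at exactly 90 degrees.
    theta_hemi = cap_angle_deg(1.0, 1.0)
    # A superhydrophobic-looking drop: h much larger than r.
    theta_shp = cap_angle_deg(5.67, 1.0)
    ```

    Near 160 degrees the base radius is small, so a fixed pixel error in locating the base line produces a much larger angular error than it would at 90 degrees.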

  5. Evaluation of a finite-element reciprocity method for epileptic EEG source localization: Accuracy, computational complexity and noise robustness

    DEFF Research Database (Denmark)

    Shirvany, Yazdan; Rubæk, Tonny; Edelvik, Fredrik

    2013-01-01

    The aim of this paper is to evaluate the performance of an EEG source localization method that combines a finite element method (FEM) and the reciprocity theorem. The reciprocity method is applied to solve the forward problem in a four-layer spherical head model for a large number of test dipoles...... noise and electrode misplacement. The results show approximately 3% relative error between potentials calculated numerically by the reciprocity theorem and the analytical solutions. When adding EEG noise with SNR between 5 and 10, the mean localization error is approximately 4.3 mm. For the case...... with 10 mm electrode misplacement the localization error is 4.8 mm. The reciprocity EEG source localization speeds up the solution of the inverse problem by more than three orders of magnitude compared to state-of-the-art methods. The reciprocity method has high accuracy for modeling the dipole...

  6. Higher accuracy analytical approximations to a nonlinear oscillator with discontinuity by He's homotopy perturbation method

    Energy Technology Data Exchange (ETDEWEB)

    Belendez, A. [Departamento de Fisica, Ingenieria de Sistemas y Teoria de la Senal, Universidad de Alicante, Apartado 99, E-03080 Alicante (Spain)], E-mail: a.belendez@ua.es; Hernandez, A.; Belendez, T.; Neipp, C.; Marquez, A. [Departamento de Fisica, Ingenieria de Sistemas y Teoria de la Senal, Universidad de Alicante, Apartado 99, E-03080 Alicante (Spain)

    2008-03-17

    He's homotopy perturbation method is used to calculate higher-order approximate periodic solutions of a nonlinear oscillator with discontinuity for which the elastic force term is proportional to sgn(x). We find He's homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. Only one iteration leads to high accuracy of the solutions with a maximal relative error for the approximate period of less than 1.56% for all values of oscillation amplitude, while this relative error is 0.30% for the second iteration and as low as 0.057% when the third-order approximation is considered. Comparison of the result obtained using this method with those obtained by different harmonic balance methods reveals that He's homotopy perturbation method is very effective and convenient.
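    The oscillator studied above, ẍ + sgn(x) = 0 (taking a unit force constant), has a closed-form period T = 4√(2A) for amplitude A, because the piecewise-constant acceleration gives parabolic arcs. That makes the reported relative errors easy to check; the sketch below compares a brute-force integration against the exact value (the integrator and step size are our choices, not the paper's):

    ```python
    import math

    def quarter_period(amplitude, dt=1e-5):
        # Semi-implicit Euler integration of x'' = -sgn(x), released from
        # rest at x = amplitude; the time to reach x = 0 is a quarter period.
        x, v, t = amplitude, 0.0, 0.0
        while x > 0.0:
            v -= dt          # acceleration is -1 while x > 0
            x += v * dt
            t += dt
        return t

    A = 2.0
    T_numeric = 4.0 * quarter_period(A)
    T_exact = 4.0 * math.sqrt(2.0 * A)   # closed-form period for amplitude A
    ```

    Against this exact period one can judge approximation errors such as the 1.56%, 0.30%, and 0.057% figures quoted for successive iterations.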

  7. Adjoined Piecewise Linear Approximations (APLAs) for Equating: Accuracy Evaluations of a Postsmoothing Equating Method

    Science.gov (United States)

    Moses, Tim

    2013-01-01

    The purpose of this study was to evaluate the use of adjoined and piecewise linear approximations (APLAs) of raw equipercentile equating functions as a postsmoothing equating method. APLAs are less familiar than other postsmoothing equating methods (i.e., cubic splines), but their use has been described in historical equating practices of…

  8. A Simple HPLC-DAD Method for the Analysis of Melamine in Protein Supplements: Validation Using the Accuracy Profiles

    Directory of Open Access Journals (Sweden)

    Domenico Montesano

    2013-01-01

    Full Text Available The study presents a fully validated, simple high-performance liquid chromatography method with diode array detection (HPLC-DAD), able to accurately determine melamine fraudulently added to protein supplements, which are commonly used by healthy adults to enhance exercise or sport performance. The validation strategy was intentionally oriented towards routine use and the reliability of the method rather than extreme performance. For this reason, validation by accuracy profile, including estimation of uncertainty, was chosen. This procedure, based on the concept of total error (bias + standard deviation), clearly showed that the method was able to determine melamine over the range 0.05-3.0 mg kg−1, selected by taking into account the maximum residue levels (MRLs) proposed by European legislation to distinguish between the unavoidable background presence of melamine and unacceptable adulteration. The accuracy profile procedure established that at least 95% of future results obtained with the proposed method would fall within the ±15% acceptance limits over the whole defined concentration range.
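    The total-error logic of an accuracy profile can be sketched as follows. This is a simplification: a real β-expectation tolerance interval uses a Student-based coverage factor per concentration level, not the fixed k = 2 used here, and the replicate data are made up.

    ```python
    import statistics

    def tolerance_interval_percent(measured, true_value, k=2.0):
        # Express each result as a relative error (%), then form the
        # bias +/- k*SD band that an accuracy profile compares against
        # acceptance limits such as +/-15%.
        rel = [100.0 * (m - true_value) / true_value for m in measured]
        bias = statistics.mean(rel)
        sd = statistics.stdev(rel)
        return bias - k * sd, bias + k * sd

    # Six replicates at a hypothetical 1.0 mg/kg spike level.
    results = [0.97, 1.02, 0.99, 1.04, 0.98, 1.01]
    lo, hi = tolerance_interval_percent(results, 1.0)
    inside = (-15.0 < lo) and (hi < 15.0)   # does the band fit the limits?
    ```

    A method is accepted over the range where this band stays inside the acceptance limits at every validated concentration level.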

  9. Accuracy of age estimation methods from orthopantomograph in forensic odontology: a comparative study.

    Science.gov (United States)

    Khorate, Manisha M; Dinkar, A D; Ahmed, Junaid

    2014-01-01

    Changes related to chronological age are seen in both hard and soft tissues. A number of methods for age estimation have been proposed, which can be classified into four categories: clinical, radiological, histological, and chemical analysis. In forensic odontology, age estimation based on tooth development is a universally accepted method. Panoramic radiographs of 500 healthy Goan Indian children (250 boys and 250 girls) aged between 4 and 22.1 years were selected. The modified Demirjian's method (1973/2004), the Acharya AB formula (2011), the Dr Ajit D. Dinkar (1984) regression equation, and the Foti and coworkers (2003) formulas (clinical and radiological) were applied for age estimation. The results of our study show that the Dr Ajit D. Dinkar method is the most accurate, followed by the Acharya India-specific formula. Furthermore, by applying all these methods to one regional population, we have attempted to identify the dental age estimation methodology best suited for the Goan Indian population.

  10. The accuracy of parameters determined with the core-sampling method application to Voronoi tessellations

    CERN Document Server

    Doroshkevich, A G; Madsen, S; Doroshkevich, Andrei G.; Gottloeber, Stefan; Madsen, Soeren

    1996-01-01

    The large-scale matter distribution represents a complex network of structure elements such as voids, clusters, filaments, and sheets. This network is spanned by a point distribution. The global properties of the point process can be measured by different statistical methods, which, however, do not directly describe the structure elements. The morphology of structure elements is an important property of the point distribution. Here we apply the core-sampling method to various Voronoi tessellations. Using the core-sampling method, we identify one- and two-dimensional structure elements (filaments and sheets) in these Voronoi tessellations and reconstruct their mean separation along random straight lines. We compare the results of the core-sampling method with the a priori known structure elements of the Voronoi tessellations under consideration and find good agreement between the expected and found structure parameters, even in the presence of substantial noise. We conclude that the core-sampling method is a po...

  11. First-principle modelling of forsterite surface properties: Accuracy of methods and basis sets.

    Science.gov (United States)

    Demichelis, Raffaella; Bruno, Marco; Massaro, Francesco R; Prencipe, Mauro; De La Pierre, Marco; Nestola, Fabrizio

    2015-07-15

    The seven main crystal surfaces of forsterite (Mg2SiO4) were modeled using various Gaussian-type basis sets and several formulations of the exchange-correlation functional within density functional theory (DFT). The recently developed pob-TZVP basis set provides the best results for all properties that are strongly dependent on the accuracy of the wavefunction. Convergence of the structure and of the basis set superposition error-corrected surface energy can be reached also with poorer basis sets. The effect of adopting different DFT functionals was assessed. All functionals give the same stability order for the various surfaces. Surfaces do not exhibit any major structural differences when optimized with different functionals, except for higher-energy orientations where major rearrangements occur around the Mg sites at the surface or subsurface. When dispersion is not accounted for, all functionals provide similar surface energies. The inclusion of empirical dispersion corrections raises the energy of all surfaces by a nearly systematic value proportional to the scaling factor s of the dispersion formulation. An estimate of the surface energy is provided by adopting C6 coefficients more suitable than the standard ones for describing O-O interactions in minerals. A 2 × 2 supercell of the most stable surface (010) was optimized. No surface reconstruction was observed. The resulting structure and surface energy show no difference with respect to those obtained when using the primitive cell. This result validates the (010) surface model adopted here, which will serve as a reference for future studies on the adsorption and reactivity of water and carbon dioxide at this interface.

  12. A Software for Space Analysis and Comparison of the Accuracy of Tooth Measurements by Digital and Manual Methods

    Directory of Open Access Journals (Sweden)

    Roeinpeikar SMM.

    2011-08-01

    Full Text Available Statement of Problem: Several methods have been presented for predicting the mesiodistal width of unerupted canines and premolars. Nowadays, the application of digital methods is suggested for dental analysis in orthodontics. Purpose: The aim of this study was to design software for space analysis and to compare the accuracy of tooth measurements by digital and manual methods in an Iranian population. Material and Method: Software was designed using the Delphi and C++ programming languages. After insertion of two-dimensional scanned images of dental casts, the software can predict the mesiodistal width of unerupted canines and premolars using 12-variable regression equations based on the incisors and first molars. Using two-dimensional images of 125 dental casts in permanent dentition (75 females and 50 males), the prediction accuracy of the regression equations was investigated. Using two-dimensional images of dental casts from 50 patients with mixed dentition, the accuracy of dental measurements through the designed software was evaluated. Moreover, the duration of manual and digital measurements was evaluated. Data were analyzed in SPSS, version 17, using the paired-sample t-test to compare the manual and digital measurements and to evaluate interobserver and intraobserver errors. Results: Prediction of the width of the canines and premolars by the designed software was not significantly different from manual measurement of those teeth on dental casts with a digital caliper (p > 0.05). There were no significant differences between manual and digital measurements of the mesiodistal width of the teeth (p > 0.05). Also, there were no significant differences between intraobserver and interobserver measurements, or in the speed of measurement between the digital and manual methods. However, the duration and speed of space analysis with the two methods were significantly different. Conclusion: The designed software has a good accuracy in

  13. Accuracy, resolution, and computational complexity of a discontinuous Galerkin finite element method

    NARCIS (Netherlands)

    Ven, van der H.; Vegt, van der J.J.W.; Cockburn, B.; Karniadakis, G.E.; Shu, C.-W.

    2000-01-01

    This series contains monographs of lecture notes type, lecture course material, and high-quality proceedings on topics described by the term "computational science and engineering". This includes theoretical aspects of scientific computing such as mathematical modeling, optimization methods, discret

  14. Errors incurred in profile reconstruction and methods for increasing inversion accuracies for occultation type measurements

    Science.gov (United States)

    Gross, S. H.; Pirraglia, J. A.

    1972-01-01

    A method for augmenting the occultation experiment is described for slightly refractive media. This method, which permits separation of the components of the gradient of refractivity, appears applicable to most of the planets for a major portion of their atmospheres and ionospheres. The analytic theory is given, and the results of numerical tests with a radially and angularly varying model of an ionosphere are discussed.

  15. Efficient 3D frequency response modeling with spectral accuracy by the rapid expansion method

    KAUST Repository

    Chu, Chunlei

    2012-07-01

    Frequency responses of seismic wave propagation can be obtained either by directly solving the frequency domain wave equations or by transforming the time domain wavefields using the Fourier transform. The former approach requires solving systems of linear equations, which becomes progressively difficult to tackle for larger scale models and for higher frequency components. By contrast, the latter approach can be efficiently implemented using explicit time integration methods in conjunction with running summations as the computation progresses. Commonly used explicit time integration methods correspond to truncated Taylor series approximations that can cause significant errors for large time steps. The rapid expansion method (REM) uses the Chebyshev expansion and offers an optimal solution to the second-order-in-time wave equations. When applying the Fourier transform to the time domain wavefield solution computed by the REM, we can derive a frequency response modeling formula that has the same form as the original time domain REM equation but with different summation coefficients. In particular, the summation coefficients for the frequency response modeling formula correspond to the Fourier transform of those for the time domain modeling equation. As a result, we can directly compute frequency responses from the Chebyshev expansion polynomials rather than from the time domain wavefield snapshots, as other time domain frequency response modeling methods do. When combined with the pseudospectral method in space, this new frequency response modeling method can produce spectrally accurate results with high efficiency. © 2012 Society of Exploration Geophysicists.
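The "running summation" idea, accumulating the Fourier integral while the time stepping proceeds so that no wavefield snapshots need to be stored, can be sketched in a few lines; the damped 25 Hz signal below stands in for a wavefield trace at one receiver, and all numbers are illustrative:

```python
import cmath
import math

# Toy "wavefield" at one receiver: a damped oscillation produced by time stepping.
dt, nt = 1e-3, 4000
f0 = 25.0                      # dominant frequency of the toy signal, Hz
freqs = [10.0, 25.0, 40.0]     # frequencies at which responses are wanted

# Running summation: accumulate the Fourier integral U(f) += u(t) e^{-2*pi*i*f*t} dt
# at each time step, instead of storing u(t) and transforming afterwards.
U = {f: 0.0 + 0.0j for f in freqs}
for n in range(nt):
    t = n * dt
    u = math.exp(-4.0 * t) * math.sin(2 * math.pi * f0 * t)  # this step's sample
    for f in freqs:
        U[f] += u * cmath.exp(-2j * math.pi * f * t) * dt

# The accumulated spectrum peaks at the signal's 25 Hz component.
print({f: round(abs(U[f]), 4) for f in freqs})
```

The REM refinement described in the abstract replaces the per-step samples by Chebyshev expansion terms with Fourier-transformed coefficients, but the accumulation pattern is the same.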

  16. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    Science.gov (United States)

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. Given the
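The paper's two headline metrics, percent error against the target concentration (accuracy) and relative standard deviation of the quintuplicate readings (precision), reduce to a few lines; the readings below are hypothetical:

```python
import statistics

def accuracy_and_precision(measured, target):
    """Mean percent error vs. the target concentration, and relative SD.

    Mirrors the idea of judging a chlorine test method by accuracy
    (closeness to the intended % chlorine) and precision (repeatability).
    """
    mean = statistics.mean(measured)
    pct_error = 100.0 * abs(mean - target) / target
    rsd = 100.0 * statistics.stdev(measured) / mean   # relative standard deviation
    return pct_error, rsd

# Hypothetical quintuplicate readings of a 0.5% chlorine solution
readings = [0.52, 0.49, 0.51, 0.50, 0.53]
err, rsd = accuracy_and_precision(readings, 0.5)
print(f"{err:.1f}% error, {rsd:.1f}% RSD")
```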

  17. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    Directory of Open Access Journals (Sweden)

    Emma Wells

    Full Text Available To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration.

  18. Accuracy evaluation of numerical methods used in state-of-the-art simulators for spiking neural networks.

    Science.gov (United States)

    Henker, Stephan; Partzsch, Johannes; Schüffny, René

    2012-04-01

    With the various simulators for spiking neural networks developed in recent years, a variety of numerical solution methods for the underlying differential equations are available. In this article, we introduce an approach to systematically assess the accuracy of these methods. In contrast to previous investigations, our approach focuses on a completely deterministic comparison and uses an analytically solved model as a reference. This enables the identification of typical sources of numerical inaccuracies in state-of-the-art simulation methods. In particular, with our approach we can separate the error of the numerical integration from the timing error of spike detection and propagation, the latter being prominent in simulations with fixed timestep. To verify the correctness of the testing procedure, we relate the numerical deviations to theoretical predictions for the employed numerical methods. Finally, we give an example of the influence of simulation artefacts on network behaviour and spike-timing-dependent plasticity (STDP), underlining the importance of spike-time accuracy for the simulation of STDP.
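The separation of integration error from spike-timing error can be illustrated with the analytically solvable leaky integrate-and-fire model such studies use as a reference; the parameters below (tau = 10 ms, threshold 1, constant drive 2) are arbitrary:

```python
import math

tau, v_th, drive = 10.0, 1.0, 2.0   # membrane time constant (ms), threshold, I*R

# Analytic time-to-first-spike of a leaky integrate-and-fire neuron driven
# from rest: V(t) = drive * (1 - exp(-t/tau)), spike when V = v_th.
t_exact = -tau * math.log(1.0 - v_th / drive)

def euler_spike_time(dt):
    """First threshold crossing with forward-Euler integration, fixed step."""
    v, t = 0.0, 0.0
    while v < v_th:
        v += dt * (-v + drive) / tau   # dV/dt = (-V + I*R) / tau
        t += dt                        # spike is only detected on the grid
    return t

# The spike-timing error shrinks as the fixed step is refined, isolating the
# detection/propagation component of the error discussed in the abstract.
for dt in (1.0, 0.1, 0.01):
    print(dt, abs(euler_spike_time(dt) - t_exact))
```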

  19. A least square extrapolation method for improving solution accuracy of PDE computations

    CERN Document Server

    Garbey, M

    2003-01-01

    Richardson extrapolation (RE) is based on a very simple and elegant mathematical idea that has been successful in several areas of numerical analysis such as quadrature or time integration of ODEs. In theory, RE can be used also on PDE approximations when the convergence order of a discrete solution is clearly known. But in practice, the order of a numerical method often depends on space location and is not accurately satisfied on different levels of grids used in the extrapolation formula. We propose in this paper a more robust and numerically efficient method based on the idea of finding automatically the order of a method as the solution of a least square minimization problem on the residual. We introduce a two-level and three-level least square extrapolation method that works on nonmatching embedded grid solutions via spline interpolation. Our least square extrapolation method is a post-processing of data produced by existing PDE codes, that is easy to implement and can be a better tool than RE for code v...
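As background, classical Richardson extrapolation with a numerically observed order, the quantity the paper proposes to estimate by least squares rather than assume, can be sketched as follows; the forward-difference example is purely illustrative, not the paper's PDE setting:

```python
import math

def observed_order(u_h, u_h2, u_h4):
    """Estimate the convergence order p from solutions on grids h, h/2, h/4,
    then Richardson-extrapolate toward the exact value."""
    p = math.log2((u_h - u_h2) / (u_h2 - u_h4))
    extrap = u_h4 + (u_h4 - u_h2) / (2 ** p - 1)
    return p, extrap

# First-order forward difference for d/dx sin(x) at x = 1, on three step sizes
f, x = math.sin, 1.0
def approx(h):
    return (f(x + h) - f(x)) / h

p, extrap = observed_order(approx(0.1), approx(0.05), approx(0.025))
print(round(p, 2), abs(extrap - math.cos(1.0)))
```

Here p comes out close to the method's true order 1, and the extrapolated value is far closer to cos(1) than any of the three raw approximations; when the observed ratio is polluted by space-dependent orders, the fixed-order assumption behind plain RE breaks down, which motivates the least-squares formulation.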

  20. The Impacts of Atmospheric Stability on the Accuracy of Wind Speed Extrapolation Methods

    Directory of Open Access Journals (Sweden)

    Jennifer F. Newman

    2014-01-01

    Full Text Available The building of utility-scale wind farms requires knowledge of the wind speed climatology at hub height (typically 80–100 m). As most wind speed measurements are taken at 10 m above ground level, efforts are being made to relate 10-m measurements to approximate hub-height wind speeds. One common extrapolation method is the power law, which uses a shear parameter to estimate the wind shear between a reference height and hub height. The shear parameter is dependent on atmospheric stability and should ideally be determined independently for different atmospheric stability regimes. In this paper, data from the Oklahoma Mesonet are used to classify atmospheric stability and to develop stability-dependent power law fits for a nearby tall tower. Shear exponents developed from one month of data are applied to data from different seasons to determine the robustness of the power law method. In addition, similarity theory-based methods are investigated as possible alternatives to the power law. Results indicate that the power law method performs better than similarity theory methods, particularly under stable conditions, and can easily be applied to wind speed data from different seasons. In addition, the importance of using co-located near-surface and hub-height wind speed measurements to develop extrapolation fits is highlighted.
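The power-law method itself is compact: fit the shear exponent from wind speeds at two heights, then extrapolate to hub height. A sketch with hypothetical tower data:

```python
import math

def shear_exponent(u1, z1, u2, z2):
    """Power-law shear exponent alpha from wind speeds at two heights."""
    return math.log(u2 / u1) / math.log(z2 / z1)

def extrapolate(u_ref, z_ref, z_hub, alpha):
    """Power law: u(z_hub) = u_ref * (z_hub / z_ref) ** alpha."""
    return u_ref * (z_hub / z_ref) ** alpha

# Hypothetical tower data: 5.0 m/s at 10 m, 6.2 m/s at 40 m
alpha = shear_exponent(5.0, 10.0, 6.2, 40.0)
u80 = extrapolate(5.0, 10.0, 80.0, alpha)
print(round(alpha, 3), round(u80, 2))
```

The stability dependence discussed in the paper enters through alpha: a fit made under stable stratification (larger alpha) will overshoot hub-height speeds if applied to convective conditions, which is why the authors fit separate exponents per stability regime.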

  1. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are true matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents as a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
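A toy version of the sampling idea, with synthetic score bands and the known truth standing in for clerical review, might look as follows (all counts and match rates are invented):

```python
import random

random.seed(1)

# Synthetic record pairs: (score_band, is_true_match). High bands are mostly
# true matches, low bands mostly non-matches, imitating linkage score thresholds.
pairs = [(b, random.random() < [0.02, 0.30, 0.95][b])
         for b in range(3) for _ in range(10000)]
cutoff = 2          # accept only the top band as links

def sampled_match_rate(band, n=200):
    """'Clerical review' of a random sample from one band: the fraction of
    true matches, later scaled to the band size to estimate counts."""
    sample = random.sample([p for p in pairs if p[0] == band], n)
    return sum(1 for _, m in sample if m) / n

est_tp = sampled_match_rate(2) * 10000
est_fp = 10000 - est_tp
est_fn = (sampled_match_rate(0) + sampled_match_rate(1)) * 10000  # missed matches below cutoff

precision = est_tp / (est_tp + est_fp)
recall = est_tp / (est_tp + est_fn)
print(round(precision, 3), round(recall, 3))
```

Reviewing a few hundred pairs per band thus yields estimates of both precision and recall without labeling all 30,000 pairs, which is the point of the sampling approach.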

  2. ACCURACY OF OSCILLOMETRIC (PETMAP™ AND DOPPLER METHODS TO INDIRECT MEASUREMENT OF BLOOD PRESSURE IN LAMBS

    Directory of Open Access Journals (Sweden)

    Carla Maria Vela Ulian

    2016-10-01

    Full Text Available Neonatal physiology has peculiarities inherent to this age group. The objective of this study was to monitor systemic arterial pressure in lambs during the neonatal period. We used 20 Ile de France lambs, evaluated at birth and at 7, 14, 21, 28 and 35 days of life. The following parameters were analyzed: heart rate (HR) and systolic (SBP), diastolic (DBP), and average blood pressure (ABP) by the oscillometric method (petMAP™), and SBP by Doppler. The invasive pressure used to validate the indirect methods averaged 101.52 ± 12.04 mmHg. The averages with petMAP™ were as follows: HR, 156.38 ± 37.46 bpm; DBP, 63.80 ± 11.14 mmHg; ABP, 81.58 ± 11.83 mmHg; SBP, 112.48 ± 15.68 mmHg; and SBP by Doppler was 90.27 ± 12.11 mmHg. There were significant differences in HR and blood pressure among the time points. The indirect methods differed from each other by 12.30 mmHg (an overestimation of 11%). Compared with the invasive method, Doppler and petMAP™ overestimated SBP by 4% and 16%, respectively. The results showed that the Doppler method established a good relationship with the invasive one, being useful for gauging SBP. The oscillometric method requires larger studies before use in small ruminants. Keywords: Doppler; lamb; neonatal period; oscillometric; systemic blood pressure.

  3. Improving the Accuracy and Scalability of Discriminative Learning Methods for Markov Logic Networks

    Science.gov (United States)

    2011-05-01

    each example is a verb and all of its semantic arguments in a sentence (Carreras & Màrquez, 2005). In addition, each example does not contain a... Recently, Riedel (2008) proposed a more efficient method to solve the MPE inference problem, called Cutting Plane Inference (CPI), which does not require... replace the square 2-norm, w^T w, in these formulations by the 1-norm, ||w||_1 = Σ_{i=1}^{n} |w_i|...

  4. Measurement of glomerular filtration rate in adults: accuracy of five single-sample plasma clearance methods

    DEFF Research Database (Denmark)

    Rehling, M; Rabøl, A

    1989-01-01

    After an intravenous injection of a tracer that is removed from the body solely by filtration in the kidneys, the glomerular filtration rate (GFR) can be determined from its plasma clearance. The method requires a great number of blood samples but collection of urine is not needed. In the present...

  5. Accuracy of Finite Difference Methods for Solution of the Transient Heat Conduction (Diffusion) Equation.

    Science.gov (United States)

    1983-02-01


  6. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing th

  7. Accuracy analysis of simplified and rigorous numerical methods applied to binary nanopatterning gratings in non-paraxial domain

    Science.gov (United States)

    Francés, Jorge; Bleda, Sergio; Gallego, Sergi; Neipp, Cristian; Márquez, Andrés; Pascual, Inmaculada; Beléndez, Augusto

    2013-11-01

    A set of simplified and rigorous electromagnetic vector theories is used for analyzing the transmittance characteristics of diffraction phase gratings. The scalar diffraction theory and the effective medium theory are validated against the exact results obtained via rigorous coupled-wave theory and the finite-difference time-domain method. The effects of the surface profile parameters and of the angle of incidence are demonstrated to be limiting factors in the accuracy of these theories. Therefore, the error of both simplified theories is also analyzed in the non-paraxial domain with the intention of establishing a specific range of validity for both simplified theories.
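For reference, the scalar-theory baseline these comparisons start from is just the Fourier decomposition of the phase-only transmittance exp(i·phi(x)); the sketch below reproduces the textbook result that a pi-depth, 50% duty-cycle binary grating suppresses the zeroth order and puts (2/pi)^2 of the light into each first order:

```python
import cmath
import math

def scalar_efficiencies(phase_depth, duty=0.5, n=4096, orders=(0, 1, 2)):
    """Scalar-theory diffraction efficiencies of a binary phase grating:
    |c_m|^2 of the Fourier series of t(x) = exp(i*phi(x)) over one period."""
    eff = {}
    for m in orders:
        c = 0j
        for k in range(n):
            x = k / n
            phi = phase_depth if x < duty else 0.0
            c += cmath.exp(1j * phi) * cmath.exp(-2j * math.pi * m * x) / n
        eff[m] = abs(c) ** 2
    return eff

# pi phase depth, 50% duty cycle: scalar theory predicts eta_0 = 0 and
# eta_1 = (2/pi)^2 ~ 0.405 -- the baseline the rigorous methods correct
# when the period approaches the wavelength or the incidence is oblique.
eff = scalar_efficiencies(math.pi)
print({m: round(e, 3) for m, e in eff.items()})
```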

  8. Dynamic accuracy of GPS receivers for use in health research: a novel method to assess GPS accuracy in real-world settings

    DEFF Research Database (Denmark)

    Schipperijn, Jasper; Kerr, Jacqueline; Duncan, Scott

    2014-01-01

    The emergence of portable global positioning system (GPS) receivers over the last 10 years has provided researchers with a means to objectively assess spatial position in free-living conditions. However, the use of GPS in free-living conditions is not without challenges, and the aim of this study... The dynamic spatial accuracy of the tested device is not perfect, but we feel that it is within acceptable limits for larger population studies. Longer recording periods for a larger population are likely to reduce the potentially negative effects of measurement inaccuracy. Furthermore, special care should...

  9. Accuracy and standardization of diagnostic methods for the detection of antibodies to citrullinated peptides

    Directory of Open Access Journals (Sweden)

    M. Tampoia

    2011-06-01

    Full Text Available Anti-citrullinated peptide antibodies (ACPA have a very high specificity for rheumatoid arthritis, much more than that of the rheumatoid factor. In addition, ACPA can be found in sera in the pre-clinical phase, are associated with more severe joint destruction and with higher disease activity. In recent years, keeping pace with new knowledge and with progress made in the antigenic composition of tests and in the characterization of immunogenic epitopes, many immunoenzymatic (ELISA methods of second and third generation have been produced and marketed commercially, and their use has spread among clinical laboratories. Today, completely automated methods are also available, which are easy to use and with a higher throughput, rendering the diagnostic utility of testing ever faster and more effective. This review takes into consideration the more important characteristics of the new ACPA-ELISA tests now commercially available, and also considers recent progress in standardizing test results.

  10. High accuracy computational methods for behavioral modeling of thick-film resistors at cryogenic temperatures

    Directory of Open Access Journals (Sweden)

    Balik Franciszek

    2016-03-01

    Full Text Available The aim of this work was to elaborate a two-dimensional behavioral modeling method for thick-film resistors working in low-temperature conditions. The investigated resistors (made from 5 different resistive inks: 10 resistor coupons, each with 36 resistors of various dimensions) were measured automatically in a cryostat system. The low temperature was achieved in a nitrogen-helium continuous-flow cryostat. With nitrogen as the freezing liquid, the minimal achievable temperature was −195.85 °C (77.3 K). A mathematical model in the form of a product of two polynomials was elaborated based on the above-mentioned measurements. The first polynomial approximates the temperature behavior of the normalized resistance, while the second describes the dependence of resistance on the planar resistor dimensions. Special computational procedures for multidimensional approximation were elaborated. It was shown that proper approximation polynomials and sufficiently exact methods of calculation ensure acceptable modeling errors.
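The separable two-polynomial structure can be illustrated with synthetic data; the toy identification below simply evaluates along the axes through a reference point, not the paper's least-squares procedure, and all coefficients are invented:

```python
# Separable model R(T, w) ~ P(T) * Q(w): a temperature polynomial times a
# dimension polynomial, in the spirit of the paper's two-polynomial model.

def true_R(T, w):
    """Synthetic 'measurements': a separable resistance surface (made up)."""
    return (1.0 + 0.002 * T + 1e-5 * T ** 2) * (2.0 - 0.1 * w)

T_ref, w_ref = 25.0, 2.0
R_ref = true_R(T_ref, w_ref)

def P(T):
    """Temperature factor, normalized so that P(T_ref) = 1."""
    return true_R(T, w_ref) / R_ref

def Q(w):
    """Dimension factor, carrying the reference magnitude."""
    return true_R(T_ref, w)

def model(T, w):
    return P(T) * Q(w)

# The separable reconstruction reproduces the data exactly here because the
# synthetic surface is itself separable; real measurements would leave a
# residual that the paper's fitting procedure minimizes.
print(abs(model(-150.0, 4.0) - true_R(-150.0, 4.0)))
```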

  11. DIAGNOSTIC ACCURACY OF VARIOUS METHODS TO DETECT LYMPH NODE METASTASES IN ORAL SQUAMOUS CELL CARCINOMA

    Directory of Open Access Journals (Sweden)

    Priyanka

    2014-05-01

    Full Text Available The present study was undertaken to compare the sensitivity of various methods for the detection of lymph node metastases: intra-operative frozen-section H&E staining, conventional H&E staining on formalin-fixed tissue, serial step-sectioning with conventional H&E staining, and immunohistochemical (IHC) staining with pan-cytokeratin antibody. METHOD: The study included 80 consecutive cases of oral squamous cell carcinoma who underwent radical neck dissection. The various levels of lymph nodes in these cases were checked for metastases by the four techniques listed above. RESULTS: Considering IHC as the gold standard, we observed the highest sensitivity and specificity for serial sectioning, at 53.7% and 98.9%, compared to intra-operative frozen section and conventional H&E, which were 32.5% and 97.1%, and 44.7% and 98.2%, respectively. CONCLUSION: We conclude that the most sensitive method to detect lymph node metastasis in oral squamous cell carcinoma is serial step-sectioning, considering IHC as the gold standard.
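The sensitivity and specificity figures quoted above come from standard confusion-matrix ratios against the IHC gold standard; a sketch with hypothetical node counts:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity of a detection method against a gold
    standard (IHC in the paper): TP/(TP+FN) and TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical node counts for one staining method vs. the IHC gold standard
sens, spec = sens_spec(tp=43, fn=37, tn=178, fp=2)
print(round(100 * sens, 1), round(100 * spec, 1))
```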

  12. Accuracy of conventional identification methods used for Enterobacteriaceae isolates in three Nigerian hospitals

    Science.gov (United States)

    Ogunlowo, Peter Oladejo; Olley, Mitsan; Springer, Burkhard; Allerberger, Franz; Ruppitsch, Werner

    2016-01-01

    Background: Enterobacteriaceae are ubiquitously present in nature and can be found in the intestinal tract of humans and animals as commensal flora. Multidrug-resistant Enterobacteriaceae are increasingly reported and are a threat to public health, implying a need for accurate identification of isolates to species level. In developing countries, identification of bacteria depends largely on conventional culture and phenotypic methods, which hamper accurate identification. In this study, the matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) technique was compared to conventional identification techniques. Materials and Methods: In total, 147 Enterobacteriaceae isolates were collected from March to May 2015 from three medical microbiology laboratories of hospitals in Edo state, Nigeria, after being tested according to the individual laboratories' standard operating procedures. All isolates were stored at −20°C until tested centrally by MALDI-TOF MS. Results: One hundred and forty-five (98.6%) isolates had a MALDI Biotyper best score ≥2.0, indicating a secure genus and probable species identification, and 2 (1.36%) isolates had a best score <2.0. The isolates with best scores ≥2.0 comprised nine genera and 10 species. A total of 57.2% and 33.1% of identified isolates showed agreement between MALDI-TOF MS and conventional techniques at genus and species level, respectively, when analyzing bacteria with MALDI Biotyper best scores ≥2.0. Conclusion: The results of our study show that the applied conventional identification techniques for Enterobacteriaceae in the investigated Nigerian hospitals are not very accurate. Use of state-of-the-art identification technologies for microorganisms is necessary to guarantee comparability of bacteriological results. PMID:27703855

  13. On the accuracy of density functional theory and wave function methods for calculating vertical ionization energies

    Energy Technology Data Exchange (ETDEWEB)

    McKechnie, Scott [Cavendish Laboratory, Department of Physics, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Booth, George H. [Theory and Simulation of Condensed Matter, King’s College London, The Strand, London WC2R 2LS (United Kingdom); Cohen, Aron J. [Department of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB2 1EW (United Kingdom); Cole, Jacqueline M., E-mail: jmc61@cam.ac.uk [Cavendish Laboratory, Department of Physics, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Argonne National Laboratory, 9700 S Cass Avenue, Argonne, Illinois 60439 (United States)

    2015-05-21

    The best practice in computational methods for determining vertical ionization energies (VIEs) is assessed, via reference to experimentally determined VIEs that are corroborated by highly accurate coupled-cluster calculations. These reference values are used to benchmark the performance of density functional theory (DFT) and wave function methods: Hartree-Fock theory, second-order Møller-Plesset perturbation theory, and Electron Propagator Theory (EPT). The core test set consists of 147 small molecules. An extended set of six larger molecules, from benzene to hexacene, is also considered to investigate the dependence of the results on molecule size. The closest agreement with experiment is found for ionization energies obtained from total energy difference calculations. In particular, DFT calculations using exchange-correlation functionals with either a large amount of exact exchange or long-range correction perform best. The results from these functionals are also the least sensitive to an increase in molecule size. In general, ionization energies calculated directly from the orbital energies of the neutral species are less accurate and more sensitive to an increase in molecule size. For the single-calculation approach, the EPT calculations are in closest agreement for both sets of molecules. For the orbital energies from DFT functionals, only those with long-range correction give quantitative agreement with dramatic failing for all other functionals considered. The results offer a practical hierarchy of approximations for the calculation of vertical ionization energies. In addition, the experimental and computational reference values can be used as a standardized set of benchmarks, against which other approximate methods can be compared.
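The two routes the benchmark compares can be written compactly: the total-energy-difference (Delta-SCF) value, evaluated at the fixed neutral geometry R_N, versus the orbital-energy (Koopmans-type) estimate from the neutral calculation:

```latex
\mathrm{VIE}_{\Delta\mathrm{SCF}} \;=\; E_{N-1}(\mathbf{R}_N) \;-\; E_{N}(\mathbf{R}_N),
\qquad
\mathrm{VIE}_{\text{orbital}} \;\approx\; -\,\varepsilon_{\mathrm{HOMO}}
```

The abstract's finding is that the first route, with suitably chosen functionals, tracks experiment more closely and degrades less with molecule size than the second.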

  14. Accuracy evaluation of both Wallace-Bott and BEM-based paleostress inversion methods

    Science.gov (United States)

    Lejri, Mostfa; Maerten, Frantz; Maerten, Laurent; Soliva, Roger

    2017-01-01

    Four decades after their introduction, the validity of fault slip inversion methods based on the Wallace (1951) and Bott (1959) hypothesis, which states that the slip on each fault surface has the same direction and sense as the maximum resolved shear stress, is still a subject of debate. According to some authors, this hypothesis is questionable since fault mechanical interactions induce slip reorientations, as confirmed by geomechanical models. This leads us to ask to what extent the Wallace-Bott simplifications are reliable as a basis for stress inversion from fault slip data. In this paper, we compare two inversion methods; the first is based on the Wallace-Bott hypothesis, and the second relies on geomechanics and the mechanical effects on the heterogeneous slip distribution along faults. In that context, a multi-parametric stress inversion study covering (i) the friction coefficient (μ), (ii) the full range of Andersonian states of stress, and (iii) slip data sampling along the faults is performed. For each tested parameter, the results of the mechanical stress inversion and the Wallace-Bott (WB) based stress inversion for slip are compared in order to understand their respective effects. The predicted discrepancy between the solutions of both stress inversion methods (based on WB and mechanics) is then used to explain the stress inversion results for the Chimney Rock case study. It is shown that a high solution discrepancy is not always correlated with the misfit angle (ω) and can be found under specific configurations (R-, θ, μ, geometry), invalidating the WB solutions. We conclude that in most cases the mechanical stress inversion and the WB-based stress inversion are both valid and complementary depending on the fault friction. Some exceptions (i.e. low fault friction, simple fault geometry and pure regimes) that may lead to wrong WB-based stress inversion solutions are highlighted.
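The Wallace-Bott hypothesis itself is a short computation: project the traction onto the fault plane and take the direction of the shear component as the predicted slip. A sketch with an illustrative Andersonian normal-faulting stress tensor (all values invented):

```python
import math

def resolved_shear(sigma, n):
    """Direction and magnitude of the maximum resolved shear stress on a plane
    with unit normal n under stress tensor sigma (the Wallace-Bott slip direction)."""
    t = [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]  # traction vector
    tn = sum(t[i] * n[i] for i in range(3))                            # normal component
    s = [t[i] - tn * n[i] for i in range(3)]                           # shear component
    mag = math.sqrt(sum(c * c for c in s))
    return [c / mag for c in s], mag

# Andersonian normal-faulting state (principal axes along the coordinate axes),
# plane dipping 60 degrees; compression is negative, values in MPa.
sigma = [[-30.0, 0.0, 0.0],
         [0.0, -50.0, 0.0],
         [0.0, 0.0, -80.0]]
dip = math.radians(60.0)
n = [math.sin(dip), 0.0, math.cos(dip)]          # unit normal of the fault plane
slip_dir, tau = resolved_shear(sigma, n)
print([round(c, 3) for c in slip_dir], round(tau, 2))
```

The geomechanical alternative discussed in the paper lets slip on each element deviate from this direction through fault interaction, which is exactly the discrepancy the multi-parametric study quantifies.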

  15. Method for high-accuracy reflectance measurements in the 2.5-microm region.

    Science.gov (United States)

    Richter, Rudolf; Müller, Andreas

    2003-02-20

    Reflectance measurements with spectroradiometers in the solar wavelength region (0.4-2.5 microm) are frequently conducted in the laboratory or in the field to characterize surface materials of artificial and natural targets. The spectral surface reflectance is calculated as the ratio of the signals obtained over the target surface and a reference panel, yielding a relative reflectance value. If the reflectance of the reference panel is known, the absolute target reflectance can be computed. This standard measurement technique assumes that the signal at the radiometer is due completely to reflected target and reference radiation. However, for field measurements in the 2.4-2.5-microm region with the Sun as the illumination source, the emitted thermal radiation is not a negligible part of the signal even at ambient temperatures, because the atmospheric transmittance, and thus the solar illumination level, is small in the atmospheric absorption regions. A new method is proposed that calculates reflectance values in the 2.4-2.5-microm region while accounting for the reference panel reflectance and the emitted radiation. This technique needs instruments with noise-equivalent radiances 2 orders of magnitude below those of currently commercially available instruments and requires measurement of the surface temperatures of the target and reference. If the reference panel reflectance and temperature effects are neglected, the standard method yields reflectance errors up to 0.08 and 0.15 units for 7- and 2-nm bandwidth instruments, respectively. For the new method the corresponding errors can be reduced to approximately 0.01 units for the surface temperature range of 20-35 degrees C.
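The correction idea can be caricatured in a single band: subtract the emitted Planck radiance (with emissivity 1 − reflectance via Kirchhoff's law for opaque surfaces) from both target and panel signals before ratioing. The sketch below uses synthetic radiances, a 0.99 panel, 30 °C surfaces, and a simple fixed-point iteration; it illustrates the idea, not the paper's algorithm:

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(lam, T):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C ** 2 / lam ** 5) / (math.exp(H * C / (lam * K * T)) - 1)

def corrected_reflectance(L_target, L_panel, rho_panel, T_target, T_panel,
                          lam=2.45e-6):
    """Subtract emitted thermal radiance from both signals before ratioing.
    The emitted term depends on the unknown target reflectance, so iterate."""
    rho = L_target / L_panel * rho_panel          # standard-method first guess
    for _ in range(20):
        emit_t = (1 - rho) * planck(lam, T_target)
        emit_p = (1 - rho_panel) * planck(lam, T_panel)
        rho = rho_panel * (L_target - emit_t) / (L_panel - emit_p)
    return rho

# Synthetic scene: true target reflectance 0.30, both surfaces at 30 C, and a
# hypothetical (weak, in-band) solar radiance of 1e5 W m^-2 sr^-1 m^-1.
B = planck(2.45e-6, 303.0)
L_inc = 1.0e5
L_panel = 0.99 * L_inc + (1 - 0.99) * B
L_target = 0.30 * L_inc + (1 - 0.30) * B
print(round(L_target / L_panel * 0.99, 3),       # standard method: biased high
      round(corrected_reflectance(L_target, L_panel, 0.99, 303.0, 303.0), 3))
```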

  16. On the accuracy of density functional theory and wave function methods for calculating vertical ionization energies

    Science.gov (United States)

    McKechnie, Scott; Booth, George H.; Cohen, Aron J.; Cole, Jacqueline M.

    2015-05-01

    The best practice in computational methods for determining vertical ionization energies (VIEs) is assessed, via reference to experimentally determined VIEs that are corroborated by highly accurate coupled-cluster calculations. These reference values are used to benchmark the performance of density functional theory (DFT) and wave function methods: Hartree-Fock theory, second-order Møller-Plesset perturbation theory, and Electron Propagator Theory (EPT). The core test set consists of 147 small molecules. An extended set of six larger molecules, from benzene to hexacene, is also considered to investigate the dependence of the results on molecule size. The closest agreement with experiment is found for ionization energies obtained from total energy difference calculations. In particular, DFT calculations using exchange-correlation functionals with either a large amount of exact exchange or long-range correction perform best. The results from these functionals are also the least sensitive to an increase in molecule size. In general, ionization energies calculated directly from the orbital energies of the neutral species are less accurate and more sensitive to an increase in molecule size. For the single-calculation approach, the EPT calculations are in closest agreement for both sets of molecules. For the orbital energies from DFT functionals, only those with long-range correction give quantitative agreement with dramatic failing for all other functionals considered. The results offer a practical hierarchy of approximations for the calculation of vertical ionization energies. In addition, the experimental and computational reference values can be used as a standardized set of benchmarks, against which other approximate methods can be compared.

  17. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    Science.gov (United States)

    Kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Cultivated land determination and area estimation are important tasks for agricultural management. The derived information is mostly used in agricultural policies and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. The use of satellite images in crop type identification and area estimation has been common for two decades owing to their capability of monitoring large areas, rapid data acquisition and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at large scale. With increasing spatial resolution of satellite images, the classification methods used to derive information from them have become important as the spectral heterogeneity within land objects increases. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2012-dated 2.5 m resolution SPOT 5 satellite images in order to investigate the accuracy of these methods in determining cotton- and corn-planted lands and estimating their area. The study area was selected in Sanliurfa Province in Southeastern Turkey, a major contributor to Turkey's agricultural production. Classification results were compared in terms of crop type identification using

  18. Bone QUS measurement performed under loading condition, a more accurate ultrasound method for osteoporosis diagnosis.

    Science.gov (United States)

    Liu, Chengrui; Niu, Haijun; Fan, Yubo; Li, Deyu

    2012-10-01

    Osteoporosis is a worldwide health problem with enormous social and economic impact. The quantitative ultrasound (QUS) method provides comprehensive information on bone mass, microstructure and mechanical properties of bone, and cheap, safe and portable ultrasound equipment is well suited for public health monitoring. QUS measurement is normally performed on bone specimens without mechanical loading, yet human bones are subjected to loading during routine daily activities, and physical loading changes bone microstructure and mechanical properties. We hypothesized that bone QUS parameters measured under loading differ from those measured without loading, because bone microstructure changes when load is applied. Furthermore, under loading, the load-induced microstructure change of osteoporotic bone may be larger than that of healthy bone. Given the close relationship between bone microstructure and QUS parameters, the QUS parameters of osteoporotic bone may therefore change more than those of healthy bone, so osteoporosis may be detected more effectively by combining the QUS method with mechanical loading.

  19. Improved Accuracy of Nonlinear Parameter Estimation with LAV and Interval Arithmetic Methods

    Directory of Open Access Journals (Sweden)

    Humberto Muñoz

    2009-06-01

    Full Text Available The reliable solution of nonlinear parameter estimation problems is an important computational problem in many areas of science and engineering, including such applications as real time optimization. Its goal is to estimate accurate model parameters that provide the best fit to measured data, despite small-scale noise in the data or occasional large-scale measurement errors (outliers). In general, the estimation techniques are based on some kind of least squares or maximum likelihood criterion, and these require the solution of a nonlinear and non-convex optimization problem. Classical solution methods for these problems are local methods, and may not be reliable for finding the global optimum, with no guarantee the best model parameters have been found. Interval arithmetic can be used to compute completely and reliably the global optimum for the nonlinear parameter estimation problem. Finally, experimental results will compare the least squares, l2, and the least absolute value, l1, estimates using interval arithmetic in a chemical engineering application.
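
    The robustness contrast between the l2 and l1 criteria can be seen on a toy location-parameter problem, where the least-squares estimate is the sample mean and the least-absolute-value estimate is the sample median. This is only an illustrative sketch of the two criteria, not the paper's interval-arithmetic solver:

    ```python
    # For a location parameter, the l2 (least squares) estimate is the sample
    # mean and the l1 (least absolute value) estimate is the sample median,
    # so a single gross outlier perturbs l2 far more than l1.
    def l2_estimate(data):
        return sum(data) / len(data)

    def l1_estimate(data):
        s = sorted(data)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

    clean = [9.8, 10.1, 10.0, 9.9, 10.2]
    with_outlier = clean + [50.0]       # one gross measurement error

    print(l2_estimate(clean), l1_estimate(clean))            # both near 10
    print(l2_estimate(with_outlier), l1_estimate(with_outlier))  # l2 dragged up
    ```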

  20. Assessing accuracy of measurements for a Wingate Test using the Taguchi method.

    Science.gov (United States)

    Franklin, Kathryn L; Gordon, Rae S; Davies, Bruce; Baker, Julien S

    2008-01-01

    The purpose of this study was to establish the effects of four variables on the results obtained for a Wingate Anaerobic Test (WAnT). This study used a 30-second WAnT and compared data collected and analysed in different ways in order to draw conclusions as to the relative importance of the variables on the results. Data were collected simultaneously by a commercially available software correction system manufactured by Cranlea Ltd. (Birmingham, England) and by an alternative method of data collection that involves direct measurement of the flywheel velocity and the brake force. Data were compared using a design-of-experiments technique, the Taguchi method. Four variables were examined: flywheel speed, braking force, moment of inertia of the flywheel, and the time intervals over which work and power were calculated. The choice of time interval was identified as the most influential variable on the results. While the other factors have an influence, the decreased time interval over which the data are averaged gave a 9.8% increase in work done, a 40.75% increase in peak power and a 13.1% increase in mean power.
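
    Why the averaging interval dominates can be sketched with simple arithmetic: instantaneous power is brake force times flywheel velocity, and peak power is the highest mean over a sliding window, so a shorter window necessarily yields an equal or higher peak. The force and velocity values below are illustrative, not the study's measurements:

    ```python
    # Peak power as the maximum sliding-window mean of instantaneous power
    # (P = brake force x flywheel velocity); shorter windows give higher peaks.
    def peak_power(power_samples, window):
        means = [sum(power_samples[i:i + window]) / window
                 for i in range(len(power_samples) - window + 1)]
        return max(means)

    brake_force = 50.0                       # N, held constant (assumption)
    velocity = [8, 10, 12, 11, 9, 7, 6, 5]   # m/s, one sample per second
    power = [brake_force * v for v in velocity]

    print(peak_power(power, 1))   # 1-s window
    print(peak_power(power, 5))   # 5-s window, necessarily <= the 1-s peak
    ```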

  1. Trajectory Control:Directional MWD Inversely New Wellbore Positioning Accuracy Prediction Method

    Institute of Scientific and Technical Information of China (English)

    Ahmed Abd Alaziz Ibrahim; Tagwa Ahmed Musa

    2004-01-01

    The deviation control of directional drilling is essentially the controlling of two angles of the wellbore actually drilled, namely, the inclination and azimuth. In directional drilling the bit trajectory never coincides exactly with the planned path, which is usually a plane curve with straight, building, holding, and dropping sections in succession. The drilling direction is of course dependent on the direction of the resultant forces acting on the bit, and it is quite a tough job to hit the optimum target at the hole bottom as required. The traditional passive methods for correcting the drilling path have not met the demand to improve the techniques of deviation control. A method for combining wellbore surveys to obtain a composite, more accurate well position relies on accepting the position of the well from the most accurate survey instrument used in a given section of the wellbore. The error in each position measurement is the sum of many independent root sources of error effects. The relationship between surveys and other influential factors is considered, along with an analysis of different points of view. The collaborative work establishes a common starting point: a wellbore position uncertainty model, a definition of what constitutes an error model, the mathematics of position uncertainty calculation, and an error model for basic directional services.

  2. Accuracy of diagnostic methods and surveillance sensitivity for human enterovirus, South Korea, 1999-2011.

    Science.gov (United States)

    Hyeon, Ji-Yeon; Hwang, Seoyeon; Kim, Hyejin; Song, Jaehyoung; Ahn, Jeongbae; Kang, Byunghak; Kim, Kisoon; Choi, Wooyoung; Chung, Jae Keun; Kim, Cheon-Hyun; Cho, Kyungsoon; Jee, Youngmee; Kim, Jonghyun; Kim, Kisang; Kim, Sun-Hee; Kim, Min-Ji; Cheon, Doo-Sung

    2013-08-01

    The epidemiology of enteroviral infection in South Korea during 1999-2011 chronicles nationwide outbreaks and changing detection and subtyping methods used over the 13-year period. Of 14,657 patients whose samples were tested, 4,762 (32.5%) samples were positive for human enterovirus (human EV); as diagnostic methods improved, the rate of positive results increased. A seasonal trend of outbreaks was documented. Genotypes enterovirus 71, echovirus 30, coxsackievirus B5, enterovirus 6, and coxsackievirus B2 were the most common genotypes identified. Accurate test results correlated clinical syndromes to enterovirus genotypes: aseptic meningitis to echovirus 30, enterovirus 6, and coxsackievirus B5; hand, foot and mouth disease to coxsackievirus A16; and hand, foot and mouth disease with neurologic complications to enterovirus 71. There are currently no treatments specific to human EV infections; surveillance of enterovirus infections such as this study provides may assist with evaluating the need to research and develop treatments for infections caused by virulent human EV genotypes.

  3. MT1 and MT2 Melatonin Receptors: A Therapeutic Perspective.

    Science.gov (United States)

    Liu, Jiabei; Clough, Shannon J; Hutchinson, Anthony J; Adamah-Biassi, Ekue B; Popovska-Gorevski, Marina; Dubocovich, Margarita L

    2016-01-01

    Melatonin, or 5-methoxy-N-acetyltryptamine, is synthesized and released by the pineal gland and locally in the retina following a circadian rhythm, with low levels during the day and elevated levels at night. Melatonin activates two high-affinity G protein-coupled receptors, termed MT1 and MT2, to exert beneficial actions in sleep and circadian abnormality, mood disorders, learning and memory, neuroprotection, drug abuse, and cancer. Progress in understanding the role of melatonin receptors in the modulation of sleep and circadian rhythms has led to the discovery of a novel class of melatonin agonists for treating insomnia, circadian rhythms, mood disorders, and cancer. This review describes the pharmacological properties of a slow-release melatonin preparation (i.e., Circadin®) and synthetic ligands (i.e., agomelatine, ramelteon, tasimelteon), with emphasis on identifying specific therapeutic effects mediated through MT1 and MT2 receptor activation. Discovery of selective ligands targeting the MT1 or the MT2 melatonin receptors may promote the development of novel and more efficacious therapeutic agents.

  4. On the accuracy of analytical methods for turbulent flows near smooth walls

    Science.gov (United States)

    Absi, Rafik; Di Nucci, Carmine

    2012-09-01

    This Note presents two methods for mean streamwise velocity profiles of fully-developed turbulent pipe and channel flows near smooth walls. The first is the classical approach where the mean streamwise velocity is obtained by solving the momentum equation with an eddy viscosity formulation [R. Absi, A simple eddy viscosity formulation for turbulent boundary layers near smooth walls, C. R. Mecanique 337 (2009) 158-165]. The second approach presents a formulation of the velocity profile based on an analogy with an electric field distribution [C. Di Nucci, E. Fiorucci, Mean velocity profiles of fully-developed turbulent flows near smooth walls, C. R. Mecanique 339 (2011) 388-395] and a formulation for the turbulent shear stress. However, this formulation for the turbulent shear stress shows a weakness. A corrected formulation is presented. Comparisons with DNS data show that the classical approach with the eddy viscosity formulation provides more accurate profiles for both turbulent shear stress and velocity gradient.

  5. Accuracy of dementia diagnosis—a direct comparison between radiologists and a computerized method

    Science.gov (United States)

    Stonnington, Cynthia M.; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D.; Mader, Irina; Mitchell, L. Anne; Patel, Ameet C.; Roberts, Catherine C.; Fox, Nick C.; Jack, Clifford R.; Ashburner, John; Frackowiak, Richard S. J.

    2008-01-01

    There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare the results to those obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had been previously analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65–95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified another set of sporadic Alzheimer's disease in 93% (sensitivity/specificity: 100/86) of cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared to radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans comparable to SVMs. However, SVMs require no expert knowledge and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice. PMID:18835868

  6. Accuracy of dementia diagnosis: a direct comparison between radiologists and a computerized method.

    Science.gov (United States)

    Klöppel, Stefan; Stonnington, Cynthia M; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D; Mader, Irina; Mitchell, L Anne; Patel, Ameet C; Roberts, Catherine C; Fox, Nick C; Jack, Clifford R; Ashburner, John; Frackowiak, Richard S J

    2008-11-01

    There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare the results to those obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had been previously analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65-95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified another set of sporadic Alzheimer's disease in 93% (sensitivity/specificity: 100/86) of cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared to radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans comparable to SVMs. However, SVMs require no expert knowledge and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice.
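
    The reported sensitivity/specificity pairs follow from simple counts of a binary classification. A minimal sketch, with illustrative counts chosen to reproduce the SVM's 95/95 figure (not the study's raw data):

    ```python
    # Sensitivity and specificity from binary classification counts.
    def sensitivity(tp, fn):
        return tp / (tp + fn)   # true positive rate among patients

    def specificity(tn, fp):
        return tn / (tn + fp)   # true negative rate among controls

    # e.g. 19 of 20 patients and 19 of 20 controls correctly classified
    # reproduces a 95/95 sensitivity/specificity pair
    print(sensitivity(tp=19, fn=1))   # 0.95
    print(specificity(tn=19, fp=1))   # 0.95
    ```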

  7. Dynamic Accuracy of GPS Receivers for Use in Health Research: A Novel Method to Assess GPS Accuracy in Real-World Settings

    Science.gov (United States)

    Schipperijn, Jasper; Kerr, Jacqueline; Duncan, Scott; Madsen, Thomas; Klinker, Charlotte Demant; Troelsen, Jens

    2014-01-01

    The emergence of portable global positioning system (GPS) receivers over the last 10 years has provided researchers with a means to objectively assess spatial position in free-living conditions. However, the use of GPS in free-living conditions is not without challenges and the aim of this study was to test the dynamic accuracy of a portable GPS device under real-world environmental conditions, for four modes of transport, and using three data collection intervals. We selected four routes on different bearings, passing through a variation of environmental conditions in the City of Copenhagen, Denmark, to test the dynamic accuracy of the Qstarz BT-Q1000XT GPS device. Each route consisted of a walk, bicycle, and vehicle lane in each direction. The actual width of each walking, cycling, and vehicle lane was digitized as accurately as possible using ultra-high-resolution aerial photographs as background. For each trip, we calculated the percentage of points that actually fell within the lane polygon, and within the 2.5, 5, and 10 m buffers respectively, as well as the mean and median error in meters. Our results showed that 49.6% of all ≈68,000 GPS points fell within 2.5 m of the expected location, 78.7% fell within 10 m and the median error was 2.9 m. The median error was 3.9 m for walking trips, 2.0 m for bicycle trips, 1.5 m for bus trips, and 0.5 m for car trips. The different area types showed considerable variation in the median error: 0.7 m in open areas, 2.6 m in half-open areas, and 5.2 m in urban canyons. The dynamic spatial accuracy of the tested device is not perfect, but we feel that it is within acceptable limits for larger population studies. Longer recording periods, for a larger population, are likely to reduce the potentially negative effects of measurement inaccuracy. Furthermore, special care should be taken when the environment in which the study takes place could compromise the GPS signal. PMID:24653984
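
    The accuracy summary used in the study reduces to two computations per trip: the share of points whose positional error falls within each buffer distance, and the median error. A minimal sketch with hypothetical per-point errors:

    ```python
    # Share of GPS points within each buffer distance, plus the median error.
    # The error values below are hypothetical, not the study's data.
    def accuracy_summary(errors_m, buffers=(2.5, 5.0, 10.0)):
        n = len(errors_m)
        shares = {b: sum(e <= b for e in errors_m) / n for b in buffers}
        s = sorted(errors_m)
        mid = n // 2
        median = s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2
        return shares, median

    errors = [0.5, 1.2, 2.9, 3.8, 4.4, 6.0, 9.5, 12.1]   # metres, per point
    shares, median = accuracy_summary(errors)
    print(shares, median)
    ```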

  8. A new TEC interpolation method based on the least squares collocation for high accuracy regional ionospheric maps

    Science.gov (United States)

    Krypiak-Gregorczyk, Anna; Wielgosz, Paweł; Jarmołowski, Wojciech

    2017-04-01

    The ionosphere plays a crucial role in space weather that affects satellite navigation as the ionospheric delay is one of the major errors in GNSS. On the other hand, GNSS observations are widely used to determine the amount of ionospheric total electron content (TEC). An important aspect in the electron content estimation at regional and global scale is adopting the appropriate interpolation strategy. In this paper we propose and validate a new method for regional TEC modeling based on least squares collocation (LSC) with noise variance estimation. This method allows for providing accurate TEC maps with high spatial and temporal resolution. Such maps may be used to support precise GNSS positioning and navigation, e.g. in RTK mode and also in the ionosphere studies. To test applicability of new TEC maps to positioning, double-difference ionospheric corrections were derived from the maps and their accuracy was analyzed. In addition, the corrections were applied to GNSS positioning and validated in ambiguity resolution domain. The tests were carried out during a strong ionospheric storm when the ionosphere is particularly difficult to model. The performance of the new approach was compared to IGS and UPC global, and CODE regional TEC maps. The results showed an advantage of our solution with resulting accuracy of the relative ionospheric corrections usually better than 10 cm, even during the ionospheric disturbances. This proves suitability of our regional TEC maps for, e.g. supporting fast ambiguity resolution in kinematic GNSS positioning.
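
    The LSC prediction principle can be sketched in one dimension: with data covariance C_pp, a noise variance on the diagonal, and cross-covariance C_sp between the prediction point and the data, the prediction is s = C_sp (C_pp + sigma^2 I)^(-1) l. The Gaussian covariance model and all numbers below are illustrative assumptions, not the paper's model:

    ```python
    import math

    # 1-D least-squares collocation (LSC) sketch with a noise variance term.
    def gauss_cov(d, variance=1.0, corr_len=5.0):
        # illustrative Gaussian covariance model, distance d in degrees
        return variance * math.exp(-(d / corr_len) ** 2)

    def solve(A, b):
        # Gaussian elimination with partial pivoting, for small systems
        n = len(b)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return x

    def lsc_predict(xs, tec, x_new, noise_var=0.01):
        n = len(xs)
        Cpp = [[gauss_cov(abs(xs[i] - xs[j])) + (noise_var if i == j else 0.0)
                for j in range(n)] for i in range(n)]
        w = solve(Cpp, tec)                 # (C_pp + sigma^2 I)^(-1) l
        return sum(gauss_cov(abs(x_new - xs[i])) * w[i] for i in range(n))

    stations = [0.0, 4.0, 10.0]        # station longitudes, degrees
    tec_obs = [12.0, 15.0, 11.0]       # observed TEC, TECU
    print(lsc_predict(stations, tec_obs, 5.0))   # TEC interpolated off-station
    ```

    The noise variance on the diagonal is what distinguishes collocation from exact interpolation: at a station the prediction approaches, but does not exactly reproduce, the noisy observation.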

  9. Antisense sequencing improves the accuracy and precision of A-to-I editing measurements using the peak height ratio method

    Directory of Open Access Journals (Sweden)

    Rinkevich Frank D

    2012-01-01

    Full Text Available Abstract Background A-to-I RNA editing is found in all phyla of animals and contributes to transcript diversity that may have profound impacts on behavior and physiology. Many transcripts of genes involved in axonal conductance, synaptic transmission and modulation are the targets of A-to-I RNA editing. There are a number of methods to measure the extent of A-to-I RNA editing, but they are generally costly and time consuming. One way to determine the frequency of A-to-I RNA editing is the peak height ratio method, which compares the size of peaks on electropherograms that represent unedited and edited sites. Findings Sequencing of 4 editing sites of the Dα6 nicotinic acetylcholine receptor subunit with an antisense primer (which uses T/C peaks to measure unedited and edited sites, respectively) showed very accurate and precise measurements of A-to-I RNA editing. The accuracy and precision were excellent for all editing sites, including those edited with high or low frequencies. The frequency of A-to-I RNA editing was comparable to the editing frequency as measured by clone counting from the same sample. Sequencing these same sites with the sense primer (which uses A/G peaks) yielded inaccurate and imprecise measurements. Conclusions We have validated and improved the accuracy and precision of the peak height ratio method to measure the frequency of A-to-I RNA editing, and shown that results are primer specific. Thus, the correct sequencing primer must be utilized for the most dependable data. When compared to other methods used to measure the frequency of A-to-I RNA editing, the major benefits of the peak height ratio method are that it is inexpensive, fast, non-labor-intensive and easily adaptable to many laboratory and field settings.
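
    With the antisense primer, the unedited (A) state reads as a T peak and the edited state (inosine, read as G) as a C peak, so the editing frequency is a simple peak-height ratio. A minimal sketch with illustrative peak heights:

    ```python
    # Peak height ratio method: editing frequency is the edited-state peak
    # height over the summed edited + unedited peak heights.
    # Peak heights below are illustrative, not the study's electropherograms.
    def editing_frequency(edited_peak, unedited_peak):
        return edited_peak / (edited_peak + unedited_peak)

    t_peak = 300.0   # antisense T peak -> unedited (A) state
    c_peak = 700.0   # antisense C peak -> edited (I, read as G) state
    print(editing_frequency(c_peak, t_peak))   # 0.7 -> 70% edited
    ```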

  10. Modeling reaction noise with a desired accuracy by using the X level approach reaction noise estimator (XARNES) method.

    Science.gov (United States)

    Konkoli, Zoran

    2012-07-21

    A novel computational method for modeling reaction noise characteristics has been suggested. The method can be classified as a moment closure method. The approach is based on the concept of correlation forms, which are used for describing spatially extended many-body problems where particle numbers change in space and time. Here, it is shown how the formalism of spatially extended correlation forms can be adapted to study well-mixed reaction systems. Stochastic fluctuations in particle numbers are described by selectively capturing correlation effects up to the desired order, ξ. The method is referred to as the ξ-level Approximation Reaction Noise Estimator method (XARNES). For example, the ξ=1 description is equivalent to the mean field theory (first-order effects), the ξ=2 case corresponds to the previously developed PARNES method (pair effects), etc. The main idea is that inclusion of higher-order correlation effects should lead to better (more accurate) results. Several models were used to test the method: two versions of a simple complex formation model, the Michaelis-Menten model of enzymatic kinetics, the smallest bistable reaction network, a gene expression network with negative feedback, and a random large network. It was explicitly demonstrated that an increase in ξ indeed improves accuracy in all cases investigated. The approach has been implemented as automatic software using the Mathematica programming language. The user only needs to input reaction rates, stoichiometry coefficients, and the desired level of computation ξ.
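
    Only the ξ = 1 (mean-field) level lends itself to a compact illustration. A sketch for a simple complex formation model A + B ⇌ C integrated with forward Euler, with illustrative rates and counts; the XARNES software itself computes the higher-order correlation corrections that this level ignores:

    ```python
    # Mean-field (xi = 1) dynamics of A + B <-> C: first-moment ODEs only,
    # no fluctuation corrections. Rates and initial counts are illustrative.
    def mean_field_complex(a0, b0, c0, kf, kr, dt=1e-3, steps=20000):
        a, b, c = a0, b0, c0
        for _ in range(steps):
            flux = kf * a * b - kr * c      # net rate of complex formation
            a, b, c = a - flux * dt, b - flux * dt, c + flux * dt
        return a, b, c

    a, b, c = mean_field_complex(a0=10.0, b0=8.0, c0=0.0, kf=0.1, kr=0.05)
    print(a, b, c)   # converges to the equilibrium where kf*a*b = kr*c
    ```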

  11. A new 100-m Digital Elevation Model of the Antarctic Peninsula derived from ASTER Global DEM: methods and accuracy assessment

    Directory of Open Access Journals (Sweden)

    A. J. Cook

    2012-10-01

    Full Text Available A high resolution surface topography Digital Elevation Model (DEM) is required to underpin studies of the complex glacier system on the Antarctic Peninsula. A complete DEM with better than 200 m pixel size and high positional and vertical accuracy would enable mapping of all significant glacial basins and provide a dataset for glacier morphology analyses. No currently available DEM meets these specifications. We present a new 100-m DEM of the Antarctic Peninsula (63–70° S), based on ASTER Global Digital Elevation Model (GDEM) data. The raw GDEM products are of high quality on the rugged terrain and coastal regions of the Antarctic Peninsula and have good geospatial accuracy, but they also contain large errors on ice-covered terrain and we seek to minimise these artefacts. Conventional data correction techniques do not work, so we have developed a method that significantly improves the dataset, smoothing the erroneous regions and hence creating a DEM with a pixel size of 100 m that will be suitable for many glaciological applications. We evaluate the new DEM using ICESat-derived elevations, and perform horizontal and vertical accuracy assessments based on GPS positions, SPOT-5 DEMs and the Landsat Image Mosaic of Antarctica (LIMA) imagery. The new DEM has a mean elevation difference of −4 m (±25 m RMSE) from ICESat (compared to −13 m mean and ±97 m RMSE for the original ASTER GDEM), and a horizontal error of less than 2 pixels, although elevation accuracies are lower on mountain peaks and steep-sided slopes. The correction method significantly reduces errors on low relief slopes and therefore the DEM can be regarded as suitable for topographical studies such as measuring the geometry and ice flow properties of glaciers on the Antarctic Peninsula. The DEM is available for download from the NSIDC website: http://nsidc.org/data/nsidc-0516.html

  Accuracy of Milk Yield Estimation in Dairy Cattle from Monthly Records by Regression Method

    Directory of Open Access Journals (Sweden)

    I.S. Kuswahyuni

    2014-10-01

    Full Text Available This experiment was conducted to estimate actual milk yield and to compare the estimation accuracy of cumulative monthly records against actual milk yield by the regression method. Materials used in this experiment were records relating to milk yield and pedigree. The obtained data were categorized into 2 groups: Age Group I (AG I), cows calving at < 36 months old (33 cows with 33 lactation records), and AG II, cows calving at ≥ 36 months old (44 cows with 105 lactation records). The first three to seven months of data were used to estimate actual milk yield. Results showed that the mean milk yield/head/lactation in AG I (2479.5 ± 461.5 kg) was lower than that of AG II (2989.7 ± 526.8 kg). Estimated milk yields for three to seven months in AG I were 2455.6 ± 419.7, 2455.7 ± 432.9, 2455.5 ± 446.4, 2455.6 ± 450.8 and 2455.5 ± 459.3 kg, respectively, while in AG II they were 2972.3 ± 479.8, 2972.0 ± 497.2, 2972.4 ± 509.6, 2972.5 ± 523.6 and 2972.5 ± 535.1 kg, respectively. Correlation coefficients between estimated and actual milk yield in AG I were 0.79, 0.82, 0.86, 0.86 and 0.88, respectively, while in AG II they were 0.65, 0.66, 0.67, 0.69 and 0.72, respectively. In conclusion, the mean estimated milk yield in AG I was lower than in AG II. The best record for estimating actual milk yield in both AG I and AG II was the seven-month cumulative record.
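
    The record-based estimation idea reduces to fitting a simple linear regression of full-lactation yield on cumulative early-month yield and reporting the correlation between estimated and actual yields. A sketch with illustrative records, not the study's data:

    ```python
    import math

    # Ordinary least-squares fit of actual lactation yield on cumulative
    # early-month yield, plus the Pearson correlation used to judge accuracy.
    def linreg(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        slope = sxy / sxx
        return my - slope * mx, slope           # intercept, slope

    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in y)
        return sxy / math.sqrt(sxx * syy)

    cum7 = [1650, 1800, 1500, 1950, 1700]      # 7-month cumulative yields, kg
    actual = [2450, 2700, 2300, 2950, 2600]    # full-lactation yields, kg

    intercept, slope = linreg(cum7, actual)
    estimated = [intercept + slope * x for x in cum7]
    print(pearson_r(estimated, actual))        # high correlation
    ```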

  12. Identifying the procedural gap and improved methods for maintaining accuracy during total hip arthroplasty.

    Science.gov (United States)

    Gross, Allan; Muir, Jeffrey M

    2016-09-01

    Osteoarthritis is a ubiquitous condition, affecting 26 million Americans each year, with up to 17% of adults over age 75 suffering from one variation of arthritis. The hip is one of the most commonly affected joints and while there are conservative options for treatment, as symptoms progress, many patients eventually turn to surgery to manage their pain and dysfunction. Early surgical options such as osteotomy or arthroscopy are reserved for younger, more active patients with less severe disease and symptoms. Total hip arthroplasty offers a viable solution for patients with severe degenerative changes; however, post-surgical discrepancies in leg length, offset and component malposition are common and cause significant complications. Such discrepancies are associated with consequences such as low back pain, neurological deficits, instability and overall patient dissatisfaction. Current methods for managing leg length and offset during hip arthroplasty are either inaccurate and susceptible to error or are cumbersome, expensive and lengthen surgical time. There is currently no viable option that provides accurate, real-time data to surgeons regarding leg length, offset and cup position in a cost-effective manner. As such, we hypothesize that a procedural gap exists in hip arthroplasty, a gap into which fall a large majority of arthroplasty patients who are at increased risk of complications following surgery. These complications and associated treatments place significant stress on the healthcare system. The costs associated with addressing leg length and offset discrepancies can be minor, requiring only heel lifts and short-term rehabilitation, but can also be substantial, with revision hip arthroplasty costs of up to $54,000 per procedure. The need for a cost-effective, simple-to-use and unobtrusive technology to address this procedural gap in hip arthroplasty and improve patient outcomes is of increasing importance. Given the aging of the population, the projected

  13. Optics of the human cornea influence the accuracy of stereo eye-tracking methods: a simulation study.

    Science.gov (United States)

    Barsingerhorn, A D; Boonstra, F N; Goossens, H H L M

    2017-02-01

    Current stereo eye-tracking methods model the cornea as a sphere with one refractive surface. However, the human cornea is slightly aspheric and has two refractive surfaces. Here we used ray-tracing and the Navarro eye model to study how these optical properties affect the accuracy of different stereo eye-tracking methods. We found that pupil size, gaze direction and head position all influence the reconstruction of gaze. Even in the best case, the resulting errors range up to ±1.0 degrees. This shows that stereo eye-tracking may be an option if reliable calibration is not possible, but the applied eye model should account for the actual optics of the cornea.

  14. Machine learning methods for empirical streamflow simulation: a comparison of model accuracy, interpretability, and uncertainty in seasonal watersheds

    Science.gov (United States)

    Shortridge, Julie E.; Guikema, Seth D.; Zaitchik, Benjamin F.

    2016-07-01

    In the past decade, machine learning methods for empirical rainfall-runoff modeling have seen extensive development and been proposed as a useful complement to physical hydrologic models, particularly in basins where data to support process-based models are limited. However, the majority of research has focused on a small number of methods, such as artificial neural networks, despite the development of multiple other approaches for non-parametric regression in recent years. Furthermore, this work has often evaluated model performance based on predictive accuracy alone, while not considering broader objectives, such as model interpretability and uncertainty, that are important if such methods are to be used for planning and management decisions. In this paper, we use multiple regression and machine learning approaches (including generalized additive models, multivariate adaptive regression splines, artificial neural networks, random forests, and M5 cubist models) to simulate monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia and compare their performance in terms of predictive accuracy, error structure and bias, model interpretability, and uncertainty when faced with extreme climate conditions. While the relative predictive performance of models differed across basins, data-driven approaches were able to achieve reduced errors when compared to physical models developed for the region. Methods such as random forests and generalized additive models may have advantages in terms of visualization and interpretation of model structure, which can be useful in providing insights into physical watershed function. However, the uncertainty associated with model predictions under extreme climate conditions should be carefully evaluated, since certain models (especially generalized additive models and multivariate adaptive regression splines) become highly variable when faced with high temperatures.

  15. Assessing the Accuracy of Two Enhanced Sampling Methods Using EGFR Kinase Transition Pathways: The Influence of Collective Variable Choice.

    Science.gov (United States)

    Pan, Albert C; Weinreich, Thomas M; Shan, Yibing; Scarpazza, Daniele P; Shaw, David E

    2014-07-01

    Structurally elucidating transition pathways between protein conformations gives deep mechanistic insight into protein behavior but is typically difficult. Unbiased molecular dynamics (MD) simulations provide one solution, but their computational expense is often prohibitive, motivating the development of enhanced sampling methods that accelerate conformational changes in a given direction, embodied in a collective variable. The accuracy of such methods is unclear for complex protein transitions, because obtaining unbiased MD data for comparison is difficult. Here, we use long-time scale, unbiased MD simulations of epidermal growth factor receptor kinase deactivation as a complex biological test case for two widely used methods-steered molecular dynamics (SMD) and the string method. We found that common collective variable choices, based on the root-mean-square deviation (RMSD) of the entire protein, prevented the methods from producing accurate paths, even in SMD simulations on the time scale of the unbiased transition. Using collective variables based on the RMSD of the region of the protein known to be important for the conformational change, however, enabled both methods to provide a more accurate description of the pathway in a fraction of the simulation time required to observe the unbiased transition.
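The abstract attributes the accuracy difference to restricting the RMSD-based collective variable to the region that actually rearranges. A minimal pure-Python sketch (hypothetical coordinates, structures assumed already superposed) shows why a region-focused RMSD is the more sensitive progress variable:

```python
def rmsd(coords_a, coords_b):
    """RMSD between two already-superposed coordinate sets (lists of xyz tuples)."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return (sq / len(coords_a)) ** 0.5

# Hypothetical 100-atom structure: most atoms unchanged, a small
# "functionally important" segment of 10 atoms moves by 5 units.
ref = [(float(i), 0.0, 0.0) for i in range(100)]
moved = [(x, y, z) for (x, y, z) in ref]
for i in range(90, 100):
    x, y, z = moved[i]
    moved[i] = (x, y + 5.0, z)

print(round(rmsd(ref, moved), 3))            # whole-structure RMSD: 1.581 (diluted)
print(round(rmsd(ref[90:], moved[90:]), 3))  # region-focused RMSD: 5.0
```

The whole-structure RMSD averages the displacement over 90 static atoms, so the bias applied along that variable largely pushes on atoms that should not move; the region-focused variable reports the full magnitude of the relevant change.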

  16. Novel molecular and computational methods improve the accuracy of insertion site analysis in Sleeping Beauty-induced tumors.

    Directory of Open Access Journals (Sweden)

    Benjamin T Brett

    Full Text Available The recent development of the Sleeping Beauty (SB system has led to the development of novel mouse models of cancer. Unlike spontaneous models, SB causes cancer through the action of mutagenic transposons that are mobilized in the genomes of somatic cells to induce mutations in cancer genes. While previous methods have successfully identified many transposon-tagged mutations in SB-induced tumors, limitations in DNA sequencing technology have prevented a comprehensive analysis of large tumor cohorts. Here we describe a novel method for producing genetic profiles of SB-induced tumors using Illumina sequencing. This method has dramatically increased the number of transposon-induced mutations identified in each tumor sample to reveal a level of genetic complexity much greater than previously appreciated. In addition, Illumina sequencing has allowed us to more precisely determine the depth of sequencing required to obtain a reproducible signature of transposon-induced mutations within tumor samples. The use of Illumina sequencing to characterize SB-induced tumors should significantly reduce sampling error that undoubtedly occurs using previous sequencing methods. As a consequence, the improved accuracy and precision provided by this method will allow candidate cancer genes to be identified with greater confidence. Overall, this method will facilitate ongoing efforts to decipher the genetic complexity of the human cancer genome by providing more accurate comparative information from Sleeping Beauty models of cancer.

  17. System Accuracy Evaluation of Four Systems for Self-Monitoring of Blood Glucose Following ISO 15197 Using a Glucose Oxidase and a Hexokinase-Based Comparison Method.

    Science.gov (United States)

    Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido

    2015-04-14

The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared with the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared with the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems complied with the accuracy criteria of ISO 15197:2003 for the evaluated test strip lots. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, demonstrating that the applied comparison method/system and lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
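The ISO 15197:2013 system accuracy criteria referenced above reduce to a simple per-sample check. This is a minimal illustration of the published limits (results within ±15 mg/dL at reference glucose below 100 mg/dL, within ±15 % at or above it, met by at least 95 % of results); the reference/meter value pairs are hypothetical:

```python
def within_iso_2013(reference, meter):
    """ISO 15197:2013 (section 6.3.3) per-sample limit: +/-15 mg/dL below
    100 mg/dL reference glucose, +/-15 % at or above it."""
    if reference < 100.0:
        return abs(meter - reference) <= 15.0
    return abs(meter - reference) <= 0.15 * reference

def passes(pairs):
    """System-level criterion: at least 95 % of results within limits."""
    ok = sum(within_iso_2013(r, m) for r, m in pairs)
    return ok / len(pairs) >= 0.95

data = [(80, 88), (95, 112), (150, 160), (250, 240)]  # hypothetical (ref, meter) in mg/dL
print([within_iso_2013(r, m) for r, m in data])  # → [True, False, True, True]
```

The relative bias discussed for lot-to-lot comparisons is simply `(meter - reference) / reference` averaged over samples, so a systematic offset between strip lots shifts results toward one edge of these limits.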

  18. Using the significant dust deposition event on the glaciers of Mt. Elbrus, Caucasus Mountains, Russia on 5 May 2009 to develop a method for dating and

    Directory of Open Access Journals (Sweden)

    G. Nosenko

    2013-02-01

Full Text Available A significant desert dust deposition event occurred on Mt. Elbrus, Caucasus Mountains, Russia on 5 May 2009, where the deposited dust later appeared as a brown layer in the snow pack. An examination of dust transportation history and analysis of chemical and physical properties of the deposited dust were used to develop a new approach for high-resolution "provenancing" of dust deposition events recorded in snow pack using multiple independent techniques. A combination of SEVIRI red-green-blue composite imagery, MODIS atmospheric optical depth fields derived using the Deep Blue algorithm, air mass trajectories derived with the HYSPLIT model and analysis of meteorological data enabled identification of dust source regions with high temporal (hours) and spatial (ca. 100 km) resolution. Dust deposited on 5 May 2009 originated in the foothills of the Djebel Akhdar in eastern Libya, where dust sources were activated by the intrusion of cold air from the Mediterranean Sea and a Saharan low pressure system, and was transported to the Caucasus along the eastern Mediterranean coast, Syria and Turkey. Particles with an average diameter below 8 μm accounted for 90% of the measured particles in the sample (mean 3.58 μm, median 2.48 μm). The chemical signature of this long-travelled dust was significantly different from that of locally-produced dust and, notably in hematite concentration, close to that of soils collected in a palaeolake in the source region. Potential addition of dust from a secondary source in northern Mesopotamia introduced uncertainty in the "provenancing" of dust from this event. Nevertheless, the approach adopted here enables other dust horizons in the snowpack to be linked to specific dust transport events recorded in remote sensing and meteorological data archives.

  19. Dental and dental hygiene students' diagnostic accuracy in oral radiology: effect of diagnostic strategy and instructional method.

    Science.gov (United States)

    Baghdady, Mariam T; Carnahan, Heather; Lam, Ernest W N; Woods, Nicole N

    2014-09-01

There has been much debate surrounding diagnostic strategies and the most appropriate training models for novices in oral radiology. It has been argued that an analytic approach, using a step-by-step analysis of the radiographic features of an abnormality, is ideal. Alternative research suggests that novices can successfully employ non-analytic reasoning. Many of these studies do not take instructional methodology into account. This study evaluated the effectiveness of non-analytic and analytic strategies in radiographic interpretation and explored the relationship between instructional methodology and diagnostic strategy. Second-year dental and dental hygiene students were taught four radiographic abnormalities using basic science instructions or a step-by-step algorithm. The students were tested on diagnostic accuracy and memory immediately after learning and one week later. A total of seventy-three students completed both immediate and delayed sessions and were included in the analysis. Students were randomly divided into two test conditions: one group provided a diagnostic hypothesis for the image and then identified specific features to support it, while the other group first identified features and then provided a diagnosis. Participants in the diagnosis-first condition (non-analytic reasoning) had higher diagnostic accuracy than those in the features-first condition (analytic reasoning), regardless of their learning condition. No main effect of learning condition or interaction with diagnostic strategy was observed. Educators should be mindful of the potential influence of analytic and non-analytic approaches on the effectiveness of the instructional method.

  1. A three axis turntable's online initial state measurement method based on the high-accuracy laser gyro SINS

    Science.gov (United States)

    Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu

    2016-10-01

As an indispensable piece of equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of an INS, we need to accurately measure the initial state of the turntable. However, the traditional measuring method requires a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test process is complex and inefficient. It is therefore relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by the laser gyro strapdown inertial navigation system (SINS) after fine alignment, we can use it as the attitude reference for the initial state measurement of a three-axis turntable. Exploiting the principle that a fixed rotation vector increment is not affected by the measuring point, we use the laser gyro INS and the encoder of the turntable to obtain the attitudes of the turntable mounting platform. In this way, high-accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable has been achieved.

  2. Improving the accuracy of simulation of radiation-reaction effects with implicit Runge-Kutta-Nyström methods.

    Science.gov (United States)

    Elkina, N V; Fedotov, A M; Herzing, C; Ruhl, H

    2014-05-01

    The Landau-Lifshitz equation provides an efficient way to account for the effects of radiation reaction without acquiring the nonphysical solutions typical for the Lorentz-Abraham-Dirac equation. We solve the Landau-Lifshitz equation in its covariant four-vector form in order to control both the energy and momentum of radiating particles. Our study reveals that implicit time-symmetric collocation methods of the Runge-Kutta-Nyström type are superior in accuracy and better at maintaining the mass-shell condition than their explicit counterparts. We carry out an extensive study of numerical accuracy by comparing the analytical and numerical solutions of the Landau-Lifshitz equation. Finally, we present the results of the simulation of particle scattering by a focused laser pulse. Due to radiation reaction, particles are less capable of penetrating into the focal region compared to the case where radiation reaction is neglected. Our results are important for designing forthcoming experiments with high intensity laser fields.
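The advantage of implicit time-symmetric schemes can be illustrated with the simplest member of that family, the implicit midpoint rule, applied here to a harmonic oscillator rather than the Landau-Lifshitz equation itself (a deliberate simplification). The sketch solves the implicit stage by fixed-point iteration and checks that the quadratic energy invariant is preserved, the analogue of the mass-shell condition discussed above:

```python
def f(y):
    # harmonic oscillator as a stand-in ODE: y = (x, v), x' = v, v' = -x
    x, v = y
    return (v, -x)

def implicit_midpoint_step(y, h, tol=1e-13, max_iter=50):
    """One time-symmetric implicit midpoint step, y_{n+1} = y_n + h f((y_n + y_{n+1})/2),
    solved by fixed-point iteration."""
    x, v = y
    yn = y
    for _ in range(max_iter):
        mx, mv = (x + yn[0]) / 2.0, (v + yn[1]) / 2.0
        fx, fv = f((mx, mv))
        new = (x + h * fx, v + h * fv)
        if abs(new[0] - yn[0]) + abs(new[1] - yn[1]) < tol:
            return new
        yn = new
    return yn

y = (1.0, 0.0)
e0 = 0.5 * (y[0] ** 2 + y[1] ** 2)
for _ in range(1000):
    y = implicit_midpoint_step(y, 0.1)
e1 = 0.5 * (y[0] ** 2 + y[1] ** 2)
print(abs(e1 - e0) < 1e-6)  # energy preserved to iteration tolerance → True
```

The implicit midpoint rule conserves quadratic invariants exactly (up to the nonlinear-solver tolerance), whereas an explicit scheme of the same order exhibits secular energy drift, which is the qualitative behaviour the abstract reports for mass-shell conservation.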

  3. Effect of two relining methods on the dimensional accuracy of the posterior palatal seal: an in vitro study

    Directory of Open Access Journals (Sweden)

    Sajjadi Farnaz Sadat

    2015-05-01

Full Text Available Background and Aims: The posterior palatal seal is one of the most important areas supporting a maxillary complete denture. The aim of this study was to evaluate the dimensional accuracy of direct and indirect relining methods in the maxillary posterior palatal seal area.   Materials and Methods: A maxillary edentulous model was selected. A 1.5 mm layer of wax was placed on the model to create a space for the relining material, an impression was made with silicone material, and 20 casts were prepared. After waxing the casts and flasking, 20 dentures were prepared. A direct relining method (chairside, with GC Reline) and an indirect method (with Acrosoft-TC, processed with Acropars 100) were evaluated. The relined bases were placed on the model and the gaps between them at five points (two on the ridge, two in the deepest part of the palate, and one in the middle of the palate) were measured with a stereo microscope; each measurement was repeated 5 times and the mean dimensional change was calculated. To compare the groups, data were analyzed using multivariate analysis.   Results: The gap in the P.P.S. area ranged between 740.86 and 2356.49. The direct method (1011.81±60.56) showed a smaller gap than the indirect method (2056.8±13.13), and the difference between the methods was statistically significant (P<0.0001).   Conclusion: The direct method showed a smaller gap than the indirect method; with the direct method, adaptation of the denture in the P.P.S. area would be better.

  4. Morphometric measurements of dragonfly wings: the accuracy of pinned, scanned and detached measurement methods.

    Science.gov (United States)

    Johnson, Laura; Mantle, Beth L; Gardner, Janet L; Backwell, Patricia R Y

    2013-01-01

    Large-scale digitization of museum specimens, particularly of insect collections, is becoming commonplace. Imaging increases the accessibility of collections and decreases the need to handle individual, often fragile, specimens. Another potential advantage of digitization is to make it easier to conduct morphometric analyses, but the accuracy of such methods needs to be tested. Here we compare morphometric measurements of scanned images of dragonfly wings to those obtained using other, more traditional, methods. We assume that the destructive method of removing and slide-mounting wings provides the most accurate method of measurement because it eliminates error due to wing curvature. We show that, for dragonfly wings, hand measurements of pinned specimens and digital measurements of scanned images are equally accurate relative to slide-mounted hand measurements. Since destructive slide-mounting is unsuitable for museum collections, and there is a risk of damage when hand measuring fragile pinned specimens, we suggest that the use of scanned images may also be an appropriate method to collect morphometric data from other collected insect species.

  5. Novel Scalable 3-D MT Inverse Solver

    Science.gov (United States)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine a highly-scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits adjoint sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem set up. To parameterize an inverse domain a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments carried out at different platforms ranging from modern laptops to high-performance clusters demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
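The gradient-type optimization with adjoint-computed gradients described above can be sketched, in heavily simplified form, as a regularized least-squares inversion of a toy 2-parameter linear forward operator. All names and values here are illustrative (none are from the extrEMe code), and plain gradient descent stands in for the quasi-Newton update:

```python
def forward(m):
    """Toy linear forward operator F m, with F = [[1, 1], [1, -1], [2, 0]]."""
    return [m[0] + m[1], m[0] - m[1], 2.0 * m[0]]

def misfit_gradient(m, d, lam):
    """Gradient of phi(m) = ||F m - d||^2 + lam * ||m||^2.
    The F^T r product below plays the role that the adjoint-sources
    computation plays for fast misfit gradients in a real MT inversion."""
    r = [fm - di for fm, di in zip(forward(m), d)]
    g0 = 2.0 * (r[0] + r[1] + 2.0 * r[2]) + 2.0 * lam * m[0]
    g1 = 2.0 * (r[0] - r[1]) + 2.0 * lam * m[1]
    return [g0, g1]

d = [3.0, -1.0, 2.0]           # synthetic data generated by the true model (1, 2)
m = [0.0, 0.0]                 # starting model
for _ in range(200):           # gradient descent as a stand-in for quasi-Newton
    g = misfit_gradient(m, d, lam=1e-6)
    m = [m[0] - 0.1 * g[0], m[1] - 0.1 * g[1]]
print([round(v, 3) for v in m])  # → [1.0, 2.0]
```

The key property the abstract relies on is that the gradient costs only one extra forward-type computation (the adjoint), independent of the number of model parameters, which is what makes iterative 3-D inversion tractable.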

  6. Analysis of MT-45, a Novel Synthetic Opioid, in Human Whole Blood by LC-MS-MS and its Identification in a Drug-Related Death.

    Science.gov (United States)

    Papsun, Donna; Krywanczyk, Alison; Vose, James C; Bundock, Elizabeth A; Logan, Barry K

    2016-05-01

    MT-45 (1-cyclohexyl-4-(1,2-diphenylethyl)piperazine) is just one of the many novel psychoactive substances (NPS) to have reached the recreational drug market in the twenty-first century; it is however, one of the first designer opioids to achieve some degree of popularity, in a market currently dominated by synthetic cannabinoids and designer stimulants. A single fatality involving MT-45 and etizolam is described. A method for the quantitation of MT-45 in whole blood using liquid chromatography-tandem mass spectrometry was developed and validated. The linear range was determined to be 1.0-100 ng/mL with a detection limit of 1.0 ng/mL, and the method met the requirements for acceptable linearity, precision and accuracy. After analyzing the sample on dilution and by standard addition, the concentration of MT-45 in the decedent's blood was determined to be 520 ng/mL, consistent with other concentrations of MT-45 reported in drug-related fatalities. Etizolam was present at a concentration of 35 ng/mL. This case illustrates the importance of considering non-traditional drugs in unexplained apparent drug-related deaths. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
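The standard-addition quantitation mentioned above fits a line to instrument response versus spiked concentration and extrapolates to the x-axis; the magnitude of the x-intercept is the original concentration. A minimal sketch, with synthetic response values chosen so the result matches the reported 520 ng/mL (the series itself is hypothetical, not from the paper):

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical standard-addition series: spiked amount (ng/mL) vs response
added = [0.0, 100.0, 200.0, 300.0]
response = [5.2, 6.2, 7.2, 8.2]            # synthetic, perfectly linear

slope, intercept = linfit(added, response)
concentration = intercept / slope           # |x-intercept| = original concentration
print(round(concentration, 1))              # → 520.0
```

Standard addition is attractive in forensic casework precisely because it corrects for matrix effects in the decedent's blood that a plain calibration curve would not.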

  7. Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Jan Hesthaven

    2012-02-06

    Final report for DOE Contract DE-FG02-98ER25346 entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator Jan S. Hesthaven Division of Applied Mathematics Brown University, Box F Providence, RI 02912 Jan.Hesthaven@Brown.edu February 6, 2012 Note: This grant was originally awarded to Professor David Gottlieb and the majority of the work envisioned reflects his original ideas. However, when Prof Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.

  8. FLOTAC for the diagnosis of Hymenolepis spp. infection: proof-of-concept and comparing diagnostic accuracy with other methods.

    Science.gov (United States)

    Steinmann, Peter; Cringoli, Giuseppe; Bruschi, Fabrizio; Matthys, Barbara; Lohourignon, Laurent K; Castagna, Barbara; Maurelli, Maria P; Morgoglione, Maria E; Utzinger, Jürg; Rinaldi, Laura

    2012-08-01

Hymenolepis nana is the most common cestode parasitizing humans, yet it is under-diagnosed. We determined the optimal flotation solution (FS) for the diagnosis of this intestinal parasite with the FLOTAC method, and compared its diagnostic accuracy with an ether-concentration technique and the Kato-Katz method. Zinc sulphate (specific gravity 1.20) proved to be the best-performing FS. Using this FS, we detected 65 H. nana infections among 234 fixed fecal samples from Tajik and Sahrawi children (prevalence 27.8 %). The ether-concentration technique detected 40 infections (prevalence 17.1 %) in the same samples. Considering the combined results as a reference, the sensitivities of FLOTAC and ether-concentration were 95.6 % and 58.8 %, respectively. The Kato-Katz method resulted in a prevalence of only 8.7 %. In terms of eggs per gram of stool, FLOTAC detected significantly more eggs than the other techniques. FLOTAC additionally detected Hymenolepis diminuta infections among 302 fecal samples, whereas only five samples were found positive with the Kato-Katz technique. We conclude that FLOTAC is an accurate coprodiagnostic technique for H. nana and H. diminuta, two species which join a growing list of intestinal parasites that can be reliably diagnosed by this technique.
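The reported sensitivities follow from simple proportions against the combined reference standard. In this sketch the count of 68 reference positives is inferred from the stated figures (65 / 0.956 ≈ 68) and should be treated as an assumption, not a value given in the abstract:

```python
def sensitivity(detected, reference_positives):
    """Sensitivity (%) against a reference standard."""
    return 100.0 * detected / reference_positives

# Assumed: 68 reference positives, back-calculated from the reported sensitivities.
reference_positives = 68
print(round(sensitivity(65, reference_positives), 1))  # FLOTAC → 95.6
print(round(sensitivity(40, reference_positives), 1))  # ether-concentration → 58.8
print(round(100.0 * 65 / 234, 1))                      # FLOTAC prevalence → 27.8
```

Using the combination of all methods as the reference is a common workaround when no perfect gold standard exists, though it tends to flatter whichever method contributes the most detections.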

  9. Accuracy of Subjective Performance Appraisal is Not Modulated by the Method Used by the Learner During Motor Skill Acquisition.

    Science.gov (United States)

    Patterson, Jae T; McRae, Matthew; Lai, Sharon

    2016-04-01

    The present experiment examined whether the method of subjectively appraising motor performance during skill acquisition would differentially strengthen performance appraisal capabilities and subsequent motor learning. Thirty-six participants (18 men and 18 women; M age = 20.8 years, SD = 1.0) learned to execute a serial key-pressing task at a particular overall movement time (2550 ms). Participants were randomly separated into three groups: the Generate group estimated their overall movement time then received knowledge of results of their actual movement time; the Choice group selected their perceived movement time from a list of three alternatives; the third group, the Control group, did not self-report their perceived movement time and received knowledge of results of their actual movement time on every trial. All groups practiced 90 acquisition trials and 30 no knowledge of results trials in a delayed retention test. Results from the delayed retention test showed that both methods of performance appraisal (Generate and Choice) facilitated superior motor performance and greater accuracy in assessing their actual motor performance compared with the control condition. Therefore, the processing required for accurate appraisal of performance was strengthened, independent of performance appraisal method.

  10. Three optimized and validated (using accuracy profiles) LC methods for the determination of pentamidine and new analogs in rat plasma.

    Science.gov (United States)

    Hambÿe, S; Stanicki, D; Colet, J-M; Aliouat, E M; Vanden Eynde, J J; Blankert, B

    2011-01-15

    Three novel LC-UV methods for the determination of pentamidine (PTMD) and two of its new analogs in rat plasma are described. The chromatographic conditions (wavelength, acetonitrile percentage in the mobile phase, internal standard) were optimized to have an efficient selectivity. A pre-step of extraction was simultaneously developed for each compound. For PTMD, a solid phase extraction (SPE) with Oasis(®) HLB cartridges was selected, while for the analogs we used protein precipitation with acetonitrile. SPE for PTMD gave excellent results in terms of extraction yield (99.7 ± 2.8) whereas the recoveries for the analogs were not so high but were reproducible as well (64.6 ± 2.6 and 36.8 ± 1.6 for analog 1 and 2, respectively). By means of a recent strategy based on accuracy profiles (β-expectation tolerance interval), the methods were successfully validated. β was fixed at 95% and the acceptability limits at ± 15% as recommended by the FDA. The method was successfully validated for PTMD (29.6-586.54 ng/mL), analog 1 (74.23-742.3 ng/mL) and analog 2 (178.12-890.6 ng/mL). The first concentration level tested was considered as the LLOQ (lower limit of quantification) for PTMD and analog 1 whereas for analog 2, the LLOQ was not the first level tested and was raised to 178.12 ng/mL.
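A β-expectation tolerance interval of the kind used in accuracy profiles can be sketched as follows. This simplified version computes relative bias from replicates at one concentration level and uses a normal quantile as a large-sample stand-in for the exact Student t factor of a full accuracy-profile computation; the replicate data are hypothetical:

```python
import math
from statistics import mean, stdev

def beta_expectation_interval(results, nominal, z=1.96):
    """Approximate 95 % beta-expectation tolerance interval on relative bias (%).
    z = 1.96 (normal quantile) is a simplifying stand-in for the Student t
    value; a real accuracy profile also separates intra- and inter-series variance."""
    rel = [100.0 * (r - nominal) / nominal for r in results]
    m, s = mean(rel), stdev(rel)
    half = z * s * math.sqrt(1.0 + 1.0 / len(rel))
    return m - half, m + half

# Hypothetical replicate measurements at a 100 ng/mL nominal level
meas = [98.0, 101.0, 99.5, 102.0, 100.5, 97.5]
lo, hi = beta_expectation_interval(meas, 100.0)
print(-15.0 <= lo and hi <= 15.0)  # within the +/-15 % acceptability limits → True
```

A level is declared valid when the whole interval lies inside the ±15 % acceptability limits; the lowest level for which this holds becomes the LLOQ, which is how the abstract's raised LLOQ for analog 2 arises.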

  11. Accuracy and feasibility of three different methods for software-based image fusion in whole-body PET and CT.

    Science.gov (United States)

    Putzer, Daniel; Henninger, Benjamin; Kovacs, Peter; Uprimny, Christian; Kendler, Dorota; Jaschke, Werner; Bale, Reto J

    2016-06-01

Even as PET/CT provides valuable diagnostic information in a great number of clinical indications, availability of hybrid PET/CT scanners is mainly limited to clinical centers. A software-based image fusion would facilitate combined image reading of CT and PET data sets if hardware image fusion is not available. To analyze the relevance of retrospective image fusion of separately acquired PET and CT data sets, we studied the accuracy, practicability and reproducibility of three different image registration techniques. We evaluated whole-body 18F-FDG-PET and CT data sets of 71 oncologic patients. Images were fused retrospectively using the Stealth Station System, Treon (Medtronic Inc., Louisville, CO, USA) equipped with Cranial4 Software. External markers fixed to a vacuum mattress were used as reference for exact repositioning. Registration was repeated using internal anatomic landmarks and Automerge software, assessing accuracy for all three methods by measuring distances of the liver representation in CT and PET with reference to a common coordinate system. On first measurement of image fusions with external markers, 53 were successful, 16 feasible and 2 not successful. Using anatomic landmarks, 42 were successful, 26 feasible and 3 not successful. Using Automerge software, only 13 were successful. The mean distance between center points in PET and CT was 7.69±4.96 mm on the first and 7.65±4.2 mm on the second measurement. Results with external markers correlate very well between measurements, and inaccuracies are significantly lower. Registration based on external markers allows software image fusion cost-effectively and in significantly less time, posing an attractive alternative for PET/CT interpretation when a hybrid scanner is not available.

  12. Survey of Branch Support Methods Demonstrates Accuracy, Power, and Robustness of Fast Likelihood-based Approximation Schemes

    Science.gov (United States)

    Anisimova, Maria; Gil, Manuel; Dufayard, Jean-François; Dessimoz, Christophe; Gascuel, Olivier

    2011-01-01

    Phylogenetic inference and evaluating support for inferred relationships is at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch supports (approximate likelihood ratio test [aLRT] and Shimodaira–Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations aLRT, aBayes and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira–Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods. PMID:21540409

  13. Cost and accuracy comparison between the diffuse interface method and the geometric volume of fluid method for simulating two-phase flows

    Science.gov (United States)

    Mirjalili, Shahab; Ivey, Christopher Blake; Mani, Ali

    2016-11-01

The diffuse interface (DI) and volume of fluid (VOF) methods are mass conserving front capturing schemes which can handle large interfacial topology changes in realistic two phase flows. The DI method is a conservative phase field method that tracks an interface with finite thickness spread over a few cells and does not require reinitialization. In addition to having the desirable properties of level set methods for naturally capturing curvature and surface tension forces, the model conserves mass continuously and discretely. The VOF method, which tracks the fractional tagged volume in a cell, is discretely conservative by requiring costly geometric reconstructions of the interface and the fluxes. Both methods however, suffer from inaccuracies in calculation of curvature and surface tension forces. We present a quantitative comparison of these methods in terms of their accuracy, convergence rate, memory, and computational cost using canonical 2D two-phase test cases: damped surface wave, oscillating drop, equilibrium static drop, and dense moving drop. We further compared the models in their ability to handle thin films by looking at the impact of a water drop onto a deep water pool. Considering these results, we suggest qualitative guidelines for using the DI and VOF methods. Supported by ONR.

  14. The accuracy of the Gaussian-and-finite-element-Coulomb (GFC) method for the calculation of Coulomb integrals.

    Science.gov (United States)

    Przybytek, Michal; Helgaker, Trygve

    2013-08-07

We analyze the accuracy of the Coulomb energy calculated using the Gaussian-and-finite-element-Coulomb (GFC) method. In this approach, the electrostatic potential associated with the molecular electronic density is obtained by solving the Poisson equation and then used to calculate matrix elements of the Coulomb operator. The molecular electrostatic potential is expanded in a mixed Gaussian-finite-element (GF) basis set consisting of Gaussian functions of s symmetry centered on the nuclei (with exponents obtained from a full optimization of the atomic potentials generated by the atomic densities from symmetry-averaged restricted open-shell Hartree-Fock theory) and shape functions defined on uniform finite elements. The quality of the GF basis is controlled by means of a small set of parameters; for a given width of the finite elements d, the highest accuracy is achieved at smallest computational cost when tricubic (n = 3) elements are used in combination with two (γ^(H) = 2) and eight (γ^(1st) = 8) Gaussians on hydrogen and first-row atoms, respectively, with exponents greater than a given threshold (α_min^(G) = 0.5). The error in the calculated Coulomb energy divided by the number of atoms in the system depends on the system type but is independent of the system size or the orbital basis set, vanishing approximately like d^4 with decreasing d. If the boundary conditions for the Poisson equation are calculated in an approximate way, the GFC method may lose its variational character when the finite elements are too small; with larger elements, it is less sensitive to inaccuracies in the boundary values. As it is possible to obtain accurate boundary conditions in linear time, the overall scaling of the GFC method for large systems is governed by another computational step, namely, the generation of the three-center overlap integrals with three Gaussian orbitals. The most unfavorable (nearly quadratic) scaling is observed for compact, truly three-dimensional systems.
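The approximately fourth-order error decay described above can be checked numerically by estimating the observed convergence order from errors at two element widths, assuming a power-law model e ≈ C·d^p; the error values below are synthetic, not from the paper:

```python
import math

def observed_order(d1, e1, d2, e2):
    """Estimate convergence order p from errors at two element widths d,
    assuming e ~ C * d**p, via p = log(e1/e2) / log(d1/d2)."""
    return math.log(e1 / e2) / math.log(d1 / d2)

# Synthetic errors following the fourth-order behaviour reported for GFC:
# halving d reduces the error by a factor of 16.
print(round(observed_order(0.2, 1.6e-4, 0.1, 1.0e-5), 2))  # → 4.0
```

In practice one computes this slope over a sequence of refinements and checks that it stabilizes, since pre-asymptotic effects at large d can mask the true order.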

  15. The impact of the fabrication method on the three-dimensional accuracy of an implant surgery template.

    Science.gov (United States)

    Matta, Ragai-Edward; Bergauer, Bastian; Adler, Werner; Wichmann, Manfred; Nickenig, Hans-Joachim

    2017-06-01

    The use of a surgical template is a well-established method in advanced implantology. In addition to conventional fabrication, the computer-aided design and computer-aided manufacturing (CAD/CAM) workflow provides an opportunity to engineer implant drilling templates via a three-dimensional printer. In order to transfer the virtual planning to the oral situation, a highly accurate surgical guide is needed. The aim of this study was to evaluate the impact of the fabrication method on the three-dimensional accuracy. The same virtual planning based on a scanned plaster model was used to fabricate a conventional thermo-formed and a three-dimensionally printed surgical guide for each of 13 patients (single-tooth implants). Both templates were acquired individually on the respective plaster model using an optical industrial white-light scanner (ATOS II, GOM mbH, Braunschweig, Germany), and the virtual datasets were superimposed. Using the three-dimensional geometry of the implant sleeve, the deviation between the two surgical guides was evaluated. The mean angular discrepancy was 3.479° (standard deviation, 1.904°) based on data from 13 patients. Concerning the three-dimensional position of the implant sleeve, the highest deviation was in the Z-axis, at 0.594 mm. The mean deviation of the Euclidean distance, dxyz, was 0.864 mm. Although the two fabrication methods delivered statistically significantly different templates, the deviations remained within tenths of a millimeter. Both methods are appropriate for clinical use.

  16. Effect of the revisit interval and temporal upscaling methods on the accuracy of remotely sensed evapotranspiration estimates

    Science.gov (United States)

    Alfieri, Joseph G.; Anderson, Martha C.; Kustas, William P.; Cammalleri, Carmelo

    2017-01-01

    Accurate spatially distributed estimates of actual evapotranspiration (ET) derived from remotely sensed data are critical to a broad range of practical and operational applications. However, due to lengthy return intervals and cloud cover, data acquisition is not continuous over time, particularly for satellite sensors operating at medium (~100 m) or finer resolutions. To fill the data gaps between clear-sky data acquisitions, interpolation methods that take advantage of the relationship between ET and other environmental properties that can be continuously monitored are often used. This study sought to evaluate the accuracy of this approach, which is commonly referred to as temporal upscaling, as a function of satellite revisit interval. Using data collected at 20 AmeriFlux sites distributed throughout the contiguous United States and representing four distinct land cover types (cropland, grassland, forest, and open-canopy) as a proxy for perfect retrievals on satellite overpass dates, this study assesses daily ET estimates derived using five different reference quantities (incident solar radiation, net radiation, available energy, reference ET, and equilibrium latent heat flux) and three different interpolation methods (linear, cubic spline, and Hermite spline). Not only did the analyses find that the temporal autocorrelation, i.e., persistence, of all of the reference quantities was short, they also found that the land cover types with the greatest ET exhibited the least persistence. This carries over to the error associated with both the various scaled quantities and flux estimates. In terms of both the root mean square error (RMSE) and mean absolute error (MAE), the errors increased rapidly with increasing return interval, following a logarithmic relationship. Again, the land cover types with the greatest ET showed the largest errors. Moreover, using a threshold of 20% relative error, this study indicates that a return interval of no more than 5 days is
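The temporal-upscaling idea being evaluated (assume the ratio of ET to a continuously monitored reference quantity is conserved between clear-sky overpasses, interpolate that ratio, and rescale) can be sketched as follows; the function name and all numbers are illustrative, not from the study.

```python
import numpy as np

def upscale_et(overpass_days, et_overpass, ref_overpass, target_days, ref_daily):
    """Temporal upscaling sketch: compute the ET fraction (ET / reference,
    e.g. incident solar radiation) on clear-sky overpass days, linearly
    interpolate the fraction across the gap, and rescale by the daily
    reference observations."""
    frac = np.asarray(et_overpass, float) / np.asarray(ref_overpass, float)
    frac_daily = np.interp(target_days, overpass_days, frac)   # linear gap-filling
    return frac_daily * np.asarray(ref_daily, float)

# Overpasses on day 0 and day 8; the reference quantity is available every day
days = np.arange(9)
et_daily = upscale_et([0, 8], [3.0, 4.0], [20.0, 25.0], days, np.full(9, 22.0))
```

Swapping np.interp for a cubic or Hermite spline reproduces the other two interpolation methods compared in the study.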

  17. A Mixed Methods and Triangulation Model for Increasing the Accuracy of Adherence and Sexual Behaviour Data: The Microbicides Development Programme

    Science.gov (United States)

    Pool, Robert; Montgomery, Catherine M.; Morar, Neetha S.; Mweemba, Oliver; Ssali, Agnes; Gafos, Mitzy; Lees, Shelley; Stadler, Jonathan; Crook, Angela; Nunn, Andrew; Hayes, Richard; McCormack, Sheena

    2010-01-01

    Background The collection of accurate data on adherence and sexual behaviour is crucial in microbicide (and other HIV-related) research. In the absence of a “gold standard” the collection of such data relies largely on participant self-reporting. After reviewing available methods, this paper describes a mixed method/triangulation model for generating more accurate data on adherence and sexual behaviour in a multi-centre vaginal microbicide clinical trial. In a companion paper some of the results from this model are presented [1]. Methodology/Principal Findings Data were collected from a random subsample of 725 women (7.7% of the trial population) using structured interviews, coital diaries, in-depth interviews, counting returned gel applicators, focus group discussions, and ethnography. The core of the model was a customised, semi-structured in-depth interview. There were two levels of triangulation: first, discrepancies between data from the questionnaires, diaries, in-depth interviews and applicator returns were identified, discussed with participants and, to a large extent, resolved; second, results from individual participants were related to more general data emerging from the focus group discussions and ethnography. A democratic and equitable collaboration between clinical trialists and qualitative social scientists facilitated the success of the model, as did the preparatory studies preceding the trial. The process revealed some of the underlying assumptions and routinised practices in “clinical trial culture” that are potentially detrimental to the collection of accurate data, as well as some of the shortcomings of large qualitative studies, and pointed to some potential solutions. Conclusions/Significance The integration of qualitative social science and the use of mixed methods and triangulation in clinical trials are feasible, and can reveal (and resolve) inaccuracies in data on adherence and sensitive behaviours, as well as illuminating aspects

  18. Domain specific MT in use

    DEFF Research Database (Denmark)

    Offersgaard, Lene; Povlsen, Claus; Almsten, Lisbeth Kjeldgaard

    2008-01-01

    The paper focuses on domain specific use of MT with a special focus on SMT in the workflow of a Language Service Provider (LSP). We report on the feedback of post-editors using fluency/adequacy evaluation and the evaluation metric 'Usability', understood in this context as where users on a three ...

  19. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    Science.gov (United States)

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator using a multiple-sensor combination measuring system (MCMS) is presented. The system is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%-78%, with multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was experimentally studied. An optimal range of 1 × 0.8 × 1 m to 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
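The fusion step can be illustrated with the simplest possible case: combining two independent estimates of the same quantity by inverse-variance weighting. This is a minimal stand-in for the KF/MOIFA machinery in the paper, with made-up numbers.

```python
import numpy as np

def fuse_measurements(estimates, variances):
    """Minimum-variance fusion of independent estimates of one quantity:
    weight each estimate by its inverse variance; the fused variance is
    1 / sum(1 / var_i), always at most the smallest input variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    return fused, float(1.0 / np.sum(w))

# A noisier visual-sensor position estimate fused with a second estimate
pos, var = fuse_measurements([10.2, 10.0], [0.04, 0.01])
```

The fused estimate is pulled toward the more precise sensor, and its variance is smaller than either input's, which is the basic reason multi-sensor fusion improves pose accuracy.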

  20. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    Science.gov (United States)

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    The Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracy respectively, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of the cross-correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in terms of position and shape is quantitatively evaluated. In order to apply the proposed method to select an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types, and the proper time step considering different time integration schemes, are selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation.
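The MACCC index can be sketched directly: normalize the cross-correlation between the simulated signal and the reference waveform and take the maximum absolute value over all lags (a value of 1 means identical shape up to a shift and scale). The toy signals below are illustrative, not FEM output.

```python
import numpy as np

def maccc(simulated, reference):
    """Maximum absolute value of the normalized cross-correlation
    coefficient between two signals, scanned over all relative lags."""
    s = simulated - simulated.mean()
    r = reference - reference.mean()
    cc = np.correlate(s, r, mode="full")           # all lags
    norm = np.sqrt(np.sum(s**2) * np.sum(r**2))
    return float(np.max(np.abs(cc)) / norm)

t = np.linspace(0.0, 1.0, 500)
reference = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.5) ** 2) / 0.005)
shifted = np.roll(reference, 30)   # same shape, delayed: shape error near zero
```

A position (group-velocity) error would instead show up in the lag at which the correlation peaks, which is the quantity behind the GVE index.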

  1. A reduced number of mtSNPs saturates mitochondrial DNA haplotype diversity of worldwide population groups.

    Science.gov (United States)

    Salas, Antonio; Amigo, Jorge

    2010-05-03

    The high levels of variation characterising the mitochondrial DNA (mtDNA) molecule are due ultimately to its high average mutation rate; moreover, mtDNA variation is deeply structured in different populations and ethnic groups. There is growing interest in selecting a reduced number of mtDNA single nucleotide polymorphisms (mtSNPs) that account for the maximum level of discrimination power in a given population. Applications of the selected mtSNP panel range from anthropologic and medical studies to forensic genetic casework. This study proposes a new simulation-based method that explores the ability of different mtSNP panels to yield the maximum levels of discrimination power. The method explores subsets of mtSNPs of different sizes randomly chosen from a preselected panel of mtSNPs based on frequency. More than 2,000 complete genomes representing three main continental human population groups (Africa, Europe, and Asia) and two admixed populations ("African-Americans" and "Hispanics") were collected from GenBank and the literature, and were used as training sets. Haplotype diversity was measured for each combination of mtSNP and compared with existing mtSNP panels available in the literature. The data indicates that only a reduced number of mtSNPs ranging from six to 22 are needed to account for 95% of the maximum haplotype diversity of a given population sample. However, only a small proportion of the best mtSNPs are shared between populations, indicating that there is not a perfect set of "universal" mtSNPs suitable for all population contexts. The discrimination power provided by these mtSNPs is much higher than the power of the mtSNP panels proposed in the literature to date. Some mtSNP combinations also yield high diversity values in admixed populations. The proposed computational approach for exploring combinations of mtSNPs that optimise the discrimination power of a given set of mtSNPs is more efficient than previous empirical approaches. In contrast to
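The objective being maximized, the haplotype diversity of the haplotypes induced by a candidate mtSNP panel, is easy to state in code; the sequences and panel below are toy data, not the GenBank training sets.

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Unbiased haplotype diversity: H = n/(n-1) * (1 - sum(p_i^2)),
    where p_i are the haplotype frequencies in a sample of size n."""
    n = len(haplotypes)
    sum_p2 = sum((c / n) ** 2 for c in Counter(haplotypes).values())
    return n / (n - 1) * (1.0 - sum_p2)

def panel_haplotype(sequence, snp_positions):
    """Collapse a full mtDNA sequence to its alleles at a candidate panel."""
    return tuple(sequence[i] for i in snp_positions)

genomes = ["AACGT", "AACGA", "ATCGT", "ATCGA", "AACGT"]   # toy "genomes"
panel = [1, 4]                                            # candidate mtSNP sites
H = haplotype_diversity([panel_haplotype(g, panel) for g in genomes])
```

The simulation described in the abstract then repeats this evaluation for many random panels of each size and keeps the combinations that approach the diversity of the full sequences.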

  2. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    Science.gov (United States)

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not yet been systematically investigated. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 at a 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek; visual preparation yielded CFU counts closer to the CLSI/EUCAST target of 10^8 CFU/ml than Densichek preparation did. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both the interoperator differences, i.e., the individual standard deviations, compared with the manual method, and the mean of the standard deviations of all operators

  3. Improvement and verification of mtDNA extraction methods for termites

    Institute of Scientific and Technical Information of China (English)

    姜丽红; 邹湘武; 宁涤非; 席在星

    2013-01-01

    Species identification based on mtDNA polymorphism is a common method in molecular biology. Identifying termite species at the DNA level and exploring their evolution requires the extraction of mtDNA of sufficient quantity and quality. In this study, mitochondria were first isolated from termites, and then the CTAB and SDS methods were each used to extract mtDNA. The purity and concentration of the mtDNA were determined by UV spectrophotometry, and the extracts were verified by PCR amplification with mtDNA-specific primers. The results show that both the CTAB and SDS methods successfully extracted termite mtDNA, with the SDS method performing better than the CTAB method.

  4. Effects of tangential-type boundary condition discontinuities on the accuracy of the lattice Boltzmann method for heat and mass transfer

    Science.gov (United States)

    Li, Like; AuYeung, Nick; Mei, Renwei; Klausner, James F.

    2016-08-01

    We present a systematic study on the effects of tangential-type boundary condition discontinuities on the accuracy of the lattice Boltzmann equation (LBE) method for Dirichlet and Neumann problems in heat and mass transfer modeling. The second-order accurate boundary condition treatments for continuous Dirichlet and Neumann problems are directly implemented for the corresponding discontinuous boundary conditions. Results from three numerical tests, including both straight and curved boundaries, are presented to show the accuracy and order of convergence of the LBE computations. Detailed error assessments are conducted for the interior temperature or concentration (denoted as a scalar ϕ) and the interior derivatives of ϕ for both types of boundary conditions, for the boundary flux in the Dirichlet problem and for the boundary ϕ values in the Neumann problem. When the discontinuity point on the straight boundary is placed at the center of the unit lattice in the Dirichlet problem, it yields only first-order accuracy for the interior distribution of ϕ, first-order accuracy for the boundary flux, and zeroth-order accuracy for the interior derivatives compared with the second-order accuracy of all quantities of interest for continuous boundary conditions. On the lattice scale, the LBE solution for the interior derivatives near the singularity is largely independent of the resolution and correspondingly the local distribution of the absolute errors is almost invariant with the changing resolution. For Neumann problems, when the discontinuity is placed at the lattice center, second-order accuracy is preserved for the interior distribution of ϕ; and a "superlinear" convergence order of 1.5 for the boundary ϕ values and first-order accuracy for the interior derivatives are obtained. For straight boundaries with the discontinuity point arbitrarily placed within the lattice and curved boundaries, the boundary flux becomes zeroth-order accurate for Dirichlet problems
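The convergence orders reported above (zeroth, first, 1.5, second) are the standard observed order of accuracy, obtained by comparing errors at two resolutions; a generic helper (not the paper's LBE code) makes the calculation explicit.

```python
import math

def observed_order(h_coarse, err_coarse, h_fine, err_fine):
    """Observed order of accuracy from a grid-refinement pair:
    p = log(e_coarse / e_fine) / log(h_coarse / h_fine)."""
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# Errors falling from 4e-3 to 1e-3 when h is halved indicate second order
p = observed_order(0.1, 4e-3, 0.05, 1e-3)
```

Applied to the interior scalar, its derivatives, and the boundary flux separately, this is how the different orders quoted for continuous and discontinuous boundary conditions are measured.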

  5. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Directory of Open Access Journals (Sweden)

    Sung-Hye You

    2017-01-01

    Purpose The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Methods Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. Results The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. Conclusion The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.
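The two statistics used in this study, the ICC for reproducibility and the absolute percentage error against the specimen volume for accuracy, can be sketched generically; all volumes below are invented for illustration, not the study's data.

```python
import numpy as np

def icc_oneway(m1, m2):
    """One-way random-effects ICC(1,1) for two repeated measurements per
    subject: (MSB - MSW) / (MSB + (k - 1) * MSW) with k = 2 readings."""
    data = np.column_stack([m1, m2]).astype(float)
    n, k = data.shape
    subj_means = data.mean(axis=1)
    msb = k * np.sum((subj_means - data.mean()) ** 2) / (n - 1)       # between subjects
    msw = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))   # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

def mape(measured, truth):
    """Mean absolute percentage error, e.g. against postoperative specimen volume."""
    measured, truth = np.asarray(measured, float), np.asarray(truth, float)
    return 100.0 * float(np.mean(np.abs(measured - truth) / truth))

reading1 = [0.52, 0.75, 1.10, 0.33]   # first 3D reading (cm^3, illustrative)
reading2 = [0.54, 0.74, 1.08, 0.35]   # repeat reading
specimen = [0.55, 0.70, 1.05, 0.36]   # excised specimen volumes
icc = icc_oneway(reading1, reading2)
err = mape(reading1, specimen)
```

An ICC near 1 corresponds to the 0.999 test-retest agreement reported for the 3D method, while the percentage error plays the role of the 5.78% accuracy figure.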

  6. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Science.gov (United States)

    2017-01-01

    Purpose The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Methods Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30° angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. Results The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. Conclusion The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism. PMID:27457337

  7. The mutation rate of the human mtDNA deletion mtDNA4977.

    Science.gov (United States)

    Shenkar, R; Navidi, W; Tavaré, S; Dang, M H; Chomyn, A; Attardi, G; Cortopassi, G; Arnheim, N

    1996-10-01

    The human mitochondrial mutation mtDNA4977 is a 4,977-bp deletion that originates between two 13-bp direct repeats. We grew 220 colonies of cells, each from a single human cell. For each colony, we counted the number of cells and amplified the DNA by PCR to test for the presence of a deletion. To estimate the mutation rate, we used a model that describes the relationship between the mutation rate and the probability that a colony of a given size will contain no mutants, taking into account such factors as possible mitochondrial turnover and mistyping due to PCR error. We estimate that the mutation rate for mtDNA4977 in cultured human cells is 5.95 x 10^-8 per mitochondrial genome replication. This method can be applied to specific chromosomal, as well as mitochondrial, mutations.
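The estimation idea, relating the mutation rate to the probability that a colony of known size contains no mutants, can be sketched with a much-simplified likelihood. The sketch assumes one genome replication per cell division and ignores mitochondrial copy number, turnover, and PCR mistyping, all of which the paper's model accounts for; the colony data below are invented.

```python
import math

def neg_log_likelihood(mu, colonies):
    """colonies: list of (n_cells, has_mutant). Under the simplified model,
    P(no mutant in a colony grown from one cell) = exp(-mu * (n_cells - 1))."""
    nll = 0.0
    for n_cells, has_mutant in colonies:
        log_p0 = -mu * (n_cells - 1)                  # log P(no mutant)
        if has_mutant:
            nll -= math.log1p(-math.exp(log_p0))      # log(1 - p0), stable for small p0
        else:
            nll -= log_p0
    return nll

def estimate_rate(colonies, lo=1e-12, hi=1e-3, iters=200):
    """Golden-section search for the rate minimizing the negative
    log-likelihood (which is convex in mu, hence unimodal)."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if neg_log_likelihood(c, colonies) < neg_log_likelihood(d, colonies):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# 10 colonies of 10^6 cells, 4 of which test positive for the deletion
colonies = [(1_000_000, i < 4) for i in range(10)]
mu_hat = estimate_rate(colonies)   # ~ -ln(6/10) / (10^6 - 1)
```

With equal colony sizes this reduces to the classic P0 estimate, mu = -ln(fraction of mutant-free colonies) / replications; unequal sizes are why a likelihood search is used.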

  8. The mutation rate of the human mtDNA deletion mtDNA{sup 4977}

    Energy Technology Data Exchange (ETDEWEB)

    Shenkar, R. [Univ. of Colorado Health Science Center, Denver, CO (United States); Navidi, W. [Colorado School of Mines, Golden, CO (United States); Tavare, S. [Univ. of California, Los Angeles, CA (United States)] [and others]

    1996-10-01

    The human mitochondrial mutation mtDNA{sup 4977} is a 4,977-bp deletion that originates between two 13-bp direct repeats. We grew 220 colonies of cells, each from a single human cell. For each colony, we counted the number of cells and amplified the DNA by PCR to test for the presence of a deletion. To estimate the mutation rate, we used a model that describes the relationship between the mutation rate and the probability that a colony of a given size will contain no mutants, taking into account such factors as possible mitochondrial turnover and mistyping due to PCR error. We estimate that the mutation rate for mtDNA{sup 4977} in cultured human cells is 5.95 x 10{sup {minus}8} per mitochondrial genome replication. This method can be applied to specific chromosomal, as well as mitochondrial, mutations. 17 refs., 1 fig., 1 tab.

  9. A faster, high precision algorithm for calculating symmetric and asymmetric $M_{T2}$

    CERN Document Server

    Lally, Colin H

    2015-01-01

    A new algorithm for calculating the stransverse mass, $M_{T2}$, in either symmetric or asymmetric situations has been developed which exhibits good stability, high precision and quadratic convergence for the majority of the $M_{T2}$ parameter space, leading to up to a factor of ten increase in speed compared to other $M_{T2}$ calculators of comparable precision. This document describes and validates the methodology used by the algorithm, and provides comparisons both in terms of accuracy and speed with other existing implementations.
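For orientation, the quantity itself can be computed by brute force: scan the splittings q1 + q2 = pTmiss of the missing transverse momentum and minimize the larger of the two transverse masses. The grid scan below is a slow, approximate reference, not the bisection-based algorithm the paper describes; the event kinematics are illustrative.

```python
import numpy as np

def mt_sq(m_vis, px, py, m_inv, qx, qy):
    """Squared transverse mass of a visible particle paired with a
    hypothesized invisible particle of transverse momentum (qx, qy)."""
    et_vis = np.sqrt(m_vis**2 + px**2 + py**2)
    et_inv = np.sqrt(m_inv**2 + qx**2 + qy**2)
    return m_vis**2 + m_inv**2 + 2.0 * (et_vis * et_inv - px * qx - py * qy)

def mt2_bruteforce(vis1, vis2, ptmiss, m_inv=0.0, npts=401, qmax=200.0):
    """M_T2 via a grid scan over q1 (with q2 = ptmiss - q1); vis = (m, px, py)."""
    q = np.linspace(-qmax, qmax, npts)
    qx1, qy1 = np.meshgrid(q, q, indexing="ij")
    qx2, qy2 = ptmiss[0] - qx1, ptmiss[1] - qy1
    mt1 = mt_sq(vis1[0], vis1[1], vis1[2], m_inv, qx1, qy1)
    mt2 = mt_sq(vis2[0], vis2[1], vis2[2], m_inv, qx2, qy2)
    return float(np.sqrt(np.min(np.maximum(mt1, mt2))))

# Back-to-back massless visibles, no missing momentum, massless invisibles:
# the minimum is achieved at q1 = q2 = 0, so M_T2 vanishes in this configuration
val = mt2_bruteforce((0.0, 30.0, 0.0), (0.0, -30.0, 0.0), (0.0, 0.0))
```

The asymmetric case of the abstract corresponds to allowing different masses for the two invisible (and visible) particles; fast calculators replace the O(npts^2) scan with bisection on the trial mass.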

  10. Assessing the accuracy and reliability of ultrasonographic three-dimensional parathyroid volume measurement in a patient with secondary hyperparathyroidism: a comparison with the two-dimensional conventional method

    Energy Technology Data Exchange (ETDEWEB)

    You, Sung Hye; Son, Gyu Ri; Lee, Nam Joon [Dept. of Radiology, Korea University Anam Hospital, Seoul (Korea, Republic of); Suh, Sangil; Ryoo, In Seon; Seol, Hae Young [Dept. of Radiology, Korea University Guro Hospital, Seoul (Korea, Republic of); Lee, Young Hen; Seo, Hyung Suk [Dept. of Radiology, Korea University Ansan Hospital, Ansan (Korea, Republic of)

    2017-01-15

    The purpose of this study was to investigate the accuracy and reliability of the semi-automated ultrasonographic volume measurement tool, virtual organ computer-aided analysis (VOCAL), for measuring the volume of parathyroid glands. Volume measurements for 40 parathyroid glands were performed in patients with secondary hyperparathyroidism caused by chronic renal failure. The volume of the parathyroid glands was measured twice by experienced radiologists by two-dimensional (2D) and three-dimensional (3D) methods using conventional sonograms and the VOCAL with 30°angle increments before parathyroidectomy. The specimen volume was also measured postoperatively. Intraclass correlation coefficients (ICCs) and the absolute percentage error were used for estimating the reproducibility and accuracy of the two different methods. The ICC value between two measurements of the 2D method and the 3D method was 0.956 and 0.999, respectively. The mean absolute percentage error of the 2D method and the 3D VOCAL technique was 29.56% and 5.78%, respectively. For accuracy and reliability, the plots of the 3D method showed a more compact distribution than those of the 2D method on the Bland-Altman graph. The rotational VOCAL method for measuring the parathyroid gland is more accurate and reliable than the conventional 2D measurement. This VOCAL method could be used as a more reliable follow-up imaging modality in a patient with hyperparathyroidism.

  11. Analysing the accuracy of pavement performance models in the short and long terms: GMDH and ANFIS methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, T.

    2016-01-01

    The accuracy of pavement performance prediction is a critical part of pavement management and directly influences maintenance and rehabilitation strategies. Many models with various specifications have been proposed by researchers and used by agencies. This study presents nine variables affecting pa

  12. Dose evaluation using multiple-aliquot quartz OSL: Test of methods and a new protocol for improved accuracy and precision

    DEFF Research Database (Denmark)

    Jain, M.; Bøtter-Jensen, L.; Singhvi, A.K.

    2003-01-01

    Multiple-aliquot quartz OSL dose-response curves often suffer from substantial variability in the luminescence output from identically treated aliquots (scatter) that leads to large uncertainties in the equivalent-dose estimates. In this study, normalisation and its bearing on the accuracy...

  13. Melanesian mtDNA complexity.

    Directory of Open Access Journals (Sweden)

    Jonathan S Friedlaender

    Melanesian populations are known for their diversity, but it has been hard to grasp the pattern of the variation or its underlying dynamic. Using 1,223 mitochondrial DNA (mtDNA) sequences from hypervariable regions 1 and 2 (HVR1 and HVR2) from 32 populations, we found the among-group variation is structured by island, island size, and also by language affiliation. The more isolated inland Papuan-speaking groups on the largest islands have the greatest distinctions, while shore-dwelling populations are considerably less diverse (at the same time, within-group haplotype diversity is lowest in the most isolated groups). Persistent differences between shore and inland groups in effective population sizes and marital migration rates probably cause these differences. We also add 16 whole sequences to the Melanesian mtDNA phylogenies. We identify the likely origins of a number of the haplogroups and ancient branches in specific islands, point to some ancient mtDNA connections between Near Oceania and Australia, and show additional Holocene connections between Island Southeast Asia/Taiwan and Island Melanesia with branches of haplogroup E. Coalescence estimates based on synonymous transitions in the coding region suggest an initial settlement and expansion in the region at approximately 30,000-50,000 years before present (YBP), and a second important expansion from Island Southeast Asia/Taiwan during the interval approximately 3,500-8,000 YBP. However, there are some important variance components in molecular dating that have been overlooked, and the specific nature of ancestral (maternal) Austronesian influence in this region remains unresolved.
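Coalescence ages of the kind quoted (roughly 30,000-50,000 YBP) are commonly obtained with the rho statistic: the mean number of (here synonymous) mutations separating sampled sequences from their inferred ancestral haplotype, divided by the per-lineage mutation rate. A minimal sketch, with placeholder counts and rate rather than the study's values:

```python
def rho_age(mutations_from_root, rate_per_site_per_year, n_sites):
    """Rho dating sketch: rho = mean mutation count from the ancestral
    haplotype; age = rho / (per-site rate * number of sites considered)."""
    rho = sum(mutations_from_root) / len(mutations_from_root)
    return rho / (rate_per_site_per_year * n_sites)

# Four sampled lineages carrying 2-4 mutations from the inferred root
age = rho_age([2, 3, 4, 3], 1e-8, 10_000)   # years before present
```

The "overlooked variance components" the abstract alludes to enter through the sampling variance of rho and the uncertainty of the mutation rate itself.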

  14. Methods of evaluation of accuracy with multiple essential parameters for eddy current measurement of pressure tube to calandria tube gap in CANDU reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shokralla, S., E-mail: shaddy.shokralla@opg.com [Ontario Power Generation, IMS NDE Projects, Ajax, Ontario (Canada); Krause, T.W., E-mail: thomas.krause@rmc.ca [Royal Military College of Canada, Kingston, Ontario (Canada)

    2014-01-15

    The purpose of inspection qualification of a particular inspection system is to show that it meets applicable inspection specification requirements. Often a requirement of the inspection system is that it achieves a particular accuracy. In the case of a system with multiple inputs accompanied by additional influential parameters, calculation of the system's output accuracy can be formidable. Measurement of pressure-tube to calandria tube gap in CANDU reactors using an eddy current based technique is presented as a particular example of a system where multiple essential parameters combine to generate a final uncertainty for the inspection system. This paper outlines two possible methods of calculating such a system's accuracy and discusses the advantages and disadvantages of each. (author)
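When the essential parameters can be treated as independent, the standard way to combine their contributions into a final accuracy is first-order (GUM-style) propagation in quadrature; a generic sketch, with invented sensitivities and uncertainties rather than values from the paper:

```python
import math

def combined_uncertainty(sensitivities, sigmas):
    """First-order propagation for y = f(x1..xn) with independent inputs:
    u_y = sqrt(sum((df/dxi * u_xi)^2))."""
    return math.sqrt(sum((s * u) ** 2 for s, u in zip(sensitivities, sigmas)))

# Hypothetical essential parameters for the gap measurement: lift-off,
# wall thickness, resistivity; sensitivities in mm of gap per parameter unit
sens = [0.8, 0.5, 0.02]
sigma = [0.05, 0.04, 0.5]
u_gap = combined_uncertainty(sens, sigma)   # 1-sigma gap uncertainty, mm
```

When the parameters interact or the response is nonlinear, this quadrature estimate breaks down, which is the situation that motivates comparing alternative methods of computing the system accuracy.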

  15. POSITIONAL ACCURACY ASSESSMENT OF THE OPENSTREETMAP BUILDINGS LAYER THROUGH AUTOMATIC HOMOLOGOUS PAIRS DETECTION: THE METHOD AND A CASE STUDY

    OpenAIRE

    M. A. Brovelli; M. Minghini; M. E. Molinari; G. Zamboni

    2016-01-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has placed great attention on the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings b...

  16. Evaluation of the efficiency and accuracy of new methods for atmospheric opacity and radiative transfer calculations in planetary general circulation model simulations

    Science.gov (United States)

    Zube, Nicholas Gerard; Zhang, Xi; Natraj, Vijay

    2016-10-01

    General circulation models often incorporate simple approximations of heating between vertically inhomogeneous layers rather than more accurate but computationally expensive radiative transfer (RT) methods. With the goal of developing a GCM package that can model both solar system bodies and exoplanets, it is vital to examine up-to-date RT models to optimize speed and accuracy for heat transfer calculations. Here, we examine a variety of interchangeable radiative transfer models in conjunction with MITGCM (Hill and Marshall, 1995). First, for atmospheric opacity calculations, we test the gray-approximation, line-by-line, and correlated-k methods. In combination with these, we also test RT routines using 2-stream DISORT (discrete ordinates RT), N-stream DISORT (Stamnes et al., 1988), and optimized 2-stream (Spurr and Natraj, 2011). Initial tests are run using Jupiter as an example case. The results can be compared in nine possible configurations for running a complete RT routine within a GCM. Each individual combination of opacity and RT methods is contrasted with the "ground truth" calculation provided by the line-by-line opacity and N-stream DISORT, in terms of computation speed and accuracy of the approximation methods. We also examine the effects on accuracy when performing these calculations at different time step frequencies within MITGCM. Ultimately, we will catalog and present the ideal RT routines that can replace commonly used approximations within a GCM for a significant increase in calculation accuracy, and speed comparable to the dynamical time steps of MITGCM. Future work will involve examining whether calculations in the spatial domain can also be reduced by smearing grid points into larger areas, and what effects this will have on overall accuracy.
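The nine configurations described above amount to a Cartesian product of three opacity schemes and three RT solvers, each benchmarked against the line-by-line + N-stream reference. A minimal benchmarking harness, with trivial stand-in functions in place of the real routines, might look like this:

```python
import itertools
import time

# Trivial stand-ins for the three opacity schemes and three RT solvers; in a
# real harness these would wrap the actual gray / correlated-k / line-by-line
# opacity codes and the 2-stream / N-stream DISORT / optimized 2-stream solvers.
def opacity_gray(p):   return [1.0 for _ in p]
def opacity_corr_k(p): return [1.0 + 0.10 * x for x in p]
def opacity_lbl(p):    return [1.0 + 0.11 * x for x in p]

def rt_2stream(tau):    return sum(tau) * 0.95
def rt_nstream(tau):    return sum(tau)          # treated as the reference solver
def rt_opt2stream(tau): return sum(tau) * 0.97

opacities = {"gray": opacity_gray, "corr-k": opacity_corr_k, "lbl": opacity_lbl}
solvers = {"2s": rt_2stream, "Ns": rt_nstream, "opt2s": rt_opt2stream}

layers = [0.1 * i for i in range(10)]            # toy layer optical depths
truth = rt_nstream(opacity_lbl(layers))          # lbl + N-stream "ground truth"

results = {}
for (oname, ofn), (sname, sfn) in itertools.product(opacities.items(), solvers.items()):
    t0 = time.perf_counter()
    heat = sfn(ofn(layers))
    results[(oname, sname)] = (abs(heat - truth) / truth, time.perf_counter() - t0)

assert len(results) == 9  # three opacity schemes x three RT solvers
```

Each entry then carries a relative-error/runtime pair, which is exactly the accuracy-versus-speed comparison the abstract describes.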

  17. Two-dimensional inversion of MT (magnetotelluric) data; MT ho no nijigen inversion kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Ito, S.; Okuno, M.; Ushijima, K.; Mizunaga, H. [Kyushu University, Fukuoka (Japan). Faculty of Engineering

    1997-05-27

    A program has been developed to accurately invert two-dimensional models from MT data. In the developed program, the finite element method (FEM) is applied to the forward-modeling part of the iterative analysis. Two schemes were compared: one in which the Jacobian matrix is calculated only once at the start and kept fixed during the iterations, and one in which the Jacobian matrix is recomputed at each iteration of the inversion. Numerical simulation revealed that the Jacobian-correction scheme provides more stable convergence for a simple 2D model, with a computation time almost the same as that of the Jacobian-fixation scheme. To confirm the applicability of this program to field data, its results were compared with those of a Schlumberger-method analysis using MT data acquired in the Hatchobara geothermal area. Consequently, it was demonstrated that the two agree well. 17 refs., 7 figs.
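The two iteration schemes compared above can be illustrated on a toy nonlinear least-squares problem; the exponential model below is only a stand-in for the far more expensive 2D MT forward calculation:

```python
import numpy as np

# Toy forward model standing in for the MT response: d_i = m0 * exp(m1 * x_i),
# with an analytic Jacobian for Gauss-Newton iteration.
x = np.linspace(0.0, 1.0, 20)
m_true = np.array([2.0, -1.0])

def forward(m):
    return m[0] * np.exp(m[1] * x)

def jacobian(m):
    e = np.exp(m[1] * x)
    return np.column_stack([e, m[0] * x * e])   # [d(d)/dm0, d(d)/dm1]

d_obs = forward(m_true)

def invert(m0, fix_jacobian, n_iter=50):
    m = m0.copy()
    J = jacobian(m)            # built once; reused every step if fix_jacobian
    for _ in range(n_iter):
        if not fix_jacobian:
            J = jacobian(m)    # Jacobian-correction scheme: rebuild each step
        r = d_obs - forward(m)
        m = m + np.linalg.solve(J.T @ J, J.T @ r)
    return m

m_start = np.array([1.9, -0.95])
m_fixed = invert(m_start, fix_jacobian=True)     # Jacobian-fixation scheme
m_updated = invert(m_start, fix_jacobian=False)  # Jacobian-correction scheme

print(m_fixed, m_updated)  # both recover the model from this close starting guess
```

With a good starting model both schemes converge, but the fixed-Jacobian iteration contracts only linearly, which mirrors the stability difference reported in the abstract.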

  18. Accuracy of recommended sampling and assay methods for the determination of plasma-free and urinary fractionated metanephrines in the diagnosis of pheochromocytoma and paraganglioma: a systematic review.

    Science.gov (United States)

    Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme

    2017-06-01

    To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Studies Accuracy Group criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling (95 vs. 89%) and for supine sampling compared with 24-h urine (95 vs. 90%). Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling, and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provide the highest accuracy at all prevalence rates.
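The linear dependence of accuracy on prevalence follows directly from accuracy(p) = sensitivity × p + specificity × (1 − p), which is constant only when sensitivity equals specificity. The sensitivity/specificity pairs below are illustrative values chosen to reproduce the quoted trends, not figures taken from the study:

```python
# Accuracy of a binary test as a function of disease prevalence p:
#   accuracy(p) = sensitivity * p + specificity * (1 - p)
# Linear in p; flat only when sensitivity == specificity.

def accuracy(sens, spec, prev):
    return sens * prev + spec * (1.0 - prev)

urine = (0.93, 0.90)   # hypothetical: rises from 90% (p=0) to 93% (p=1)
seated = (0.89, 0.94)  # hypothetical: falls from 94% to 89%
supine = (0.95, 0.95)  # hypothetical: constant 95% at every prevalence

for p in (0.0, 0.5, 1.0):
    print(p, accuracy(*urine, p), accuracy(*seated, p), accuracy(*supine, p))
```

This is why supine sampling, with balanced sensitivity and specificity, can show constant accuracy across all prevalence rates.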

  19. Influence of River Bed Elevation Survey Configurations and Interpolation Methods on the Accuracy of LIDAR Dtm-Based River Flow Simulations

    Science.gov (United States)

    Santillan, J. R.; Serviano, J. L.; Makinano-Santillan, M.; Marqueso, J. T.

    2016-09-01

    In this paper, we investigated how survey configuration and the type of interpolation method can affect the accuracy of river flow simulations that utilize LIDAR DTM integrated with interpolated river bed as its main source of topographic information. Aside from determining the accuracy of the individually-generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation where it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, the use of the XS configuration to collect river bed data points and applying the OK method to interpolate the river bed topography are the best methods to use to produce satisfactory river flow simulation outputs.

  20. INFLUENCE OF RIVER BED ELEVATION SURVEY CONFIGURATIONS AND INTERPOLATION METHODS ON THE ACCURACY OF LIDAR DTM-BASED RIVER FLOW SIMULATIONS

    Directory of Open Access Journals (Sweden)

    J. R. Santillan

    2016-09-01

    Full Text Available In this paper, we investigated how survey configuration and the type of interpolation method can affect the accuracy of river flow simulations that utilize LIDAR DTM integrated with interpolated river bed as its main source of topographic information. Aside from determining the accuracy of the individually-generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation where it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, the use of the XS configuration to collect river bed data points and applying the OK method to interpolate the river bed topography are the best methods to use to produce satisfactory river flow simulation outputs.
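Of the two interpolation methods considered, Inverse Distance Weighting is the simpler to sketch; the following is a minimal implementation on synthetic survey points (not the study's data):

```python
import numpy as np

def idw_interpolate(points, values, query, power=2.0):
    """Inverse Distance-Weighted interpolation of scattered river-bed
    elevations: a minimal sketch of the IDW scheme named in the abstract."""
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    query = np.asarray(query, float)
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-12):             # query coincides with a survey point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power              # closer points weigh more
    return float(np.sum(w * values) / np.sum(w))

# Synthetic cross-section-style survey points: (x, y) -> bed elevation (m)
pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
z = [5.0, 6.0, 5.5, 6.5]

print(idw_interpolate(pts, z, (0.0, 0.0)))  # exact at a survey point
print(idw_interpolate(pts, z, (5.0, 5.0)))  # centre: equal-weight mean
```

Ordinary Kriging differs in that the weights come from a fitted variogram rather than a fixed distance power, which is why it can outperform IDW when the data density supports variogram estimation.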

  1. Discussion on the Application Method of Perception Accuracy in Product Design

    Institute of Scientific and Technical Information of China (English)

    薛青

    2012-01-01

    The paper discusses the concept of perception accuracy and its critical importance in design. Drawing on the process of study and design practice, it analyzes how perception accuracy is applied in product design and how designers can improve their own perceptual accuracy during the design process. It further argues that cultivating perception accuracy greatly benefits design quality: it trains designers to accurately grasp the precision and beauty of a product, and has a positive effect on realizing the product and increasing its value to consumers.

  2. MPS analysis of the mtDNA hypervariable regions on the MiSeq with improved enrichment.

    Science.gov (United States)

    Holland, Mitchell M; Wilson, Laura A; Copeland, Sarah; Dimick, Gloria; Holland, Charity A; Bever, Robert; McElhoe, Jennifer A

    2017-07-01

    The non-coding displacement (D) loop of the human mitochondrial (mt) genome contains two hypervariable regions known as HVR1 and HVR2 that are most often analyzed by forensic DNA laboratories. The massively parallel sequencing (MPS) protocol from Illumina (Human mtDNA D-Loop Hypervariable Region protocol) utilizes four sets of established PCR primer pairs for the initial amplification (enrichment) step that span the hypervariable regions. Transposase adapted (TA) sequences are attached to the 5'-end of each primer, allowing for effective library preparation prior to analysis on the MiSeq, and AmpliTaq Gold DNA polymerase is the enzyme recommended for amplification. The amplification conditions were modified by replacing AmpliTaq Gold with TaKaRa Ex Taq® HS, along with an enhanced PCR buffer system. The resulting method was compared to the recommended protocol and to a conventional non-MPS approach used in an operating forensic DNA laboratory. The modified amplification conditions gave equivalent or improved results, including when amplifying low amounts of DNA template from hair shafts which are a routine evidence type in forensic mtDNA cases. Amplification products were successfully sequenced using an MPS approach, addressing sensitivity of library preparation, evaluation of precision and accuracy through repeatability and reproducibility, and mixture studies. These findings provide forensic laboratories with a robust and improved enrichment method as they begin to implement the D-loop protocol from Illumina. Given that Ex Taq® HS is a proofreading enzyme, using this approach should allow for improved analysis of low-level mtDNA heteroplasmy.

  3. Functional characterization of cadmium-responsive garlic gene AsMT2b: A new member of metallothionein family

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new gene of the metallothionein (MT) family was cloned from garlic (Allium sativum) seedlings using the RACE method and designated AsMT2b. The full-length AsMT2b cDNA was 520 bp, encoding 80 amino acids. The deduced amino acid sequence showed that AsMT2b contains the characteristic structure of type 2 MT proteins, but the number and arrangement of the cysteine residues in the N- and C-terminal domains were different from those of other type 2 MT proteins. Semi-quantitative reverse transcriptase-PCR showed that transcript levels of AsMT2b were enhanced only in response to higher concentrations or longer incubation times of Cd. Such an expression pattern of AsMT2b greatly differs from that of other type 2 MT genes. Yeast cells transformed with this gene had improved resistance to Cd. AsMT2b-overexpressing Arabidopsis showed stronger Cd tolerance and higher Cd accumulation compared with wild-type plants. These results suggest that AsMT2b should be useful in phytoremediation of Cd-polluted soil in the future.

  4. Comparison of the Accuracy and Performance of Different Numbers of Classes in Discretised Solution Method for Population Balance Model

    Directory of Open Access Journals (Sweden)

    Zhenliang Li

    2016-01-01

    Full Text Available One way of solving a population balance model (PBM) in a time-efficient way is by discretising the population property of interest. A computational grid, for example v(i+1) = k·v(i) (where v(i) is the volume of a particle in class i), can be used to classify the particles in discretisation techniques. However, there is still disagreement about the appropriate number of classes defined by the grid. In this study, different numbers of classes for solving the PBM were compared in terms of accuracy and performance in describing the particle size distribution (PSD) from the flocculation of activated sludge. It is found that the simulated PSDs are similar to the experimental data for all the geometric grids (v(i+1):v(i) ≤ 2), and there is no obvious difference among the values of the calibrated parameter, the ratio of breakage rate coefficient to collision efficiency, for each velocity gradient. However, simulations with larger numbers of classes yield smaller errors, at the cost of computation times that grow exponentially with the number of classes. Considering numerical accuracy and efficiency, 35 classes, i.e. a geometric grid with factor 1.6 aligning with the Fibonacci sequence (v(i) + v(i-1) ≈ v(i+1)), is recommended for particles in the size range of 5.5~1086 μm.
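The recommended grid can be reproduced directly: a geometric volume grid with factor 1.6 spanning diameters 5.5–1086 μm needs 35 classes, and every class satisfies the Fibonacci-like property to within about 1.6%:

```python
import math

# Geometric volume grid v(i+1) = k * v(i) spanning particle diameters
# 5.5-1086 um, with factor k = 1.6 as recommended in the abstract.
d_min, d_max = 5.5, 1086.0
k = 1.6

v_min = d_min ** 3          # volume proportional to diameter cubed
v_max = d_max ** 3
n_classes = math.ceil(math.log(v_max / v_min) / math.log(k)) + 1

grid = [v_min * k ** i for i in range(n_classes)]

# Factor-1.6 grid is close to a Fibonacci-type grid: v(i) + v(i-1) ~ v(i+1)
worst = max(abs((grid[i - 1] + grid[i]) / grid[i + 1] - 1.0)
            for i in range(1, n_classes - 1))

print(n_classes, worst)  # 35 classes; relative deviation ~1.6%
```

The deviation is constant for a geometric grid, since (1/k + 1/k²) − 1 = 0.015625 when k = 1.6, which is why 1.6 approximates the golden-ratio spacing of a true Fibonacci grid.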

  5. Measurement of cosmic ray chemical composition at Mt. Chacaltaya

    Energy Technology Data Exchange (ETDEWEB)

    Ogio, S.; Kakimoto, F.; Harada, D.; Tokunou, H.; Burgoa, O.; Tsunesada, Y. [Institute of Technology, Dept. of Physics, Tokyo (Japan); Shirasaki, Y. [National Space Development Agency of Japan, Tsukuba (Japan); Gotoh, E.; Nakatani, H.; Shimoda, S.; Nishi, K.; Tajima, N.; Yamada, Y. [The Institute of Physical and Chemical Research, Wako, Saitama (Japan); Kaneko, T. [Okayama University, Dept. of Physics, Okayama (Japan); Matsubara, Y. [Nagoya University, Solar-Terrestrial Environment Laboratory, Nagoya, Aichi (Japan); Miranda, P.; Velarde, A. [Universidad Mayor de San Andres, Instituto de Investigaciones Físicas, La Paz (Bolivia); Mizumoto, T. [National Astronomical Observatory, Mitaka, Tokyo (Japan); Yoshii, H.; Morizawa, A. [Ehime University, Dept. of Physics, Matsuyama, Ehime (Japan); Murakami, K. [Nagoya University of Foreign Studies, Nissin, Aichi (Japan); Toyoda, Y. [Fukui University of Technology, Faculty of General Education, Fukui (Japan)

    2001-10-01

    The BASJE group has measured the chemical composition of primary cosmic rays with energies around the knee using several methods. These measurements show that the average mass number of cosmic ray particles increases with energy up to the knee. In order to measure the chemical composition over a much wider energy range, a new experiment was started at Mt. Chacaltaya in 2000.

  6. Further Development of the FFT-based Method for Atomistic Modeling of Protein Folding and Binding under Crowding: Optimization of Accuracy and Speed.

    Science.gov (United States)

    Qin, Sanbo; Zhou, Huan-Xiang

    2014-07-08

    Recently, we (Qin, S.; Zhou, H. X. J. Chem. Theory Comput. 2013, 9, 4633-4643) developed the FFT-based method for Modeling Atomistic Protein-crowder interactions, henceforth FMAP. Given its potential wide use for calculating effects of crowding on protein folding and binding free energies, here we aimed to optimize the accuracy and speed of FMAP. FMAP is based on expressing protein-crowder interactions as correlation functions and evaluating the latter via fast Fourier transform (FFT). The numerical accuracy of FFT improves as the grid spacing for discretizing space is reduced, but at increasing computational cost. We sought to speed up FMAP calculations by using a relatively coarse grid spacing of 0.6 Å and then correcting for discretization errors. This strategy was tested for different types of interactions (hard-core repulsion, nonpolar attraction, and electrostatic interaction) and over a wide range of protein-crowder systems. We were able to correct for the numerical errors on hard-core repulsion and nonpolar attraction by an 8% inflation of atomic hard-core radii and on electrostatic interaction by a 5% inflation of the magnitudes of protein atomic charges. The corrected results have higher accuracy and enjoy a speedup of more than 100-fold over those obtained using a fine grid spacing of 0.15 Å. With this optimization of accuracy and speed, FMAP may become a practical tool for realistic modeling of protein folding and binding in cell-like environments.
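The core of FMAP, evaluating a protein-crowder interaction over all rigid translations as a grid correlation via FFT, can be sketched with toy occupancy grids; this is the generic correlation theorem, not the FMAP code itself:

```python
import numpy as np

# Toy occupancy grids standing in for, e.g., atomic hard-core occupancy of a
# protein and a crowder mapped onto the same periodic spatial grid.
rng = np.random.default_rng(1)
protein = rng.random((8, 8, 8))
crowder = rng.random((8, 8, 8))

# Circular cross-correlation over every translation t in one shot:
#   corr[t] = sum_x protein[x] * crowder[x + t]
corr = np.real(np.fft.ifftn(np.conj(np.fft.fftn(protein)) * np.fft.fftn(crowder)))

# Direct check at one displacement t = (1, 2, 3)
t = (1, 2, 3)
direct = np.sum(protein * np.roll(crowder, shift=[-s for s in t], axis=(0, 1, 2)))
assert np.isclose(corr[t], direct)
```

One pair of FFTs replaces an O(N²) sum over all translations, which is why grid spacing (and hence grid size) dominates the accuracy/speed trade-off discussed above.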

  7. No recombination of mtDNA after heteroplasmy for 50 generations in the mouse maternal germline

    Science.gov (United States)

    Hagström, Erik; Freyer, Christoph; Battersby, Brendan J.; Stewart, James B.; Larsson, Nils-Göran

    2014-01-01

    Variants of mitochondrial DNA (mtDNA) are commonly used as markers to track human evolution because of the high sequence divergence and exclusive maternal inheritance. It is assumed that the inheritance is clonal, i.e. that mtDNA is transmitted between generations without germline recombination. In contrast to this assumption, a number of studies have reported the presence of recombinant mtDNA molecules in cell lines and animal tissues, including humans. If germline recombination of mtDNA is frequent, it would strongly impact phylogenetic and population studies by altering estimates of coalescent time and branch lengths in phylogenetic trees. Unfortunately, this whole area is controversial and the experimental approaches have been widely criticized as they often depend on polymerase chain reaction (PCR) amplification of mtDNA and/or involve studies of transformed cell lines. In this study, we used an in vivo mouse model that has had germline heteroplasmy for a defined set of mtDNA mutations for more than 50 generations. To assess recombination, we adapted and validated a method based on cloning of single mtDNA molecules in the λ phage, without prior PCR amplification, followed by subsequent mutation analysis. We screened 2922 mtDNA molecules and found no germline recombination after transmission of mtDNA under genetically and evolutionary relevant conditions in mammals. PMID:24163253

  8. A Novel Method for Detecting Mutation of Mitochondrial DNA in Tumor Cells by Next Generation Sequencing

    Institute of Scientific and Technical Information of China (English)

    李薇薇; 王晨曲; 黄启超; 李德洋; 尚玉奎; 邢金良

    2013-01-01

    Objective: Numerous studies have confirmed that mitochondrial DNA (mtDNA) mutations are closely associated with tumor initiation and progression, but conventional sequencing methods cannot detect mtDNA mutations with high throughput and high accuracy. This study therefore established an mtDNA mutation detection method based on next-generation sequencing. Methods: Total DNA was extracted from tumor tissue, adjacent non-tumor tissue, and peripheral blood cells of hepatocellular carcinoma patients. The mitochondrial genome was enriched by PCR, and mtDNA sequencing libraries were constructed by blunt-end or sticky-end ligation of the PCR products, or by amino modification of the PCR primers. After sequencing on the Illumina HiSeq 2000 platform, reads were aligned to the human mtDNA reference sequence with bioinformatic methods and the sequencing data were analyzed. Results: After evaluating genomic DNA of different quality, the three-primer-pair method was found suitable for mtDNA enrichment in most DNA samples. We further found that amino modification of the PCR primers significantly improved the uniformity of sequencing coverage and reduced sequencing cost. Conclusion: By optimizing the mtDNA enrichment method and the uniformity of sequencing coverage, this study established a sensitive, specific, high-throughput strategy for mtDNA mutation detection based on next-generation sequencing, providing a new method for studies of mtDNA mutations and disease.
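Coverage uniformity of the kind the study optimizes can be quantified with a simple coefficient-of-variation metric over per-base depths; the depth numbers below are hypothetical illustrations, not data from the study:

```python
import statistics

# Coefficient of variation of per-base coverage; lower CV means more uniform
# sequencing coverage across the mitochondrial genome.
def coverage_cv(depths):
    mean = statistics.fmean(depths)
    return statistics.pstdev(depths) / mean

even_library = [1000, 980, 1020, 1010, 990]    # hypothetical: modified primers
uneven_library = [1800, 2400, 300, 150, 350]   # hypothetical: unmodified primers

print(coverage_cv(even_library), coverage_cv(uneven_library))
assert coverage_cv(even_library) < coverage_cv(uneven_library)
```

More uniform coverage means fewer wasted reads over over-represented regions, which is how coverage evenness translates directly into lower sequencing cost.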

  9. MT-CYB mutations in hypertrophic cardiomyopathy

    DEFF Research Database (Denmark)

    Hagen, Christian M; Aidt, Frederik H; Havndrup, Ole

    2013-01-01

    Mitochondrial dysfunction is a characteristic of heart failure. Mutations in mitochondrial DNA, particularly in MT-CYB coding for cytochrome B in complex III (CIII), have been associated with isolated hypertrophic cardiomyopathy (HCM). We hypothesized that MT-CYB mutations might play an important...

  10. Quantifying the accuracy of the tumor motion and area as a function of acceleration factor for the simulation of the dynamic keyhole magnetic resonance imaging method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Danny; Pollock, Sean; Keall, Paul, E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006 (Australia); Greer, Peter B. [School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW 2308, Australia and Department of Radiation Oncology, Calvary Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia); Kim, Taeho [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney, NSW 2006, Australia and Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23219 (United States)

    2016-05-15

    Purpose: The dynamic keyhole is a new MR image reconstruction method for thoracic and abdominal MR imaging. To date, this method has not been investigated with cancer patient magnetic resonance imaging (MRI) data. The goal of this study was to assess the dynamic keyhole method for the task of lung tumor localization using cine-MR images reconstructed in the presence of respiratory motion. Methods: The dynamic keyhole method utilizes a previously acquired library of peripheral k-space datasets, binned at similar displacement and phase (where phase simply indicates whether breathing is inhale-to-exhale or exhale-to-inhale), in conjunction with newly acquired central k-space datasets (keyhole). External respiratory signals drive the process of sorting, matching, and combining the two k-space streams for each respiratory bin, thereby achieving faster image acquisition without substantial motion artifacts. This study is the first to investigate the impact of k-space undersampling on lung tumor motion and area assessment across clinically available techniques (zero-filling and conventional keyhole). In this study, the dynamic keyhole, conventional keyhole, and zero-filling methods were compared to full k-space dataset acquisition by quantifying (1) the keyhole size required for central k-space datasets for constant image quality across sixty-four cine-MRI datasets from nine lung cancer patients, (2) the intensity difference between the original and reconstructed images for a constant keyhole size, and (3) the accuracy of tumor motion and area directly measured by tumor autocontouring. Results: For constant image quality, the dynamic keyhole, conventional keyhole, and zero-filling methods required 22%, 34%, and 49% of the keyhole size (P < 0.0001), respectively, compared to the full k-space image acquisition method.
Compared to the conventional keyhole and zero-filling reconstructed images with the keyhole size utilized in the dynamic keyhole
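The keyhole idea, fresh central k-space combined with peripheral k-space from a matched library bin, can be sketched as follows; here the library bin is assumed to match the current frame exactly, so the reconstruction is exact:

```python
import numpy as np

# Minimal sketch of keyhole reconstruction: the new acquisition supplies only
# the central (keyhole) k-space lines, and the peripheral lines come from a
# previously acquired library image at a matched respiratory displacement.
rng = np.random.default_rng(2)
frame = rng.random((64, 64))      # "current" image to be reconstructed
library_image = frame.copy()      # assume a perfectly matched library bin

k_full = np.fft.fftshift(np.fft.fft2(frame))
k_library = np.fft.fftshift(np.fft.fft2(library_image))

keyhole_frac = 0.22               # ~22% of lines, as for the dynamic keyhole
n = k_full.shape[0]
half = int(n * keyhole_frac) // 2
center = slice(n // 2 - half, n // 2 + half)

k_recon = k_library.copy()
k_recon[center, :] = k_full[center, :]   # overwrite keyhole with fresh data

recon = np.real(np.fft.ifft2(np.fft.ifftshift(k_recon)))
print(np.max(np.abs(recon - frame)))     # exact here because the library matches
```

In practice the library bin never matches exactly, and the residual mismatch in the peripheral lines is the error source the three reconstruction methods trade against acquisition speed.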

  11. Intervertebral anticollision constraints improve out-of-plane translation accuracy of a single-plane fluoroscopy-to-CT registration method for measuring spinal motion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Cheng-Chung; Tsai, Tsung-Yuan; Hsu, Shih-Jung [Institute of Biomedical Engineering, National Taiwan University, Taiwan 10051 (China); Lu, Tung-Wu [Institute of Biomedical Engineering, National Taiwan University, Taiwan 10051, Republic of China and Department of Orthopaedic Surgery, School of Medicine, National Taiwan University, Taiwan 10617 (China); Shih, Ting-Fang [Department of Medical Imaging, National Taiwan University, Taiwan 10051 (China); Wang, Ting-Ming [Department of Orthopaedic Surgery, National Taiwan University Hospital, Taiwan 10051 (China)

    2013-03-15

    Purpose: The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine; and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. Methods: The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopy images. Three variations of the anticollision constraints, namely, T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and the WEMAC methods. Results: The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors for the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) more or less unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF method and the R-DOF method, respectively. Conclusions: The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF more or less unaltered. With the submillimeter and subdegree accuracy, the WEMAC method was

  12. The role of MT2-MMP in cancer progression

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Emiko [Department of Molecular Pathology, Graduate School of Medicine and Health Sciences, Osaka University, Suita, Osaka 565-0871 (Japan); Yana, Ikuo [Department of Molecular Pathology, Graduate School of Medicine and Health Sciences, Osaka University, Suita, Osaka 565-0871 (Japan); Takeda Pharmaceutical Co. Ltd., Japan Development Center, Osaka 540-8645 (Japan); Fujita, Chisato; Irifune, Aiko; Takeda, Maki; Madachi, Ayako; Mori, Seiji; Hamada, Yoshinosuke; Kawaguchi, Naomasa [Department of Molecular Pathology, Graduate School of Medicine and Health Sciences, Osaka University, Suita, Osaka 565-0871 (Japan); Matsuura, Nariaki, E-mail: Matsuura@sahs.med.osaka-u.ac.jp [Department of Molecular Pathology, Graduate School of Medicine and Health Sciences, Osaka University, Suita, Osaka 565-0871 (Japan)

    2010-03-05

    The role of MT2-MMP in cancer progression remains to be elucidated, in spite of many reports on MT1-MMP. Using a human fibrosarcoma cell line, HT1080, and a human gastric cancer cell line, TMK-1, endogenous expression of MT1-MMP or MT2-MMP was suppressed by siRNA to examine the influence on cancer progression in vitro and in vivo. In HT1080 cells, positive for both MT1-MMP and MT2-MMP, migration as well as invasion was impaired by MT1-MMP or MT2-MMP suppression. Cell proliferation in three-dimensional (3D) conditions was also inhibited by MT1-MMP or MT2-MMP suppression, and tumor growth in nude mice transplanted with the tumor cells was reduced by either MT1-MMP or MT2-MMP suppression, with a prolongation of survival time in vivo. MT2-MMP suppression induces stronger inhibitory effects on 3D proliferation and in vivo tumor growth than MT1-MMP suppression. On the other hand, in TMK-1 cells, negative for MT1-MMP and MMP-2 but positive for MT2-MMP, all the migratory, invasive, and 3D proliferative activities are decreased only by MT2-MMP suppression. These results indicate that MT2-MMP might be involved in cancer progression to an extent equal to or greater than MT1-MMP, independently of MMP-2 and MT1-MMP.

  13. Mt. Pinatubo, Phillippines - Perspective View

    Science.gov (United States)

    1996-01-01

    The effects of the June 15, 1991, eruption of Mt. Pinatubo continue to affect the lives of people living near the volcano on the island of Luzon in the Philippines. The eruption produced a large amount of volcanic debris that was deposited on the flanks of the volcano as part of pyroclastic flows. This perspective view looking toward the east shows the western flank of the volcano where most of these pyroclastic flows were deposited.This debris consists of ash and boulders that mix with water after heavy rains to form volcanic mudflows called lahars. Lahars are moving rivers of concrete slurry that are highly erosive. They can sweep down existing river valleys, carving deep canyons where the slopes are steep, or depositing a mixture of fine ash and larger rocks on the gentler slopes. The deposits left from a lahar soon solidify into a material similar to concrete, but while they are moving, lahars are dynamic features, and in a single river valley the active channel may change locations within a few minutes or hours. These changes represent a significant natural hazard to local communities.The topographic data were collected by NASA's airborne imaging radar AIRSAR instrument on November 29, 1996. Colors are from the French SPOT satellite imaging data in both visible and infrared wavelengths collected in February 1996. Areas of vegetation appear red and areas without vegetation appear light blue. River valleys radiate out from the summit of the volcano (upper center). Since the eruption, lahars have stripped these valleys of any vegetation. The Pasig-Potrero River flows to the northeast off the summit in the upper right of the image.Scientists have been using airborne radar data collected by the AIRSAR instrument in their studies of the aftereffects of the Mt. Pinatubo eruption. AIRSAR collected imaging radar data over the volcano during a mission to the Pacific Rim region in late 1996 and on a follow-up mission to the area in late 2000. 
These data sets along with

  14. Diagnostic accuracy of Kato-Katz, FLOTAC, Baermann, and PCR methods for the detection of light-intensity hookworm and Strongyloides stercoralis infections in Tanzania.

    Science.gov (United States)

    Knopp, Stefanie; Salim, Nahya; Schindler, Tobias; Karagiannis Voules, Dimitrios A; Rothen, Julian; Lweno, Omar; Mohammed, Alisa S; Singo, Raymond; Benninghoff, Myrna; Nsojo, Anthony A; Genton, Blaise; Daubenberger, Claudia

    2014-03-01

    Sensitive diagnostic tools are crucial for an accurate assessment of helminth infections in low-endemicity areas. We examined stool samples from Tanzanian individuals and compared the diagnostic accuracy of a real-time polymerase chain reaction (PCR) with the FLOTAC technique and the Kato-Katz method for hookworm and the Baermann method for Strongyloides stercoralis detection. Only FLOTAC had a higher sensitivity than the Kato-Katz method for hookworm diagnosis; the sensitivities of PCR and the Kato-Katz method were equal. PCR had a very low sensitivity for S. stercoralis detection. The cycle threshold values of the PCR were negatively correlated with the logarithm of hookworm egg and S. stercoralis larvae counts. The median larvae count was significantly lower in PCR false negatives than true positives. All methods failed to detect very low-intensity infections. New diagnostic approaches are needed for monitoring of progressing helminth control programs, confirmation of elimination, or surveillance of disease recrudescence.
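The reported negative correlation between PCR cycle-threshold (Ct) values and the logarithm of egg/larva counts is a plain Pearson correlation. A minimal sketch with hypothetical counts (not the study's data), where higher parasite loads amplify earlier and therefore give lower Ct values:

```python
import math

# Hypothetical paired observations (not the study's data).
egg_counts = [10, 50, 200, 800, 3200]
ct_values = [36.0, 33.5, 30.9, 28.2, 25.6]

log_counts = [math.log10(c) for c in egg_counts]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

r = pearson_r(log_counts, ct_values)
print(f"Pearson r between log10(count) and Ct: {r:.3f}")
```

With these illustrative numbers the correlation is strongly negative, mirroring the relationship the abstract describes.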

  15. High-accuracy 2D digital image correlation measurements using low-cost imaging lenses: implementation of a generalized compensation method

    Science.gov (United States)

    Pan, Bing; Yu, Liping; Wu, Dafang

    2014-02-01

    The ideal pinhole imaging model commonly assumed for an ordinary two-dimensional digital image correlation (2D-DIC) system is neither perfect nor stable, because of small out-of-plane motion of the test sample surface after loading, small out-of-plane motion of the sensor target due to temperature variation of the camera, and unavoidable geometric distortion of the imaging lens. In certain cases, these disadvantages can lead to significant errors in the measured displacements and strains. Although a high-quality bilateral telecentric lens has been strongly recommended as an essential optical component of a 2D-DIC system for high-accuracy measurement, it is not generally applicable because of its fixed field of view, limited depth of focus and high cost. To minimize the errors associated with the imperfection and instability of a common 2D-DIC system using a low-cost imaging lens, a generalized compensation method using a non-deformable reference sample is proposed in this work. With the proposed method, the displacement of the reference sample rigidly attached behind the test sample is first measured using 2D-DIC and then fitted with a parametric model. The fitted parametric model is then used to correct the displacements of the deformed sample, removing the influence of these unfavorable factors. The validity of the proposed compensation method is first verified using out-of-plane translation, out-of-plane rotation and in-plane translation tests, and their combinations. Uniaxial tensile tests of an aluminum specimen were also performed to quantitatively examine the strain accuracy of the proposed compensation method. Experiments show that the proposed compensation method is an easy-to-implement yet effective technique for achieving high-accuracy deformation measurement with an ordinary 2D-DIC system.
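The compensation idea, fitting a parametric model to the displacement field measured on the rigid reference sample and subtracting it from the test-sample field, can be sketched as follows. The bilinear model, grid, and noise level are illustrative assumptions, not the paper's actual choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Measurement grid (hypothetical field of view, normalized coordinates).
x, y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
x, y = x.ravel(), y.ravel()

# Parasitic displacement field (out-of-plane motion + lens distortion),
# assumed identical on the rigid reference sample and the test sample.
parasitic = 0.5 + 0.3 * x - 0.2 * y + 0.1 * x * y

# True deformation of the test sample -- what we want to recover.
true_def = 0.05 * x**2

u_ref = parasitic + rng.normal(0.0, 0.002, x.size)  # rigid reference sample
u_test = parasitic + true_def                        # deformed test sample

# Step 1: fit a parametric (here bilinear) model to the reference field.
A = np.column_stack([np.ones_like(x), x, y, x * y])
coef, *_ = np.linalg.lstsq(A, u_ref, rcond=None)

# Step 2: subtract the fitted parasitic field from the test displacements.
u_corrected = u_test - A @ coef
err = float(np.max(np.abs(u_corrected - true_def)))
print(f"max residual after compensation: {err:.4f}")
```

Because the reference sample is rigid, everything it appears to "move" is parasitic, so the fitted model captures exactly the error field shared with the test sample.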

  16. Provision of Controlled Motion Accuracy of Industrial Robots and Multiaxis Machines by the Method of Integrated Deviations Correction

    Science.gov (United States)

    Krakhmalev, O. N.; Petreshin, D. I.; Fedonin, O. N.

    2016-04-01

    A method has been developed for correcting the integrated motion deviations of industrial robots and multiaxis machines caused by the primary geometrical deviations of their segments. The method can be used to build a control system that provides motion correction for industrial robots and multiaxis machines.

  17. Accuracy evaluation of a new three-dimensional reproduction method of edentulous dental casts, and wax occlusion rims with jaw relation.

    Science.gov (United States)

    Yuan, Fu-Song; Sun, Yu-Chun; Wang, Yong; Lü, Pei-Jun

    2013-09-01

    The article introduces a new method for three-dimensional reproduction of edentulous dental casts and wax occlusion rims with the jaw relation, using a commercial high-speed line laser scanner and reverse engineering software, and evaluates the method's accuracy in vitro. The method comprises three main steps: (i) acquisition of the three-dimensional stereolithography data of the maxillary and mandibular edentulous dental casts and wax occlusion rims; (ii) acquisition of the three-dimensional stereolithography data of the jaw relation; and (iii) registration of these data with the reverse engineering software to complete the reconstruction. To evaluate the accuracy of this method, dental casts and wax occlusion rims of 10 edentulous patients were used. The lengths of eight lines between common anatomic landmarks were measured directly on the casts and occlusion rims with a vernier caliper, and on the three-dimensional computerized images with the software measurement tool. The direct data were considered the true values. The paired-samples t-test was used for statistical analysis. The mean differences between the direct and the computerized measurements were mostly less than 0.04 mm and were not significant (P>0.05). Consistency among the 10 patients was assessed using one-way analysis of variance. Accurate three-dimensional reproduction of the edentulous dental casts, wax occlusion rims and jaw relation was achieved. The proposed method enables the visualization of occlusion from different views and would help to meet the demand for the computer-aided design of removable complete dentures.
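The paired-samples t-test used to compare the direct (caliper) and computerized measurements can be computed by hand. The eight measurement pairs below are hypothetical, chosen only to show the bookkeeping:

```python
import math

# Hypothetical paired measurements (mm): caliper (reference) vs software tool.
direct  = [42.10, 38.55, 51.20, 47.80, 33.95, 45.60, 40.25, 36.70]
digital = [42.13, 38.52, 51.24, 47.77, 33.98, 45.63, 40.22, 36.73]

diffs = [a - b for a, b in zip(direct, digital)]
n = len(diffs)
mean_d = sum(diffs) / n
# sample standard deviation of the paired differences
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))
print(f"mean difference = {mean_d:.4f} mm, t = {t_stat:.3f}")
```

With differences this small, |t| stays below the two-sided 5% critical value for 7 degrees of freedom (2.365), i.e. the two measurement routes do not differ significantly, as in the study.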

  18. Evaluating Measurement Accuracy

    CERN Document Server

    Rabinovich, Semyon G

    2010-01-01

    The goal of Evaluating Measurement Accuracy: A Practical Approach is to present methods for estimating the accuracy of measurements performed in industry, trade, and scientific research. Although multiple measurements are the focus of current theory, single measurements are the ones most commonly used. This book answers fundamental questions not addressed by present theory, such as how to discover the complete uncertainty of a measurement result. In developing a general theory of processing experimental data, this book, for the first time, presents the postulates of the theory of measurements. It introduces several new terms and definitions about the relationship between the accuracy of measuring instruments and measurements utilizing these instruments. It also offers well-grounded and practical methods for combining the components of measurement inaccuracy. From developing the theory of indirect measurements to proposing new methods of reduction in place of the traditional ones, this work encompasses the ful...

  19. Accuracy of two dental and one skeletal age estimation methods in 6-16 year old Gujarati children.

    Science.gov (United States)

    Patel, Purv S; Chaudhary, Anjani Ramachandra; Dudhia, Bhavin B; Bhatia, Parul V; Soni, Naresh C; Jani, Yesha V

    2015-01-01

    Age estimation is of immense importance not only for personal identification but also for treatment planning in medicine and dentistry. Chronologic age conveys only a rough approximation of the maturational status of a person; hence, dental and skeletal ages have been explored as maturity indicators for decades. Tooth maturation provides a valuable indicator of dental age and serves as a better index of the maturation of a child than other maturity indicators. The aim was to test the applicability of Demirjian's and Willem's dental age assessment methods as well as the Greulich and Pyle skeletal age assessment method in children residing in Gandhinagar district. The study consisted of 180 randomly selected subjects (90 males and 90 females), 6 to 16 years of age and residing in Gandhinagar district. Dental age estimation was performed from radiovisuograph (RVG) images of the mandibular teeth of the left quadrant by both Demirjian's and Willem's methods. Skeletal age estimation was done from right hand-wrist radiographs by the Greulich and Pyle method. The differences between the chronological age and the estimated dental and skeletal ages were tested using the paired t-test. The correlation between chronological age and the dental and skeletal age estimates was confirmed statistically by Pearson's correlation. The reproducibility of the estimations was tested using Pearson's chi-square test. Among the age estimation methods used in this study, Willem's dental age estimation method proved to be the most accurate and consistent. Although various age estimation methods exist, results vary across populations due to ethnic differences. However, until new tables are formulated, Willem's method (the modified Demirjian method) can be accurately applied to estimate chronological age for the population residing in Gandhinagar district.

  20. A Method for Accuracy of Genetic Evaluation by Utilization of Canadian Genetic Evaluation Information to Improve Heilongjiang Holstein Herds

    Institute of Scientific and Technical Information of China (English)

    DING Ke-wei; TAKEO Kayaba

    2004-01-01

    The objectives of this study were to set up a new genetic evaluation procedure to predict the breeding values of Holstein herds in Heilongjiang Province of China for milk and fat production, by utilizing Canadian pedigree and genetic evaluation information, and to compare the breeding values of sires from different countries. The data used for evaluating young sires for the Chinese Holstein population consisted of records selected from 21 herds in Heilongjiang Province. The first-lactation records of 2,496 daughters collected in 1989 and 2000 were analyzed. A single-trait animal model including a fixed herd-year effect and random animal and residual effects was used, drawing on Canadian pedigree and genetic evaluation information on 5,126 sires released by the Canadian Dairy Network in August 2000. The BLUP procedure was used to evaluate all cattle in this study, and the estimated breeding values (EBV) for milk and fat production of 6,697 cattle (673 sires and 6,024 cows) were predicted. The genetic levels of the top 100 sires originating from different countries were compared. Unlike the BLUP procedure currently used with the single-trait sire model in Heilongjiang Province, the genetic evaluation procedure used in this study can evaluate sires and cows simultaneously and also increases the accuracy of evaluation, because it uses the relationships and genetic values of Canadian-evaluated sires with more daughters. The results showed that the new procedure is useful for the genetic evaluation of dairy herds. The comparison of the breeding values of sires imported from different countries showed that significant genetic improvement has been achieved in milk production of the Heilongjiang Holstein population by importing sires from foreign countries, especially from the United States, owing to their higher breeding values.
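A single-trait animal model of this kind is solved through Henderson's mixed model equations. A toy sketch with two herd-year classes, six animals, and an identity relationship matrix (a simplifying assumption for brevity; in practice the pedigree-based matrix A and its inverse are used, which is exactly where the Canadian pedigree information enters):

```python
import numpy as np

# Toy single-trait animal model  y = Xb + Zu + e  with a fixed herd-year
# effect b and a random animal effect u.  Milk yields are hypothetical.
y = np.array([6200.0, 5900.0, 6800.0, 6400.0, 5700.0, 6100.0])
X = np.array([[1, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [0, 1]], dtype=float)  # 2 herd-year classes
Z = np.eye(6)          # each record belongs to a different animal
lam = 2.0              # variance ratio sigma_e^2 / sigma_u^2 (assumed)

# Henderson's mixed model equations (with A = I):
#   [ X'X   X'Z          ] [b]   [X'y]
#   [ Z'X   Z'Z + lam*I  ] [u] = [Z'y]
top = np.hstack([X.T @ X, X.T @ Z])
bot = np.hstack([Z.T @ X, Z.T @ Z + lam * np.eye(6)])
lhs = np.vstack([top, bot])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
b_hat, u_hat = sol[:2], sol[2:]
print("herd-year solutions:", b_hat)
print("EBVs:", u_hat)
```

With one record per animal and A = I, the EBVs are simply the within-herd deviations shrunk by 1/(1 + lam); the pedigree matrix is what lets information flow between relatives and raises the accuracy, as the abstract notes.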

  1. Positional Accuracy Assessment of the Openstreetmap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    Science.gov (United States)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has paid great attention to the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs in the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version with an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km2, is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100,000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of the OSM buildings which, having by definition a closer match to the reference buildings, can eventually be integrated into the OSM database.
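The detection of a systematic translation and the subsequent warping can be illustrated with a least-squares affine fit to homologous point pairs. The coordinates and noise below are synthetic, with a ~0.4 m shift built in to mirror the reported result (the paper's actual pipeline also uses multi-resolution splines):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic homologous points: reference cartography vs OSM, where the
# OSM layer carries a systematic ~0.4 m shift in X and Y plus noise.
ref = rng.uniform(0, 1000, size=(200, 2))
osm = ref + np.array([0.4, 0.4]) + rng.normal(0.0, 0.3, size=(200, 2))

shift = (osm - ref).mean(axis=0)                    # systematic translation
d_before = np.linalg.norm(osm - ref, axis=1).mean() # mean point distance

# Least-squares affine warp osm -> ref:  ref ~ [x, y, 1] @ params.
G = np.hstack([osm, np.ones((len(osm), 1))])
params, *_ = np.linalg.lstsq(G, ref, rcond=None)    # 3x2 parameter matrix
warped = G @ params
d_after = np.linalg.norm(warped - ref, axis=1).mean()

print(f"shift ~ {shift.round(2)}, mean distance {d_before:.2f} -> {d_after:.2f} m")
```

The affine fit absorbs the systematic translation (and any rotation/scale), so the residual mean distance after warping reflects only the unsystematic part of the discrepancy.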

  2. Optimized mtDNA Control Region Primer Extension Capture Analysis for Forensically Relevant Samples and Highly Compromised mtDNA of Different Age and Origin

    Directory of Open Access Journals (Sweden)

    Mayra Eduardoff

    2017-09-01

    The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include massively parallel sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less) and have proven to be substantially more successful than electrophoretic methods in obtaining useful mtDNA sequences from these samples. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single-tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high-quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS

  3. Molecular cloning and pharmacological characterization of rat melatonin MT1 and MT2 receptors.

    Science.gov (United States)

    Audinot, Valérie; Bonnaud, Anne; Grandcolas, Line; Rodriguez, Marianne; Nagel, Nadine; Galizzi, Jean-Pierre; Balik, Ales; Messager, Sophie; Hazlerigg, David G; Barrett, Perry; Delagrange, Philippe; Boutin, Jean A

    2008-05-15

    In order to interpret the effects of melatonin ligands in rats, we need to determine their activity at the receptor subtype level in the corresponding species. Thus, the rat melatonin rMT(1) receptor was cloned using DNA fragments for exon 1 and 2 amplified from rat genomic DNA followed by screening of a rat genomic library for the full length exon sequences. The rat rMT(2) receptor subtype was cloned in a similar manner with the exception of exon 1 which was identified by screening a rat genomic library with exon 1 of the human hMT(2) receptor. The coding region of these receptors translates proteins of 353 and 364 amino acids, respectively, for rMT(1) and rMT(2). A 55% homology was observed between both rat isoforms. The entire contiguous rat MT(1) and MT(2) receptor coding sequences were cloned, stably expressed in CHO cells and characterized in binding assay using 2-[(125)I]-Iodomelatonin. The dissociation constants (K(d)) for rMT(1) and rMT(2) were 42 and 130 pM, respectively. Chemically diverse compounds previously characterized at human MT(1) and MT(2) receptors were evaluated at rMT(1) and rMT(2) receptors, for their binding affinity and functionality in [(35)S]-GTPgammaS binding assay. Some, but not all, compounds shared a similar binding affinity and functionality at both rat and human corresponding subtypes. A different pharmacological profile of the MT(1) subtype has also been observed previously between human and ovine species. These in vitro results obtained with the rat melatonin receptors are thus of importance to understand the physiological roles of each subtype in animal models.

  4. Application of the PM6 semi-empirical method to modeling proteins enhances docking accuracy of AutoDock

    National Research Council Canada - National Science Library

    Bikadi, Zsolt; Hazai, Eszter

    2009-01-01

    .... AutoDockTools software, the interface for preparing input files for one of the most widely used docking programs AutoDock 4, utilizes the Gasteiger partial charge calculation method for both protein...

  5. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    Science.gov (United States)

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; Young, Mitchell T. H.; Kochunas, Brendan; Graham, Aaron; Larsen, Edward W.; Downar, Thomas; Godfrey, Andrew

    2016-12-01

    A consistent "2D/1D" neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulations of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  7. A simple high accuracy phase locked loop method

    Institute of Scientific and Technical Information of China (English)

    耿攀; 吴卫民; 陈建明; 叶银忠; 刘以建

    2011-01-01

    This paper proposes a simple, high-accuracy inner-synchronization phase-locked loop (PLL) method for digital control. The method is easy to implement. When the system operates normally, it tracks the grid synchronization signal with high accuracy. When the grid synchronization signal is lost, the system continues to run stably at the previous frequency and phase, which makes the method well suited to systems that must switch freely between grid-tied and stand-alone operation. The paper first analyzes the working principle of the PLL with the inner-synchronization program, then derives the accuracy achievable with this phase-locking method, and finally verifies the proposed method with experiments on a 1 kW prototype.
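The hold-over behaviour described (track the grid while the synchronization signal is present, then free-run at the last locked frequency and phase when it is lost) can be sketched as a small digital PI-type PLL. Gains, sample time, and frequencies are illustrative assumptions, not the paper's design values:

```python
import math

class HoldoverPLL:
    """Minimal digital PI-type PLL with hold-over: tracks the grid phase
    while a synchronization signal is available and free-runs at the last
    locked frequency/phase when the signal is lost."""

    def __init__(self, f_nominal=50.0, dt=1e-4, kp=100.0, ki=2000.0):
        self.w0 = 2 * math.pi * f_nominal
        self.w = self.w0          # current angular-frequency estimate
        self.theta = 0.0          # current phase estimate
        self.integ = 0.0          # integrator state (learned freq. offset)
        self.dt, self.kp, self.ki = dt, kp, ki

    def step(self, grid_phase=None):
        if grid_phase is not None:                  # locked operation
            # wrapped phase error between grid and internal oscillator
            err = math.atan2(math.sin(grid_phase - self.theta),
                             math.cos(grid_phase - self.theta))
            self.integ += self.ki * err * self.dt
            self.w = self.w0 + self.integ + self.kp * err
        # On loss of sync (grid_phase is None) w is left untouched:
        # the oscillator free-runs at the last locked frequency.
        self.theta = (self.theta + self.w * self.dt) % (2 * math.pi)
        return self.theta

pll = HoldoverPLL()
dt, w_grid = 1e-4, 2 * math.pi * 50.2      # grid slightly off-nominal
for k in range(20000):                      # 2 s of locked operation
    pll.step((w_grid * k * dt) % (2 * math.pi))
locked_f = pll.w / (2 * math.pi)
for _ in range(10000):                      # 1 s after losing the signal
    pll.step(None)
holdover_f = pll.w / (2 * math.pi)
print(f"locked at {locked_f:.3f} Hz, hold-over at {holdover_f:.3f} Hz")
```

The PI integrator learns the grid's frequency offset, so after the signal disappears the oscillator keeps rotating at exactly the last locked frequency instead of snapping back to nominal.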

  8. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    Science.gov (United States)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms, with the fitted planes as planar facets. In our study we analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM data with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different statistical tests (for example, chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the case of the test on the real data the residual value distribution is
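Plane fitting followed by a normality test of the residuals, as applied to the artificially generated point cloud, can be reproduced in a few lines. Note that using an estimated standard deviation makes the Kolmogorov-Smirnov critical value only approximate (the Lilliefors caveat); the plane coefficients and noise level here are hypothetical:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Synthetic point cloud: a known plane plus normally distributed noise,
# mimicking the artificially generated DTM in the study.
n = 500
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
z = 0.2 * x - 0.1 * y + 5.0 + rng.normal(0.0, 0.5, n)

# Least-squares plane fit z ~ a*x + b*y + c.
A = np.column_stack([x, y, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
res = z - A @ np.array([a, b, c])

# One-sample Kolmogorov-Smirnov statistic of the residuals against
# N(0, s^2); 1.36/sqrt(n) approximates the 95% critical value.
s = res.std(ddof=3)                 # 3 fitted parameters
u = np.sort(res) / s
cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2))) for v in u])
D = max((np.arange(1, n + 1) / n - cdf).max(),
        (cdf - np.arange(n) / n).max())
print(f"plane a={a:.3f} b={b:.3f} c={c:.2f}, KS D={D:.4f} "
      f"(approx. 95% bound {1.36 / math.sqrt(n):.4f})")
```

With truly Gaussian noise, D stays well under the bound and the null hypothesis of normal residuals is retained, matching the simulation result the abstract reports.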

  9. Lagrangian Finite-Element Method for the Simulation of K-BKZ Fluids with Third Order Accuracy

    DEFF Research Database (Denmark)

    Marin, José Manuel Román; Rasmussen, Henrik K.

    2009-01-01

    system attached to the particles is discretized by ten-node quadratic tetrahedral elements using Cartesian coordinates and the pressure by linear interpolation inside these elements. The spatial discretization of the governing equations follows the mixed Galerkin finite element method. The time integral...... is discretized by a quadratic interpolation in time. The convergence of the method in time and space was demonstrated on the free surface problem of a filament stretched between two plates, considering the axisymmetric case as well as the growth of non-axisymmetric disturbances on the free surface. The scheme...

  10. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Science.gov (United States)

    2010-07-01

    ... criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM... sulfur content of motor vehicle diesel fuel, NRLM diesel fuel, and ECA marine fuel? (a) Precision. (1) For motor vehicle diesel fuel and diesel fuel additives subject to the 15 ppm sulfur standard of §...

  11. A 3D reconstruction method of the body envelope from biplanar X-rays: Evaluation of its accuracy and reliability.

    Science.gov (United States)

    Nérot, Agathe; Choisne, Julie; Amabile, Célia; Travert, Christophe; Pillet, Hélène; Wang, Xuguang; Skalli, Wafa

    2015-12-16

    The aim of this study was to propose a novel method for reconstructing the external body envelope from low-dose biplanar X-rays of a person. The 3D body envelope was obtained by deforming a template to match the surface profiles in the two X-ray images in three successive steps: a global morphing to adopt the position of the person and scale the template's body segments, followed by a gross deformation and a fine deformation using two sets of pre-defined control points. To evaluate the method, a biplanar X-ray acquisition was obtained from head to foot for 12 volunteers in a standing posture. Up to 172 radio-opaque skin markers were attached to the body surface and used as reference positions. Each envelope was reconstructed three times by three operators. Results showed a bias lower than 7 mm and a confidence interval (95%) of reproducibility lower than 6 mm for all body parts, comparable to other existing methods matching a template onto stereographic photographs. The proposed method offers the possibility of reconstructing the body shape in addition to the skeleton using a low-dose biplanar X-ray system.

  12. Accuracy of the DLPNO-CCSD(T) method for non-covalent bond dissociation enthalpies from coinage metal cation complexes

    KAUST Repository

    Minenkov, Yury

    2015-08-27

    The performance of the domain-based local pair natural orbital coupled-cluster (DLPNO-CCSD(T)) method has been tested in reproducing the experimental gas-phase ligand dissociation enthalpy in a series of Cu+, Ag+ and Au+ complexes. For 33 Cu+ non-covalent ligand dissociation enthalpies, all-electron calculations with the same method result in a mean unsigned error (MUE) below 2.2 kcal/mol, although a mean signed error (MSE) of 1.4 kcal/mol indicates systematic underestimation of the experimental values. Inclusion of scalar relativistic effects for Cu, either via an effective core potential (ECP) or the Douglas-Kroll-Hess Hamiltonian, reduces the MUE below 1.7 kcal/mol and the MSE to -1.0 kcal/mol. For 24 Ag+ non-covalent ligand dissociation enthalpies, the DLPNO-CCSD(T) method results in a MUE below 2.1 kcal/mol and a vanishing MSE. For 15 Au+ non-covalent ligand dissociation enthalpies, the DLPNO-CCSD(T) method gives larger MUE and MSE, equal to 3.2 and 1.7 kcal/mol, which might be related to the poor precision of the experimental measurements. Overall, for the combined dataset of 72 coinage metal ion complexes, DLPNO-CCSD(T) results in a MUE below 2.2 kcal/mol and an almost vanishing MSE. As for comparison with computationally cheaper density functional theory (DFT) methods, the routinely used M06 functional results in a MUE and MSE equal to 3.6 and -1.7 kcal/mol. Results converge already at a cc-pVTZ-quality basis set, making highly accurate DLPNO-CCSD(T) estimates affordable for routine (single-point) calculations on large transition metal complexes of >100 atoms.
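The MUE/MSE bookkeeping behind these comparisons is straightforward: the signed error averages cancellation between over- and underestimation, the unsigned error does not. The enthalpy pairs below are made up purely to illustrate the calculation:

```python
# Hypothetical computed vs experimental dissociation enthalpies (kcal/mol),
# illustrating the MUE / MSE bookkeeping; not the paper's data.
pairs = [  # (DLPNO-CCSD(T) estimate, experiment)
    (45.1, 44.0), (38.2, 39.5), (51.7, 50.9),
    (29.8, 31.2), (47.3, 46.1), (33.0, 33.4),
]
errors = [calc - exp for calc, exp in pairs]
mse = sum(errors) / len(errors)                  # mean signed error
mue = sum(abs(e) for e in errors) / len(errors)  # mean unsigned error
print(f"MSE = {mse:+.2f} kcal/mol, MUE = {mue:.2f} kcal/mol")
```

Here the signed errors cancel almost exactly while the unsigned error remains around 1 kcal/mol, which is why the abstract quotes both statistics.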

  13. Biomarker Validation for Aging: Lessons from mtDNA Heteroplasmy Analyses in Early Cancer Detection

    Directory of Open Access Journals (Sweden)

    Peter E. Barker

    2009-11-01

    The anticipated biological and clinical utility of biomarkers has attracted significant interest recently. Aging and early cancer detection represent areas active in the search for predictive and prognostic biomarkers. While the applications differ, overlapping biological features, analytical technologies and specific biomarker analytes bear comparison. Mitochondrial DNA (mtDNA) has been evaluated as a biomarker in both biological models. However, it remains unclear whether mtDNA changes in aging and cancer represent biological relationships that are causal, incidental, or a combination of both. This article focuses on the evaluation of mtDNA-based biomarkers, emerging strategies for quantitating mtDNA admixtures, and how the current understanding of mtDNA in aging and cancer evolves with the introduction of new technologies. Whether for cancer or aging, mtDNA-based biomarker evaluations offer several lessons. Biological systems are inherently dynamic and heterogeneous. Detection limits of mtDNA sequencing technologies differ among methods for low-level DNA sequence admixtures in healthy and diseased states. Performance metrics of analytical mtDNA technology should be validated prior to application in heterogeneous biologically based systems. Critical in evaluating biomarker performance is the ability to distinguish measurement-system variance from inherent biological variance, because it is within the latter that background healthy variability as well as high-value, disease-specific information reside.

  14. Criteria of GenCall score to edit marker data and methods to handle missing markers have an influence on accuracy of genomic predictions

    DEFF Research Database (Denmark)

    Edriss, Vahid; Guldbrandtsen, Bernt; Lund, Mogens Sandø

    2013-01-01

    contained 1071 Jersey bulls that were genotyped with the Illumina Bovine 50K chip. After preliminary editing, 39227 SNP remained in the dataset. Four methods to handle missing genotypes were: 1) BEAGLE: missing markers were imputed using Beagle 3.3 software, 2) COMMON: missing genotypes at a locus were...... that missing genotypes should be imputed in order to improve genomic prediction. Editing the marker data with stringent threshold on GenCall (GC) scores and then imputing the discarded genotypes did not lead to higher accuracy. All marker genotypes with a GC score over 0.15 should be retained for genomic...
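The COMMON strategy mentioned above, replacing a missing genotype with the most frequent observed genotype at that locus, can be sketched directly. The toy 0/1/2-coded matrix is hypothetical (real pipelines such as Beagle use haplotype-based imputation instead):

```python
from collections import Counter

# Toy SNP matrix: rows = animals, columns = loci, genotypes coded 0/1/2,
# None = missing (e.g. discarded by a GenCall-score threshold).
geno = [
    [0, 1, None, 2],
    [1, 1, 2,    2],
    [0, None, 2, 1],
    [0, 1, 2,    None],
    [1, 1, 2,    2],
]

def impute_common(matrix):
    """COMMON strategy sketch: replace each missing genotype with the
    most frequent observed genotype at that locus."""
    n_loci = len(matrix[0])
    filled = [row[:] for row in matrix]
    for j in range(n_loci):
        observed = [row[j] for row in matrix if row[j] is not None]
        mode = Counter(observed).most_common(1)[0][0]
        for row in filled:
            if row[j] is None:
                row[j] = mode
    return filled

imputed = impute_common(geno)
print(imputed)
```

This locus-wise mode fill ignores linkage information, which is one reason the study finds haplotype-aware imputation preferable for genomic prediction accuracy.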

  15. A Software Module for High-Accuracy Calibration of Rings and Cylinders on CMM using Multi-Orientation Techniques (Multi-Step and Reversal methods)

    DEFF Research Database (Denmark)

    Tosello, Guido; De Chiffre, Leonardo

    . The Centre for Geometrical Metrology (CGM) at the Technical University of Denmark takes care of free-form measurements, in collaboration with DIMEG, University of Padova, Italy. The present report describes a software module, ROUNDCAL, to be used for high-accuracy calibration of rings and cylinders....... The purpose of the software is to calculate the form error and the least-squares circle of rings and cylinders by averaging pointwise measuring results coming from so-called multi-orientation techniques (both reversal and multi-step methods) in order to eliminate systematic errors of the CMM.

  17. Accuracy of a low priced liquid-based method for cervical cytology in 632 women referred for colposcopy after a positive Pap smear.

    Science.gov (United States)

    van Hemel, B M; Buikema, H J; Groen, H; Suurmeijer, A J H

    2009-08-01

    The aim of this quality control study was to determine the accuracy of liquid-based cytology (LBC) with the Turbitec cytocentrifuge technique. Cervical smears of 632 women, who were referred to our CIN outpatient department after at least two smears read as ASCUS or higher, were evaluated and compared with the histological outcome. In 592 cases the smears revealed abnormalities of squamous epithelium, and in 40 cases abnormalities of glandular epithelium. In the group of squamous epithelium abnormalities, the sensitivity for LSIL was 39.7% and the specificity 89.2%; for the LSIL+ group, these values were 89.4% and 91.4%, respectively. For HSIL the sensitivity was 68.3% and the specificity 92.8%; for the HSIL+ group, 82.3% and 92.3%, respectively. The ASCUS rate was low (2.4%). The Turbitec cytocentrifuge method proved to be a very good LBC method for cervical smears. Offering comparable accuracy at a lower price, this LBC method outperforms commercial alternatives.

  18. High Accuracy On-line Measurement Method of Motion Error on Machine Tools Straight-going Parts

    Institute of Scientific and Technical Information of China (English)

    苏恒; 洪迈生; 魏元雷; 李自军

    2003-01-01

    Harmonic suppression, together with non-periodic and non-closing components in the straightness profile error, causes harmonic distortion in the measurement result; these effects are analyzed. As a countermeasure, a novel accurate two-probe method in the time domain is put forward to measure the straight-going component of motion error in machine tools, based on the frequency-domain three-point method after symmetrical continuation of the probes' primitive signals. Both the straight-going component of the machine tool's motion error and the profile error of a workpiece manufactured on this machine can be measured at the same time. This information can be used to diagnose the fault origin of machine tools. The analysis is confirmed by experiment.
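The error-separation idea behind such frequency-domain multi-probe methods can be illustrated with a simplified one-dimensional, periodic sketch (the probe spacings, weights, and signals below are hypothetical, not the paper's actual two-probe scheme): three probes at different offsets all see the same carriage translation and tilt, so a weighted difference cancels the error motions, and the workpiece profile is then recovered by deconvolution in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                         # samples along the guideway (periodic for simplicity)

profile = np.cumsum(rng.normal(0, 0.01, N))    # workpiece straightness profile
profile -= profile.mean()
e = rng.normal(0, 0.05, N)      # carriage translation error motion
theta = rng.normal(0, 0.02, N)  # carriage pitch (tilt) error motion

# Three probes at offsets d = 0, 1, 2 sample spacings each read
# profile(x + d) plus the shared error motions e(x) + d * theta(x).
d = [0, 1, 2]
m = [np.roll(profile, -di) + e + di * theta for di in d]

# Weights (1, -2, 1) cancel both translation (1 - 2 + 1 = 0)
# and tilt (0*1 - 1*2 + 2*1 = 0).
c = m[0] - 2 * m[1] + m[2]

# In the frequency domain c is the profile filtered by
# W(k) = (1 - exp(2*pi*i*k/N))**2; deconvolve, treating the k = 0 bin
# (the lost DC level) separately.
k = np.arange(N)
W = (1 - np.exp(2j * np.pi * k / N)) ** 2
C = np.fft.fft(c)
P = np.zeros(N, dtype=complex)
P[1:] = C[1:] / W[1:]
recovered = np.fft.ifft(P).real

assert np.allclose(recovered, profile, atol=1e-6)
```

Only the DC level of the profile is lost, since the combining filter W(k) vanishes at k = 0; the paper's time-domain two-probe method additionally handles non-periodic profiles via the symmetrical continuation mentioned above.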

  19. Diagnostic accuracy comparison between clinical signs and hemoglobin color scale as screening methods in the diagnosis of anemia in children

    OpenAIRE

    Leal,Luciana Pedrosa; Mônica M. Osório

    2006-01-01

    OBJECTIVES: to compare the validity and reproducibility of clinical signs with the World Health Organization hemoglobin color scale. METHODS: Two hundred six children in the age range of 6-23 months, at the Instituto Materno Infantil Prof. Fernando Figueira, IMIP, were assessed. Two examiners evaluated the clinical signs and the hemoglobin color scale of each child at the different times. The hemoglobin value was used as a standard for validation. RESULTS: in more than 90% of cases the agreem...

  20. Novel Molecular and Computational Methods Improve the Accuracy of Insertion Site Analysis in Sleeping Beauty-Induced Tumors

    OpenAIRE

    Benjamin T Brett; Katherine E Berquam-Vrieze; Kishore Nannapaneni; Jian Huang; Todd E Scheetz; Dupuy, Adam J.

    2011-01-01

    The recent development of the Sleeping Beauty (SB) system has led to the development of novel mouse models of cancer. Unlike spontaneous models, SB causes cancer through the action of mutagenic transposons that are mobilized in the genomes of somatic cells to induce mutations in cancer genes. While previous methods have successfully identified many transposon-tagged mutations in SB-induced tumors, limitations in DNA sequencing technology have prevented a comprehensive analysis of large tumor ...

  1. Toward an organ based dose prescription method for the improved accuracy of murine dose in orthovoltage x-ray irradiators

    Energy Technology Data Exchange (ETDEWEB)

    Belley, Matthew D.; Wang, Chu [Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States); Nguyen, Giao; Gunasingha, Rathnayaka [Duke Radiation Dosimetry Laboratory, Duke University Medical Center, Durham, North Carolina 27710 (United States); Chao, Nelson J. [Department of Medicine, Duke University Medical Center, Durham, North Carolina 27710 and Department of Immunology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Chen, Benny J. [Department of Medicine, Duke University Medical Center, Durham, North Carolina 27710 (United States); Dewhirst, Mark W. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Yoshizumi, Terry T., E-mail: terry.yoshizumi@duke.edu [Duke Radiation Dosimetry Laboratory, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 (United States); Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2014-03-15

    Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.

  2. Mean Gravity Anomaly Prediction Techniques with a Comparative Analysis of the Accuracy and Economy of Selected Methods.

    Science.gov (United States)

    1982-03-01

    mean gravity anomaly. To do this, it is necessary to apply a data averaging integral of the form (Heiskanen and Moritz, 1967): Δḡ = (1/ab) ∫₀ᵃ ∫₀ᵇ Δg(x,y) dx dy, adapted by Rapp for practical application on digital computers. Details can be found in Heiskanen and Moritz (1967) and Rapp (1964). Although least squares ... methods: Institut für Physikalische Geodäsie, Technische Hochschule, Darmstadt, Federal Republic of Germany. Heiskanen, W., and Moritz, H., 1967, Physical Geodesy

  3. Accuracy of the staggered-grid finite-difference method of the acoustic wave equation for marine seismic reflection modeling

    Institute of Scientific and Technical Information of China (English)

    QIAN Jin; WU Shiguo; CUI Ruofei

    2013-01-01

    Seismic wave modeling is a cornerstone of geophysical data acquisition, processing, and interpretation, for which finite-difference methods are often applied. In this paper, we extend the velocity-pressure formulation of the acoustic wave equation to marine seismic modeling using the staggered-grid finite-difference method. The scheme is developed using a fourth-order spatial and a second-order temporal operator. Then, we define a stability coefficient (SC) and calculate its maximum value under the stability condition. Based on the dispersion relationship, we conduct a detailed dispersion analysis for submarine sediments in terms of the phase and group velocity over a range of angles, stability coefficients, and orders. We also compare the numerical solution with the exact solution for a P-wave line source in a homogeneous submarine model. Additionally, the numerical results determined by a Marmousi2 model with a rugged seafloor indicate that this method is sufficient for modeling complex submarine structures.
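A minimal velocity-pressure staggered-grid scheme of the kind described above (fourth order in space, second order in time) can be sketched in one dimension; the medium, grid, and source below are hypothetical illustration values, not the paper's marine model.

```python
import numpy as np

# Homogeneous 1-D medium (hypothetical water-like parameters)
N, dx = 600, 5.0             # grid points, spacing [m]
c, rho = 1500.0, 1000.0      # velocity [m/s], density [kg/m^3]
dt = 0.5 * dx / c            # CFL number 0.5, below the 1-D fourth-order limit 6/7

# Staggered fields: pressure p at integer nodes, velocity v at half nodes
p = np.exp(-0.5 * ((np.arange(N) - 300) / 10.0) ** 2)   # initial Gaussian pulse
v = np.zeros(N - 1)

a4, b4 = 9.0 / 8.0, -1.0 / 24.0    # fourth-order staggered-difference weights

def d4(f):
    """Fourth-order staggered first derivative, interior points only."""
    return (a4 * (f[2:-1] - f[1:-2]) + b4 * (f[3:] - f[:-3])) / dx

nt = 200
for _ in range(nt):
    v[1:-1] += dt / rho * d4(p)            # update velocity from dp/dx
    p[2:-2] += dt * rho * c * c * d4(v)    # update pressure from dv/dx

# The initial pulse splits into two halves traveling at +-c; the right-going
# peak should now sit near cell 300 + c*nt*dt/dx = 400.
peak = int(np.argmax(p[N // 2:])) + N // 2
assert abs(peak - 400) <= 2
```

The 1-D stability limit c·dt/dx ≤ 6/7 is the inverse of the weight sum 9/8 + 1/24; the paper derives the analogous stability coefficient and dispersion behavior for the 2-D marine case.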

  4. Presequence-Independent Mitochondrial Import of DNA Ligase Facilitates Establishment of Cell Lines with Reduced mtDNA Copy Number.

    Directory of Open Access Journals (Sweden)

    Domenico Spadafora

    Full Text Available Due to the essential role played by mitochondrial DNA (mtDNA) in cellular physiology and bioenergetics, methods for establishing cell lines with altered mtDNA content are of considerable interest. Here, we report evidence for the existence in mammalian cells of a novel, low-efficiency, presequence-independent pathway for mitochondrial protein import, which facilitates mitochondrial uptake of such proteins as Chlorella virus ligase (ChVlig) and Escherichia coli LigA. Mouse cells engineered to depend on this pathway for mitochondrial import of the LigA protein for mtDNA maintenance had severely (up to >90%) reduced mtDNA content. These observations were used to establish a method for the generation of mouse cell lines with reduced mtDNA copy number by, first, transducing them with a retrovirus encoding LigA, and then inactivating in these transductants endogenous Lig3 with CRISPR-Cas9. Interestingly, mtDNA depletion to an average level of one copy per cell proceeds faster in cells engineered to maintain mtDNA at low copy number. This makes a low-mtDNA copy number phenotype resulting from dependence on mitochondrial import of DNA ligase through the presequence-independent pathway potentially useful for rapidly shifting mtDNA heteroplasmy through partial mtDNA depletion.

  5. Presequence-Independent Mitochondrial Import of DNA Ligase Facilitates Establishment of Cell Lines with Reduced mtDNA Copy Number.

    Science.gov (United States)

    Spadafora, Domenico; Kozhukhar, Natalia; Alexeyev, Mikhail F

    2016-01-01

    Due to the essential role played by mitochondrial DNA (mtDNA) in cellular physiology and bioenergetics, methods for establishing cell lines with altered mtDNA content are of considerable interest. Here, we report evidence for the existence in mammalian cells of a novel, low- efficiency, presequence-independent pathway for mitochondrial protein import, which facilitates mitochondrial uptake of such proteins as Chlorella virus ligase (ChVlig) and Escherichia coli LigA. Mouse cells engineered to depend on this pathway for mitochondrial import of the LigA protein for mtDNA maintenance had severely (up to >90%) reduced mtDNA content. These observations were used to establish a method for the generation of mouse cell lines with reduced mtDNA copy number by, first, transducing them with a retrovirus encoding LigA, and then inactivating in these transductants endogenous Lig3 with CRISPR-Cas9. Interestingly, mtDNA depletion to an average level of one copy per cell proceeds faster in cells engineered to maintain mtDNA at low copy number. This makes a low-mtDNA copy number phenotype resulting from dependence on mitochondrial import of DNA ligase through presequence-independent pathway potentially useful for rapidly shifting mtDNA heteroplasmy through partial mtDNA depletion.

  6. Stellar color regression: a spectroscopy based method for color calibration to a few mmag accuracy and the recalibration of Stripe 82

    CERN Document Server

    Yuan, Haibo; Xiang, Maosheng; Huang, Yang; Zhang, Huihua; Chen, Bingqiu

    2014-01-01

    In this paper, we propose a spectroscopy based Stellar Color Regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the SDSS Stripe 82 data (Ivezić et al., I07 hereafter). With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero point errors present in the photometric catalog of Stripe 82, and improve the color calibration by a factor of 2-3. Our study also reveals some small but significant magnitude dependence errors in z-band for some CCDs. Such errors are likely to be present in all the SDSS photome...
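The core of the SCR idea can be sketched with synthetic data (the region count, offsets, and noise level below are hypothetical): for stars with spectra, predict the intrinsic color from the atmospheric parameters, then take the per-region median of observed-minus-predicted color as that region's zero-point error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 5 detector regions, each with an unknown color
# zero-point error; spectroscopy yields a predicted intrinsic g-r color
# for every star from its atmospheric parameters.
true_zp = np.array([0.010, -0.020, 0.004, 0.015, -0.007])   # mag
n_stars = 5000
region = rng.integers(0, 5, n_stars)
predicted = rng.uniform(0.2, 1.2, n_stars)                  # from Teff/[Fe/H] fits
observed = predicted + true_zp[region] + rng.normal(0, 0.02, n_stars)

# SCR idea: the per-region median of (observed - predicted) estimates the
# zero-point offset; subtracting it recalibrates the catalog colors.
est_zp = np.array([np.median(observed[region == r] - predicted[region == r])
                   for r in range(5)])
recalibrated = observed - est_zp[region]

assert np.max(np.abs(est_zp - true_zp)) < 0.004   # few-mmag recovery
```

With roughly a thousand stars per region the statistical floor of the median is below a millimagnitude, which is why this kind of regression can reach few-mmag calibration accuracy.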

  7. American Association of Veterinary Parasitologists' review of veterinary fecal flotation methods and factors influencing their accuracy and use--is there really one best technique?

    Science.gov (United States)

    Ballweber, L R; Beugnet, F; Marchiondo, A A; Payne, P A

    2014-07-30

    The principle of fecal flotation is based on the ability of a solution to allow less dense material (including parasite elements) to rise to the top. However, there are numerous factors that will influence the accuracy and use of such a theoretically simple technique. Whether or not centrifugation is used appears to have an impact on the ability to detect some parasites, but not others. Using a flotation solution with a relatively high specific gravity favors the simultaneous flotation of the diagnostic stages of many different parasites while, at the same time, making recognition of some more difficult because of distortion as well as the amount of debris in the preparation. Dilution methods tend to be less accurate because they require extrapolation; however, they are quicker to perform, in part, because of the cleaner preparation. Timing is a critical factor in the success of all flotation methods, as is technical ability of the personnel involved. Thus, simplicity, low costs and time savings have generally favored gravitational flotation techniques (including the McMaster technique and its modifications). How accurate the method needs to be is dependent upon the purpose of its use and choice of method requires an understanding of analytical sensitivity and expected levels of egg excretion. In some instances where the difference between, for example, 0 and 50 eggs per gram is insignificant with regards to management decisions, less accurate methods will suffice. In others, where the presence of a parasite means treatment of the animal regardless of the numbers of eggs present, methods with higher analytical sensitivities will be required, particularly for those parasites that pass few eggs. For other uses, such as the Fecal Egg Count Reduction Test, accuracy may become critical. Therefore, even though recommendations for standardized fecal flotation procedures have been promoted in the past, it is clear that the factors are too numerous to allow for the
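The analytical-sensitivity arithmetic behind a McMaster-type dilution count can be made concrete (the 2 g / 28 ml / 0.3 ml figures below are the common textbook variant, used purely for illustration):

```python
def eggs_per_gram(eggs_counted: int,
                  sample_grams: float = 2.0,
                  solution_ml: float = 28.0,
                  chamber_volume_ml: float = 0.3) -> float:
    """McMaster-style eggs-per-gram estimate (common 2 g / 28 ml variant).

    The count from the grid chambers is extrapolated to the whole
    suspension and divided by the sample mass.
    """
    suspension_ml = sample_grams + solution_ml   # crude: assumes 1 g of feces ~ 1 ml
    multiplier = suspension_ml / chamber_volume_ml / sample_grams
    return eggs_counted * multiplier
```

With these defaults each egg counted represents 50 eggs per gram, so counts below 50 EPG are undetectable; this is exactly the 0-versus-50 EPG distinction discussed above, and it is why the required analytical sensitivity depends on the purpose of the test.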

  8. A Residual Replacement Strategy for Improving the Maximum Attainable Accuracy of Communication-Avoiding Krylov Subspace Methods

    Science.gov (United States)

    2012-04-20

    NVIDIA, Oracle, and Samsung, U.S. DOE grants DE-SC0003959, DE-AC02-05-CH11231, Lawrence Berkeley National Laboratory, and NSF SDCI under Grant Number OCI... gradient method [19]. Van Rosendale's implementation was motivated by exposing more parallelism using the PRAM model. Chronopoulos and Gear later created... matrix for no additional communication cost. The additional computation cost is O(s^2) per s steps. For terms in 2. above, we have 2 choices. The first
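The residual replacement idea itself is simple to demonstrate in standard conjugate gradients (the report applies it to communication-avoiding s-step Krylov methods; this sketch with hypothetical parameters only shows the mechanism): every few iterations the recursively updated residual is overwritten by the explicitly computed one.

```python
import numpy as np

def cg_residual_replacement(A, b, tol=1e-10, replace_every=50, maxiter=3000):
    """Conjugate gradient with periodic residual replacement: every
    `replace_every` iterations the recursively updated residual is
    overwritten by the true residual b - A@x."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for it in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        if (it + 1) % replace_every == 0:
            r = b - A @ x              # residual replacement step
        else:
            r -= alpha * Ap            # cheap recursive update
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * bnorm:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test problem with condition number 1e4
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(200, 200)))
A = Q @ np.diag(np.linspace(1.0, 1e4, 200)) @ Q.T
b = rng.normal(size=200)
x = cg_residual_replacement(A, b)
assert np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-8
```

In exact arithmetic the replacement changes nothing; in floating point it keeps the true and recursive residuals from drifting apart, and it is that drift which caps the maximum attainable accuracy.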

  9. Analysis of accuracy in pointing with redundant hand-held tools: a geometric approach to the uncontrolled manifold method.

    Science.gov (United States)

    Campolo, Domenico; Widjaja, Ferdinan; Xu, Hong; Ang, Wei Tech; Burdet, Etienne

    2013-04-01

    This work introduces a coordinate-independent method to analyse movement variability of tasks performed with hand-held tools, such as a pen or a surgical scalpel. We extend the classical uncontrolled manifold (UCM) approach by exploiting the geometry of rigid body motions, used to describe tool configurations. In particular, we analyse variability during a static pointing task with a hand-held tool, where subjects are asked to keep the tool tip in steady contact with another object. In this case the tool is redundant with respect to the task, as subjects control position/orientation of the tool, i.e. 6 degrees-of-freedom (dof), to maintain the tool tip position (3dof) steady. To test the new method, subjects performed a pointing task with and without arm support. The additional dof introduced in the unsupported condition, injecting more variability into the system, represented a resource to minimise variability in the task space via coordinated motion. The results show that all of the seven subjects channeled more variability along directions not directly affecting the task (UCM), consistent with previous literature but now shown in a coordinate-independent way. Variability in the unsupported condition was only slightly larger at the endpoint but much larger in the UCM.

  10. 78 FR 44187 - Montana Disaster # MT-00079

    Science.gov (United States)

    2013-07-23

    ... From the Federal Register Online via the Government Publishing Office SMALL BUSINESS ADMINISTRATION Montana Disaster MT-00079 AGENCY: U.S. Small Business Administration. ACTION: Notice. SUMMARY... have been determined to be adversely affected by the disaster: Primary Counties: Blaine,...

  11. The influence of accuracy, grid size, and interpolation method on the hydrological analysis of LiDAR derived dems: Seneca Nation of Indians, Irving NY

    Science.gov (United States)

    Clarkson, Brian W.

    Light Detection and Ranging (LiDAR) derived Digital Elevation Models (DEMs) provide accurate, high resolution digital surfaces for precise topographic analysis. The following study investigates the accuracy of LiDAR derived DEMs by calculating the Root Mean Square Error (RMSE) of multiple interpolation methods with grid cells ranging from 0.5 to 10 meters. A raster cell with smaller dimensions will drastically increase the amount of detail represented in the DEM by increasing the number of elevation values across the study area. Increased horizontal resolutions have raised the accuracy of the interpolated surfaces and the contours generated from the digitized landscapes. As the raster grid cells decrease in size, the level of detail of hydrological processes will significantly improve compared to coarser resolutions, including the publicly available National Elevation Datasets (NEDs). Utilizing the LiDAR derived DEM with the lowest RMSE as the 'ground truth', watershed boundaries were delineated for a sub-basin of the Clear Creek Watershed within the territory of the Seneca Nation of Indians located in Southern Erie County, NY. An investigation of the watershed area and boundary location revealed considerable differences when comparing the results of applying different interpolation methods to DEM datasets of different horizontal resolutions. Stream networks coupled with watersheds were used to calculate peak flow values for the 10-meter NEDs and LiDAR derived DEMs.
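The RMSE comparison that underlies this ranking of interpolation methods is straightforward to compute against surveyed checkpoints (the elevations below are made-up illustration values):

```python
import numpy as np

def rmse(dem_values: np.ndarray, checkpoints: np.ndarray) -> float:
    """Root mean square error between interpolated DEM elevations and
    surveyed checkpoint elevations at the same locations."""
    diff = dem_values - checkpoints
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical checkpoint comparison for two interpolation runs
truth = np.array([312.4, 315.1, 309.8, 320.6, 317.2])   # surveyed elevations [m]
idw   = np.array([312.9, 314.6, 310.5, 320.1, 317.9])   # e.g. inverse distance
tin   = np.array([312.5, 315.0, 309.9, 320.4, 317.3])   # e.g. triangulation

assert rmse(tin, truth) < rmse(idw, truth)   # the better surface has lower RMSE
```

In the study above the surface with the lowest checkpoint RMSE serves as the 'ground truth' for the subsequent watershed delineation.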

  12. IMPROVEMENT OF ACCURACY OF RADIATIVE HEAT TRANSFER DIFFERENTIAL APPROXIMATION METHOD FOR MULTI DIMENSIONAL SYSTEMS BY MEANS OF AUTO-ADAPTABLE BOUNDARY CONDITIONS

    Directory of Open Access Journals (Sweden)

    K. V. Dobrego

    2015-01-01

    Full Text Available Differential approximation is derived from the radiation transfer equation by averaging over the solid angle. It is one of the more effective methods for engineering calculations of radiative heat transfer in complex three-dimensional thermal power systems with selective and scattering media. A new method for improving the accuracy of the differential approximation, based on the use of auto-adaptable boundary conditions, is introduced in the paper. The efficiency of the method is demonstrated for test 2D systems. Self-consistent auto-adaptable boundary conditions are formulated that take into consideration the non-orthogonal component of the radiation flux incident on the boundary. It is demonstrated that taking the non-orthogonal incident flux into consideration in multi-dimensional systems, such as furnaces, boilers, and combustion chambers, improves the accuracy of the radiant flux simulations, most of all in the zones adjacent to the edges of the chamber. Test simulations utilizing the differential approximation method with traditional boundary conditions, the new self-consistent boundary conditions, and the "precise" discrete ordinates method were performed. The mean square errors of the resulting radiative fluxes calculated along the boundary of rectangular and triangular test areas were decreased 1.5-2 times by using auto-adaptable boundary conditions. Radiation flux gaps in the corner points of non-symmetric systems are revealed by using auto-adaptable boundary conditions; these cannot be obtained with the conventional boundary conditions.

  13. Remote reference processing in MT survey using GPS clock; MT ho ni okeru GPS wo mochiita jikoku doki system

    Energy Technology Data Exchange (ETDEWEB)

    Yamane, K.; Inoue, J.; Takasugi, S. [Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Kosuge, S. [DRICO Co. Ltd., Tokyo (Japan)

    1996-05-01

    A report is given about the application of a synchronizing system using clock signals from GPS satellites to the remote reference method, a technique to reject noise in the MT method. This system uses the C/A code of the L1-band waves from NAVSTAR/GPS satellites. The new system was operated in MT surveys conducted on the Chita Peninsula, Aichi Prefecture, and the Izu Peninsula, Shizuoka Prefecture, with the reference points placed several hundred kilometers away in Iwate Prefecture on both occasions. It was found as a result that it is basically possible to receive GPS signals at any place, that the signals are accurate enough to be applied to time synchronization for the MT method, and that they enable a far-remote reference method with a separation of several hundred kilometers between the sites involved. The referencing process at high frequencies, whose feasibility had been doubted, proved a success when highly correlated signals were obtained at two stations several hundred kilometers apart. 5 refs., 9 figs.
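The statistical effect that remote referencing exploits can be shown with a scalar toy model (the impedance value, noise levels, and single-channel setup below are hypothetical; real MT processing estimates the full 2x2 impedance tensor from band-averaged cross-spectra): noise on the local magnetic channel biases a least-squares impedance estimate toward zero, while cross-spectra against a distant, noise-independent reference channel are unbiased.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
z_true = 2.0 + 1.0j                      # scalar "impedance" for illustration

H = rng.normal(size=n) + 1j * rng.normal(size=n)        # true magnetic signal
E = z_true * H + 0.3 * (rng.normal(size=n) + 1j * rng.normal(size=n))
H_meas = H + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))  # noisy local H
H_rem = H + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))   # remote site

# Ordinary estimate: biased downward by noise on the local H channel
z_ls = np.vdot(H_meas, E) / np.vdot(H_meas, H_meas)

# Remote reference estimate: local-channel noise is uncorrelated with the
# remote channel, so the bias cancels in both cross-spectra
z_rr = np.vdot(H_rem, E) / np.vdot(H_rem, H_meas)

assert abs(z_rr - z_true) < abs(z_ls - z_true)
```

This is the essence of the remote reference method (Gamble, Goubau and Clarke, 1979); the GPS time synchronization described above is what makes such uncorrelated reference channels usable over several-hundred-kilometer separations.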

  14. Family of Quantum Sources for Improving Near Field Accuracy in Transducer Modeling by the Distributed Point Source Method

    Directory of Open Access Journals (Sweden)

    Dominique Placko

    2016-10-01

    Full Text Available The distributed point source method, or DPSM, developed in the last decade has been used for solving various engineering problems, such as elastic and electromagnetic wave propagation, electrostatic, and fluid flow problems. Based on a semi-analytical formulation, the DPSM solution is generally built by superimposing the point source solutions or Green's functions. However, the DPSM solution can also be obtained by superimposing elemental solutions of volume sources having some source density called the equivalent source density (ESD). In earlier works mostly point sources were used. In this paper the DPSM formulation is modified to introduce a new kind of ESD, replacing the classical single point source by a family of point sources that are referred to as quantum sources. The proposed formulation with these quantum sources does not change the dimension of the global matrix to be inverted to solve the problem when compared with the classical point source-based DPSM formulation. To assess the performance of this new formulation, the ultrasonic field generated by a circular planar transducer was compared with the classical DPSM formulation and the analytical solution. The results show a significant improvement in the near field computation.
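The superposition principle on which DPSM builds can be checked against the classical closed-form on-axis pressure of a baffled circular piston (the parameters below are arbitrary illustration values; a real DPSM model places sources slightly behind the interface and solves for their strengths, rather than evaluating the Rayleigh integral directly as done here):

```python
import numpy as np

# Compare a point-source superposition with the analytical on-axis
# pressure of a baffled circular piston.
f, c, rho = 1.0e6, 1500.0, 1000.0      # 1 MHz in water (hypothetical)
k = 2 * np.pi * f / c
a, v0 = 5.0e-3, 1.0                    # piston radius [m], surface velocity [m/s]
z = 0.05                               # on-axis observation distance [m]

# Analytical on-axis solution: p = rho*c*v0*(exp(ikz) - exp(ikR)), R = sqrt(z^2+a^2)
R = np.hypot(z, a)
p_exact = rho * c * v0 * (np.exp(1j * k * z) - np.exp(1j * k * R))

# Numerical: discretize the piston face into small sources (Rayleigh integral)
h = (2 * np.pi / k) / 20               # source spacing of lambda/20
xs = np.arange(-a, a + h, h)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= a**2
r = np.sqrt(X[mask]**2 + Y[mask]**2 + z**2)
p_num = (-1j * rho * c * k * v0 / (2 * np.pi)) \
        * np.sum(np.exp(1j * k * r) / r) * h * h

assert abs(p_num - p_exact) / abs(p_exact) < 0.05
```

With source spacing well below a wavelength the discrete sum converges to the analytical value; DPSM-style refinements such as the quantum sources above aim precisely at improving this agreement in the near field without enlarging the matrix to invert.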

  15. Current methods of assessing the accuracy of three-dimensional soft tissue facial predictions: technical and clinical considerations.

    Science.gov (United States)

    Khambay, B; Ullah, R

    2015-01-01

    Since the introduction of three-dimensional (3D) orthognathic planning software, studies have reported on their predictive ability. The aim of this study was to highlight the limitations of the current methods of analysis. The predicted 3D soft tissue image was compared to the postoperative soft tissue. For the full face, the maximum and 95th and 90th percentiles, the percentage of 3D mesh points ≤ 2 mm, and the root mean square (RMS) error, were calculated. For specific anatomical regions, the percentage of 3D mesh points ≤ 2 mm and the distance between the two meshes at 10 landmarks were determined. For the 95th and 90th percentiles, the maximum difference ranged from 7.7 mm to 2.2 mm and from 3.7 mm to 1.5 mm, respectively. The absolute mean distance ranged from 0.98 mm to 0.56 mm and from 0.91 mm to 0.50 mm, respectively. The percentage of mesh with ≤ 2 mm for the full face was 94.4-85.2% and 100-31.3% for anatomical regions. The RMS error ranged from 2.49 mm to 0.94 mm. The majority of mean linear distances between the surfaces were ≤ 0.8 mm, but increased for the mean absolute distance. At present the use of specific anatomical regions is more clinically meaningful than the full face. It is crucial to understand these and adopt a protocol for conducting such studies.
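The full-face metrics listed above (maximum, 95th/90th percentiles, percentage of points within 2 mm, and RMS error) are simple to compute once per-vertex surface distances between the predicted and postoperative meshes are available; a sketch on synthetic distances:

```python
import numpy as np

def prediction_metrics(distances_mm: np.ndarray) -> dict:
    """Summary metrics comparing a predicted soft-tissue surface with the
    postoperative one, given per-vertex absolute distances in mm."""
    d = np.abs(distances_mm)
    return {
        "max": float(d.max()),
        "p95": float(np.percentile(d, 95)),
        "p90": float(np.percentile(d, 90)),
        "pct_within_2mm": float(np.mean(d <= 2.0) * 100),
        "rms": float(np.sqrt(np.mean(d ** 2))),
    }

rng = np.random.default_rng(5)
d = np.abs(rng.normal(0, 0.8, 10000))     # synthetic per-vertex distances [mm]
m = prediction_metrics(d)
assert m["p90"] < m["p95"] < m["max"]
assert 90.0 < m["pct_within_2mm"] <= 100.0
```

Computed per anatomical region rather than over the whole mesh, the same metrics give the more clinically meaningful regional comparison argued for above.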

  16. Differential uncertainty analysis for evaluating the accuracy of S-parameter retrieval methods for electromagnetic properties of metamaterial slabs.

    Science.gov (United States)

    Hasar, Ugur Cem; Barroso, Joaquim J; Sabah, Cumali; Kaya, Yunus; Ertugrul, Mehmet

    2012-12-17

    We apply a complete uncertainty analysis, not studied in the literature, to investigate the dependences of retrieved electromagnetic properties of two metamaterial (MM) slabs (the first one with only split-ring resonators (SRRs) and the second with SRRs and a continuous wire) with single-band and dual-band resonating properties on the measured/simulated scattering parameters, the slab length, and the operating frequency. Such an analysis is necessary for the selection of a suitable retrieval method together with the correct examination of exotic properties of MM slabs especially in their resonance regions. For this analysis, a differential uncertainty model is developed to monitor minute changes in the dependent variables (electromagnetic properties of MM slabs) as functions of independent variables (scattering (S-) parameters, the slab length, and the operating frequency). Two complementary approaches (the analytical approach and the dispersion model approach) each with different strengths are utilized to retrieve the electromagnetic properties of various MM slabs, which are needed for the application of the uncertainty analysis. We note the following important results from our investigation. First, uncertainties in the retrieved electromagnetic properties of the analyzed MM slabs drastically increase when values of electromagnetic properties shrink to zero or near resonance regions where S-parameters exhibit rapid changes. Second, any low-loss or medium-loss inside the MM slabs due to an imperfect dielectric substrate or a finite conductivity of metals can decrease these uncertainties near resonance regions because these losses hinder abrupt changes in S-parameters. Finally, we note that precise information of especially the slab length and the operating frequency is a prerequisite for accurate analysis of exotic electromagnetic properties of MM slabs (especially multiband MM slabs) near resonance regions.

  17. A critical analysis of the combined usage of protein localization prediction methods: Increasing the number of independent data sets can reduce the accuracy of predicted mitochondrial localization

    Science.gov (United States)

    Lythgow, Kieren T.; Hudson, Gavin; Andras, Peter; Chinnery, Patrick F.

    2011-01-01

    In the absence of a comprehensive experimentally derived mitochondrial proteome, several bioinformatic approaches have been developed to aid the identification of novel mitochondrial disease genes within mapped nuclear genetic loci. Often, many classifiers are combined to increase the sensitivity and specificity of the predictions. Here we show that the greatest sensitivity and specificity are obtained by using a combination of seven carefully selected classifiers. We also show that increasing the number of independent prediction methods can paradoxically decrease the accuracy of predicting mitochondrial localization. This approach will help to accelerate the identification of new mitochondrial disease genes by providing a principled way for the selection for combination of appropriate prediction methods of mitochondrial localization of proteins. PMID:21195798
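The paradox that adding predictors can hurt is easy to reproduce for independent majority voting (the accuracy values below are illustrative; real localization predictors also have correlated errors, which makes the effect worse):

```python
import numpy as np

def majority_accuracy(p: list[float]) -> float:
    """Exact probability that a majority vote of independent classifiers
    with per-classifier accuracies p is correct (Poisson binomial)."""
    dist = np.array([1.0])                # distribution of number of correct votes
    for pi in p:
        dist = np.convolve(dist, [1 - pi, pi])
    need = len(p) // 2 + 1                # votes needed for a strict majority
    return float(dist[need:].sum())

good = [0.9] * 7       # seven well-chosen classifiers
weak = [0.5] * 6       # six uninformative extra predictors

acc7 = majority_accuracy(good)
acc13 = majority_accuracy(good + weak)
assert acc13 < acc7    # adding classifiers reduced the combined accuracy
```

Here seven 90%-accurate voters reach roughly 99.7% combined accuracy, and adding six uninformative voters drags the majority vote down to roughly 97%, mirroring the finding above that a carefully selected subset of classifiers outperforms a larger ensemble.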

  18. The nucleotide sequence of metallothioneins (MT) in liver of the Kafue lechwe (Kobus leche kafuensis) and their potential as biomarkers of heavy metal pollution of the Kafue River.

    Science.gov (United States)

    M'kandawire, Ethel; Syakalima, Michelo; Muzandu, Kaampwe; Pandey, Girja; Simuunza, Martin; Nakayama, Shouta M M; Kawai, Yusuke K; Ikenaka, Yoshinori; Ishizuka, Mayumi

    2012-09-15

    The study determined heavy metal concentrations and the MT1 nucleotide sequence [phylogeny] in liver of the Kafue lechwe. The applicability of MT1 as a biomarker of pollution was assessed. cDNA-encoding sequences for lechwe MT1 were amplified by RT-PCR to characterize the sequence of MT1, which was subjected to BLAST searching at NCBI. Phylogenetic relationships were based on a pairwise matrix of sequence divergences calculated by Clustal W. The phylogenetic tree was constructed by the NJ method using the PHYLIP program. Metals were extracted by acid digestion and concentrations of Cr, Co, Cu, Zn, Cd, Pb, and Ni were determined using an atomic absorption spectrophotometer (AAS). MT1 mRNA expression levels were measured by quantitative comparative real-time RT-PCR. Lechwe MT1 has a length of 183 bp, which encodes an MT1 protein of 61 amino acids, including 20 cysteines. The nucleotide sequence of lechwe MT1 showed identity with sheep MT (97%) and cattle MT1E (97%). The phylogenetic tree revealed that lechwe MT1 clustered with sheep MT and cattle MT1E. Cu and Ni concentrations and MT1 mRNA expression levels of lechwe from Blue Lagoon were significantly higher than those from Lochinvar (p<0.05). Concentrations of Cd and Cu, Co and Cu, Co and Pb, Ni and Cu, and Ni and Cr were positively correlated. Spearman's rank correlations also showed positive correlations between Cu and Co concentrations and MT mRNA expression. PCA further suggested that MT mRNA expression was related to Zn and Cd concentrations. Hepatic MT1 mRNA expression in lechwe can be used as a biomarker of heavy metal pollution.

  19. Research Progress on Mitochondrial DNA (mtDNA)

    Institute of Scientific and Technical Information of China (English)

    王存芳; 曾勇庆; 杜立新; 高秀华

    2001-01-01

    This contribution briefly reviews the fundamental properties of mitochondrial DNA (mtDNA) and the methods used to study it, presents the current state of mtDNA research in humans and in various livestock and poultry, and offers an outlook on future mtDNA studies.

  20. Numerical modeling of 3-D terrain effect on MT field

    Institute of Scientific and Technical Information of China (English)

    徐世浙; 阮百尧; 周辉; 陈乐寿; 徐师文

    1997-01-01

    Using the boundary element method, the numerical modeling problem of the three-dimensional terrain effect on the magnetotelluric (MT) field is solved. This modeling technique can be run on a PC when a special mesh division is adopted. For 2-D terrain, the result of a modeling test with this technique agrees well with that of a 2-D modeling technique, but for 3-D terrain there is a great difference between the 3-D and 2-D modeling results.

  1. Accuracy evaluation of a new three-dimensional reproduction method of edentulous dental casts, and wax occlusion rims with jaw relation

    Institute of Scientific and Technical Information of China (English)

    Fu-Song Yuan; Yu-Chun Sun; Yong Wang; Pei-Jun Lu

    2013-01-01

    The article introduces a new method for the three-dimensional reproduction of edentulous dental casts and wax occlusion rims with jaw relation, using a commercial high-speed line laser scanner and reverse engineering software, and evaluates the method's accuracy in vitro. The method comprises three main steps: (i) acquisition of the three-dimensional stereolithography data of maxillary and mandibular edentulous dental casts and wax occlusion rims; (ii) acquisition of the three-dimensional stereolithography data of jaw relations; and (iii) registration of these data with the reverse engineering software and completion of the reconstruction. To evaluate the accuracy of this method, dental casts and wax occlusion rims of 10 edentulous patients were used. The lengths of eight lines between common anatomic landmarks were measured directly on the casts and occlusion rims using a vernier caliper, and on the three-dimensional computerized images using the software measurement tool. The direct data were considered the true values. The paired-samples t-test was used for statistical analysis. The mean differences between the direct and the computerized measurements were mostly less than 0.04 mm and were not significant (P>0.05). Differences among the 10 patients were assessed using one-way analysis of variance (P<0.05) and were not statistically significant. Therefore, accurate three-dimensional reproduction of the edentulous dental casts, wax occlusion rims, and jaw relations was achieved. The proposed method enables the visualization of occlusion from different views and would help to meet the demand for the computer-aided design of removable complete dentures.
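
The paired-samples t-test used above reduces to a simple statistic on the per-landmark differences. A minimal Python sketch, with hypothetical caliper and on-screen measurements (all names and values are illustrative, not the study's data):

```python
import math
import statistics

def paired_t_statistic(direct, computed):
    """Paired-samples t statistic for two matched measurement lists."""
    if len(direct) != len(computed):
        raise ValueError("measurement lists must be the same length")
    diffs = [a - b for a, b in zip(direct, computed)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))

# hypothetical caliper vs. on-screen landmark distances (mm)
direct   = [42.10, 38.55, 51.20, 47.05, 33.80, 44.95, 40.10, 36.70]
computed = [42.13, 38.51, 51.24, 47.01, 33.83, 44.99, 40.07, 36.73]
t = paired_t_statistic(direct, computed)
```

The resulting t value would then be compared against the t distribution with n-1 degrees of freedom to obtain the P-value.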

  2. Prediction and compensation of magnetic beam deflection in MR-integrated proton therapy: a method optimized regarding accuracy, versatility and speed

    Science.gov (United States)

    Schellhammer, Sonja M.; Hoffmann, Aswin L.

    2017-02-01

    The integration of magnetic resonance imaging (MRI) and proton therapy for on-line image-guidance is expected to reduce dose delivery uncertainties during treatment. Yet, the proton beam experiences a Lorentz force induced deflection inside the magnetic field of the MRI scanner, and several methods have been proposed to quantify this effect. We analyze their structural differences and compare results of both analytical and Monte Carlo models. We find that existing analytical models are limited in accuracy and applicability due to critical approximations, especially including the assumption of a uniform magnetic field. As Monte Carlo simulations are too time-consuming for routine treatment planning and on-line plan adaptation, we introduce a new method to quantify and correct for the beam deflection, which is optimized regarding accuracy, versatility and speed. We use it to predict the trajectory of a mono-energetic proton beam of energy E0 traversing a water phantom behind an air gap within an omnipresent uniform transverse magnetic flux density B0. The magnetic field induced dislocation of the Bragg peak is calculated as a function of E0 and B0 and compared to results obtained with existing analytical and Monte Carlo methods. The deviation from the Bragg peak position predicted by Monte Carlo simulations is smaller for the new model than for the analytical models by up to 2 cm. The model is faster than Monte Carlo methods, less assumptive than the analytical models and applicable to realistic magnetic fields. To compensate for the predicted Bragg peak dislocation, a numerical optimization strategy is introduced and evaluated. It includes an adjustment of both the proton beam entrance angle and energy of up to 25° and 5 MeV, depending on E0 and B0. This strategy is shown to effectively reposition the Bragg peak to its intended location in the presence of a magnetic field.
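
The Lorentz deflection the authors model can be illustrated with a back-of-the-envelope circular-arc estimate: in a uniform transverse field the proton follows an arc whose gyroradius is set by its relativistic momentum, and the lateral displacement over a straight-line distance is the sagitta of that arc. A hedged Python sketch, assuming a uniform field and no energy loss (so only indicative of the order of magnitude in the air gap; the 200 MeV / 1.5 T / 0.30 m numbers are illustrative, not the paper's):

```python
import math

M_P = 938.272  # proton rest energy (MeV)

def gyroradius_m(kinetic_mev, b_tesla):
    """Relativistic gyroradius of a proton in a uniform magnetic field (m)."""
    pc = math.sqrt((kinetic_mev + M_P) ** 2 - M_P ** 2)  # momentum * c (MeV)
    return pc / (299.792458 * b_tesla)

def lateral_deflection_m(kinetic_mev, b_tesla, path_m):
    """Lateral displacement after a path of length path_m along the arc."""
    r = gyroradius_m(kinetic_mev, b_tesla)
    return r - math.sqrt(r * r - path_m * path_m)  # sagitta of the arc

# e.g. a 200 MeV proton crossing a 0.30 m air gap in a 1.5 T field
d = lateral_deflection_m(200.0, 1.5, 0.30)  # a few centimetres
```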

  4. Research on a warship sonar direction-finding accuracy test method

    Institute of Scientific and Technical Information of China (English)

    刘千里

    2012-01-01

    The direction-finding accuracy of the sonar system is a very important tactical and technical index of a warship sonar system. As an integral part of the combat system, the precision of the measurement data the sonar provides directly determines how precisely the command and control system can compute torpedo firing solutions, and thus affects the torpedo firing effect. This paper first clarifies, from a theoretical standpoint, how the direction-finding accuracy index should be described. Then, based on the calibration of sonar direction-finding system errors and an analysis of the agreed sea-trial routes for direction-finding accuracy tests, it proposes a method for computing the true bearing value that meets the high direction-finding accuracy requirements of modern sonars, and analyzes the errors of the algorithm in detail.

  5. Application of the MT6070iH to a Battery Management System

    Institute of Scientific and Technical Information of China (English)

    喻超

    2012-01-01

    This paper introduces the MT6000 series of touch screens and describes the composition of a battery management system and the application of the MT6070iH touch screen in that system. The design and development method of the MT6070iH human-machine interface (HMI) is discussed through an example.

  6. Accuracy of the thermal neutron absorption cross section measurements (based on examples of selected pulsed beam methods); Dokladnosc pomiarow przekroju czynnego absorpcji neutronow termicznych (na przykladzie wybranych metod impulsowych)

    Energy Technology Data Exchange (ETDEWEB)

    Krynicka, E. [The H. Niewodniczanski Inst. of Nuclear Physics, Cracow (Poland)

    1997-12-31

    The problem of the accuracy of thermal neutron macroscopic absorption cross section determination is discussed using examples of selected measurement methods which use non-stationary neutron fields. The computer simulation method elaborated by the author is presented as a procedure for estimating the standard deviation of the measured absorption cross section. The computer simulation method presented can easily be utilized to estimate the measurement accuracy of various physical quantities. (author) 46 refs, 3 figs, 1 tab

  7. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Science.gov (United States)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    forest AGB sampling errors by 15 - 38%. Furthermore, spaceborne global scale accuracy requirements were achieved. At least 80% of the grid cells at 100m, 250m, 500m, and 1km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR, accuracy requirements were met at the 500m and 250m grid level, respectively.

  8. Gravity Compensation Methods for High-Accuracy INS

    Institute of Scientific and Technical Information of China (English)

    陆志东; 王晶

    2016-01-01

    Gravity disturbance is an important error source in high-accuracy inertial navigation systems (INS), and gravity compensation is a key technology for reducing INS error. Based on an analysis of three gravity compensation approaches (deterministic models, data-driven methods, and external aiding), the research directions and range of application of each method are summarized. This provides guidance and a methodological reference for further improving the navigation accuracy of future higher-accuracy INS.

  9. A method of improving passive positioning accuracy based on the Beidou satellite

    Institute of Scientific and Technical Information of China (English)

    张瑜; 刘莹; 贺秋瑞

    2013-01-01

    The Beidou satellite system has been used as an external illuminator owing to features such as continuous coverage of China, little motion relative to the Earth, an inconspicuous Doppler shift, simple adjacent-channel interference and high security. Taking into account the location error caused by atmospheric refraction, a location error correction method is proposed to further improve positioning accuracy. The simulation results show that the radar positioning error decreases as the elevation angle increases or the target altitude decreases, and that correcting for atmospheric refraction greatly improves the passive radar positioning accuracy.

  10. A study of the speed and the accuracy of the Boundary Element Method as applied to the computational simulation of biological organs

    CERN Document Server

    P, Kirana Kumara

    2013-01-01

    In this work, first a Fortran code is developed for three-dimensional linear elastostatics using constant boundary elements; the code is based on a MATLAB code developed by the author earlier. Next, the code is parallelized using BLACS, MPI, and ScaLAPACK. Later, the parallelized code is used to demonstrate the usefulness of the Boundary Element Method (BEM) as applied to the real-time computational simulation of biological organs, while focusing on the speed and accuracy offered by BEM. A computer cluster is used in this part of the work. The commercial software package ANSYS is used to obtain the `exact' solution against which the solution from BEM is compared; analytical solutions, wherever available, are also used to establish the accuracy of BEM. A pig liver is the biological organ considered. Next, instead of the computer cluster, a Graphics Processing Unit (GPU) is used as the parallel hardware. Results indicate that BEM is an interesting choice for the simulation of biological organs. Although the use ...

  11. Detection of walking periods and number of steps in older adults and patients with Parkinson's disease: accuracy of a pedometer and an accelerometry-based method.

    Science.gov (United States)

    Dijkstra, Baukje; Zijlstra, Wiebren; Scherder, Erik; Kamsma, Yvo

    2008-07-01

    The aim of this study was to examine if walking periods and number of steps can accurately be detected by a single small body-fixed device in older adults and patients with Parkinson's disease (PD). Results of an accelerometry-based method (DynaPort MicroMod) and a pedometer (Yamax Digi-Walker SW-200) worn on each hip were evaluated against video observation. Twenty older adults and 32 PD patients walked straight-line trajectories at different speeds, of different lengths and while doing secondary tasks in an indoor hallway. The accuracy of the instruments was expressed as absolute percentage error (older adults versus PD patients). Based on the video observation, a total of 236.8 min of gait duration and 24,713 steps were assessed. The DynaPort method predominantly overestimated gait duration (10.7 versus 11.1%) and underestimated the number of steps (7.4 versus 6.9%). Accuracy decreased significantly as walking distance decreased. The number of steps was also mainly underestimated by the pedometers, the left Yamax (6.8 versus 11.1%) being more accurate than the right Yamax (11.1 versus 16.3%). Step counting of both pedometers was significantly less accurate for short trajectories (3 or 5 m) and as walking pace decreased. It is concluded that the Yamax pedometer can be reliably used for this study population when walking at sufficiently high gait speeds (>1.0 m/s). The accelerometry-based method is less speed-dependent and proved to be more appropriate in the PD patients for walking trajectories of 5 m or more.
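
The absolute percentage error used above to express device accuracy is simply the deviation from the video-observed reference, expressed as a percentage. A one-line sketch (the step counts are made up for illustration):

```python
def abs_pct_error(measured, reference):
    """Absolute percentage error of a device count against video observation."""
    return abs(measured - reference) / reference * 100.0

# e.g. a pedometer counting 231 steps over a video-observed 250 steps
err = abs_pct_error(231, 250)  # 7.6% error (an underestimate in this case)
```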

  12. A physical model of the thermodilution method: influences of the variations of experimental setup on the accuracy of flow rate estimation.

    Science.gov (United States)

    Ozbek, Mustafa; Ozel, H Fehmi; Ekerbiçer, Nuran; Zeren, Tamer

    2011-02-01

    The thermodilution method has been widely used to estimate cardiac output by injecting a cold solution into circulating blood. It is uncertain if radial heat transfer from the vascular/cardiac wall to the circulating injectate can cause inaccurate results with this method. In this study, we have introduced a physical experimental model of the thermodilution method without recirculation of the cold solution. To test the accuracy of the thermodilution method, the experimental setup included an aluminum tube to allow radial heat transfer. Variations of the following parameters were conducted: (i) the real flow rate, (ii) the distance between the injection point of the cold solution and the temperature sensor, (iii) the volume of injectate, and (iv) the temperature of injectate. Following the above variations, we have calculated different correction factors eliminating the influence of radial heat transfer on the estimation of flow rate by the thermodilution method. The results indicate that changes in both injectate temperature and volume have no influence on the estimation of flow rates. The experimental variations which can cause greater radial heat transfer appear to be responsible for the estimated flow rate being smaller than the real value. These variations include (i) a decreased real flow rate and (ii) increased distances between the injection point of the cold fluid and the thermosensor. Such an incorrect estimation could be eliminated by using correction factors. The correction factor seems to be a function of the area of the thermodilution curve, assuming no recirculation.
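
The flow estimate in such a setup is conventionally obtained from the Stewart-Hamilton relation: flow is proportional to the injectate's heat deficit divided by the area under the temperature-drop curve, with an empirical correction factor that can absorb effects such as the radial heat transfer studied here. A minimal sketch under those assumptions (the function name, synthetic curve and k value are illustrative, not the authors' model):

```python
def thermodilution_flow(v_inj_ml, t_blood, t_inj, times_s, delta_t, k=1.0):
    """Stewart-Hamilton flow estimate (ml/s) from a thermodilution curve.

    v_inj_ml       : injectate volume (ml)
    t_blood, t_inj : baseline blood and injectate temperatures (degC)
    times_s, delta_t : sampled temperature-drop curve vs. time
    k              : empirical correction factor (1.0 = no correction)
    """
    # trapezoidal area under the temperature-drop curve (degC * s)
    area = sum((delta_t[i] + delta_t[i + 1]) / 2.0
               * (times_s[i + 1] - times_s[i])
               for i in range(len(times_s) - 1))
    return k * v_inj_ml * (t_blood - t_inj) / area

# synthetic triangular dilution curve (not measured data)
q = thermodilution_flow(10.0, 37.0, 20.0,
                        [0, 1, 2, 3, 4], [0.0, 0.8, 1.2, 0.6, 0.0])
```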

  13. Accuracy of popular automatic QT Interval algorithms assessed by a 'Gold Standard' and comparison with a Novel method: computer simulation study

    Directory of Open Access Journals (Sweden)

    Hunt Anthony

    2005-09-01

    Background: Accurate measurement of the QT interval is very important from a clinical and pharmaceutical drug safety screening perspective. Expert manual measurement is both imprecise and imperfectly reproducible, yet it is used as the reference standard to assess the accuracy of current automatic computer algorithms, which thus produce reproducible but incorrect measurements of the QT interval. There is a scientific imperative to evaluate the most commonly used algorithms with an accurate and objective 'gold standard' and to investigate novel automatic algorithms if the commonly used algorithms are found to be deficient. Methods: This study uses a validated computer simulation of 8 different noise-contaminated ECG waveforms (with known QT intervals of 461 and 495 ms), generated from a cell array using Luo-Rudy membrane kinetics and the Crank-Nicholson method, as a reference standard to assess the accuracy of commonly used QT measurement algorithms. Each ECG, contaminated with 39 mixtures of noise at 3 levels of intensity, was first filtered and then subjected to three threshold methods (T1, T2, T3), two T-wave slope methods (S1, S2) and a Novel method. The reproducibility and accuracy of each algorithm were compared for each ECG. Results: The coefficients of variation for methods T1, T2, T3, S1, S2 and Novel were 0.36, 0.23, 1.9, 0.93, 0.92 and 0.62 respectively. For ECGs with a real QT interval of 461 ms, the methods T1, T2, T3, S1, S2 and Novel calculated the mean QT intervals (standard deviations) to be 379.4 (1.29), 368.5 (0.8), 401.3 (8.4), 358.9 (4.8), 381.5 (4.6) and 464 (4.9) ms respectively. For ECGs with a real QT interval of 495 ms, the corresponding mean QT intervals (standard deviations) were 396.9 (1.7), 387.2 (0.97), 424.9 (8.7), 386.7 (2.2), 396.8 (2.8) and 493 (0.97) ms respectively. These results showed significant differences between means at the >95% confidence level. Shifting ECG baselines caused large errors of the QT interval with T1 and T2
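
A threshold method like T1/T2 marks the end of the T wave where the signal falls below a fixed fraction of the T-peak amplitude; the QT interval is then the time from Q onset to that point. A toy Python sketch on a synthetic Gaussian T wave (the 5% threshold, timings and waveform are illustrative, not the paper's algorithms):

```python
import math

def qt_threshold(t_ms, ecg, q_onset_ms, thresh_frac=0.05):
    """Toy threshold-style QT estimate: the T-wave end is taken as the last
    sample whose amplitude still exceeds thresh_frac of the post-Q peak."""
    seg = [(t, v) for t, v in zip(t_ms, ecg) if t >= q_onset_ms]
    peak = max(abs(v) for _, v in seg)
    t_end = max(t for t, v in seg if abs(v) >= thresh_frac * peak)
    return t_end - q_onset_ms

# synthetic beat: a Gaussian "T wave" centred at 350 ms, Q onset at 100 ms
times = list(range(0, 601, 2))           # 2 ms sampling
signal = [math.exp(-((t - 350) / 40.0) ** 2) for t in times]
qt = qt_threshold(times, signal, 100)
```

The sketch also makes the paper's point visible: adding a baseline shift to `signal` changes where the threshold crossing lands, biasing the estimate.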

  14. 77 FR 32896 - Modification of Class E Airspace; Billings, MT

    Science.gov (United States)

    2012-06-04

    ... airspace at Billings Logan International Airport, Billings, MT. Controlled airspace is necessary to... Billings, MT Billings Logan International Airport, MT (Lat. 45 48'28'' N., long. 108 32'34'' W.) That... Federal Aviation Administration 14 CFR Part 71 Modification of Class E Airspace; Billings, MT AGENCY...

  15. 44 CFR 15.3 - Access to Mt. Weather.

    Science.gov (United States)

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Access to Mt. Weather. 15.3... HOMELAND SECURITY GENERAL CONDUCT AT THE MT. WEATHER EMERGENCY ASSISTANCE CENTER AND AT THE NATIONAL EMERGENCY TRAINING CENTER § 15.3 Access to Mt. Weather. Mt. Weather contains classified material and...

  16. Adaptation to the Portuguese language and validation of the Lunney Scoring Method for Rating Accuracy of Nursing Diagnoses

    Directory of Open Access Journals (Sweden)

    Diná de Almeida Lopes Monteiro da Cruz

    2007-03-01

    The Lunney Scoring Method for Rating Accuracy of Nursing Diagnoses (LSM) is a semantic differential scale that was developed by Lunney to estimate the accuracy of nursing diagnoses. The objective of this study was to adapt the LSM to the Portuguese language and to evaluate its psychometric properties. The original scale was translated into Portuguese, back-translated into English, and the two English versions were compared in order to adjust the Portuguese version, which was named the Escala de Acurácia de Diagnóstico de Enfermagem de Lunney (EADE). Four nurses were trained in the use of the EADE and applied it to 159 diagnoses formulated for 26 patients from three primary studies, based on the interview and physical examination records of each patient. Cohen's Kappa indices showed an absence of agreement among the raters, which indicates that the adapted instrument does not have satisfactory reliability. Because of this result, no estimate of validity was performed.

  17. Visual area MT in the Cebus monkey: location, visuotopic organization, and variability.

    Science.gov (United States)

    Fiorani, M; Gattass, R; Rosa, M G; Sousa, A P

    1989-09-01

    The representation of the visual field in the dorsal portion of the superior temporal sulcus (ST) was studied by multiunit recordings in eight Cebus apella, anesthetized with N2O and immobilized with pancuronium bromide, in repeated recording sessions. On the basis of visuotopic organization, myeloarchitecture, and receptive field size, area MT was distinguished from its neighboring areas. MT is an oval area of about 70 mm2 located mainly in the posterior bank of the superior temporal sulcus. It contains a visuotopically organized representation of at least the binocular visual field. The representation of the vertical meridian forms the dorsolateral, lateral, and ventrolateral borders of MT and that of the horizontal meridian runs across the posterior bank of ST. The fovea is represented at the lateralmost portion of MT, while the retinal periphery is represented medially. The representation of the central visual field is magnified relative to that of the periphery in MT. The cortical magnification factor in MT decreases with increasing eccentricity following a negative power function. Receptive field size increases with increasing eccentricity. A method to evaluate the scatter of receptive field position in multiunit recordings based on the inverse of the magnification factor is described. In MT, multiunit receptive field scatter increases with increasing eccentricity. As shown by the Heidenhain-Woelcke method, MT is coextensive with two myeloarchitectonically distinct zones: one heavily myelinated, located in the posterior bank of ST, and another, less myelinated, located at the junction of the posterior bank with the anterior bank of ST. At least three additional visual zones surround MT: DZ, MST, and FST. The areas of the dorsal portion of the superior temporal sulcus in the diurnal New World monkey Cebus are comparable to those described for the diurnal Old World monkey, Macaca. 
This observation suggests that these areas are ancestral characters of the simian
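
The eccentricity scaling described above (magnification falling as a negative power of eccentricity, receptive-field scatter growing with its inverse) can be sketched numerically; the coefficients below are illustrative placeholders, not the fitted Cebus values:

```python
def magnification_mm_per_deg(ecc_deg, a=10.0, b=0.9):
    """Cortical magnification as a negative power law M(E) = a * E**(-b)."""
    return a * ecc_deg ** (-b)

def scatter_estimate_deg(ecc_deg, cortical_scatter_mm=0.5, a=10.0, b=0.9):
    """Receptive-field scatter in visual degrees: a fixed cortical scatter
    (mm) divided by the magnification factor, so it grows with eccentricity."""
    return cortical_scatter_mm / magnification_mm_per_deg(ecc_deg, a, b)
```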

  18. Properties of MT2 in the massless limit

    CERN Document Server

    Lally, Colin H

    2012-01-01

    Although numerical methods are required to evaluate the stransverse mass, MT2, for general input momenta, non-numerical methods have been proposed for some special classes of input momenta. One special case, considered in this note, is the so-called `massless limit' in which all four daughter objects (comprising one invisible particle and one visible system from each `side' of the event) have zero mass. This note establishes that it is possible to construct a stable and accurate implementation for evaluating MT2 based on an analytic expression valid in that massless limit. Although this implementation is found to have no significant speed improvements over existing evaluation strategies, it leads to an unexpected by-product: namely a secondary variable, that is found to be very similar to MT2 for much of its input-space and yet is much faster to calculate. This is potentially of interest for hardware applications that require very fast estimation of a mass scale (or QCD background discriminant) based on a hypo...
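
For context, the stransverse mass discussed above is conventionally defined (following Lester and Barr) as a minimisation over all splittings of the missing transverse momentum; in the massless limit the transverse masses inside the maximum simplify because the visible masses and the test mass vanish. The notation below is the standard one and is assumed, not taken from this note:

```latex
% p_T^{(1,2)}: visible transverse momenta; q_T^{(1,2)}: hypothesised
% invisible transverse momenta; chi: test mass of the invisible particle.
M_{T2}^2(\chi) \;=\;
\min_{\mathbf{q}_T^{(1)} + \mathbf{q}_T^{(2)} \,=\, \mathbf{p}_T^{\,\mathrm{miss}}}
\max\Big[\, M_T^2\big(\mathbf{p}_T^{(1)},\mathbf{q}_T^{(1)};\chi\big),\;
M_T^2\big(\mathbf{p}_T^{(2)},\mathbf{q}_T^{(2)};\chi\big) \Big],
\qquad
M_T^2 \;\xrightarrow{\; m_{\mathrm{vis}} = \chi = 0 \;}\;
2\left(|\mathbf{p}_T|\,|\mathbf{q}_T| - \mathbf{p}_T \cdot \mathbf{q}_T\right).
```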

  19. A practical method of improving the quality of rapid earthquake reporting with the MSDP software

    Institute of Scientific and Technical Information of China (English)

    苏莉华; 赵晖; 李源; 魏玉霞

    2012-01-01

    Seismic events recorded by the Henan digital seismic network from 2008 to 2011, both inside the network and outside it (within 100 km of its boundary), were selected. These example events were analysed and compared in practice with the MSDP software and, combined with daily working experience, practical methods for improving the quality of rapid earthquake reporting were summarized.

  20. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    Science.gov (United States)

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types.

  1. Investigation of the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection method: a phantom study

    Science.gov (United States)

    Abuhadi, Nouf; Bradley, David; Katarey, Dev; Podolyak, Zsolt; Sassi, Salem

    2014-03-01

    Introduction: Single-Photon Emission Computed Tomography (SPECT) is used to measure and quantify radiopharmaceutical distribution within the body. The accuracy of quantification depends on acquisition parameters and reconstruction algorithms. Until recently, most SPECT images were constructed using Filtered Back Projection techniques with no attenuation or scatter corrections. The introduction of 3-D Iterative Reconstruction algorithms with the availability of both computed tomography (CT)-based attenuation correction and scatter correction may provide for more accurate measurement of radiotracer bio-distribution. The effect of attenuation and scatter corrections on the accuracy of SPECT measurements is well researched. It has been suggested that the combination of CT-based attenuation correction and scatter correction can allow for more accurate quantification of radiopharmaceutical distribution in SPECT studies (Bushberg et al., 2012). However, the effect of respiratory induced cardiac motion on SPECT images acquired using higher resolution algorithms such as 3-D iterative reconstruction with attenuation and scatter corrections has not been investigated. Aims: To investigate the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection (FBP) methods implemented on cardiac SPECT/CT imaging with and without CT-attenuation and scatter corrections; to investigate the effects of respiratory induced cardiac motion on myocardium perfusion quantification; and to present a comparison of spatial resolution for FBP and ordered subset expectation maximization (OSEM) Flash 3D, with and without respiratory induced motion, and with and without attenuation and scatter correction. Methods: This study was performed on a Siemens Symbia T16 SPECT/CT system using clinical acquisition protocols.
Respiratory induced cardiac motion was simulated by imaging a cardiac phantom insert whilst moving it using a respiratory motion motor

  2. The reliability and accuracy of two methods for proximal caries detection and depth on directly visible proximal surfaces: an in vitro study.

    Science.gov (United States)

    Ekstrand, K R; Luna, L E; Promisiero, L; Cortes, A; Cuevas, S; Reyes, J F; Torres, C E; Martignon, S

    2011-01-01

    This study aimed to determine the reliability and accuracy of the ICDAS and radiographs in detecting and estimating the depth of proximal lesions on extracted teeth. The lesions were visible to the naked eye. Three trained examiners scored a total of 132 sound/carious proximal surfaces from 106 primary teeth and 160 sound/carious proximal surfaces from 140 permanent teeth. The selected surfaces were first scored visually, using the 7 classes in the ICDAS. They were then assessed on radiographs using a 5-point classification system. Reexaminations were conducted with both scoring systems. Teeth were then sectioned and the selected surfaces histologically classified using a stereomicroscope (×5). Intrareproducibility values (weighted kappa statistics) for the ICDAS for both primary and permanent teeth were >0.9, and for the radiographs between 0.6 and 0.8. Interreproducibility values for the ICDAS were >0.85, for the radiographs >0.6. For both primary and permanent teeth, the accuracy of each examiner (Spearman's correlation coefficient) for the ICDAS was ≥0.85, and for the radiographs ≥0.45. Corresponding data were achieved when using pooled data from the 3 examiners for both the ICDAS and the radiographs. The associations between the 2 detection methods were measured to be moderate. In particular, the ICDAS was accurate in predicting lesion depth (histologically) confined to the enamel/outer third of the dentine versus deeper lesions. This study shows that when proximal lesions are open for inspection, the ICDAS is a more reliable and accurate method than the radiograph for detecting and estimating the depth of the lesion in both primary and permanent teeth.
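
The weighted kappa used for the reproducibility figures above discounts near-miss disagreements on an ordinal scale. A compact pure-Python sketch using linear weights in the disagreement formulation (the rating lists stand in for the per-surface ICDAS or radiographic scores; this is a generic implementation, not the study's software):

```python
def weighted_kappa(r1, r2, n_cat):
    """Linearly weighted Cohen's kappa for two raters on ordinal
    categories 0..n_cat-1 (disagreement-weight formulation)."""
    n = len(r1)
    obs = [[0.0] * n_cat for _ in range(n_cat)]      # observed proportions
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    row = [sum(obs[i][j] for j in range(n_cat)) for i in range(n_cat)]
    col = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]
    w = lambda i, j: abs(i - j) / (n_cat - 1)        # linear disagreement weight
    po = sum(w(i, j) * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    pe = sum(w(i, j) * row[i] * col[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - po / pe   # 1.0 = perfect agreement

# perfect agreement on a 3-category scale gives kappa = 1.0
k_perfect = weighted_kappa([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2], 3)
```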

  3. Identification and Correction for MT Static Shift Using TEM Inversion Technique

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The inversion of TEM data in this paper uses the observed magnetic fields rather than apparent-resistivity data, avoiding the errors introduced by the definition of the apparent resistivity. The inverted results, obtained by fitting the modeled magnetic fields of the transmitter source to the observed magnetic fields, are relatively less affected by conductivity inhomogeneity. An MT apparent-resistivity curve is then calculated from the conductivity model constructed from the TEM inversion results. This curve is used as a reference curve for the correction of MT static shift, which makes the correction more reliable. Meanwhile, the domain transformation from time to frequency between the two kinds of electromagnetic data is also achieved. Therefore, the correction of MT static shift is realized using the TEM inversion method. The corresponding application research shows that this method is very effective for the identification and correction of MT static shift.
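
    The correction described above anchors the MT curve to a TEM-derived reference. As a simplified illustration (not the paper's magnetic-field inversion): a static shift is frequency-independent, so it appears as a constant offset in log-resistivity and can be removed by levelling the MT curve onto the reference curve:

```python
import math

def correct_static_shift(rho_mt, rho_ref):
    """Remove a frequency-independent static shift from an MT apparent-
    resistivity curve by matching its log-space level to a reference curve
    (e.g. one computed from a TEM-derived conductivity model).

    rho_mt, rho_ref: apparent resistivities at the same frequencies.
    """
    # The static shift factor is the (median) log-offset between the curves
    offsets = sorted(math.log10(m / r) for m, r in zip(rho_mt, rho_ref))
    shift = offsets[len(offsets) // 2]  # median is robust to outliers
    return [m / (10 ** shift) for m in rho_mt]
```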

  4. The bias, accuracy and precision of faecal egg count reduction test results in cattle using McMaster, Cornell-Wisconsin and FLOTAC egg counting methods.

    Science.gov (United States)

    Levecke, B; Rinaldi, L; Charlier, J; Maurelli, M P; Bosco, A; Vercruysse, J; Cringoli, G

    2012-08-13

    The faecal egg count reduction test (FECRT) is the recommended method to monitor anthelmintic drug efficacy in cattle. There is large variation in the faecal egg count (FEC) methods applied to determine FECRT. However, it remains unclear whether FEC methods with equal analytic sensitivity, but different methodologies, yield equal FECRT results. We therefore compared the bias, accuracy and precision of FECRT results for the Cornell-Wisconsin (analytic sensitivity = 1 egg per gram faeces (EPG)), FLOTAC (analytic sensitivity = 1 EPG) and McMaster method (analytic sensitivity = 10 EPG) across four levels of egg excretion (1-49 EPG; 50-149 EPG; 150-299 EPG; 300-600 EPG). Finally, we assessed the sensitivity of the FEC methods to detect a truly reduced efficacy. To this end, two different criteria were used to define reduced efficacy based on FECR, including those described in the WAAVP guidelines (FECRT < 95%). The precision of the FECRT decreased as egg excretion increased; this effect was greatest for McMaster and least for Cornell-Wisconsin. The sensitivity of the three methods to detect a truly reduced efficacy was high (>90%). Yet, the sensitivity of McMaster and Cornell-Wisconsin may drop when drugs only show sub-optimal efficacy. Overall, the study indicates that the precision of the FECRT is affected by the methodology of FEC, and that the level of egg excretion should be considered in the final interpretation of the FECRT. However, more comprehensive studies are required to provide more insight into the complex interplay of factors inherent to study design (sample size and FEC method) and host-parasite interactions (level of egg excretion and aggregation across the host population). Copyright © 2012 Elsevier B.V. All rights reserved.
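
    The FECRT itself reduces to comparing mean pre- and post-treatment egg counts; a minimal sketch (hypothetical counts; the `analytic_sensitivity` parameter mimics a counting method's detection limit, e.g. 10 EPG for McMaster):

```python
def fecrt(pre_counts, post_counts, analytic_sensitivity=1):
    """Faecal egg count reduction (%) from pre- and post-treatment EPG counts.

    analytic_sensitivity mimics a counting method's detection limit: raw
    counts are floored to the nearest multiple of it before averaging.
    """
    def mean_epg(counts):
        q = analytic_sensitivity
        return sum((c // q) * q for c in counts) / len(counts)

    pre, post = mean_epg(pre_counts), mean_epg(post_counts)
    return 100.0 * (1.0 - post / pre)
```

Note how a coarser analytic sensitivity can inflate the apparent reduction: post-treatment counts below the detection limit are recorded as zero.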

  5. Chemical exchange saturation transfer MR imaging of articular cartilage glycosaminoglycans at 3 T: Accuracy of B0 Field Inhomogeneity corrections with gradient echo method.

    Science.gov (United States)

    Wei, Wenbo; Jia, Guang; Flanigan, David; Zhou, Jinyuan; Knopp, Michael V

    2014-01-01

    Glycosaminoglycan Chemical Exchange Saturation Transfer (gagCEST) is an important molecular MRI methodology developed to assess changes in cartilage GAG concentrations. The correction for B0 field inhomogeneity is technically crucial in gagCEST imaging. This study evaluates the accuracy of the B0 estimation determined by the dual gradient echo method and the effect on gagCEST measurements. The results were compared with those from the commonly used z-spectrum method. Eleven knee patients and three healthy volunteers were scanned. Dual gradient echo B0 maps with different ∆TE values (1, 2, 4, 8, and 10 ms) were acquired. The asymmetry of the magnetization transfer ratio at 1 ppm offset referred to the bulk water frequency, MTRasym(1 ppm), was used to quantify cartilage GAG levels. The B0 shifts for all knee patients using the z-spectrum and dual gradient echo methods are strongly correlated for all ∆TE values used (r = 0.997 to 0.786, corresponding to ∆TE = 10 to 1 ms). The corrected MTRasym(1 ppm) values using the z-spectrum method (1.34% ± 0.74%) highly agree only with those using the dual gradient echo methods with ∆TE = 10 ms (1.72% ± 0.80%; r = 0.924) and 8 ms (1.50% ± 0.82%; r = 0.712). The dual gradient echo method with longer ∆TE values (more than 8 ms) has an excellent correlation with the z-spectrum method for gagCEST imaging at 3T.
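
    The two quantities at the core of this study can be sketched as follows (illustrative Python, not the authors' pipeline): the dual gradient echo B0 shift follows from the phase difference between the echoes, Δf = Δφ/(2π·ΔTE), and MTRasym compares signals at symmetric saturation offsets:

```python
import math

def b0_shift_hz(phase1, phase2, dte_s):
    """Field offset (Hz) from the phase difference of two gradient echoes
    acquired dte_s seconds apart: df = dphi / (2*pi*dTE).
    The phase difference is wrapped into (-pi, pi]."""
    dphi = (phase2 - phase1 + math.pi) % (2 * math.pi) - math.pi
    return dphi / (2 * math.pi * dte_s)

def mtr_asym(s_neg, s_pos, s0):
    """MTR asymmetry at +/- an offset (e.g. 1 ppm ~ 128 Hz at 3 T):
    (S(-offset) - S(+offset)) / S0."""
    return (s_neg - s_pos) / s0
```

A longer ΔTE turns a given frequency offset into a larger, more precisely measurable phase difference, which is consistent with the paper's finding that ΔTE ≥ 8 ms agrees best with the z-spectrum method.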

  6. Human gastroenteropancreatic expression of melatonin and its receptors MT1 and MT2.

    Directory of Open Access Journals (Sweden)

    Fanny Söderquist

    Full Text Available The largest source of melatonin, according to animal studies, is the gastrointestinal (GI) tract, but this is not yet thoroughly characterized in humans. This study aims to map the expression of melatonin and its two receptors in the human GI tract and pancreas using microarray analysis and immunohistochemistry. Gene expression data from normal intestine and pancreas and from colon tissue inflamed due to ulcerative colitis were analyzed for expression of enzymes relevant for serotonin and melatonin production and of their receptors. Sections from paraffin-embedded normal tissue from 42 individuals, representing the different parts of the GI tract (n=39) and pancreas (n=3), were studied with immunohistochemistry using antibodies with specificity for melatonin, the MT1 and MT2 receptors, and serotonin. Enzymes needed for production of melatonin are expressed in both GI tract and pancreas tissue. Strong melatonin immunoreactivity (IR) was seen in enterochromaffin (EC) cells, partially co-localized with serotonin IR. Melatonin IR was also seen in pancreatic islets. MT1 and MT2 IR were both found in the intestinal epithelium, in the submucosal and myenteric plexus, and in vessels in the GI tract, as well as in pancreatic islets. MT1 and MT2 IR was strongest in the epithelium of the large intestine. In the other cell types, both MT2 gene expression and IR were generally elevated compared to MT1. Strong MT2 IR, but not MT1 IR, was noted in EC cells. Changes in gene expression that may result in reduced levels of melatonin were seen in relation to inflammation. Widespread gastroenteropancreatic expression of melatonin and its receptors in the GI tract and pancreas is in agreement with the multiple roles ascribed to melatonin, which include regulation of gastrointestinal motility and epithelial permeability as well as enteropancreatic cross-talk with plausible impact on metabolic control.

  7. Diagnostic accuracy of ELISA methods as an alternative screening test to indirect immunofluorescence for the detection of antinuclear antibodies. Evaluation of five commercial kits.

    Science.gov (United States)

    Tonutti, Elio; Bassetti, Danila; Piazza, Anna; Visentini, Daniela; Poletto, Monica; Bassetto, Franca; Caciagli, Patrizio; Villalta, Danilo; Tozzoli, Renato; Bizzaro, Nicola

    2004-03-01

    Detection of antinuclear antibodies (ANA) is a fundamental laboratory test for diagnosing systemic autoimmune diseases. Currently, the method of choice is indirect immunofluorescence (IIF) on a HEp-2 cell substrate. The goal of this study was to evaluate the diagnostic accuracy of five commercially available enzyme immunoassay (EIA) kits for ANA detection and to verify the possibility of using them as an alternative to the IIF method. The study involved 1513 patients, 315 of whom were diagnosed with a systemic autoimmune disease and 1198 in whom an autoimmune disorder was excluded. For all sera, ANA detection was performed via IIF and with five different EIA kits. The results were evaluated in relation to clinical diagnosis and the presence of possible specific autoantibodies (anti-ENA or anti-dsDNA); lastly, they were compared with the results obtained using ANA-IIF as the method of reference. The positive rate of the ANA-IIF test in subjects with systemic autoimmune diseases was 92%, whereas for the five ANA-EIA kits there was broad diversity in terms of response, with positive rates ranging from 74 to 94%. All the EIA kits correctly detected the presence of antibodies (anti-dsDNA, anti-RNP, anti-Ro/SSA) responsible for homogeneous and speckled fluorescence patterns, but at the same time they showed substantial inaccuracy with the nucleolar pattern, with a mean sensitivity of approximately 50% in this case. In contrast, there was a large kit-to-kit difference in the identification of anti-Scl70 and centromere patterns, for which sensitivities ranged between 45 and 91%, and between 49 and 100%, respectively. The results of the study demonstrate that the commercially available ANA-EIA kits show different levels of sensitivity and specificity. Some of them have a diagnostic accuracy that is comparable and, in some cases, even higher than that of the IIF method. Consequently, these could be used as an alternative screening test to IIF. However, others do not ensure acceptable diagnostic accuracy.

  8. STELLAR COLOR REGRESSION: A SPECTROSCOPY-BASED METHOD FOR COLOR CALIBRATION TO A FEW MILLIMAGNITUDE ACCURACY AND THE RECALIBRATION OF STRIPE 82

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Haibo; Liu, Xiaowei [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China); Xiang, Maosheng; Huang, Yang; Zhang, Huihua; Chen, Bingqiu, E-mail: yuanhb4861@pku.edu.cn, E-mail: x.liu@pku.edu.cn [Department of Astronomy, Peking University, Beijing 100871 (China)

    2015-02-01

    In this paper we propose a spectroscopy-based stellar color regression (SCR) method to perform accurate color calibration for modern imaging surveys, taking advantage of millions of stellar spectra now available. The method is straightforward, insensitive to systematic errors in the spectroscopically determined stellar atmospheric parameters, applicable to regions that are effectively covered by spectroscopic surveys, and capable of delivering an accuracy of a few millimagnitudes for color calibration. As an illustration, we have applied the method to the Sloan Digital Sky Survey (SDSS) Stripe 82 data. With a total number of 23,759 spectroscopically targeted stars, we have mapped out the small but strongly correlated color zero-point errors present in the photometric catalog of Stripe 82, and we improve the color calibration by a factor of two to three. Our study also reveals some small but significant magnitude dependence errors in the z band for some charge-coupled devices (CCDs). Such errors are likely to be present in all the SDSS photometric data. Our results are compared with those from a completely independent test based on the intrinsic colors of red galaxies presented by Ivezić et al. The comparison, as well as other tests, shows that the SCR method has achieved a color calibration internally consistent at a level of about 5 mmag in u – g, 3 mmag in g – r, and 2 mmag in r – i and i – z. Given the power of the SCR method, we discuss briefly the potential benefits by applying the method to existing, ongoing, and upcoming imaging surveys.
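
    The essence of the SCR method, predicting a star's intrinsic color from spectroscopic parameters and reading the calibration offset off the residuals, can be sketched as follows (a deliberately minimal one-parameter fit on made-up numbers; the real method uses full stellar atmospheric parameters and per-CCD solutions):

```python
def fit_color_vs_teff(teff, color):
    """Closed-form least-squares line color = a + b*teff (pure Python).
    Stands in for the SCR step that predicts intrinsic stellar colors
    from spectroscopic parameters."""
    n = len(teff)
    mt = sum(teff) / n
    mc = sum(color) / n
    b = (sum((t - mt) * (c - mc) for t, c in zip(teff, color))
         / sum((t - mt) ** 2 for t in teff))
    return mc - b * mt, b

def zero_point_offset(teff, observed_color, a, b):
    """Median (observed - predicted) color: the color zero-point error
    of the imaging data in this region/CCD."""
    resid = sorted(o - (a + b * t) for t, o in zip(teff, observed_color))
    return resid[len(resid) // 2]
```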

  9. Data accuracy assessment using enterprise architecture

    Science.gov (United States)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  10. Evidence of animal mtDNA recombination between divergent populations of the potato cyst nematode Globodera pallida.

    Science.gov (United States)

    Hoolahan, Angelique H; Blok, Vivian C; Gibson, Tracey; Dowton, Mark

    2012-03-01

    Recombination is typically assumed to be absent in animal mitochondrial genomes (mtDNA). However, the maternal mode of inheritance means that recombinant products are indistinguishable from their progenitor molecules. The majority of studies of mtDNA recombination assess past recombination events, where patterns of recombination are inferred by comparing the mtDNA of different individuals. Few studies assess contemporary mtDNA recombination, where recombinant molecules are observed as direct mosaics of known progenitor molecules. Here we use the potato cyst nematode, Globodera pallida, to investigate past and contemporary recombination. Past recombination was assessed within and between populations of G. pallida, and contemporary recombination was assessed in the progeny of experimental crosses of these populations. Breeding of genetically divergent organisms may cause paternal mtDNA leakage, resulting in heteroplasmy and facilitating the detection of recombination. To assess contemporary recombination we looked for evidence of recombination between the mtDNA of the parental populations within the mtDNA of progeny. Past recombination was detected between a South American population and several UK populations of G. pallida, as well as between two South American populations. This suggests that these populations may have interbred, paternal mtDNA leakage occurred, and the mtDNA of these populations subsequently recombined. This evidence challenges two dogmas of animal mtDNA evolution; no recombination and maternal inheritance. No contemporary recombination between the parental populations was detected in the progeny of the experimental crosses. This supports current arguments that mtDNA recombination events are rare. More sensitive detection methods may be required to adequately assess contemporary mtDNA recombination in animals.

  11. Accuracy Evaluation Method for Electromechanical-electromagnetic Hybrid Simulation

    Institute of Scientific and Technical Information of China (English)

    房钊; 陶顺; 杨洋; 陈鹏伟; 肖湘宁

    2016-01-01

    Electromechanical-electromagnetic hybrid simulation can perform electromechanical transient simulation of a large-scale complex power grid while also performing electromagnetic transient simulation of a local network, achieving a balance between simulation scale and simulation accuracy. How accurately the simulation results reflect the real situation determines their reliability. This paper proposes a three-level accuracy evaluation system covering multiple time spans, multiple locations and multiple variables, which gives both a quantitative value and a qualitative interpretation for electromechanical-electromagnetic hybrid simulation. First, the basic principles and relevant indicators of two typical feature-extraction algorithms for power system simulation, the Prony method and Feature Selective Validation, are introduced. Then, taking into account the properties of each observed quantity in hybrid simulation, the observables are divided into three categories: electromagnetic-system observables, interface-state observables and electromechanical-system observables. By combining the two methods, a hierarchical accuracy evaluation system is set up. Finally, the applicability of the proposed accuracy evaluation system is verified by evaluating the differences between the results of a two-current-source hybrid simulation and a full electromagnetic simulation.
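
    Of the two feature-extraction algorithms mentioned, the Prony method is easily sketched: fit the signal with a linear-prediction model and read frequency and damping off the model's roots. A minimal order-2 version (illustrative, not the paper's implementation; real power-system Prony analysis uses higher orders and noisy data):

```python
import cmath
import math

def prony2(x, dt):
    """Order-2 Prony analysis of uniformly sampled data x (spacing dt s).

    Fits the linear prediction x[n] = c1*x[n-1] + c2*x[n-2] by least
    squares, then factors z^2 - c1*z - c2 to get the discrete pole and
    returns (frequency_hz, damping_per_s) of the dominant mode
    (negative damping = decaying)."""
    rows = [(x[n - 1], x[n - 2], x[n]) for n in range(2, len(x))]
    s11 = sum(p1 * p1 for p1, _, _ in rows)
    s12 = sum(p1 * p2 for p1, p2, _ in rows)
    s22 = sum(p2 * p2 for _, p2, _ in rows)
    t1 = sum(p1 * y for p1, _, y in rows)
    t2 = sum(p2 * y for _, p2, y in rows)
    det = s11 * s22 - s12 * s12
    c1 = (t1 * s22 - t2 * s12) / det
    c2 = (s11 * t2 - s12 * t1) / det
    # Discrete pole with non-negative phase
    z = (c1 + cmath.sqrt(complex(c1 * c1 + 4 * c2))) / 2
    freq_hz = abs(cmath.phase(z)) / (2 * math.pi * dt)
    damping = math.log(abs(z)) / dt
    return freq_hz, damping
```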

  12. Network time transfer accuracy improvement with Kalman filter method

    Institute of Scientific and Technical Information of China (English)

    邢开亮; 尹义蓉; 黄永华; 高勇

    2011-01-01

    In communication networks, the transmission of Coordinated Universal Time (UTC) is affected by delay, which reduces accuracy. This paper proposes a correction method based on a Kalman filter. The network delay is decomposed into three parts: fixed delay, jitter delay and burst delay. A Kalman filter is employed to reduce the jitter delay, while a tracking gate is used to reject burst delay, so that the estimated delay approaches the fixed delay and the time-transfer accuracy is improved. To verify the accuracy improvement achieved by the Kalman filter, the SimEvents extension module of MATLAB is used to build a terminal-to-terminal computer network transmission model. Once the algorithm reaches steady state, the accuracy of the estimated time delay is improved by about a factor of 100 compared with the measured time delay. On this basis, a real-time test platform embedding the MATLAB program was designed in LabVIEW to acquire and filter communication network delays in real time. Verification on delay data measured over both wired and wireless networks indicates that the proposed correction method is effective.
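
    The delay model described above can be sketched with a scalar Kalman filter plus a tracking gate (illustrative Python; the noise variances `q`, `r` and the gate width are assumed values, not those of the paper):

```python
def filter_delays(measured, q=1e-6, r=1e-3, gate=3.0):
    """Track the fixed network delay with a scalar Kalman filter.

    q: process-noise variance (slow drift of the fixed delay),
    r: measurement-noise (jitter) variance,
    gate: measurements farther than `gate` innovation standard deviations
          from the prediction are treated as bursts and rejected.
    Returns the filtered delay estimate after each measurement."""
    est, p = measured[0], r  # initialize from the first sample
    out = []
    for z in measured:
        p = p + q                          # predict
        s = p + r                          # innovation variance
        if (z - est) ** 2 > (gate ** 2) * s:
            out.append(est)                # burst: skip the update
            continue
        k = p / s                          # Kalman gain
        est = est + k * (z - est)          # update with the measurement
        p = (1 - k) * p
        out.append(est)
    return out
```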

  13. mtDNA-Server: next-generation sequencing data analysis of human mitochondrial DNA in the cloud

    Science.gov (United States)

    Weissensteiner, Hansi; Forer, Lukas; Fuchsberger, Christian; Schöpf, Bernd; Kloss-Brandstätter, Anita; Specht, Günther; Kronenberg, Florian; Schönherr, Sebastian

    2016-01-01

    Next generation sequencing (NGS) allows investigating mitochondrial DNA (mtDNA) characteristics such as heteroplasmy (i.e. intra-individual sequence variation) to a higher level of detail. While several pipelines for analyzing heteroplasmies exist, issues in usability, accuracy of results and interpreting final data limit their usage. Here we present mtDNA-Server, a scalable web server for the analysis of mtDNA studies of any size with a special focus on usability as well as reliable identification and quantification of heteroplasmic variants. The mtDNA-Server workflow includes parallel read alignment, heteroplasmy detection, artefact or contamination identification, variant annotation as well as several quality control metrics, often neglected in current mtDNA NGS studies. All computational steps are parallelized with Hadoop MapReduce and executed graphically with Cloudgene. We validated the underlying heteroplasmy and contamination detection model by generating four artificial sample mix-ups on two different NGS devices. Our evaluation data shows that mtDNA-Server detects heteroplasmies and artificial recombinations down to the 1% level with perfect specificity and outperforms existing approaches regarding sensitivity. mtDNA-Server is currently able to analyze the 1000G Phase 3 data (n = 2,504) in less than 5 h and is freely accessible at https://mtdna-server.uibk.ac.at. PMID:27084948
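
    mtDNA-Server's heteroplasmy caller is a maximum-likelihood model; as a much simplified illustration of the underlying idea, a site can be flagged when its minor-allele count is implausible under the sequencing error rate (one-sided binomial tail test; the error rate and thresholds here are assumptions, not the server's parameters):

```python
from math import comb

def is_heteroplasmic(minor_count, coverage, error_rate=0.002,
                     alpha=1e-6, min_level=0.01):
    """Flag a site as heteroplasmic if the minor-allele fraction reaches
    min_level (e.g. the 1% detection limit mentioned above) AND the count
    is improbable under the per-base sequencing error rate.

    Simplified illustration, not the mtDNA-Server model."""
    if minor_count / coverage < min_level:
        return False
    # One-sided tail: P(X >= minor_count), X ~ Binomial(coverage, error_rate)
    p_tail = sum(comb(coverage, k) * error_rate ** k
                 * (1 - error_rate) ** (coverage - k)
                 for k in range(minor_count, coverage + 1))
    return p_tail < alpha
```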

  15. Earthworm Lumbricus rubellus MT-2: Metal Binding and Protein Folding of a True Cadmium-MT

    Directory of Open Access Journals (Sweden)

    Gregory R. Kowald

    2016-01-01

    Full Text Available Earthworms express, as most animals do, metallothioneins (MTs): small, cysteine-rich proteins that bind d10 metal ions (Zn(II), Cd(II), or Cu(I)) in clusters. Three MT homologues are known for Lumbricus rubellus, the common red earthworm, one of which, wMT-2, is strongly induced by exposure of worms to cadmium. This study concerns the composition, metal binding affinity and metal-dependent protein folding of wMT-2 expressed recombinantly and purified in the presence of Cd(II) and Zn(II). Crucially, whilst a single Cd7wMT-2 species was isolated from wMT-2-expressing E. coli cultures supplemented with Cd(II), expression in the presence of Zn(II) yielded mixtures. The average affinities of wMT-2 determined for either Cd(II) or Zn(II) are both within normal ranges for MTs; hence, the differential behaviour cannot be explained on the basis of overall affinity. Therefore, the protein folding properties of Cd- and Zn-wMT-2 were compared by 1H NMR spectroscopy. This comparison revealed that the protein fold is better defined in the presence of cadmium than in the presence of zinc. These differences in folding and dynamics may be at the root of the differential behaviour of the cadmium- and zinc-bound protein in vitro, and may ultimately also help in distinguishing zinc and cadmium in the earthworm in vivo.

  16. Revisiting Combinatorial Ambiguities at Hadron Colliders with MT2

    CERN Document Server

    Baringer, Philip; McCaskey, Mathew; Noonan, Daniel

    2011-01-01

    We present a method to resolve combinatorial issues in multi-particle final states at hadron colliders. The use of kinematic variables such as MT2 and the invariant mass significantly reduces combinatorial ambiguities in the signal, but at the cost of losing statistics. We illustrate this idea with gluino pair production leading to a final state of 4 jets plus missing transverse energy, as well as $t\bar{t}$ production in the dilepton channel. Compared to results in recent studies, our method provides greater efficiency with similar purity.
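
    MT2 (the "stransverse mass") is defined as a minimization, over all ways of splitting the missing transverse momentum between the two invisible particles, of the larger of the two transverse masses. A brute-force sketch (coarse grid scan in GeV-scale units; dedicated minimizers such as the Lester-Nachman bisection algorithm are far faster and more accurate):

```python
import math

def mt(m_vis, pt_vis, phi_vis, pt_inv, phi_inv, m_inv=0.0):
    """Transverse mass of one visible system paired with one invisible one."""
    et_vis = math.sqrt(m_vis ** 2 + pt_vis ** 2)
    et_inv = math.sqrt(m_inv ** 2 + pt_inv ** 2)
    dot = pt_vis * pt_inv * math.cos(phi_vis - phi_inv)
    return math.sqrt(max(0.0, m_vis ** 2 + m_inv ** 2 + 2 * (et_vis * et_inv - dot)))

def mt2(leg_a, leg_b, ptmiss_x, ptmiss_y, steps=200, span=500.0):
    """MT2 by scanning splittings q + r = ptmiss of the missing momentum.

    leg_a, leg_b: (mass, pT, phi) of the two visible legs.
    Minimize over the grid the max of the two transverse masses."""
    best = float('inf')
    for i in range(steps + 1):
        for j in range(steps + 1):
            qx = -span + 2 * span * i / steps
            qy = -span + 2 * span * j / steps
            rx, ry = ptmiss_x - qx, ptmiss_y - qy
            m_a = mt(*leg_a, math.hypot(qx, qy), math.atan2(qy, qx))
            m_b = mt(*leg_b, math.hypot(rx, ry), math.atan2(ry, rx))
            best = min(best, max(m_a, m_b))
    return best
```

For zero missing momentum and massless invisibles, MT2 collapses to the larger of the two visible masses, a convenient sanity check.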

  17. Synthetic Modeling of A Geothermal System Using Audio-magnetotelluric (AMT) and Magnetotelluric (MT)

    Science.gov (United States)

    Mega Saputra, Rifki; Widodo

    2017-04-01

    Indonesia has 40% of the world’s potential geothermal resources, with an estimated capacity of 28,910 MW. Generally, geothermal systems in Indonesia are liquid-dominated systems driven by volcanic activity. In geothermal exploration, electromagnetic methods are used to map structures that could host potential reservoirs and source rocks. We want to know the responses of a geothermal system using synthetic Audio-magnetotelluric (AMT) and Magnetotelluric (MT) data. Owing to their frequency ranges, AMT and MT data can resolve the shallow and the deeper structure, respectively. 1-D models have been computed using the AMT and MT data. The results indicate that AMT and MT data together give a detailed conductivity distribution of the geothermal structure.
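
    The 1-D forward response underlying such modeling is the classical impedance recursion for a layered half-space; a compact sketch (SI units; returns apparent resistivity and impedance phase):

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # vacuum permeability

def mt_forward(freq_hz, resistivities, thicknesses):
    """1-D magnetotelluric forward response via the impedance recursion.

    resistivities: layer resistivities (ohm-m), top to bottom; the last
    layer is an infinite half-space, so thicknesses (m) has one element
    fewer. Returns (apparent_resistivity, phase_deg)."""
    w = 2 * math.pi * freq_hz
    # Intrinsic impedance of the bottom half-space
    z = cmath.sqrt(1j * w * MU0 * resistivities[-1])
    # Recurse upward through the finite layers
    for rho, h in zip(reversed(resistivities[:-1]), reversed(thicknesses)):
        zi = cmath.sqrt(1j * w * MU0 * rho)   # intrinsic impedance
        k = cmath.sqrt(1j * w * MU0 / rho)    # propagation constant
        t = cmath.tanh(k * h)
        z = zi * (z + zi * t) / (zi + z * t)
    rho_app = abs(z) ** 2 / (w * MU0)
    phase_deg = math.degrees(cmath.phase(z))
    return rho_app, phase_deg
```

Over a uniform half-space the apparent resistivity equals the true resistivity and the phase is 45 degrees, which makes a convenient check.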

  18. Accuracy of navigation-guided socket drilling before implant installation compared to the conventional free-hand method in a synthetic edentulous lower jaw model.

    Science.gov (United States)

    Hoffmann, Jürgen; Westendorff, Carsten; Gomez-Roman, German; Reinert, Siegmar

    2005-10-01

    In this study, the three-dimensional (3D) accuracy of navigation-guided (NG) socket drilling before implant installation was compared to the conventional free-hand (CF) method in a synthetic edentulous lower jaw model. The drillings were performed by two surgeons with different years of working experience. The inter-individual outcome was assessed. NG drillings were performed using an optical computerized tomography (CT)-based navigation system. CF drillings were performed using a surgical template. The coordinates of the drilled sockets were determined on the basis of CT scans. A total of n=224 drillings was evaluated. Inter-individual differences in terms of the surgeons' years of work experience were without statistical significance. The mean deviation of the CF drilled sockets (n=112) on the vestibulo-oral and mesio-distal direction was 11.2+/-5.6 degrees (range: 4.1-25.3 degrees ). With respect to the NG drilled sockets (n=112), the mean deviation was 4.2+/-1.8 degrees (range: 2.3-11.5). The mean distance to the mandibular canal was 1.1+/-0.6 mm (range: 0.1-2.3 mm) for CF-drilled sockets and 0.7+/-0.5 mm (range: 0.1-1.8 mm) for NG drilled sockets. The differences between the two methods were highly significant (P<0.01). A potential benefit from image-data-based navigation in implant surgery is discussed against the background of cost-effectiveness.
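
    The angular deviations reported above are angles between planned and drilled socket axes; given the two axis direction vectors, the deviation follows from the dot product (illustrative sketch, not the study's CT-based evaluation software):

```python
import math

def axis_deviation_deg(planned, drilled):
    """Angular deviation (degrees) between a planned and a drilled socket
    axis, each given as a 3-D direction vector (need not be unit length)."""
    dot = sum(p * d for p, d in zip(planned, drilled))
    norm = (math.sqrt(sum(p * p for p in planned))
            * math.sqrt(sum(d * d for d in drilled)))
    # Clamp against rounding before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```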

  19. Ancient mtDNA genetic variants modulate mtDNA transcription and replication.

    Directory of Open Access Journals (Sweden)

    Sarit Suissa

    2009-05-01

    Full Text Available Although the functional consequences of mitochondrial DNA (mtDNA) genetic backgrounds (haplotypes, haplogroups) have been demonstrated by both disease association studies and cell culture experiments, it is not clear which of the mutations within a haplogroup carry functional implications and which are "evolutionary silent hitchhikers". We set forth to study the functionality of haplogroup-defining mutations within the mtDNA transcription/replication regulatory region by in vitro transcription, hypothesizing that haplogroup-defining mutations occurring within regulatory motifs of mtDNA could affect these processes. We thus screened >2500 complete human mtDNAs representing all major populations worldwide for natural variation in experimentally established protein binding sites and regulatory regions comprising a total of 241 bp in each mtDNA. Our screen revealed 77/241 sites showing point mutations that could be divided into non-fixed (57/77, 74%) and haplogroup/sub-haplogroup-defining changes (i.e., population-fixed changes; 20/77, 26%). The variant defining Caucasian haplogroup J (C295T) increased the binding of TFAM (electrophoretic mobility shift assay) and the capacity of in vitro L-strand transcription, especially of a shorter transcript that maps immediately upstream of conserved sequence block 1 (CSB1), a region associated with RNA priming of mtDNA replication. Consistent with this finding, cybrids (i.e., cells sharing the same nuclear genetic background but differing in their mtDNA backgrounds) harboring haplogroup J mtDNA had a >2-fold increase in mtDNA copy number, as compared to cybrids containing haplogroup H, with no apparent differences in steady-state levels of mtDNA-encoded transcripts. Hence, a haplogroup J regulatory region mutation affects mtDNA replication or stability, which may partially account for the phenotypic impact of this haplogroup. Our analysis thus demonstrates, for the first time, the functional impact of particular mtDNA regulatory-region variants.

  20. Azithromycin assay in drug formulations: Validation of a HPTLC method with a quadratic polynomial calibration model using the accuracy profile approach.

    Science.gov (United States)

    Bouklouze, A; Kharbach, M; Cherrah, Y; Vander Heyden, Y

    2017-03-01

    Many different high-performance thin-layer chromatography (HPTLC) assay methods have been developed and validated for use in routine analysis in different analytical fields. Validation often starts with an evaluation of the linearity of the calibration curve. Frequently, if the correlation coefficient is close to one, the linear calibration model is considered adequate to predict the unknown concentration in the sample. But is this simple model sufficient to describe the behavior of the response of an HPTLC method as a function of concentration? To answer this question, a method for the determination of azithromycin by HPTLC was developed and validated following both the classical approach and the approach based on the accuracy profile. Silica gel plates with fluorescence indicator F254 and chloroform - ethanol - 25% ammonia 6:14:0.2 (v/v/v) as mobile phase were used. Analysis was carried out in reflectance mode at 483 nm. The RF of azithromycin was 0.53. The validation based on the classical approach shows that the behavior is not linear, even though r(2)=0.999, because the lack-of-fit test is significant (P<0.05). The accuracy profiles based on the linear and on the quadratic regression model show that the former results in a β-expectation tolerance interval outside the acceptance limits, while with the latter this interval is within the limits of ±5% acceptability for a range extending from 0.2 to 1.0 μg/zone. With the quadratic model, the method showed itself to be precise and accurate. Copyright © 2016 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
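
    The accuracy-profile decision rule can be sketched as follows (illustrative Python using a normal approximation for the β-expectation tolerance interval; a rigorous accuracy profile uses Student-t quantiles and the intermediate-precision variance, and the data below are made up):

```python
import statistics
from statistics import NormalDist

def beta_expectation_interval(measured, nominal, beta=0.95):
    """Approximate beta-expectation tolerance interval for the relative
    recovery (%) at one concentration level, normal approximation.
    Returns (lower, upper) in percent of the nominal value."""
    recov = [100.0 * m / nominal for m in measured]
    mean = statistics.fmean(recov)
    sd = statistics.stdev(recov)
    k = NormalDist().inv_cdf((1 + beta) / 2)  # two-sided quantile
    return mean - k * sd, mean + k * sd

def accepted(interval, limit_pct=5.0):
    """Level is valid if the whole interval lies within 100% +/- limit_pct."""
    lo, hi = interval
    return 100.0 - limit_pct <= lo and hi <= 100.0 + limit_pct
```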

  1. 76 FR 47637 - Montana Disaster #MT-00062

    Science.gov (United States)

    2011-08-05

    ... ADMINISTRATION Montana Disaster MT-00062 AGENCY: U.S. Small Business Administration. ACTION: Notice. SUMMARY: This is a Notice of the Presidential declaration of a major disaster for the State of Montana (FEMA..., Fort Worth, TX 76155. FOR FURTHER INFORMATION CONTACT: A. Escobar, Office of Disaster Assistance,...

  2. 77 FR 47907 - Montana Disaster #MT-00067

    Science.gov (United States)

    2012-08-10

    ... ADMINISTRATION Montana Disaster MT-00067 AGENCY: U.S. Small Business Administration. ACTION: Notice. SUMMARY: This is a notice of an Administrative declaration of a disaster for the State of MONTANA dated 08/02/2012. Incident: Ash Creek Fire. Incident Period: 06/25/2012 through 07/22/2012. Effective Date:...

  3. 77 FR 48198 - Montana Disaster #MT-00068

    Science.gov (United States)

    2012-08-13

    ... ADMINISTRATION Montana Disaster MT-00068 AGENCY: U.S. Small Business Administration. ACTION: Notice. SUMMARY: This is a notice of an Administrative declaration of a disaster for the State of Montana dated 08/06/2012. Incident: Dahl Fire. Incident Period: 06/26/2012 through 07/06/2012. Effective Date:...

  4. Automatic Grader of MT Outputs in Colloquial Style by Using Multiple Edit Distances

    Science.gov (United States)

    Akiba, Yasuhiro; Imamura, Kenji; Sumita, Eiichiro; Nakaiwa, Hiromi; Yamamoto, Seiichi; Okuno, Hiroshi G.

    This paper addresses the challenging problem of automating the human ability to evaluate output from machine translation (MT) systems, which are subsystems of Speech-to-Speech MT (SSMT) systems. Conventional automatic MT evaluation methods include BLEU, which MT researchers have frequently used. BLEU is unsuitable for SSMT evaluation for two reasons. First, BLEU assesses errors lightly at the beginning or end of translations and heavily in the middle, although the assessments should be independent of position. Second, BLEU lacks tolerance in accepting colloquial sentences with small errors, although such errors do not prevent us from continuing a conversation. In this paper, the authors report a new evaluation method called RED that automatically grades each MT output by using a decision tree (DT). The DT is learned from training examples that are encoded by using multiple edit distances and their grades. The multiple edit distances are the normal edit distance (ED), defined by insertion, deletion, and replacement, as well as extensions of ED. The use of multiple edit distances allows more tolerance than either ED or BLEU. Each evaluated MT output is assigned a grade by using the DT. RED and BLEU were compared on the task of evaluating SSMT systems of various performance levels on a spoken language corpus, ATR's Basic Travel Expression Corpus (BTEC). Experimental results showed that RED significantly outperformed BLEU.
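
    The building block of RED, the word-level edit distance, can be sketched in a few lines (standard dynamic programming; RED additionally uses extended edit distances and a learned decision tree, which are not reproduced here):

```python
def edit_distance(ref, hyp):
    """Word-level edit distance between a reference and a hypothesis
    translation: minimum number of insertions, deletions, and
    replacements turning one word sequence into the other."""
    r, h = ref.split(), hyp.split()
    prev = list(range(len(h) + 1))  # row for the empty reference prefix
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rw != hw)))   # replacement
        prev = cur
    return prev[-1]
```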

  5. Influence of abutment tooth geometry on the accuracy of conventional and digital methods of obtaining dental impressions.

    Science.gov (United States)

    Carbajal Mejía, Jeison B; Wakabayashi, Kazumichi; Nakamura, Takashi; Yatani, Hirofumi

    2017-09-01

    Direct (intraoral) and indirect (desktop) digital scanning can record abutment tooth preparations regardless of their geometry. However, little peer-reviewed information is available regarding the influence of abutment tooth geometry on the accuracy of digital methods of obtaining dental impressions. The purpose of this in vitro study was to evaluate the influence of abutment tooth geometry on the accuracy of conventional and digital impression methods in terms of trueness and precision. Crown preparations with known total occlusal convergence (TOC) angles (-8, -6, -4, 0, 4, 8, 12, 16, and 22 degrees) were digitally created from a maxillary left central incisor and printed in acrylic resin. Each of these 9 reference models was scanned with a highly accurate reference scanner and saved in standard tessellation language (STL) format. Then, 5 conventional polyvinyl siloxane (PVS) impressions were made from each reference model; each impression was poured in Type IV dental stone, scanned using both the reference scanner (PVS group) and the desktop scanner (desktop group), and exported as STL files. Additionally, direct digital impressions (intraoral group) of the reference models were made, and the STL files were exported. The STL files from the impressions were compared with the original geometry of the reference model (trueness) and within each test group (precision). Data were analyzed using 2-way ANOVA with the post hoc least significant difference test (α=.05). Overall trueness values were 19.1 μm (intraoral scanner group), 23.5 μm (desktop group), and 26.2 μm (PVS group), whereas overall precision values were 11.9 μm (intraoral), 18.0 μm (PVS), and 20.7 μm (desktop). Simple main effects analysis showed that impressions made with the intraoral scanner were significantly more accurate than those of the PVS and desktop groups when the TOC angle was less than 8 degrees (P<.05). Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry.
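    The trueness/precision distinction the abstract relies on can be sketched numerically: trueness compares scans against the reference geometry, precision compares repeated scans against each other. This is an illustrative sketch with hypothetical deviation values, not the study's actual mesh-comparison pipeline.

    ```python
    from itertools import combinations
    from statistics import mean

    def trueness(scan_deviations):
        """Trueness: mean absolute deviation of each test scan from the
        reference model, averaged over scans (micrometres here).
        Lower is closer to the true geometry."""
        return mean(mean(abs(d) for d in scan) for scan in scan_deviations)

    def precision(scan_means):
        """Precision: mean absolute pairwise difference between repeated
        scans of the same model -- agreement within the group,
        computed without reference to the true geometry."""
        return mean(abs(a - b) for a, b in combinations(scan_means, 2))

    # Hypothetical per-scan mean deviations (um) for five repeated scans:
    scans = [18.5, 20.1, 19.0, 18.8, 19.6]
    print(round(precision(scans), 2))  # 0.8
    ```

    A method can thus be precise (tight cluster of repeated scans) yet not true (the whole cluster offset from the reference), which is why the study reports both values per group.
    
    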

  6. Bruce Medalists at the Mt. Wilson Observatory

    Science.gov (United States)

    Tenn, J. S.

    2004-12-01

    The institution which succeeded the Mt. Wilson Station of Yerkes Observatory in 1904 has had six names and three sites. From 1948 to 1980 it was united with Caltech's Palomar Observatory, and since then its main observatory has been in Chile, though still headquartered on Santa Barbara Street in Pasadena. For more than half of the twentieth century it was the leading observatory in the world. One bit of evidence for this is the amazing number of its staff members awarded the Bruce Medal. The Catherine Wolfe Bruce Gold Medal of the Astronomical Society of the Pacific has been awarded for lifetime contributions to astronomy since 1898. It is an international award. It wasn't until 1963 that the number of medalists who had worked primarily in the United States reached half the total. Yet fourteen of the first 87 medalists spent most of their careers at Mt. Wilson, including the period when it was Mt. Wilson and Palomar, and another three were Caltech observers who used the telescopes of the jointly operated observatory. Several more medalists made substantial use of the telescopes on Mt. Wilson and Palomar Mountain. We will discuss highlights of the careers of a number of these distinguished astronomers: directors George Ellery Hale, Walter Adams, Ira Bowen, and Horace Babcock; solar observer and satellite discoverer Seth Nicholson; instrument builder Harold Babcock; galactic and cosmological observers Frederick Seares, Edwin Hubble, Walter Baade, Rudolph Minkowski, and Allan Sandage; and spectroscopists Paul Merrill, Alfred Joy, Olin Wilson, Jesse Greenstein, Maarten Schmidt, and Wallace Sargent. We will touch briefly on others who used Mt. Wilson and/or Palomar, including Harlow Shapley, Joel Stebbins, Charlotte Moore Sitterly, Donald Osterbrock, and Albert Whitford.

  7. Diagnostic accuracy and cost-effectiveness of alternative methods for detection of soil-transmitted helminths in a post-treatment setting in western Kenya.

    Directory of Open Access Journals (Sweden)

    Liya M Assefa

    2014-05-01

    OBJECTIVES: This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys compared to household surveys. METHODS: Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and examined by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected, and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. RESULTS: Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single-day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive-day Kato-Katz; and 74.1% for consecutive-day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest-cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed that the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. CONCLUSIONS: The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in
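    The non-linear relationship between prevalence and cost per case detected follows directly from the definition of that metric: total survey cost divided by the expected number of true infections the method flags. The sketch below uses hypothetical figures, not the study's cost data; only the 52% single-day Kato-Katz sensitivity is taken from the abstract.

    ```python
    def cost_per_case_detected(total_cost, n_tested, prevalence, sensitivity):
        """Expected cost per true infection detected.  Since detected cases
        scale with prevalence * sensitivity, the cost falls hyperbolically
        (non-linearly) as either factor rises."""
        expected_cases_detected = n_tested * prevalence * sensitivity
        return total_cost / expected_cases_detected

    # Hypothetical: 652 children tested at $10 each, 20% STH prevalence,
    # single-day Kato-Katz sensitivity of 52% (from the study).
    total = 652 * 10.0
    print(round(cost_per_case_detected(total, 652, 0.20, 0.52), 2))  # 96.15
    ```

    Halving prevalence doubles the cost per case detected at fixed per-child cost, which is why post-treatment (low-prevalence) settings are the expensive ones for surveillance.
    
    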

  8. Development of fabricating method for optical aspheric surface and fabricating method for high accuracy quadric surface mirror for solar furnace; Kogaku hikyumen no seisakuho no kaihatsu kara taiyoroyo koseido niji kyokumenkyo no sakuseiho made

    Energy Technology Data Exchange (ETDEWEB)

    Shishido, K. [Tohoku Gakuin Univ., Miyagi (Japan). Faculty of Engineering

    1996-03-29

    Forty-five years have passed since the initiation of study on fabricating methods for optical aspheric surfaces. As examples of prototypes built with the 'cam' system, the fabrication of an ellipsoidal surface mirror, a parabolic mirror, a segment-type parabolic mirror, and, in particular, a special asymmetric aspheric surface lens are cited as successful cases. As regards the 'link' system, a parabolic surface finishing machine for making matrices for the soft forming of parabolic mirror segment glass was built on a trial basis. Recently, a prototype quadric surface finishing machine of higher accuracy was produced, which enabled the preparation of more accurate matrices than previous parabolic surface finishing machines, and an actual ellipsoidal surface was machined by an ellipsoidal surface finishing machine. As regards the solar furnace, a study had been made to develop a method of producing a segment-type glass mirror, comprising a main mirror and a sub-mirror, by a comparatively simple method and at high accuracy, and a promising result was obtained. 35 figs.

  9. Using the significant dust deposition event on the glaciers of Mt. Elbrus, Caucasus Mountains, Russia on 5 May 2009 to develop a method for dating and provenancing of desert dust events recorded in snow pack

    Directory of Open Access Journals (Sweden)

    M. Shahgedanova

    2012-09-01

    A significant desert dust deposition event occurred on Mt. Elbrus, Caucasus Mountains, Russia on 5 May 2009, where the deposited dust later appeared as a brown layer in the snow pack. An examination of dust transportation history and analysis of chemical and physical properties of the deposited dust were used to develop a new approach for high-resolution provenancing of dust deposition events recorded in snow pack using multiple independent techniques. A combination of SEVIRI red-green-blue composite imagery, MODIS atmospheric optical depth fields derived using the Deep Blue algorithm, air mass trajectories derived with the HYSPLIT model, and analysis of meteorological data enabled identification of dust source regions with high temporal (hours) and spatial (ca. 100 km) resolution. Dust deposited on 5 May 2009 originated in the foothills of the Djebel Akhdar in eastern Libya, where dust sources were activated by the intrusion of cold air from the Mediterranean Sea and a Saharan low-pressure system, and was transported to the Caucasus along the eastern Mediterranean coast, Syria, and Turkey. Particles with an average diameter below 8 μm accounted for 90% of the measured particles in the sample, with a mean of 3.58 μm, a median of 2.48 μm, and a dominant mode of 0.60 μm. The chemical signature of this long-travelled dust was significantly different from that of locally produced dust and close to that of soils collected in a palaeolake in the source region, in concentrations of hematite and oxides of aluminium, manganese, and magnesium. Potential addition of dust from a secondary source in northern Mesopotamia introduced uncertainty into the provenancing of dust from this event. Nevertheless, the approach adopted here enables other dust horizons in the snowpack to be linked to specific dust transport events recorded in remote sensing and meteorological data archives.

  10. Benchmarking Post-Hartree–Fock Methods To Describe the Nonlinear Optical Properties of Polymethines: An Investigation of the Accuracy of Algebraic Diagrammatic Construction (ADC) Approaches

    KAUST Repository

    Knippenberg, Stefan

    2016-10-07

    Third-order nonlinear optical (NLO) properties of polymethine dyes have been widely studied for applications such as all-optical switching. However, the limited accuracy of the current computational methodologies has prevented a comprehensive understanding of the nature of the lowest excited states and their influence on the molecular optical and NLO properties. Here, attention is paid to the lowest excited-state energies and their energetic ratio, as these characteristics impact the figure-of-merit for all-optical switching. For a series of model polymethines, we compare several algebraic diagrammatic construction (ADC) schemes for the polarization propagator with approximate second-order coupled cluster (CC2) theory, the widely used INDO/MRDCI approach and the symmetry-adapted cluster configuration interaction (SAC-CI) algorithm incorporating singles and doubles linked excitation operators (SAC-CI SD-R). We focus in particular on the ground-to-excited state transition dipole moments and the corresponding state dipole moments, since these quantities are found to be of utmost importance for an effective description of the third-order polarizability