WorldWideScience

Sample records for method precisions rsdr

  1. Development and interlaboratory validation of quantitative polymerase chain reaction method for screening analysis of genetically modified soybeans.

    Science.gov (United States)

    Takabatake, Reona; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2013-01-01

    A novel real-time polymerase chain reaction (PCR)-based quantitative screening method was developed for three genetically modified soybeans: RRS, A2704-12, and MON89788. The 35S promoter (P35S) of cauliflower mosaic virus is introduced into RRS and A2704-12 but not MON89788. We then designed a screening method that combines the quantification of P35S and the event-specific quantification of MON89788. The conversion factor (Cf) required to convert the amount of a genetically modified organism (GMO) from a copy number ratio to a weight ratio was determined experimentally. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDR), respectively. The determined RSDR values for the method were less than 25% for both targets. We consider that the developed method would be suitable for the simple detection and approximate quantification of GMOs.
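
    As a rough illustration of the copy-number-to-weight conversion described above, the sketch below applies the commonly used relation GMO (%, w/w) = (target copies / endogenous copies) / Cf x 100; all numbers and variable names are hypothetical and are not taken from the study.

        # Hypothetical conversion of a measured copy-number ratio into a
        # weight-based GMO content using a conversion factor (Cf).
        p35s_copies = 1.5e3        # copies of the P35S target from real-time PCR
        endogenous_copies = 1.0e5  # copies of the soybean endogenous reference gene
        cf = 0.95                  # assumed Cf: copy ratio measured in 100 % (w/w) GM seed

        copy_ratio = p35s_copies / endogenous_copies    # GM copies per reference copy
        gmo_weight_percent = copy_ratio / cf * 100.0    # convert to a weight ratio (%)
        print(f"approximate GMO content: {gmo_weight_percent:.2f} % (w/w)")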

  2. Development and Evaluation of Event-Specific Quantitative PCR Method for Genetically Modified Soybean MON87701.

    Science.gov (United States)

    Tsukahara, Keita; Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Nishimaki-Mogami, Tomoko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, MON87701. First, a standard plasmid for MON87701 quantification was constructed. The conversion factor (Cf) required to calculate the amount of genetically modified organism (GMO) was experimentally determined for a real-time PCR instrument. The determined Cf for the real-time PCR instrument was 1.24. For the evaluation of the developed method, a blind test was carried out in an inter-laboratory trial. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDr), respectively. The determined biases and the RSDr values were less than 30 and 13%, respectively, at all evaluated concentrations. The limit of quantitation of the method was 0.5%, and the developed method would thus be applicable for practical analyses for the detection and quantification of MON87701.

  3. Development and validation of an event-specific quantitative PCR method for genetically modified maize MIR162.

    Science.gov (United States)

    Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2014-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize event, MIR162. We first prepared a standard plasmid for MIR162 quantification. The conversion factor (Cf) required to calculate the genetically modified organism (GMO) amount was empirically determined for two real-time PCR instruments, the Applied Biosystems 7900HT (ABI7900) and the Applied Biosystems 7500 (ABI7500), for which the determined Cf values were 0.697 and 0.635, respectively. To validate the developed method, a blind test was carried out in an interlaboratory study. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDr). The determined biases were less than 25% and the RSDr values were less than 20% at all evaluated concentrations. These results suggested that the limit of quantitation of the method was 0.5%, and that the developed method would thus be suitable for practical analyses for the detection and quantification of MIR162.

  4. Interlaboratory validation of quantitative duplex real-time PCR method for screening analysis of genetically modified maize.

    Science.gov (United States)

    Takabatake, Reona; Koiwa, Tomohiro; Kasahara, Masaki; Takashima, Kaori; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Oguchi, Taichi; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2011-01-01

    To reduce the cost and time required to routinely perform the genetically modified organism (GMO) test, we developed a duplex quantitative real-time PCR method for a screening analysis simultaneously targeting an event-specific segment of GA21 and the Cauliflower Mosaic Virus 35S promoter (P35S) segment [Oguchi et al., J. Food Hyg. Soc. Japan, 50, 117-125 (2009)]. To confirm the validity of the method, an interlaboratory collaborative study was conducted. In the collaborative study, conversion factors (Cfs), which are required to calculate the GMO amount (%), were first determined for two real-time PCR instruments, the ABI PRISM 7900HT and the ABI PRISM 7500. A blind test was then conducted. The limit of quantitation for both GA21 and P35S was estimated to be 0.5% or less. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSD(R)). The determined bias and RSD(R) were each less than 25%. We believe the developed method would be useful for the practical screening analysis of GM maize.

  5. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    Science.gov (United States)

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for genetically modified (GM) soy, maize, canola, and potato were validated in-house, of which 14 were validated on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy, and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
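
    A minimal sketch of how repeatability and in-house reproducibility RSDs can be assembled once variance components for the individual steps are available; the component names follow the factors mentioned above, but all numbers are invented and the REML estimation itself is assumed to have been done elsewhere.

        import math

        # Illustrative variance components (measurement scale) for a GMO
        # quantification at a nominal 1.0 % GM level; numbers are made up.
        mean_result = 1.0           # mean measured GM content (%)
        var_pcr = 0.004             # variance of the PCR reaction step (repeatability)
        var_dna_isolation = 0.003   # variance contributed by DNA isolation
        var_pcr_day = 0.002         # variance contributed by the PCR day

        var_r = var_pcr                                    # repeatability variance
        var_R = var_pcr + var_dna_isolation + var_pcr_day  # in-house reproducibility variance

        rsd_r = 100.0 * math.sqrt(var_r) / mean_result
        rsd_R = 100.0 * math.sqrt(var_R) / mean_result
        print(f"RSDr = {rsd_r:.1f} %, RSDR = {rsd_R:.1f} %")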

  6. Development and evaluation of event-specific quantitative PCR method for genetically modified soybean A2704-12.

    Science.gov (United States)

    Takabatake, Reona; Akiyama, Hiroshi; Sakata, Kozue; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Teshima, Reiko; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2011-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, A2704-12. During the plant transformation, DNA fragments derived from the pUC19 plasmid were integrated into A2704-12, and the region was found to be A2704-12 specific. The pUC19-derived DNA sequences were used as primers for the specific detection of A2704-12. We first tried to construct a standard plasmid for A2704-12 quantification using pUC19. However, non-specific signals appeared with both qualitative and quantitative PCR analyses using the specific primers with pUC19 as a template, and we then constructed a plasmid using pBR322. The conversion factor (C(f)), which is required to calculate the amount of the genetically modified organism (GMO), was experimentally determined with two real-time PCR instruments, the Applied Biosystems 7900HT and the Applied Biosystems 7500. The determined C(f) values were both 0.98. The quantitative method was evaluated by means of blind tests in multi-laboratory trials using the two real-time PCR instruments. The limit of quantitation for the method was estimated to be 0.1%. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSD(R)), and the determined bias and RSD(R) values for the method were each less than 20%. These results suggest that the developed method would be suitable for practical analyses for the detection and quantification of A2704-12.

  7. Introduction to precise numerical methods

    CERN Document Server

    Aberth, Oliver

    2007-01-01

    Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed. The book also provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. Features include clearer, simpler descriptions and explanations of the various numerical methods, and two new types of numerical problems: accurately solving partial differential equations with the included software, and computing line integrals in the complex plane.

  8. Validation and Recommendation of Methods to Measure Biogas Production Potential of Animal Manure

    Directory of Open Access Journals (Sweden)

    C. H. Pham

    2013-06-01

    In developing countries, biogas energy production is seen as a technology that can provide clean energy in poor regions and reduce pollution caused by animal manure. Laboratories in these countries have little access to advanced gas measuring equipment, which may limit research aimed at improving locally adapted biogas production. They may also be unable to produce valid estimates of an international standard that can be used for articles published in international peer-reviewed science journals. This study tested and validated methods for measuring total biogas and methane (CH4) production using batch fermentation and for characterizing the biomass. The biochemical methane potential (BMP; CH4 NL kg−1 VS) of pig manure, cow manure, and cellulose determined with the Moller and VDI methods was not significantly different in this test (p>0.05). The biodegradability, expressed as the ratio of BMP to theoretical BMP (TBMP), was slightly higher using the Hansen method, but differences were not significant. Degradation rate assessed by methane formation rate showed wide variation within the batch methods tested. The first-order kinetics constant k for the cumulative methane production curve was highest when the two animal manures were fermented using the VDI 4630 method, indicating that this method was able to reach steady conditions in a shorter time, reducing fermentation duration. In precision tests, the repeatability relative standard deviation (RSDr) for all batch methods was very low (4.8 to 8.1%), while the reproducibility relative standard deviation (RSDR) varied widely, from 7.3 to 19.8%. In determination of biomethane concentration, the values obtained using the liquid replacement method (LRM) were comparable to those obtained using gas chromatography (GC). This indicates that the LRM could be used to determine biomethane concentration in biogas in laboratories with limited access to GC.
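
    The first-order kinetics constant k mentioned above is commonly obtained by fitting the cumulative methane curve B(t) = B0(1 - exp(-kt)); a sketch with made-up batch data (not the study's measurements) is shown below.

        import numpy as np
        from scipy.optimize import curve_fit

        # Cumulative methane production model: B(t) = B0 * (1 - exp(-k * t))
        def first_order(t, b0, k):
            return b0 * (1.0 - np.exp(-k * t))

        # Illustrative batch-test data (days, NL CH4 per kg VS); invented values.
        t_days = np.array([0, 2, 5, 10, 15, 20, 30, 40])
        b_obs  = np.array([0, 60, 130, 210, 260, 290, 320, 330])

        (b0_fit, k_fit), _ = curve_fit(first_order, t_days, b_obs, p0=(350.0, 0.1))
        print(f"BMP (B0) ~ {b0_fit:.0f} NL CH4/kg VS, k ~ {k_fit:.3f} 1/day")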

  9. Subdomain Precise Integration Method for Periodic Structures

    Directory of Open Access Journals (Sweden)

    F. Wu

    2014-01-01

    A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, considering the repeatability of the structural cells. The computational efforts and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable to solve the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.
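
    For orientation, the core of the precise integration method is evaluating the state-transition matrix exp(H·dt) with the 2^N doubling algorithm; the generic sketch below illustrates that step only and is not the authors' subdomain/super-element formulation.

        import numpy as np

        def precise_integration_exp(H, dt, N=20, taylor_terms=4):
            """Approximate exp(H*dt) via the 2^N doubling algorithm used in the
            precise integration method (generic sketch, not the paper's code)."""
            n = H.shape[0]
            tau = dt / (2 ** N)
            # Additive part Ta = exp(H*tau) - I, built from a short Taylor series.
            Ta = np.zeros_like(H)
            term = np.eye(n)
            for k in range(1, taylor_terms + 1):
                term = term @ (H * tau) / k
                Ta = Ta + term
            # Doubling identity: exp(2s) - I = 2*(exp(s) - I) + (exp(s) - I) @ (exp(s) - I)
            for _ in range(N):
                Ta = 2.0 * Ta + Ta @ Ta
            return np.eye(n) + Ta

        # Example: free vibration x'' = -x written as a first-order system.
        H = np.array([[0.0, 1.0], [-1.0, 0.0]])
        T = precise_integration_exp(H, dt=0.1)
        state = np.array([1.0, 0.0])   # initial displacement and velocity
        state = T @ state              # advance the state by one time step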

  10. Determination of plant stanols and plant sterols in phytosterol enriched foods with a gas chromatographic-flame ionization detection method: NMKL collaborative study.

    Science.gov (United States)

    Laakso, Päivi H

    2014-01-01

    This collaborative study with nine participating laboratories was conducted to determine the total plant sterol and/or plant stanol contents in phytosterol fortified foods with a gas chromatographic method. Four practice and 12 test samples representing mainly commercially available foodstuffs were analyzed as known replicates. Twelve samples were enriched with phytosterols, whereas four samples contained only natural contents of phytosterols. The analytical procedure consisted of two alternative approaches: hot saponification method, and acid hydrolysis treatment prior to hot saponification. As a result, sterol/stanol compositions and contents in the samples were measured. The amounts of total plant sterols and total plant stanols varying from 0.005 to 8.04 g/100 g product were statistically evaluated after outliers were eliminated. The repeatability RSD (RSDr) varied from 1.34 to 17.13%. The reproducibility RSD (RSDR) ranged from 3.03 to 17.70%, with HorRat values ranging from 0.8 to 2.1. When only phytosterol enriched food test samples are considered, the RSDr ranged from 1.48 to 6.13%, the RSDR ranged from 3.03 to 7.74%, and HorRat values ranged from 0.8 to 2.1. Based on the results of this collaborative study, the study coordinator concludes that the method is fit for its purpose.
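
    The HorRat values quoted above compare the observed RSDR with the level predicted by the Horwitz equation; a worked example with illustrative numbers (not the study's data) follows.

        import math

        def horwitz_prsd_r(c):
            """Predicted reproducibility RSD (%) from the classic Horwitz equation,
            with c the analyte mass fraction (e.g. 2 g/100 g -> 0.02)."""
            return 2.0 ** (1.0 - 0.5 * math.log10(c))   # equivalent to 2 * c**-0.1505

        # Illustrative case: 2.0 g plant sterols / 100 g product, observed RSDR 7.7 %.
        c = 2.0 / 100.0
        observed_rsd_R = 7.7
        horrat = observed_rsd_R / horwitz_prsd_r(c)
        print(f"predicted RSDR = {horwitz_prsd_r(c):.1f} %, HorRat = {horrat:.2f}")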

  11. Analysis of Precision of Activation Analysis Method

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Nørgaard, K.

    1973-01-01

    The precision of an activation-analysis method prescribes the estimation of the precision of a single analytical result. The adequacy of these estimates to account for the observed variation between duplicate results from the analysis of different samples and materials is tested by the statistic T...

  12. [Development and validation of event-specific quantitative PCR method for genetically modified maize LY038].

    Science.gov (United States)

    Mano, Junichi; Masubuchi, Tomoko; Hatano, Shuko; Futo, Satoshi; Koiwa, Tomohiro; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Akiyama, Hiroshi; Teshima, Reiko; Kurashima, Takeyo; Takabatake, Reona; Kitta, Kazumi

    2013-01-01

    In this article, we report a novel real-time PCR-based analytical method for quantitation of the GM maize event LY038. We designed LY038-specific and maize endogenous reference DNA-specific PCR amplifications. After confirming the specificity and linearity of the LY038-specific PCR amplification, we determined the conversion factor required to calculate the weight-based content of GM organism (GMO) in a multilaboratory evaluation. Finally, in order to validate the developed method, an interlaboratory collaborative trial according to the internationally harmonized guidelines was performed with blind DNA samples containing LY038 at the mixing levels of 0, 0.5, 1.0, 5.0 and 10.0%. The precision of the method was evaluated as the RSD of reproducibility (RSDR), and the values obtained were all less than 25%. The limit of quantitation of the method was judged to be 0.5% based on the definition of ISO 24276 guideline. The results from the collaborative trial suggested that the developed quantitative method would be suitable for practical testing of LY038 maize.

  13. Reference satellite selection method for GNSS high-precision relative positioning

    Directory of Open Access Journals (Sweden)

    Xiao Gao

    2017-03-01

    Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and on the positional dilution of precision (PDOP) value were compared. Results show that neither of these methods always selects the optimal reference satellite. We therefore introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, because the condition number can indicate ill-conditioning of the normal equation. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
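
    A minimal sketch of the selection idea described above: rank candidate reference satellites by the condition number of the resulting double-difference design matrix and keep the best-conditioned one. The matrices below are toy placeholders; building the real design matrices is assumed to happen elsewhere.

        import numpy as np

        def pick_reference_satellite(design_matrices):
            """Given {satellite_id: double-difference design matrix}, return the
            candidate whose matrix has the smallest condition number."""
            cond = {sat: np.linalg.cond(A) for sat, A in design_matrices.items()}
            best = min(cond, key=cond.get)
            return best, cond

        # Toy example with two hypothetical candidate reference satellites.
        candidates = {
            "G05": np.array([[1.0, 0.2], [0.3, 1.1], [0.9, 0.8]]),
            "G17": np.array([[1.0, 0.99], [1.0, 1.01], [1.0, 0.98]]),  # nearly collinear columns
        }
        best, cond = pick_reference_satellite(candidates)
        print(best, cond)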

  14. Integrative methods for analyzing big data in precision medicine.

    Science.gov (United States)

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Precision die design by the die expansion method

    CERN Document Server

    Ibhadode, A O Akii

    2009-01-01

    This book presents a new method for the design of the precision dies used in cold-forging, extrusion and drawing processes. The method is based upon die expansion, and attempts to provide a clear-cut theoretical basis for the selection of critical die dimensions for this group of precision dies when the tolerance on product diameter (or thickness) is specified. It also presents a procedure for selecting the minimum-production-cost die from among a set of design alternatives. The mathematical content of the book is relatively simple and will present no difficulty to those who have taken basic c

  16. Solution Method and Precision Analysis of Double-difference Dynamic Precise Orbit Determination of BeiDou Navigation Satellite System

    Directory of Open Access Journals (Sweden)

    LIU Weiping

    2016-02-01

    To resolve the high correlation between the transverse orbit element of GEO satellites and the double-difference ambiguities, the classical double-difference dynamic method is improved, and a method is proposed to determine precise BeiDou satellite orbits using carrier phase and phase-smoothed pseudo-range observations. The feasibility of the method is discussed, and its influence on ambiguity fixing is analyzed. Considering the characteristics of BeiDou, a method to fix the double-difference ambiguities of BeiDou satellites by the QIF strategy is derived. Real data analysis shows that the new method, which reduces the correlation while preserving precision, is better than the classical double-difference dynamic method. Ambiguity fixing by QIF works well, but the overall fixing success rate is not high, so the precision of the BeiDou orbits cannot be clearly improved after ambiguity fixing.

  17. Method of high precision interval measurement in pulse laser ranging system

    Science.gov (United States)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging is widely used, for it has the advantages of high measuring precision, fast measuring speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of the time interval measurement. The principle and structure of the laser ranging system are introduced, and a method of high-precision time interval measurement for a pulse laser ranging system is established in this paper. Based on an analysis of the factors affecting the precision of the range measurement, a pulse rising-edge discriminator was adopted to produce the timing marks for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP was designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper can obtain higher range accuracy. Compared with the traditional time interval measurement system, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
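
    The link between timing precision and range precision is the two-way time-of-flight relation R = c·Δt/2; the arithmetic sketch below uses an assumed 100 ps timing resolution, which is illustrative rather than a specification of the hardware named above.

        C = 299_792_458.0                 # speed of light in vacuum, m/s

        def range_from_interval(delta_t_s):
            """Two-way time of flight to one-way range: R = c * dt / 2."""
            return C * delta_t_s / 2.0

        # Assumed (illustrative) timing resolution of the interval measurement.
        dt_resolution = 100e-12           # 100 ps
        print(f"range resolution ~ {range_from_interval(dt_resolution) * 100:.1f} cm")
        # 100 ps of timing uncertainty corresponds to about 1.5 cm in range.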

  18. Precision profiles and analytic reliability of radioimmunologic methods

    International Nuclear Information System (INIS)

    Yaneva, Z.; Popova, Yu.

    1991-01-01

    The aim of the present study is to investigate and compare some methods for creation of 'precision profiles' (PP) and to clarify their possibilities for determining the analytical reliability of RIA. Only methods without complicated mathematical calculations have been used. The reproducibility in serums with concentrations of the determinable hormone spanning the whole range of the calibration curve has been studied. The radioimmunoassay has been performed with a TSH-RIA set (ex East Germany), and comparative evaluations were made with commercial sets of HOECHST (Germany) and AMERSHAM (GB). Three methods for obtaining the relationship between concentration (IU/l) and reproducibility (C.V., %) are used, and a comparison is made of their corresponding profiles: preliminary rough profile, Rodbard-PP and Ekins-PP. It is concluded that the creation of a precision profile is obligatory and that the method of its construction does not influence the relationship's course. The PP allows determination of the concentration range giving stable results, which improves the efficiency of the analytical work. 16 refs., 4 figs

  19. Determination of the acid value of instant noodles: interlaboratory study.

    Science.gov (United States)

    Hakoda, Akiko; Sakaida, Kenichi; Suzuki, Tadanao; Yasui, Akemi

    2006-01-01

    An interlaboratory study was performed to evaluate the method for determining the acid value of instant noodles, based on the Japanese Agricultural Standard (JAS), with extraction of lipid using petroleum ether at a volume of 100 mL to the test portion of 25 g. Thirteen laboratories participated and analyzed 5 test samples as blind duplicates. Statistical treatment was applied to the repeatability (RSDr) of the acid value, calculated per unit weight of noodles using the equation [acid value = percent free fatty acids (as oleic) x 1.99] and the extracted lipid contents. This method was shown to have acceptable precision by the present study.
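
    The JAS relation quoted above, acid value = percent free fatty acids (as oleic) x 1.99, can be illustrated with made-up numbers:

        # Acid value from percent free fatty acids (expressed as oleic acid),
        # using the relation quoted in the abstract: AV = %FFA(oleic) * 1.99.
        # The factor 1.99 corresponds to 56.1 (KOH molar mass) / 282.5 (oleic) * 10.
        ffa_percent = 0.75                 # illustrative value, not from the study
        acid_value = ffa_percent * 1.99    # mg KOH per g of extracted lipid
        print(f"acid value ~ {acid_value:.2f} mg KOH/g")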

  20. Accuracy of a method based on atomic absorption spectrometry to determine inorganic arsenic in food: Outcome of the collaborative trial IMEP-41.

    Science.gov (United States)

    Fiamegkos, I; Cordeiro, F; Robouch, P; Vélez, D; Devesa, V; Raber, G; Sloth, J J; Rasmussen, R R; Llorente-Mirandes, T; Lopez-Sanchez, J F; Rubio, R; Cubadda, F; D'Amato, M; Feldmann, J; Raab, A; Emteborg, H; de la Calle, M B

    2016-12-15

    A collaborative trial was conducted to determine the performance characteristics of an analytical method for the quantification of inorganic arsenic (iAs) in food. The method is based on (i) solubilisation of the protein matrix with concentrated hydrochloric acid to denature proteins and allow the release of all arsenic species into solution, and (ii) subsequent extraction of the inorganic arsenic present in the acid medium using chloroform followed by back-extraction to acidic medium. The final detection and quantification is done by flow injection hydride generation atomic absorption spectrometry (FI-HG-AAS). The seven test items used in this exercise were reference materials covering a broad range of matrices: mussels, cabbage, seaweed (hijiki), fish protein, rice, wheat, mushrooms, with concentrations ranging from 0.074 to 7.55 mg kg(-1). The relative standard deviation for repeatability (RSDr) ranged from 4.1 to 10.3%, while the relative standard deviation for reproducibility (RSDR) ranged from 6.1 to 22.8%. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database.

    Science.gov (United States)

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens Co. (EBSCO), which offers different search methods that can be used according to the needs of users. Visual search and MeSH-controlled search methods are among the most common. The goal of this research was to compare the precision of the retrieved sources in the EBSCO Medline base using the MeSH-controlled and visual search methods. This research was a semi-empirical study. By holding training workshops, 70 students of higher education in different educational departments of Kashan University of Medical Sciences were taught the MeSH-controlled and visual search methods in 2012. Then, the precision of 300 searches made by these students was calculated based on the Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent-sample t test, and the three precisions obtained with the three precision formulas were studied for the two search methods. The mean precision of the visual method was greater than that of the MeSH-controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and their mean precisions were significantly different. Fifty-three percent of the participants in the research also mentioned that the use of the combination of the two methods produced better results. For users, it is more appropriate to use a natural-language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual method allowed them more freedom of action.

  2. Evaluation of Different Estimation Methods for Accuracy and Precision in Biological Assay Validation.

    Science.gov (United States)

    Yu, Binbing; Yang, Harry

    2017-01-01

    Biological assays (bioassays) are procedures to estimate the potency of a substance by studying its effects on living organisms, tissues, and cells. Bioassays are essential tools for gaining insight into biologic systems and processes including, for example, the development of new drugs and monitoring environmental pollutants. Two of the most important parameters of bioassay performance are relative accuracy (bias) and precision. Although general strategies and formulas are provided in USP, a comprehensive understanding of the definitions of bias and precision remains elusive. Additionally, whether there is a beneficial use of data transformation in estimating intermediate precision remains unclear. Finally, there are various statistical estimation methods available that often pose a dilemma for the analyst who must choose the most appropriate method. To address these issues, we provide both a rigorous definition of bias and precision as well as three alternative methods for calculating relative standard deviation (RSD). All methods perform similarly when the RSD ≤10%. However, the USP estimates result in larger bias and root-mean-square error (RMSE) compared to the three proposed methods when the actual variation was large. Therefore, the USP method should not be used for routine analysis. For data with moderate skewness and deviation from normality, the estimates based on the original scale perform well. The original scale method is preferred, and the method based on log-transformation may be used for noticeably skewed data. LAY ABSTRACT: Biological assays, or bioassays, are essential in the development and manufacture of biopharmaceutical products for potency testing and quality monitoring. Two important parameters of assay performance are relative accuracy (bias) and precision. The definitions of bias and precision in USP 〈1033〉 are elusive and confusing. Another complicating issue is whether log-transformation should be used for calculating the
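
    One way to see the difference between the estimation scales discussed above: on the original scale the RSD is simply s divided by the mean, while after log-transformation the geometric RSD is recovered as sqrt(exp(s_ln^2) - 1). The replicate values below are invented for illustration.

        import numpy as np

        # Illustrative relative-potency replicates (not real bioassay data).
        x = np.array([0.95, 1.02, 1.10, 0.98, 1.05, 1.21])

        # RSD estimated on the original scale.
        rsd_original = 100.0 * x.std(ddof=1) / x.mean()

        # RSD recovered from the log scale (geometric RSD).
        s_ln = np.log(x).std(ddof=1)
        rsd_log = 100.0 * np.sqrt(np.exp(s_ln ** 2) - 1.0)

        print(f"original-scale RSD ~ {rsd_original:.1f} %, log-scale RSD ~ {rsd_log:.1f} %")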

  3. Selection of Suitable DNA Extraction Methods for Genetically Modified Maize 3272, and Development and Evaluation of an Event-Specific Quantitative PCR Method for 3272.

    Science.gov (United States)

    Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize, 3272. We first attempted to obtain genomic DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. However, lowering of DNA extraction yields was not observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (the ABI 7900) and the Applied Biosystems 7500 (the ABI 7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDr). The determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method would be suitable and practical for detection and quantification of 3272.

  4. Cliché fabrication method using precise roll printing process with 5 um pattern width

    Science.gov (United States)

    Shin, Yejin; Kim, Inyoung; Oh, Dong-Ho; Lee, Taik-Min

    2016-09-01

    Among the printing processes for printed electronic devices, the gravure offset and reverse offset methods have drawn attention for their fine-pattern printing capability. These printing methods use a cliché, which has a critical effect on the precision and quality of the final product. In this research, a novel precise cliché replication method is proposed. It consists of copper sputtering, precise mask pattern printing with 5 um width using reverse offset printing, Ni electroplating, lift-off, etching, and DLC coating. We finally compare the fabricated replica cliché with the original one and print out precise patterns using the replica cliché.

  5. New methods for precision Moeller polarimetry*

    International Nuclear Information System (INIS)

    Gaskell, D.; Meekins, D.G.; Yan, C.

    2007-01-01

    Precision electron beam polarimetry is becoming increasingly important as parity violation experiments attempt to probe the frontiers of the standard model. In the few GeV regime, Moeller polarimetry is well suited to high-precision measurements; however, it is generally limited to use at relatively low beam currents (<10 μA). We present a novel technique that will enable precision Moeller polarimetry at very large currents, up to 100 μA. (orig.)

  6. Fast and precise method of contingency ranking in modern power system

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2011-01-01

    Contingency Analysis is one of the most important aspects of Power System Security Analysis. This paper presents a fast and precise method of contingency ranking for effective power system security analysis. The method proposed in this research work takes due consideration of both apparent power … and is based on a realistic approach taking practical situations into account. Besides taking real situations into consideration, the proposed method is fast enough to be considered for on-line security analysis.

  7. Precise charge density studies by maximum entropy method

    CERN Document Server

    Takata, M

    2003-01-01

    For the production research and development of nanomaterials, their structural information is indispensable. Recently, a sophisticated analytical method, which is based on information theory, the Maximum Entropy Method (MEM) using synchrotron radiation powder data, has been successfully applied to determine precise charge densities of metallofullerenes and nanochannel microporous compounds. The results revealed various endohedral natures of metallofullerenes and one-dimensional array formation of adsorbed gas molecules in nanochannel microporous compounds. The concept of MEM analysis was also described briefly. (author)

  8. Precise positioning method for multi-process connecting based on binocular vision

    Science.gov (United States)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

    With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors, and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. Firstly, a novel and efficient camera calibration method for the stereoscopic microscope is presented to address the problems of narrow field of view, small depth of focus, and numerous nonlinear distortions. Secondly, the extraction algorithms for law curves and free curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereo vision is set up and embedded in a CNC machining experiment platform. Finally, a verification experiment of the positioning accuracy was conducted, and the experimental results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.

  9. Method for obtaining more precise measures of excreted organic carbon

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    A new method for concentrating and measuring excreted organic carbon by lyophilization and scintillation counting is efficient, improves measurable radioactivity, and increases precision for estimates of organic carbon excreted by phytoplankton and macrophytes

  10. The various correction methods to the high precision aeromagnetic data

    International Nuclear Information System (INIS)

    Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian

    2014-01-01

    In an airborne geophysical survey, an outstanding result depends first on the measurement precision of the instrument, the choice of measurement conditions, and the reliability of data collection, followed by the correct method of processing the measurement data and the soundness of the data interpretation. Obviously, geophysical data processing is an important task for the comprehensive interpretation of the measurement results, and whether the processing method is correct directly affects the quality of the final results. In recent years, in the course of actual production and scientific research, we have developed a set of personal computer software for aeromagnetic and radiometric survey data processing, and it has been successfully applied in production. The processing methods and flowcharts for high-precision aeromagnetic data are briefly introduced in this paper. The mathematical techniques of the various corrections, for the IGRF, flying height, and magnetic diurnal variation, are discussed in particular. Their effectiveness is illustrated with an example. (authors)
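
    For orientation, the basic reduction behind these corrections is to subtract the main (IGRF) field and the diurnal variation from the observed total field, with an additional term for deviations from the nominal flying height; the schematic sketch below uses invented numbers and is not the authors' software.

        # Schematic aeromagnetic reduction (illustrative only): the residual anomaly
        # is the observed total field minus the IGRF main field and the diurnal
        # variation from a base-station magnetometer, plus a height correction.
        t_observed_nT  = 54_321.0   # total field measured on the aircraft
        t_igrf_nT      = 54_050.0   # IGRF main-field value at the same point and epoch
        diurnal_nT     = 12.0       # diurnal variation recorded at the base station
        height_corr_nT = 3.5        # assumed correction for deviation from nominal flight height

        anomaly_nT = t_observed_nT - t_igrf_nT - diurnal_nT - height_corr_nT
        print(f"residual magnetic anomaly ~ {anomaly_nT:.1f} nT")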

  11. Best, Useful and Objective Precisions for Information Retrieval of Three Search Methods in PubMed and iPubMed

    Directory of Open Access Journals (Sweden)

    Somayyeh Nadi Ravandi

    2016-10-01

    MEDLINE is one of the valuable sources of medical information on the Internet. Among the different open-access sites of MEDLINE, PubMed is the best-known site. In 2010, iPubMed was established with an interaction-fuzzy search method for MEDLINE access. In the present work, we aimed to compare the precision of the retrieved sources (Best, Useful, and Objective precision) in PubMed and iPubMed using three search methods (simple and MeSH search in PubMed and the interaction-fuzzy method in iPubMed). During our semi-empirical study period, we held training workshops for 61 students of higher education to teach them the Simple Search, MeSH Search, and Fuzzy-Interaction Search methods. Then, the precision of 305 searches for each method prepared by the students was calculated on the basis of the Best precision, Useful precision, and Objective precision formulas. Analyses were done in SPSS version 11.5 using the Friedman and Wilcoxon tests, and the three precisions obtained with the three precision formulas were studied for the three search methods. The mean precision of the interaction-fuzzy search method was higher than that of the simple search and MeSH search for all three types of precision, i.e., Best precision, Useful precision, and Objective precision, the simple search method was in the next rank, and their mean precisions were significantly different (P < 0.001). The precision of the interaction-fuzzy search method in iPubMed was investigated for the first time. Also for the first time, three types of precision were evaluated in PubMed and iPubMed. The results showed that the interaction-fuzzy search method is more precise than the natural-language search (simple search) and the MeSH search, and users of this method found papers that were more related to their queries; even though searching in PubMed is useful, it is important that users apply new search methods to obtain the best results.

  12. [Precision nutrition in the era of precision medicine].

    Science.gov (United States)

    Chen, P Z; Wang, H

    2016-12-06

    Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.

  13. A High-precision Motion Compensation Method for SAR Based on Image Intensity Optimization

    Directory of Open Access Journals (Sweden)

    Hu Ke-bin

    2015-02-01

    Owing to platform instability and the precision limitations of motion sensors, motion errors negatively affect the quality of synthetic aperture radar (SAR) images. The autofocus Back Projection (BP) algorithm based on the optimization of image sharpness compensates for motion errors through phase error estimation. This method can attain relatively good performance, but it assumes the same phase error for all pixels, i.e., it ignores the spatial variance of motion errors. To overcome this drawback, a high-precision motion error compensation method is presented in this study. In the proposed method, the Antenna Phase Centers (APCs) are estimated via optimization using the criterion of maximum image intensity. Then, the estimated APCs are applied for BP imaging. Because the APC estimation equals the range history estimation for each pixel, high-precision phase compensation for every pixel can be achieved. Point-target simulations and processing of experimental data validate the effectiveness of the proposed method.

  14. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    The main features of the temperature correction methods suggested and used in modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. Derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres, is presented. The method is based on correcting the model temperature distribution by minimizing the differences of the flux from its accepted constant value and by requiring the absence of a flux gradient, meaning that the local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5 %. Some of the rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10−3) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10−2 %. Dual numbers and their generalization, the dual complex numbers (the duplex numbers), enable the derivatives to be obtained automatically in the nilpotent part of the dual numbers. A version of the SMART software is in the stage of refactoring to dual and duplex numbers, which enables getting rid of the finite differences as an additional source of lowering precision of the
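
    The dual-number trick mentioned at the end works because, for a dual number a + b·ε with ε² = 0, f(a + ε) = f(a) + f'(a)·ε, so the derivative appears in the nilpotent part; the tiny generic sketch below is not the SMART implementation.

        class Dual:
            """Minimal dual number a + b*eps with eps**2 = 0; the eps part carries
            the derivative automatically (forward-mode differentiation)."""
            def __init__(self, a, b=0.0):
                self.a, self.b = a, b
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.a + o.a, self.b + o.b)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
            __rmul__ = __mul__

        # f(T) = 3*T*T + 2*T; evaluating at T = 5 + eps gives f(5) and f'(5).
        T = Dual(5.0, 1.0)
        y = 3 * T * T + 2 * T
        print(y.a, y.b)   # 85.0 and 32.0, i.e. f(5) and f'(5) = 6*5 + 2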

  15. A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.

    Science.gov (United States)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang

    2017-06-28

    Integrating the advantages of INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical in guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method of star sensor with the aid of inertial navigation device to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated for evaluation criteria. With the goal of diminishing the impacts of factors such as the sensors' drift and devices, the innovative aspect of this method is to employ static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.

  16. Accuracy, precision, usability, and cost of portable silver test methods for ceramic filter factories.

    Science.gov (United States)

    Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S

    2017-02-01

    Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.

  17. Determination of Ochratoxin A in Black and White Pepper, Nutmeg, Spice Mix, Cocoa, and Drinking Chocolate by High-Performance Liquid Chromatography Coupled with Fluorescence Detection: Collaborative Study.

    Science.gov (United States)

    Cubero-Leon, Elena; Bouten, Katrien; Senyuva, Hamide; Stroka, Joerg

    2017-09-01

    A method validation study for the determination of ochratoxin A in black and white pepper (Piper spp.), nutmeg (Myristica fragrans), spice mix (blend of ginger, turmeric, pepper, nutmeg, and chili), cocoa powder, and drinking chocolate was conducted according to the International Harmonized Protocol of the International Union of Pure and Applied Chemistry. The method is based on the extraction of samples with aqueous methanol, followed by a cleanup of the extract with an immunoaffinity column. The determination is carried out by reversed-phase LC coupled with a fluorescence detector. The study involved 25 participants representing a cross-section of research, private, and official control laboratories from 12 European Union (EU) Member States, together with Turkey and Macedonia. Mean recoveries ranged from 71 to 85% for spices and from 85 to 88% for cocoa and drinking chocolate. The RSDr values ranged from 5.6 to 16.7% for spices and from 4.5 to 18.7% for cocoa and drinking chocolate. The RSDR values ranged from 9.5 to 22.6% for spices and from 13.7 to 30.7% for cocoa and drinking chocolate. The resulting Horwitz ratios ranged from 0.4 to 1 for spices and from 0.6 to 1.4 for cocoa and drinking chocolate according to the Horwitz function modified by Thompson. The method showed acceptable within-laboratory and between-laboratory precision for each matrix, and it conforms to requirements set by current EU legislation.

  18. Accuracy, precision, and economic efficiency for three methods of thrips (Thysanoptera: Thripidae) population density assessment.

    Science.gov (United States)

    Sutherland, Andrew M; Parrella, Michael P

    2011-08-01

    Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
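
    The comparison metrics used above are commonly computed as relative variation RV = 100·SEM/mean and relative net precision RNP = 100/(relative cost × RV); the sketch below uses invented thrips counts and an assumed per-sample time cost.

        import math

        def relative_variation(samples):
            """RV = 100 * SEM / mean (smaller means higher precision)."""
            n = len(samples)
            mean = sum(samples) / n
            var = sum((x - mean) ** 2 for x in samples) / (n - 1)
            sem = math.sqrt(var / n)
            return 100.0 * sem / mean

        def relative_net_precision(samples, cost_minutes):
            """RNP = 100 / (cost * RV): precision per unit sampling effort."""
            return 100.0 / (cost_minutes * relative_variation(samples))

        # Invented thrips counts per sample unit for one method, and its time cost.
        counts = [12, 8, 15, 10, 9, 14]
        print(relative_variation(counts), relative_net_precision(counts, cost_minutes=0.5))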

  19. Precision Distances with the Tip of the Red Giant Branch Method

    Science.gov (United States)

    Beaton, Rachael Lynn; Carnegie-Chicago Hubble Program Team

    2018-01-01

    The Carnegie-Chicago Hubble Program aims to construct a distance ladder that utilizes old stellar populations in the outskirts of galaxies to produce a high precision measurement of the Hubble Constant that is independent of Cepheids. The CCHP uses the tip of the red giant branch (TRGB) method, which is a statistical measurement technique that utilizes the termination of the red giant branch. Two innovations combine to make the TRGB a competitive route to the Hubble Constant: (i) the large-scale measurement of trigonometric parallax by the Gaia mission and (ii) the development of both precise and accurate means of determining the TRGB in both nearby (~1 Mpc) and distant (~20 Mpc) galaxies. Here I will summarize our progress in developing these standardized techniques, focusing on both our edge-detection algorithm and our field selection strategy. Using these methods, the CCHP has determined equally precise (~2%) distances to galaxies in the Local Group (< 1 Mpc) and across the Local Volume (< 20 Mpc). The TRGB is, thus, an incredibly powerful and straightforward means to determine distances to galaxies of any Hubble Type and, thus, has enormous potential for putting any number of astrophysical phenomena on absolute units.

  20. In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.

    Science.gov (United States)

    Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert

    2016-09-01

    Quadrant impressions are commonly used as an alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 μm (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface whereas high-frame rate impression systems differed more in gingival areas. Triple tray impressions displayed higher local deviation at the occlusal contact areas of upper and lower jaw. Digital quadrant impression methods achieve a level of precision comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time-efficient capturing of quadrant impressions is possible. The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of
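
    The Perc90_10 precision value used above is the half-width between the 10th and 90th percentiles of the signed surface deviations between superimposed scans; the sketch below applies that definition to synthetic deviations.

        import numpy as np

        def perc90_10(signed_deviations_um):
            """Precision as (90th percentile - 10th percentile) / 2 of the signed
            surface deviations between superimposed casts (values in micrometres)."""
            p90, p10 = np.percentile(signed_deviations_um, [90, 10])
            return (p90 - p10) / 2.0

        # Synthetic deviations (um) standing in for a pair of superimposed impressions.
        rng = np.random.default_rng(0)
        deviations = rng.normal(loc=0.0, scale=25.0, size=10_000)
        print(f"Perc90_10 ~ {perc90_10(deviations):.1f} um")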

  1. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    Science.gov (United States)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Starting from the working principle of the binocular photoelectric instrument optical-axis parallelism digital calibration instrument, and considering all components of the instrument, the various factors affecting the system precision are analyzed, and a precision analysis model is then established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circular target image. The method can further guide the error allocation, optimally control the factors that have a greater influence on the comprehensive error, and improve the measurement accuracy of the optical-axis parallelism digital calibration instrument.
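
    A generic sketch of the Monte Carlo approach described above: draw each error source from an assumed distribution, propagate it through a simple measurement model, and take the spread of the combined result; the model and all numbers below are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Assumed error sources (standard deviations are invented for illustration):
        # centre-extraction error of the target image plus two mechanical alignment errors.
        centre_err_px   = rng.normal(0.0, 0.3, n)    # px
        focal_len_px    = 5_000.0                    # assumed effective focal length in px
        mech_err_1_urad = rng.normal(0.0, 20.0, n)   # microradians
        mech_err_2_urad = rng.normal(0.0, 15.0, n)   # microradians

        # Simple measurement model: angular error from the image-centre shift
        # plus the mechanical terms, all expressed in microradians.
        angle_err_urad = centre_err_px / focal_len_px * 1e6 + mech_err_1_urad + mech_err_2_urad

        print(f"combined 1-sigma error ~ {angle_err_urad.std():.1f} urad")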

  2. Advanced methods and algorithm for high precision astronomical imaging

    International Nuclear Information System (INIS)

    Ngole-Mboula, Fred-Maurice

    2016-01-01

    One of the biggest challenges of modern cosmology is to gain a more precise knowledge of the nature of dark energy and dark matter. Fortunately, dark matter can be traced directly through its gravitational effect on galaxy shapes. The European Space Agency's Euclid mission will provide data precisely for this purpose. A critical step in analyzing these data will be to accurately model the instrument's Point Spread Function (PSF), which is the focus of this thesis. We developed non-parametric methods to reliably estimate the PSFs across an instrument's field-of-view, based on unresolved star images and accounting for noise, undersampling, and the spatial variability of the PSFs. At the core of these contributions are modern mathematical tools and concepts such as sparsity. An important extension of this work will be to account for the wavelength dependency of the PSFs. (author) [fr

  3. In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions.

    Science.gov (United States)

    Ender, Andreas; Attin, Thomas; Mehl, Albert

    2016-03-01

    Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; digitized scannable vinylsiloxanether, VSES-D; and irreversible hydrocolloid, ALG) and 7 digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; Lava COS, LAV; Lava True Definition Scanner, T-Def; 3Shape Trios, TRI; and 3Shape Trios Color, TRC) techniques. Impressions were made 3 times each in 5 participants (N=15). The impressions were then compared within and between the test groups. The cast surfaces were measured point-to-point using the signed nearest neighbor method. Precision was calculated from the (90%-10%)/2 percentile value. The precision ranged from 12.3 μm (VSE) to 167.2 μm (ALG), with the highest precision in the VSE and VSES groups. The deviation pattern varied distinctly according to the impression method. Conventional impressions showed the highest accuracy across the complete dental arch in all groups except the ALG group. Conventional and digital impression methods differ significantly in complete-arch accuracy. Digital impression systems had higher local deviations within the complete-arch cast; however, they achieve equal or higher precision than some conventional impression materials. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  4. Flavanol and procyanidin content (by degree of polymerization 1-10) of chocolate, cocoa liquors, cocoa powders, and cocoa extracts: first action 2012.24.

    Science.gov (United States)

    Robbins, Rebecca J; Leonczak, Jadwiga; Li, Julia; Johnson, J Christopher; Collins, Tom; Kwik-Uribe, Catherine; Schmitz, Harold H

    2013-01-01

    An international collaborative study was conducted on an HPLC method with fluorescent detection for the determination of flavanols and procyanidins in chocolate and cocoa-containing materials. The sum of the oligomeric fractions with degree of polymerization 1-10 was the determined content value. Sample materials included dark and milk chocolates, cocoa powder, cocoa liquors, and cocoa extracts. The content ranged from approximately 2 to 500 mg/g (defatted basis). Thirteen laboratories--representing commercial, industrial, and academic institutions in six countries--participated in this interlaboratory study. Fourteen samples were sent as blind duplicates to the collaborators. Results for 12 laboratories yielded repeatability RSD (RSDr) values below 10% for all materials analyzed, ranging from 4.17 to 9.61%. Reproducibility RSD (RSDR) values ranged from 5.03 to 12.9% for samples containing 8.07 to 484.7 mg/g material analyzed. In one sample containing a low content of flavanols and procyanidins (approximately 2 mg/g), the RSDR was 17.68%.
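
    For readers unfamiliar with how repeatability and reproducibility RSDs are obtained from blind-duplicate collaborative data, the sketch below shows a standard one-way analysis-of-variance style calculation (ISO 5725 / AOAC style) on invented numbers; it is not the study's own data or software.

        import numpy as np

        # invented blind duplicate results (mg/g) for one material, one row per laboratory
        duplicates = np.array([
            [480.1, 485.3],
            [478.9, 482.0],
            [490.2, 487.5],
            [481.7, 479.8],
            [486.0, 488.9],
        ])

        lab_means = duplicates.mean(axis=1)
        grand_mean = duplicates.mean()

        # repeatability variance: within-laboratory spread of the duplicates
        s_r2 = np.mean(np.diff(duplicates, axis=1) ** 2) / 2.0
        # between-laboratory variance component (duplicates per lab = 2)
        s_L2 = max(lab_means.var(ddof=1) - s_r2 / 2.0, 0.0)
        # reproducibility variance = between-lab + within-lab
        s_R2 = s_L2 + s_r2

        print("RSDr = %.2f %%" % (100 * np.sqrt(s_r2) / grand_mean))
        print("RSDR = %.2f %%" % (100 * np.sqrt(s_R2) / grand_mean))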

  5. Improving the precision of the keyword-matching pornographic text filtering method using a hybrid model.

    Science.gov (United States)

    Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong

    2004-09-01

    With the flooding of pornographic information on the Internet, how to keep people away from offensive information is becoming one of the most important research areas in network information security. Applications that can block or filter such information are in use. The approaches in those systems can be roughly classified into two kinds: metadata-based and content-based. With the development of distributed technologies, content-based filtering technologies will play a more and more important role in filtering systems. Keyword matching is a content-based method widely used in harmful text filtering. Experiments evaluating the recall and precision of the method showed that its precision is not satisfactory, although its recall is rather high. Based on these results, a new pornographic text filtering model based on reconfirmation is put forward. Experiments showed that the model is practical, loses less recall than the single keyword-matching method, and has higher precision.
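
    As a reminder of the two metrics the record compares, the sketch below computes precision and recall from a hypothetical confusion matrix; the counts are invented purely for illustration.

        # hypothetical confusion counts for a keyword-based filter
        true_positives  = 890   # offensive texts correctly blocked
        false_positives = 310   # harmless texts blocked by mistake (hurts precision)
        false_negatives = 60    # offensive texts that slipped through (hurts recall)

        precision = true_positives / (true_positives + false_positives)
        recall    = true_positives / (true_positives + false_negatives)

        print(f"precision = {precision:.2%}, recall = {recall:.2%}")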

  6. Research on a high-precision calibration method for tunable lasers

    Science.gov (United States)

    Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai

    2018-03-01

    Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even with zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it, while the beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental results show that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence and absence of the comb filter; the results show that the introduction of the comb filter leads to a 15-fold enhancement in wavelength resolution.

  7. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Full Text Available Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed

  8. Determination of flavanol and procyanidin (by degree of polymerization 1-10) content of chocolate, cocoa liquors, powder(s), and cocoa flavanol extracts by normal phase high-performance liquid chromatography: collaborative study.

    Science.gov (United States)

    Robbins, Rebecca J; Leonczak, Jadwiga; Li, Julia; Johnson, J Christopher; Collins, Tom; Kwik-Uribe, Catherine; Schmitz, Harold H

    2012-01-01

    An international collaborative study was conducted on an HPLC method with fluorescent detection (FLD) for the determination of flavanols and procyanidins in materials containing chocolate and cocoa. The sum of the oligomeric fractions with degree of polymerization 1-10 was the determined content value. Sample materials included dark and milk chocolates, cocoa powder, cocoa liquors, and cocoa extracts. The content ranged from approximately 2 to 500 mg/g (defatted basis). Thirteen laboratories representing commercial, industrial, and academic institutions in six countries participated in the study. Fourteen samples were sent as blind duplicates to the collaborators. Results from 12 laboratories yielded repeatability relative standard deviation (RSDr) values that were below 10% for all materials analyzed, ranging from 4.17 to 9.61%. The reproducibility relative standard deviation (RSD(R)) values ranged from 5.03 to 12.9% for samples containing 8.07 to 484.7 mg/g. In one sample containing a low content of flavanols and procyanidins (approximately 2 mg/g), the RSD(R) was 17.68%. Based on these results, the method is recommended for Official First Action for the determination of flavanols and procyanidins in chocolate, cocoa liquors, powder(s), and cocoa extracts.

  9. A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.

    Science.gov (United States)

    Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin

    2016-05-25

    Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access and results in inter-frequency code biases (IFCBs), which are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% through the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and that of the GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively.

  10. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients.

    Science.gov (United States)

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared with the central nasopharyngeal measurement. In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients' body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using Statistical Package for the Social Sciences, version 19, software. There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-tests demonstrated acceptable precision for the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than for the other methods. The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient's body temperature in intensive care units because of its high accuracy and acceptable precision.
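
    The statistics named in the record are standard; the sketch below shows how a paired t-test and Cohen's kappa could be computed in Python for this kind of agreement data. The temperature arrays and the 38.0 °C fever cutoff are invented placeholders, and scipy and scikit-learn are assumed to be available.

        import numpy as np
        from scipy.stats import ttest_rel
        from sklearn.metrics import cohen_kappa_score

        # invented example: central (nasopharyngeal) vs peripheral (tympanic) readings, deg C
        central  = np.array([37.1, 38.4, 36.8, 39.0, 37.6, 38.1, 36.9, 37.3])
        tympanic = np.array([37.0, 38.5, 36.9, 38.8, 37.7, 37.9, 36.8, 37.4])

        # paired t-test: is the mean difference between the two methods distinguishable from zero?
        t_stat, p_value = ttest_rel(central, tympanic)
        print(f"paired t-test: t = {t_stat:.2f}, P = {p_value:.3f}")

        # kappa on a clinical classification, e.g. fever defined here as >= 38.0 deg C
        kappa = cohen_kappa_score(central >= 38.0, tympanic >= 38.0)
        print(f"Cohen's kappa for fever/no-fever agreement: {kappa:.2f}")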

  11. How precisely can the difference method determine the $\\pi$NN coupling constant?

    CERN Document Server

    Loiseau, B

    2000-01-01

    The Coulomb-like backward peak of the neutron-proton scattering differential cross section is due to one-pion exchange. Extrapolation of precise data to the pion pole should allow one to obtain the value of the charged pion-nucleon coupling constant. This was classically attempted by the use of a smooth physical function, the Chew function, built from the cross section. To improve the accuracy of such an extrapolation, a difference method has been introduced. It consists of extrapolating the difference between the Chew function based on experimental data and that built from a model in which the pion-nucleon coupling is exactly known. Here we cross-check to which precision this novel extrapolation method can work by applying it to differences between models and between data and models. With good reference models and for the 162 MeV neutron-proton Uppsala single-energy precise data with a normalisation error of 2.3%, the value of the charged pion-nucleon coupling constant is obtained with an accuracy close to 1.8%.

  12. A method for consistent precision radiation therapy

    International Nuclear Information System (INIS)

    Leong, J.

    1985-01-01

    Using a meticulous setup procedure in which repeated portal films were taken before each treatment until satisfactory portal verification was obtained, a high degree of precision in patient positioning was achieved. A fluctuation from treatment to treatment, over 11 treatments, of less than ±0.10 cm (S.D.) was obtained for anatomical points inside the treatment field. This, however, only applies to the specific anatomical points selected for this positioning procedure and does not apply to all points within the portal. We have generalized this procedure and have suggested a means by which any target volume can be consistently positioned, which may approach this degree of precision. (orig.)

  13. Biotechnological Methods for Precise Diagnosis of Methicillin Resistance in Staphylococci

    Directory of Open Access Journals (Sweden)

    Aija Zilevica

    2005-04-01

    Full Text Available Antimicrobial resistance is one of the most urgent problems in medicine nowadays. The purpose of the study was to investigate microorganisms resistant to first-line antimicrobials, including gram-positive cocci, particularly methicillin-resistant Staphylococcus aureus and coagulase-negative staphylococci, the major agents of nosocomial infections. Owing to the multi-resistance of these agents, precise diagnosis of the methicillin resistance of staphylococci is of greatest clinical importance. It is not enough to use only conventional microbiological diagnostic methods; biotechnological methods should also be involved. In our studies, the following methicillin-resistance identification methods were used: the disk diffusion method, detection of the mecA gene by PCR, the E-test, and the Slidex MRSA test. For molecular typing, PFGE, RAPD tests and detection of the coa gene were used. All the MRS strains were multiresistant to antibacterials. No vancomycin resistance was registered.

  14. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    OpenAIRE

    Gunawan, Hendra; Micheldiament, Micheldiament; Mikhailov, Valentin

    2008-01-01

    http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under an assumption free-air anomaly consisting ...
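
    The Nettleton idea described above (choose the Bouguer density that minimizes the correlation between the Bouguer anomaly and topography) can be illustrated in a few lines of code; the simple-slab Bouguer correction and the synthetic profile below are simplifying assumptions for illustration, not the paper's model of La Soufriere.

        import numpy as np

        G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
        MGAL = 1e5      # m/s^2 -> mGal

        def bouguer_anomaly(free_air_mgal, elevation_m, density):
            """Simple-slab Bouguer reduction: g_B = g_FA - 2*pi*G*rho*h."""
            return free_air_mgal - 2.0 * np.pi * G * density * elevation_m * MGAL

        # synthetic profile (assumption): free-air anomaly generated with a "true"
        # topographic density of 2300 kg/m^3 plus a little noise
        rng = np.random.default_rng(1)
        h = 500.0 + 200.0 * np.sin(np.linspace(0, 4 * np.pi, 200))   # elevation, m
        fa = 2.0 * np.pi * G * 2300.0 * h * MGAL + rng.normal(0, 0.2, h.size)

        # Nettleton approach: scan trial densities, keep the one with minimum |correlation|
        densities = np.arange(1800.0, 2800.0, 10.0)
        corr = [abs(np.corrcoef(bouguer_anomaly(fa, h, rho), h)[0, 1]) for rho in densities]
        print("estimated Bouguer density: %.0f kg/m^3" % densities[int(np.argmin(corr))])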

  15. A High Precision Laser-Based Autofocus Method Using Biased Image Plane for Microscopy

    Directory of Open Access Journals (Sweden)

    Chao-Chen Gu

    2018-01-01

    Full Text Available This study designs and implements a high-precision, robust laser-based autofocusing system in which a biased image plane is applied. In accordance with the designed optics, a cluster-based circle-fitting algorithm is proposed to calculate the radius of the detected spot from the reflected laser beam as an essential factor in obtaining the defocus value. Experiments conducted on the experimental device demonstrated high precision and robustness. Furthermore, the low demand on assembly accuracy makes the proposed method a low-cost and practical solution for autofocusing.
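
    The record does not detail its cluster-based circle fit; as background, the sketch below shows a common algebraic least-squares circle fit (Kåsa method) that recovers the center and radius of a spot from edge points. The sample points are synthetic, and this is not the authors' algorithm.

        import numpy as np

        def fit_circle(x, y):
            """Kasa algebraic least-squares circle fit: returns center (a, b) and radius r."""
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            rhs = x ** 2 + y ** 2
            (a, b, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            r = np.sqrt(d + a ** 2 + b ** 2)
            return a, b, r

        # synthetic edge points of a laser spot (center (1.5, -0.8), radius 2.0) with noise
        rng = np.random.default_rng(7)
        theta = rng.uniform(0, 2 * np.pi, 400)
        x = 1.5 + 2.0 * np.cos(theta) + rng.normal(0, 0.02, theta.size)
        y = -0.8 + 2.0 * np.sin(theta) + rng.normal(0, 0.02, theta.size)

        print("center = (%.3f, %.3f), radius = %.3f" % fit_circle(x, y))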

  16. Precision evaluation of pressed pastille preparation different methods for X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Lima, Raquel Franco de Souza; Melo Junior, Germano; Sa, Jaziel Martins

    1997-01-01

    This work compares the results obtained with two different methods of preparing pressed pastilles from the crushed sample. The reproducibility is evaluated in order to identify the method that provides the better analytical precision. The analyses were carried out with an X-ray fluorescence spectrometer at the Geology Department of the Federal University of Rio Grande do Norte

  17. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    Science.gov (United States)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, which are derived from Taylor expansions, can provide approximate results with high computational efficiency and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second-order extrapolation method, together with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, demonstrating the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computations, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.

  18. Optimization of an Indirect Enzymatic Method for the Simultaneous Analysis of 3-MCPD, 2-MCPD, and Glycidyl Esters in Edible Oils.

    Science.gov (United States)

    Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Ikuta, Keiich; Egawa, Yoshitsugu; Kitta, Tadashi; Kido, Hirotsugu; Sano, Takashi; Takahashi, Yukinari; Nezu, Toru; Nohara, Hidenori; Miyashita, Takashi; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi

    2015-01-01

    We developed a novel, indirect enzymatic method for the analysis of fatty acid esters of 3-monochloro-1,2-propanediol (3-MCPD), 2-monochloro-1,3-propanediol (2-MCPD), and glycidol (Gly) in edible oils and fats. Using this method, the ester analytes were rapidly cleaved by Candida rugosa lipase at room temperature for 0.5 h. As a result of the simultaneous hydrolysis and bromination steps, 3-MCPD esters, 2-MCPD esters, and glycidyl esters were converted to free 3-MCPD, 2-MCPD, and 3-monobromo-1,2-propanediol (3-MBPD), respectively. After the addition of internal standards, the mixtures were washed with hexane, derivatized with phenylboronic acid, and analyzed by gas chromatography-mass spectrometry (GC-MS). The analytical method was evaluated in preliminary and feasibility studies performed by 13 laboratories. The preliminary study by 4 laboratories established the reproducibility (RSDR) of 3-MCPD and 2-MCPD in extra virgin olive (EVO) oil, semi-solid palm oil, and solid palm oil. However, the RSDR and recoveries of Gly in the palm oil samples were not satisfactory. The Gly content of refrigerated palm oil samples decreased, whereas samples kept at room temperature were stable for three months; this may be due to the depletion of Gly during cold storage. The feasibility studies performed by all 13 laboratories were conducted with modifications of the shaking conditions for ester cleavage, the conditions of Gly bromination, and the removal of the gel formed by residual lipase. Satisfactory RSDR values were obtained for EVO oil samples spiked with standard esters (4.4% for 3-MCPD, 11.2% for 2-MCPD, and 6.6% for Gly).

  19. Precise determination of sodium in serum by simulated isotope dilution method of inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Yan Ying; Zhang Chuanbao; Zhao Haijian; Chen Wenxiang; Shen Ziyu; Wang Xiaoru; Chen Dengyun

    2007-01-01

    A new precise and accurate method for the determination of sodium in serum by inductively coupled plasma mass spectrometry (ICP-MS) was developed. Since 23Na is a monoisotopic element, 27Al is selected as a simulated isotope of Na. Al is spiked into the serum samples and into the Na standard solution. The 23Na/27Al ratio in the Na standard solution is determined to represent the natural Na isotope ratio. The serum samples are digested with purified HNO3/H2O2 and diluted to obtain solutions containing about 0.6 μg·g-1 Al, and the 23Na/27Al ratios of the serum samples are measured to calculate accurate Na concentrations based on the isotope dilution method. When the simulated isotope dilution ICP-MS method is applied with Al selected as the simulated isotope of Na, precise and accurate Na concentrations in serum are obtained. An inter-day precision of CV<0.13% was obtained for the same serum sample over 4 measurements in 3 days. Spike recoveries were between 99.69% and 100.60% for 4 different serum samples measured repeatedly over 3 days. The results for standard reference materials of serum sodium agree with the certified values. The relative difference between 3 days is 0.22%-0.65%, and the relative difference within one bottle is 0.15%-0.44%. The ICP-MS method with Al simulated isotope dilution proves to be not only precise and accurate but also quick and convenient for measuring Na in serum. It is promising as a reference method for the precise determination of Na in serum. Since Al is a low-cost isotope dilution reagent, the method could be widely applied for serum Na determination. (authors)
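
    The arithmetic behind this "simulated isotope dilution" is essentially an internal-standard calculation: the 23Na/27Al signal ratio measured on a standard of known Na concentration calibrates the same ratio measured on the Al-spiked sample. The sketch below shows that generic calculation with invented numbers; it is not necessarily the authors' exact formulation.

        # invented example values
        na_conc_std = 2.00   # Na concentration in the calibration standard, ug/g
        al_conc_std = 0.60   # Al spiked into the standard, ug/g
        al_conc_spl = 0.60   # Al spiked into the digested/diluted serum sample, ug/g

        ratio_std = 3.41     # measured 23Na/27Al intensity ratio of the standard
        ratio_spl = 3.05     # measured 23Na/27Al intensity ratio of the sample

        # relative response factor of Na against the Al "simulated isotope"
        k = ratio_std / (na_conc_std / al_conc_std)

        # internal-standard style result for the diluted sample
        na_conc_spl = (ratio_spl / k) * al_conc_spl
        print(f"Na in diluted sample: {na_conc_spl:.3f} ug/g")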

  20. A high precision method for normalization of cross sections

    International Nuclear Information System (INIS)

    Aguilera R, E.F.; Vega C, J.J.; Martinez Q, E.; Kolata, J.J.

    1988-08-01

    A system of 4 monitors and an accompanying program were developed to eliminate, in the process of normalization of cross sections, the dependence on the alignment of the equipment and on the centering of the beam. A series of experiments was carried out with the systems 27Al + 70,72,74,76Ge, 35Cl + 58Ni, 37Cl + 58,60,62,64Ni and (81Br, 109Rh) + 60Ni. In these experiments a typical normalization precision of 1% was obtained. The advantage of this method over those that use 1 or 2 monitors is demonstrated both theoretically and experimentally. (Author)

  1. Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.

    Science.gov (United States)

    Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro

    2010-01-01

    The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. Quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. The reproducibility standard deviation (SR) and repeatability standard deviation (Sr) of the genetically modified organism (GMO) amount (%) were more or less independent of the GM crop (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and of the evaluation procedure steps. Some studies evaluated the whole procedure consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, SR and Sr for the GMO amount (%) are functions only of concentration, similar to the Horwitz curve. We proposed SR = 0.1971C^0.8685 and Sr = 0.1478C^0.8424, where C is the GMO amount (%). We also proposed a method performance index for GMO quantitative methods that is analogous to the Horwitz ratio.
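
    The proposed curves can be applied directly: plug the GMO amount into the two power laws to obtain the expected reproducibility and repeatability standard deviations, and divide an observed SR by the predicted one to get a Horwitz-ratio-like performance index. The sketch below simply evaluates the equations quoted in the record; the observed value is an invented example.

        def predicted_sd(c_gmo_percent):
            """Predicted reproducibility and repeatability SD from the proposed power laws."""
            s_R = 0.1971 * c_gmo_percent ** 0.8685
            s_r = 0.1478 * c_gmo_percent ** 0.8424
            return s_R, s_r

        c = 5.0                      # GMO amount (%)
        s_R_pred, s_r_pred = predicted_sd(c)

        s_R_observed = 0.95          # invented collaborative-study result (%)
        performance_index = s_R_observed / s_R_pred   # analogous to the Horwitz ratio

        print(f"predicted SR = {s_R_pred:.3f} %, Sr = {s_r_pred:.3f} %")
        print(f"HorRat-like index = {performance_index:.2f}")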

  2. Determination of the moisture content of instant noodles: interlaboratory study.

    Science.gov (United States)

    Hakoda, Akiko; Kasama, Hirotaka; Sakaida, Kenichi; Suzuki, Tadanao; Yasui, Akemi

    2006-01-01

    Determination of the moisture content of instant noodles, currently under discussion by the Codex Alimentarius Commission (CAC), requires 2 methods: one for fried noodles and the other for nonfried noodles. The method used in the Japanese Agricultural Standard (JAS) system of Japan, drying fried noodles at 105 degrees C for 2 h, can be applied to this purpose. In the present study, the JAS method for fried noodles was modified to be suitable for nonfried noodles by extending the drying time to 4 h. An interlaboratory study was conducted to evaluate interlaboratory performance statistics for these 2 methods. Ten participating laboratories each analyzed 5 test materials of fried and nonfried noodles as blind duplicates. After statistical removal of outliers, the repeatability (RSDr) and the reproducibility (RSDR) of these methods were 1.6-2.6 and 3.9-4.8% for fried noodles, and 0.3-1.5 and 1.3-2.9% for nonfried noodles, respectively.

  3. A method of precise profile analysis of diffuse scattering for the KENS pulsed neutrons

    International Nuclear Information System (INIS)

    Todate, Y.; Fukumura, T.; Fukazawa, H.

    2001-01-01

    An outline of our profile analysis method, which is now in practical use for the asymmetric KENS pulsed thermal neutrons, is presented. The analysis of diffuse scattering from a single crystal of D2O is shown as an example. The pulse shape function is based on the Ikeda-Carpenter function adjusted for the KENS neutron pulses. The convoluted intensity is calculated by a Monte Carlo method, and the precision of the calculation is controlled. Fitting parameters in the model cross section can be determined by the built-in nonlinear least-squares fitting procedure. Because this method is a natural extension of the procedure conventionally used for triple-axis data, it is easy to apply with generality and versatility. Most importantly, this method is capable of precisely correcting the time shift of the observed peak position, which is inevitable in the case of highly asymmetric pulses and a broad scattering function. It will be pointed out that the accurate determination of the true time-of-flight is especially important in single-crystal inelastic experiments. (author)

  4. Improvement in precision and trueness of quantitative XRF analysis with glass-bead method. 1

    International Nuclear Information System (INIS)

    Yamamoto, Yasuyuki; Ogasawara, Noriko; Yuhara, Yoshitaroh; Yokoyama, Yuichi

    1995-01-01

    The factors that lower the precision of simultaneous X-ray fluorescence (XRF) spectrometers were investigated. In quantitative analyses of oxide powders with the glass-bead method in particular, the X-ray optical characteristics of the equipment affect the precision of the X-ray intensities. In focused (curved) crystal spectrometers, the precision depends on the deviation of the actual size and position of the crystals from the theoretical design, so the precision differs for each crystal and each element. When the deviation is large, the dispersion of the measured X-ray intensities is larger than the statistical dispersion, even though the intensity itself remains unchanged. Moreover, waviness of the glass-bead surface changes the height of the analyzed surface relative to the design value. This difference changes the amount of X-rays incident on the analyzing crystal and increases the dispersion of the X-ray intensity. Considering these factors, the level of waviness must be regulated to improve the precision attainable with existing XRF equipment. In this study, the measurement precision of 4 simultaneous XRF spectrometers was evaluated, and the element lead (Pb-Lβ1) was found to have the lowest precision. The relative standard deviation (RSD) of measurements of 10 glass beads prepared from the same powder sample was 0.3% without regulation of the waviness of the analytical surface. With mechanical flattening of the glass-bead surface, the level of waviness, defined as the maximum difference of heights within a glass bead, was regulated to under 30 μm, and the RSD was 0.038%, which is almost comparable to the statistical RSD of 0.033%. (author)
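
    The comparison made in the record, measured RSD of replicate beads against the purely statistical (counting) RSD, can be reproduced in a few lines; the counts below are invented, and the counting RSD is taken as 100/sqrt(N) under the usual Poisson assumption.

        import numpy as np

        # invented Pb-Lbeta1 intensities (counts) for 10 glass beads of the same powder
        intensities = np.array([401200, 399800, 401500, 400200, 402000,
                                399500, 401800, 400400, 402200, 399900], dtype=float)

        measured_rsd = 100.0 * intensities.std(ddof=1) / intensities.mean()
        counting_rsd = 100.0 / np.sqrt(intensities.mean())   # Poisson counting statistics

        print(f"measured RSD  = {measured_rsd:.3f} %")
        print(f"counting RSD  = {counting_rsd:.3f} %")
        print(f"excess spread = {max(measured_rsd - counting_rsd, 0):.3f} % (sample preparation etc.)")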

  5. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    Directory of Open Access Journals (Sweden)

    Asadian S

    2016-09-01

    Full Text Available Simin Asadian,1 Alireza Khatony,1 Gholamreza Moradi,2 Alireza Abdi,1 Mansour Rezaei,3 1Nursing and Midwifery School, Kermanshah University of Medical Sciences, 2Department of Anesthesiology, 3Biostatistics & Epidemiology Department, Kermanshah University of Medical Sciences, Kermanshah, Iran Introduction: An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis, and also therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods: In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients' body temperatures were measured by four peripheral methods; oral, axillary, tympanic, and forehead along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. Results: There was a significant meaningful correlation between all the peripheral methods when compared with the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membranes, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were more than for other methods. Conclusion: The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for

  6. Precision of a new bedside method for estimation of the circulating blood volume

    DEFF Research Database (Denmark)

    Christensen, P; Eriksen, B; Henneberg, S W

    1993-01-01

    The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO-saturation of hemoglobin before and a......, determination of CBV can be performed with an amount of CO that gives rise to a harmless increase in the carboxyhemoglobin concentration.(ABSTRACT TRUNCATED AT 250 WORDS)...

  7. Determination of total carbohydrates in wine and wine-like beverages by HPLC with a refractive index detector: First Action 2013.12.

    Science.gov (United States)

    Kupina, Steve; Roman, Mark

    2014-01-01

    An international collaborative study was conducted of an HPLC-refractive index (RI) detector method for the determination of the combined amounts of sugars, glycerol, organic acids, and phenolic compounds in wines and wine-like beverages. Nine collaborating laboratories representing major winery, contract laboratories, and government laboratories tested eight different materials as blind duplicates using the proposed method. Sample materials included red and white wines, port, wine cooler, and nonalcoholic wine. One material was a negative control, and one material was a reference material. Samples were either treated with an ion-exchange resin to remove interfering organic acids prior to analysis or left untreated to include organic acids and phenolics. Red wine samples were treated with polyvinylpolypyrrolidone to remove potential interferences from phenolics prior to analysis. The HPLC analyses were performed on a Bio-Rad Fast Acid Analysis Column using RI detection. Reproducibility (RSD(R)) for untreated samples (sugars + phenolics + organic acids) ranged from 6.6% for Titrivin AA4 reference material to 11.0% for dry red wine. RSD(R) for treated samples (sugars only) ranged from 6.8% for white zinfandel to 18.9% for dry white wine. RSD(R) for treated samples (sugars only) + glycerol ranged from 6.4% for white zinfandel to 19.8% for dry red wine. Based on these results, the method was adopted as Official First Action status for determination of total carbohydrates in wine and wine-like beverages.

  8. A high precision method for quantitative measurements of reactive oxygen species in frozen biopsies.

    Directory of Open Access Journals (Sweden)

    Kirsti Berg

    Full Text Available OBJECTIVE: An electron paramagnetic resonance (EPR) technique using the spin probe cyclic hydroxylamine 1-hydroxy-3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine (CMH) was introduced as a versatile method for high-precision quantification of reactive oxygen species, including the superoxide radical, in frozen biological samples such as cell suspensions, blood or biopsies. MATERIALS AND METHODS: Loss of measurement precision and accuracy due to variations in sample size and shape was minimized by assembling the sample in a well-defined volume. Measurement was carried out at low temperature (150 K) using a nitrogen flow Dewar. The signal intensity was measured from the EPR 1st-derivative amplitude and related to a sample of 3-carboxy-proxyl (CP•) with known spin concentration. RESULTS: The absolute spin concentration could be quantified with a precision and accuracy better than ±10 µM (k = 1). The spin concentration of samples stored at -80°C could be reproduced after 6 months of storage well within the same error estimate. CONCLUSION: The absolute spin concentration in wet biological samples such as biopsies, water solutions and cell cultures could be quantified with higher precision and accuracy than normally achievable using common techniques such as flat cells, tissue cells and various capillary tubes. In addition, biological samples could be collected and stored for future incubation with the spin probe, and further stored for up to at least six months before EPR analysis, without loss of signal intensity. This opens the possibility of storing and transporting incubated biological samples with known accuracy of the spin concentration over time.

  9. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da; Lopes, Ricardo T.

    2011-01-01

    The method of high-precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to shorten the analysis and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, which compares favorably with the roughly 0.1% of the methods traditionally used by nuclear installations.

  10. Using cold deformation methods in flow-production of steel high precision shaped sections

    International Nuclear Information System (INIS)

    Zajtsev, M.L.; Makhnev, I.F.; Shkurko, I.I.

    1975-01-01

    A final size with a preset tolerance and the required surface finish of high-precision steel sections can be achieved by cold deformation of hot-rolled stock, either by drawing through dismountable, monolithic, or roller-type drawing tools or by cold rolling in roller dies. The particularities of both techniques are compared for a number of complicated shaped sections, and the advantages of cold rolling are shown: a more uniform distribution of deformation (strain hardening) across the section, i.e., a greater margin of plasticity at the same reductions, and a smaller number of required operations. Rolling is recommended wherever the section shape and production volume permit. The rolling mill for the calibration of high-precision sections should have no fewer than two shafts (so that the size can be controlled in both directions) and arrangements to withstand the high axial stresses on the rollers that appear during rolling in skew dies. When manufacturing precise shaped sections by the cold rolling method, fewer operations are required than in cold-drawing manufacture

  11. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    Science.gov (United States)

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time by sampling method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggests that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of

  12. A New Method of High-Precision Positioning for an Indoor Pseudolite without Using the Known Point Initialization.

    Science.gov (United States)

    Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe

    2018-06-20

    Due to the great influence of multipath effects, noise, and clock errors on the pseudorange, the carrier phase double-difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is mostly determined by the known point initialization (KPI) method, and then the ambiguities can be fixed with the LAMBDA method. In this paper, a new method to achieve high-precision indoor pseudolite positioning without using the KPI is proposed. The initial coordinates can be quickly obtained to meet the accuracy requirement of the indoor LAMBDA method. The detailed process of the method is as follows: Aiming at a low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by the AFM can meet the initial accuracy requirement of the LAMBDA method, so that the double-difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, are conducted to verify the feasibility of the new method. According to the results of the experiments, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope and two-centimeter or four-centimeter search steps are used to ensure precision at the centimeter level together with high search efficiency. After dealing with the problem of multiple peaks caused by the ambiguity cosine function, the coordinate information of the maximum ambiguity function value (AFV) is taken as the initial value for the LAMBDA method, and the ambiguities can be fixed quickly. The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter

  13. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    Directory of Open Access Journals (Sweden)

    Emma Wells

    Full Text Available To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, the appropriateness of test methods for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: (1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; (2) conducting volunteer testing to assess ease-of-use; and (3) determining costs. Accuracy was greatest in titration methods (up to 12.4% error compared to the reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration

  14. A highly precise frequency-based method for estimating the tension of an inclined cable with unknown boundary conditions

    Science.gov (United States)

    Ma, Lin

    2017-11-01

    This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
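
    As context for why frequency-based tension estimation works at all, the simplest model ignores sag, bending stiffness and end restraints and treats the cable as a taut string, for which the tension follows directly from a measured natural frequency; the paper's method refines this with the additional parameters listed above. The sketch below evaluates only that textbook taut-string relation T = 4 m L^2 (f_n / n)^2 with invented cable properties, not the paper's multiparameter model.

        def taut_string_tension(mass_per_length, length, freq_hz, mode_n=1):
            """Textbook taut-string estimate: f_n = (n / 2L) * sqrt(T / m)  =>  T = 4 m L^2 (f_n / n)^2."""
            return 4.0 * mass_per_length * length ** 2 * (freq_hz / mode_n) ** 2

        # invented cable: 60 kg/m, 120 m long, first natural frequency 1.05 Hz
        print("estimated tension: %.0f kN" % (taut_string_tension(60.0, 120.0, 1.05) / 1e3))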

  15. Soil chemical sensor and precision agricultural chemical delivery system and method

    Science.gov (United States)

    Colburn, Jr., John W.

    1991-01-01

    A real time soil chemical sensor and precision agricultural chemical delivery system includes a plurality of ground-engaging tools in association with individual soil sensors which measure soil chemical levels. The system includes the addition of a solvent which rapidly saturates the soil/tool interface to form a conductive solution of chemicals leached from the soil. A multivalent electrode, positioned within a multivalent frame of the ground-engaging tool, applies a voltage or impresses a current between the electrode and the tool frame. A real-time soil chemical sensor and controller senses the electrochemical reaction resulting from the application of the voltage or current to the leachate, measures it by resistivity methods, and compares it against pre-set resistivity levels for substances leached by the solvent. Still greater precision is obtained by calibrating for the secondary current impressed through solvent-less soil. The appropriate concentration is then found and the servo-controlled delivery system applies the appropriate amount of fertilizer or agricultural chemicals substantially in the location from which the soil measurement was taken.

  16. Sternal instability measured with radiostereometric analysis. A study of method feasibility, accuracy and precision.

    Science.gov (United States)

    Vestergaard, Rikke Falsig; Søballe, Kjeld; Hasenkam, John Michael; Stilling, Maiken

    2018-05-18

    A small, but unstable, saw-gap may hinder bone-bridging and induce development of painful sternal dehiscence. We propose the use of Radiostereometric Analysis (RSA) for evaluation of sternal instability and present a method validation. Four bone analogs (phantoms) were sternotomized and tantalum beads were inserted in each half. The models were reunited with wire cerclage and placed in a radiolucent separation device. Stereoradiographs (n = 48) of the phantoms in 3 positions were recorded at 4 imposed separation points. The accuracy and precision was compared statistically and presented as translations along the 3 orthogonal axes. 7 sternotomized patients were evaluated for clinical RSA precision by double-examination stereoradiographs (n = 28). In the phantom study, we found no systematic error (p > 0.3) between the three phantom positions, and precision for evaluation of sternal separation was 0.02 mm. Phantom accuracy was mean 0.13 mm (SD 0.25). In the clinical study, we found a detection limit of 0.42 mm for sternal separation and of 2 mm for anterior-posterior dislocation of the sternal halves for the individual patient. RSA is a precise and low-dose image modality feasible for clinical evaluation of sternal stability in research. ClinicalTrials.gov Identifier: NCT02738437 , retrospectively registered.

  17. Precision and Accuracy of k0-NAA Method for Analysis of Multi Elements in Reference Samples

    International Nuclear Information System (INIS)

    Sri-Wardani

    2004-01-01

    The accuracy and precision of the k0-NAA method were determined in the analysis of multiple elements contained in reference samples. The results for multiple elements in the SRM 1633b sample were obtained with biases of up to 20%, but with good accuracy and precision overall. The results for As, Cd and Zn in the CCQM-P29 rice flour sample were very good, with biases of 0.5 - 5.6%. (author)

  18. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.

    Science.gov (United States)

    Ender, Andreas; Mehl, Albert

    2013-02-01

    A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them with conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent-sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch, with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm), with small deviations in the second molar region (P<.05). Digital impressions were significantly less accurate, with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.05); deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  19. Determination of Ethanol in Kombucha Products: Single-Laboratory Validation, First Action 2016.12.

    Science.gov (United States)

    Ebersole, Blake; Liu, Ying; Schmidt, Rich; Eckert, Matt; Brown, Paula N

    2017-05-01

    Kombucha is a fermented nonalcoholic beverage that has drawn government attention due to the possible presence of excess ethanol (≥0.5% alcohol by volume; ABV). A validated method that provides better precision and accuracy for measuring ethanol levels in kombucha is urgently needed by the kombucha industry. The current study validated a method for determining ethanol content in commercial kombucha products. The ethanol content in kombucha was measured using headspace GC with flame ionization detection. An ethanol standard curve ranging from 0.05 to 5.09% ABV was used, with correlation coefficients greater than 99.9%. The method detection limit was 0.003% ABV and the LOQ was 0.01% ABV. The RSDr ranged from 1.62 to 2.21% and the Horwitz ratio ranged from 0.4 to 0.6. The average accuracy of the method was 98.2%. This method was validated following the guidelines for single-laboratory validation by AOAC INTERNATIONAL and meets the requirements set by AOAC SMPR 2016.001, "Standard Method Performance Requirements for Determination of Ethanol in Kombucha."

  20. High-precision terahertz frequency modulated continuous wave imaging method using continuous wavelet transform

    Science.gov (United States)

    Zhou, Yu; Wang, Tianyi; Dai, Bing; Li, Wenjun; Wang, Wei; You, Chengwu; Wang, Kejia; Liu, Jinsong; Wang, Shenglie; Yang, Zhengang

    2018-02-01

    Inspired by the extensive application of terahertz (THz) imaging technologies in the field of aerospace, we exploit a THz frequency-modulated continuous-wave (FMCW) imaging method with a continuous wavelet transform (CWT) algorithm to inspect a multilayer heat shield made of special materials. This method uses the FMCW system to capture the reflected THz signal and then processes the image data with the CWT using different basis functions. By calculating the sizes of the defect areas in the final images and comparing the results with the real samples, a practical high-precision THz imaging method is demonstrated. Our method can be an effective tool for THz nondestructive testing of composites, drugs, and some cultural heritage objects.
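
    The record does not specify its wavelet bases or processing chain; as a generic illustration of applying a continuous wavelet transform to a reflected-pulse trace, the sketch below uses the PyWavelets package (an assumption) with a Morlet basis on a synthetic signal containing a front-surface echo and a weaker internal echo.

        import numpy as np
        import pywt

        # synthetic reflected trace: a strong echo at t = 0.30 and a weaker one at t = 0.62
        fs = 10_000.0                      # samples per unit time (arbitrary units)
        t = np.arange(0, 1.0, 1.0 / fs)
        rng = np.random.default_rng(3)
        signal = (np.exp(-((t - 0.30) / 0.01) ** 2) * np.sin(2 * np.pi * 400 * t)
                  + 0.5 * np.exp(-((t - 0.62) / 0.01) ** 2) * np.sin(2 * np.pi * 400 * t)
                  + 0.05 * rng.normal(size=t.size))

        # continuous wavelet transform with a Morlet basis function
        scales = np.arange(1, 128)
        coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

        # echoes appear as ridges in the scalogram; locate the strongest one in time
        energy = (np.abs(coeffs) ** 2).sum(axis=0)
        print("strongest echo near t = %.2f" % t[int(np.argmax(energy))])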

  1. Precision digital control systems

    Science.gov (United States)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  2. Development of precise analytical methods for strontium and lanthanide isotopic ratios using multiple collector inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Ohno, Takeshi; Takaku, Yuichi; Hisamatsu, Shun'ichi

    2007-01-01

    We have developed precise analytical methods for strontium and lanthanide isotopic ratios using multiple collector ICP mass spectrometry (MC-ICP-MS) for experimental and environmental studies of their behavior. In order to obtain precise isotopic data using MC-ICP-MS, the mass discrimination effect was corrected by an exponential-law correction method. The resulting isotopic data demonstrated that highly precise isotopic analyses (better than 0.1 per mille as 2SD) could be achieved. We also adopted a de-solvating nebulizer system to improve the sensitivity. This system minimized the water load into the plasma and provided about five times higher analyte intensity than a conventional nebulizer system. (author)
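    A hedged sketch of the exponential-law mass bias correction commonly applied in MC-ICP-MS work; the measured ratios, reference ratio, and atomic masses below are illustrative placeholders, not values from the study.

    ```python
    # Exponential-law correction: r_true = r_measured * (m_num / m_den) ** beta
    import math

    def mass_bias_beta(r_measured, r_reference, mass_num, mass_den):
        """Solve the exponential law for beta using an invariant reference ratio."""
        return math.log(r_reference / r_measured) / math.log(mass_num / mass_den)

    def correct_ratio(r_measured, mass_num, mass_den, beta):
        """Apply the exponential-law correction with a given beta."""
        return r_measured * (mass_num / mass_den) ** beta

    # Hypothetical numbers: beta from 86Sr/88Sr, then applied to measured 87Sr/86Sr
    beta = mass_bias_beta(r_measured=0.1175, r_reference=0.1194,
                          mass_num=85.9093, mass_den=87.9056)
    corrected = correct_ratio(r_measured=0.7085, mass_num=86.9089,
                              mass_den=85.9093, beta=beta)
    print(f"beta = {beta:.3f}, corrected 87Sr/86Sr = {corrected:.5f}")
    ```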

  3. Comparison of ATLAS tilecal module No. 8 high-precision metrology measurement results obtained by laser (JINR) and photogrammetric (CERN) methods

    International Nuclear Information System (INIS)

    Batusov, V.; Budagov, Yu.; Gayde, J.C.

    2002-01-01

    The high-precision assembly of large experimental set-ups is a principal necessity for the successful execution of the forthcoming LHC research programme in the TeV-beams. The creation of an adequate survey and control metrology method is an essential part of the detector construction scenario. This work contains the dimension measurement data for ATLAS hadron calorimeter MODULE No. 8 (6 m, 22 tons) which were obtained by laser and by photogrammetry methods. The comparative data analysis demonstrates agreement of the measurements within ± 70 μm. This means that these two clearly independent methods can be combined, giving rise to a new-generation engineering culture: high-precision metrology for the precision assembly of large-scale massive objects

  4. Comparison of ATLAS Tilecal MODULE No 8 high-precision metrology measurement results obtained by laser (JINR) and photogrammetric (CERN) methods

    CERN Document Server

    Batusov, V; Gayde, J C; Khubua, J I; Lasseur, C; Lyablin, M V; Miralles-Verge, L; Nessi, Marzio; Rusakovitch, N A; Sissakian, A N; Topilin, N D

    2002-01-01

    The high-precision assembly of large experimental set-ups is a principal necessity for the successful execution of the forthcoming LHC research programme in the TeV-beams. The creation of an adequate survey and control metrology method is an essential part of the detector construction scenario. This work contains the dimension measurement data for ATLAS hadron calorimeter MODULE No. 8 (6 m, 22 tons) which were obtained by laser and by photogrammetry methods. The comparative data analysis demonstrates agreement of the measurements within ±70 μm. This means that these two clearly independent methods can be combined, giving rise to a new-generation engineering culture: high-precision metrology for the precision assembly of large-scale massive objects. (3 refs).

  5. Precision Radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework.

    Science.gov (United States)

    Oakden-Rayner, Luke; Carneiro, Gustavo; Bessen, Taryn; Nascimento, Jacinto C; Bradley, Andrew P; Palmer, Lyle J

    2017-05-10

    Precision medicine approaches rely on obtaining precise knowledge of the true state of health of an individual patient, which results from a combination of their genetic risks and environmental exposures. This approach is currently limited by the lack of effective and efficient non-invasive medical tests to define the full range of phenotypic variation associated with individual health. Such knowledge is critical for improved early intervention, for better treatment decisions, and for ameliorating the steadily worsening epidemic of chronic disease. We present proof-of-concept experiments to demonstrate how routinely acquired cross-sectional CT imaging may be used to predict patient longevity as a proxy for overall individual health and disease status using computer image analysis techniques. Despite the limitations of a modest dataset and the use of off-the-shelf machine learning methods, our results are comparable to previous 'manual' clinical methods for longevity prediction. This work demonstrates that radiomics techniques can be used to extract biomarkers relevant to one of the most widely used outcomes in epidemiological and clinical research - mortality, and that deep learning with convolutional neural networks can be usefully applied to radiomics research. Computer image analysis applied to routinely collected medical images offers substantial potential to enhance precision medicine initiatives.

  6. Role of endocortical contouring methods on precision of HR-pQCT-derived cortical micro-architecture in postmenopausal women and young adults.

    Science.gov (United States)

    Kawalilak, C E; Johnston, J D; Cooper, D M L; Olszynski, W P; Kontulainen, S A

    2016-02-01

    Precision errors of cortical bone micro-architecture from high-resolution peripheral quantitative computed tomography (HR-pQCT) ranged from 1 to 16 % and did not differ between automatic or manually modified endocortical contour methods in postmenopausal women or young adults. In postmenopausal women, manually modified contours led to generally higher cortical bone properties when compared to the automated method. First, the objective of the study was to define in vivo precision errors (coefficient of variation root mean square (CV%RMS)) and least significant change (LSC) for cortical bone micro-architecture using two endocortical contouring methods, automatic (AUTO) and manually modified (MOD), in two groups (postmenopausal women and young adults) from high-resolution pQCT (HR-pQCT) scans. Second, it was to compare precision errors and bone outcomes obtained with both methods within and between groups. Using HR-pQCT, we scanned the distal radius and tibia of 34 postmenopausal women (mean age ± SD 74 ± 7 years) and 30 young adults (27 ± 9 years) twice. Cortical micro-architecture was determined using AUTO and MOD contour methods. CV%RMS and LSC were calculated. Repeated measures and multivariate ANOVA were used to compare mean CV% and bone outcomes between the methods within and between the groups. Significance was accepted at P < 0.05. Compared to young adults, postmenopausal women had better precision for radial cortical porosity (precision difference 9.3 %) and pore volume (7.5 %) with MOD. Young adults had better precision for cortical thickness (0.8 %, MOD) and tibial cortical density (0.2 %, AUTO). In postmenopausal women, MOD resulted in 0.2-54 % higher values for most cortical outcomes, as well as 6-8 % lower radial and tibial cortical BMD and 2 % lower tibial cortical thickness. Results suggest that AUTO and MOD endocortical contour methods provide comparable repeatability. In postmenopausal women, manual modification of endocortical contours led to
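    The two precision statistics named in this record can be sketched briefly; the duplicate-scan values are invented, and CV%RMS and LSC are computed in the usual way (pair SD from duplicates, LSC of about 2.77 times the precision error).

    ```python
    # Minimal sketch: CV%RMS across repeat scans and the least significant change.
    import numpy as np

    def cv_rms_percent(scan1, scan2):
        """Root-mean-square CV (%) from duplicate measurements of each participant."""
        scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
        pair_sd = np.abs(scan1 - scan2) / np.sqrt(2.0)   # SD of each duplicate pair
        pair_mean = (scan1 + scan2) / 2.0
        cv = 100.0 * pair_sd / pair_mean
        return np.sqrt(np.mean(cv ** 2))

    # Hypothetical cortical thickness (mm) from two repeat HR-pQCT scans
    scan1 = [1.10, 0.95, 1.22, 1.05, 0.88]
    scan2 = [1.12, 0.93, 1.20, 1.08, 0.90]

    cv_rms = cv_rms_percent(scan1, scan2)
    lsc = 2.77 * cv_rms                    # 95% least significant change, in percent
    print(f"CV%RMS = {cv_rms:.1f}%, LSC = {lsc:.1f}%")
    ```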

  7. Improvement of precision method of spectrophotometry with inner standardization and its use in plutonium solutions analysis

    International Nuclear Information System (INIS)

    Stepanov, A.V.; Stepanov, D.A.; Nikitina, S.A.; Gogoleva, T.D.; Grigor'eva, M.G.; Bulyanitsa, L.S.; Panteleev, Yu.A.; Pevtsova, E.V.; Domkin, V.D.; Pen'kin, M.V.

    2006-01-01

    A precision spectrophotometric method with internal standardization is used for the analysis of pure Pu solutions. The spectrophotometer and the spectrophotometric analysis procedure were improved to decrease the random component of the relative error of the method. The influence of U and Np impurities and of corrosion products on the systematic component of the error, and the effect of fluoride ion on the completeness of Pu oxidation during sample preparation, were studied [ru

  8. Mixed-Precision Spectral Deferred Correction: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
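    The record does not spell out the SDC algorithm; as a loose, analogous illustration of the mixed-precision idea (do the repeated expensive work in reduced precision, accumulate corrections in double precision), the sketch below uses classical iterative refinement of a linear solve rather than SDC itself.

    ```python
    # Analogy only, not SDC: float32 solves corrected by float64 residuals.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)      # well-conditioned system
    x_true = rng.standard_normal(n)
    b = A @ x_true

    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # cheap solve

    for sweep in range(5):                               # "correction sweeps"
        r = b - A @ x                                    # residual in double precision
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx
        print(sweep, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```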

  9. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
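    A small sketch of the curve-fitting idea described above, assuming the paralyzable model m = n*exp(-n*tau) and synthetic reference-source data; the rates, noise level, and resolving time are invented.

    ```python
    # Fit the paralyzable dead-time model to reference data and derive corrections.
    import numpy as np
    from scipy.optimize import curve_fit

    def observed_rate(true_rate, tau):
        """Paralyzable detector: observed = true * exp(-true * tau)."""
        return true_rate * np.exp(-true_rate * tau)

    tau_actual = 4e-6                                   # "unknown" resolving time (s)
    true_rates = np.linspace(1e4, 2e5, 25)              # reference-source rates (cps)
    rng = np.random.default_rng(0)
    observed = observed_rate(true_rates, tau_actual) * (1 + 0.01 * rng.standard_normal(25))

    (tau_fit,), _ = curve_fit(observed_rate, true_rates, observed, p0=[1e-6])

    # Correction factor: ratio of the true rate to the fitted observed rate
    correction = true_rates / observed_rate(true_rates, tau_fit)
    print(f"fitted tau = {tau_fit:.2e} s, correction at highest rate = {correction[-1]:.3f}")
    ```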

  10. In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions

    OpenAIRE

    Ender, Andreas; Attin, Thomas; Mehl, Albert

    2016-01-01

    STATEMENT OF PROBLEM: Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. PURPOSE: The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. MATERIAL AND METHODS: Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsilox...

  11. Digital Integration Method (DIM): A new method for the precise correlation of OCT and fluorescein angiography

    International Nuclear Information System (INIS)

    Hassenstein, A.; Richard, G.; Inhoffen, W.; Scholz, F.

    2007-01-01

    The new Digital Integration Method (DIM) provides for the first time the anatomically precise integration of the OCT scan position into the angiogram (fluorescein angiography, FLA), using reference markers at corresponding vessel crossings. An exact correlation of angiographic and morphological pathological findings is therefore possible and leads to a better understanding of OCT and FLA. Patients with occult findings on FLA were the group that profited most. For occult leakages, DIM could provide additional information, such as serous detachment of the retinal pigment epithelium (RPE) in a topography. Until now it was unclear whether the same localization within the lesion was examined by FLA and OCT, especially when different staff were performing and interpreting the examinations. Using DIM, this problem could be solved with objective markers. This technique is a requirement for follow-up examinations by OCT. Using DIM for an objective, reliable and precise correlation of OCT and FLA findings, it is now possible to provide the identical scan position at follow-up. For follow-up in clinical studies it is therefore mandatory to use DIM to improve the evidence-based statement of OCT and the quality of the study. (author) [de

  12. A modified precise integration method based on Magnus expansion for transient response analysis of time varying dynamical structure

    International Nuclear Information System (INIS)

    Yue, Cong; Ren, Xingmin; Yang, Yongfeng; Deng, Wangqun

    2016-01-01

    This paper provides a precise and effective methodology for computing the forced vibration response of a time-varying linear rotating structure subjected to unbalance excitation. A modified algorithm based on the time-step precise integration method and the Magnus expansion is developed for instantaneous dynamic problems. The iterative solution is obtained using transition and dimensional-increment matrices. Numerical examples for a typical accelerating rotor system, including gyroscopic moments and mass unbalance forces, demonstrate the validity, effectiveness and accuracy of the approach in comparison with the Newmark-β method. It is shown that the proposed algorithm has high accuracy without loss of efficiency.
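    As a rough illustration only (not the authors' combined precise-integration and Magnus algorithm), a first-order Magnus step for a time-varying linear system x' = A(t)x can be written with SciPy's matrix exponential; the example system is an invented oscillator with drifting stiffness.

    ```python
    # One-term Magnus stepping: x(t+h) ~= expm(A(t + h/2) * h) @ x(t)
    import numpy as np
    from scipy.linalg import expm

    def A(t):
        """Time-varying system matrix (toy oscillator with slowly varying stiffness)."""
        k = 1.0 + 0.5 * np.sin(0.3 * t)
        return np.array([[0.0, 1.0],
                         [-k, -0.05]])

    h = 0.01
    x = np.array([1.0, 0.0])
    for t in np.arange(0.0, 10.0, h):
        x = expm(A(t + h / 2.0) * h) @ x      # midpoint (first-order Magnus) step

    print("state at t = 10:", x)
    ```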

  13. Novel Methods to Enhance Precision and Reliability in Muscle Synergy Identification during Walking

    Science.gov (United States)

    Kim, Yushin; Bulea, Thomas C.; Damiano, Diane L.

    2016-01-01

    Muscle synergies are hypothesized to reflect modular control of muscle groups via descending commands sent through multiple neural pathways. Recently, the number of synergies has been reported as a functionally relevant indicator of motor control complexity in individuals with neurological movement disorders. Yet the number of synergies extracted during a given activity, e.g., gait, varies within and across studies, even for unimpaired individuals. With no standardized methods for precise determination, this variability remains unexplained making comparisons across studies and cohorts difficult. Here, we utilize k-means clustering and intra-class and between-level correlation coefficients to precisely discriminate reliable from unreliable synergies. Electromyography (EMG) was recorded bilaterally from eight leg muscles during treadmill walking at self-selected speed. Muscle synergies were extracted from 20 consecutive gait cycles using non-negative matrix factorization. We demonstrate that the number of synergies is highly dependent on the threshold when using the variance accounted for by reconstructed EMG. Beyond use of threshold, our method utilized a quantitative metric to reliably identify four or five synergies underpinning walking in unimpaired adults and revealed synergies having poor reproducibility that should not be considered as true synergies. We show that robust and unreliable synergies emerge similarly, emphasizing the need for careful analysis in those with pathology. PMID:27695403
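    A minimal sketch of the extraction step described above, using scikit-learn's NMF on surrogate EMG envelopes and the variance-accounted-for (VAF) criterion; the data, the number of built-in synergies, and the range of candidate synergy numbers are assumptions.

    ```python
    # Illustrative NMF synergy extraction with a VAF curve on surrogate EMG data.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    # Surrogate envelopes: 8 muscles x 2000 samples built from 4 "true" synergies
    W_true = np.abs(rng.standard_normal((8, 4)))
    H_true = np.abs(rng.standard_normal((4, 2000)))
    emg = W_true @ H_true + 0.05 * np.abs(rng.standard_normal((8, 2000)))

    for n_syn in range(1, 7):
        model = NMF(n_components=n_syn, init="nndsvda", max_iter=1000, random_state=0)
        W = model.fit_transform(emg)            # synergy weights
        H = model.components_                   # activation coefficients
        vaf = 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)
        print(f"{n_syn} synergies: VAF = {vaf:.3f}")
    ```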

  14. FROM PERSONALIZED TO PRECISION MEDICINE

    Directory of Open Access Journals (Sweden)

    K. V. Raskina

    2017-01-01

    Full Text Available The need to maintain a high quality of life against the backdrop of its inevitably increasing duration is one of the main problems of modern health care. The concept of the "right drug to the right patient at the right time", which at first bore the name "personalized", is now unanimously endorsed by the international scientific community as "precision medicine". Precision medicine takes all individual characteristics into account: genetic diversity, environment, lifestyle, and even the bacterial microflora, and it also involves the use of the latest technological developments, which serves to ensure that each patient receives the care best suited to his or her condition. In the United States, Canada and France, national precision medicine programs have already been presented and implemented. The aim of this review is to describe the dynamic integration of precision medicine methods into routine medical practice and the life of modern society. The description of the prospects of the new paradigm is complemented by figures demonstrating the success already achieved in the application of precision methods, for example in the targeted therapy of cancer. All in all, the presence of real-life examples proving the regularity of the transition to a new paradigm, and the wide and constantly evolving range of technical and diagnostic capabilities available, make the all-round transition to precision medicine almost inevitable.

  15. System and method for high precision isotope ratio destructive analysis

    Science.gov (United States)

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  16. In vivo precision of conventional and digital methods for obtaining quadrant dental impressions

    OpenAIRE

    Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert

    2016-01-01

    OBJECTIVES Quadrant impressions are commonly used as alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. MATERIALS AND METHODS Impressions were obtained via two conventional (metal full-arch tray, CI, ...

  17. Comprehensive Methods for Determining Space Effects on Air Force Systems

    Science.gov (United States)

    2009-08-04

    5.1.4. DMSP 3 and SWX Web Survey Plot Files (SSIES, SSJ & SSM); 5.1.5. Raw Sensor Data Record Files (RSDR); 5.2. DMSP J4/J5 Data...product about 50% of the time. On the other hand, many negative ions were less reactive, particularly SF5-, SF6-, CO3-, and NCV. However, F-, Cl-...processing is done on a daily basis with a crontab job initiating the shell script at a specified time in the morning. 5.1.4. DMSP 3 and SWX Web Survey Plot Files (SSIES, SSJ & SSM)

  18. A task specific uncertainty analysis method for least-squares-based form characterization of ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Ren, M J; Cheung, C F; Kong, L B

    2012-01-01

    In the measurement of ultra-precision freeform surfaces, least-squares-based form characterization methods are widely used to evaluate the form error of the measured surfaces. Although many methodologies have been proposed in recent years to improve the efficiency of the characterization process, relatively little research has been conducted on the analysis of associated uncertainty in the characterization results which may result from those characterization methods being used. As a result, this paper presents a task specific uncertainty analysis method with application in the least-squares-based form characterization of ultra-precision freeform surfaces. That is, the associated uncertainty in the form characterization results is estimated when the measured data are extracted from a specific surface with specific sampling strategy. Three factors are considered in this study which include measurement error, surface form error and sample size. The task specific uncertainty analysis method has been evaluated through a series of experiments. The results show that the task specific uncertainty analysis method can effectively estimate the uncertainty of the form characterization results for a specific freeform surface measurement
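    A hedged sketch of the general idea rather than the authors' method: re-fit a least-squares reference form to the sampled points under simulated measurement noise and take the spread of the form-error results as the task-specific uncertainty. The surface, sampling grid, and noise level are invented.

    ```python
    # Monte Carlo uncertainty of a least-squares flatness (form error) result.
    import numpy as np

    rng = np.random.default_rng(0)

    def flatness_lsq_plane(points):
        """Fit z = a*x + b*y + c by least squares; return the peak-to-valley residual."""
        x, y, z = points.T
        design = np.column_stack([x, y, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(design, z, rcond=None)
        residuals = z - design @ coeffs
        return residuals.max() - residuals.min()

    # Specific sampling strategy: 15 x 15 grid (mm) on a surface with built-in form error (um)
    gx, gy = np.meshgrid(np.linspace(0, 10, 15), np.linspace(0, 10, 15))
    form = 0.2 * np.sin(gx)
    points = np.column_stack([gx.ravel(), gy.ravel(), form.ravel()])

    sigma_meas = 0.05                           # assumed measurement noise (um)
    results = []
    for _ in range(2000):
        noisy = points.copy()
        noisy[:, 2] += rng.normal(0.0, sigma_meas, len(points))
        results.append(flatness_lsq_plane(noisy))

    print(f"flatness = {np.mean(results):.3f} um, uncertainty (1 SD) = {np.std(results):.3f} um")
    ```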

  19. Non-precision approach in manual mode

    Directory of Open Access Journals (Sweden)

    М. В. Коршунов

    2013-07-01

    Full Text Available The method of flying a non-precision approach manually at a constant path angle is considered. The advantage of this method is that constructing the approach with a constant path angle provides a stable flight path. A detailed analysis of the feasibility of an approach flown by this method is also presented. The conclusions contain recommendations on the use of the described non-precision approach method during training flights.

  20. Precision genome editing

    DEFF Research Database (Denmark)

    Steentoft, Catharina; Bennett, Eric P; Schjoldager, Katrine Ter-Borch Gram

    2014-01-01

    Precise and stable gene editing in mammalian cell lines has until recently been hampered by the lack of efficient targeting methods. While different gene silencing strategies have had tremendous impact on many biological fields, they have generally not been applied with wide success in the field...... of glycobiology, primarily due to their low efficiencies, with resultant failure to impose substantial phenotypic consequences upon the final glycosylation products. Here, we review novel nuclease-based precision genome editing techniques enabling efficient and stable gene editing, including gene disruption...... by introducing single or double-stranded breaks at a defined genomic sequence. We here compare and contrast the different techniques and summarize their current applications, highlighting cases from the field of glycobiology as well as pointing to future opportunities. The emerging potential of precision gene...

  1. Methods for semi-automated indexing for high precision information retrieval

    Science.gov (United States)

    Berrios, Daniel C.; Cucina, Russell J.; Fagan, Lawrence M.

    2002-01-01

    OBJECTIVE: To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. DESIGN: Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. PARTICIPANTS: Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. MEASUREMENTS: Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. RESULTS: Compared with manual methods, ISAID decreased indexing times greatly. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65% with a mean of 41%, 31%, and 40% for each of three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). SUMMARY: Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to increased indexing speed and accuracy.

  2. Precision of a new bedside method for estimation of the circulating blood volume

    DEFF Research Database (Denmark)

    Christensen, P; Eriksen, B; Henneberg, S W

    1993-01-01

    The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO-saturation of hemoglobin before...... ventilation with the CO gas mixture. The amount of CO administered during each determination of CBV resulted in an increase in the CO saturation of hemoglobin of 2.1%-3.9%. A theoretical noise propagation analysis was performed by means of the Monte Carlo method. The analysis showed that a CO dose...... patients. The coefficients of variation were 6.2% and 4.7% in healthy and diseased subjects, respectively. Furthermore, the day-to-day variation of the method with respect to the total amount of circulating hemoglobin (nHb) and CBV was determined from duplicate estimates separated by 24-48 h. In conclusion...

  3. A TIMS-based method for the high precision measurements of the three-isotope potassium composition of small samples

    DEFF Research Database (Denmark)

    Wielandt, Daniel Kim Peel; Bizzarro, Martin

    2011-01-01

    A novel thermal ionization mass spectrometry (TIMS) method for the three-isotope analysis of K has been developed, and ion chromatographic methods for the separation of K have been adapted for the processing of small samples. The precise measurement of K-isotopes is challenged by the presence of ...

  4. [Accuracy, precision and speed of parenteral nutrition admixture bags manufacturing: comparison between automated and manual methods].

    Science.gov (United States)

    Zegbeh, H; Pirot, F; Quessada, T; Durand, T; Vételé, F; Rose, A; Bréant, V; Aulagner, G

    2011-01-01

    The parenteral nutrition admixture (PNA) manufacturing in hospital pharmacy is realized by aseptic transfer (AT) or sterilizing filtration (SF). The development of filling systems for PNA manufacturing requires, in the absence of a standard, an evaluation against the traditional SF method. The filling accuracy of automated AT and SF was evaluated by mass and physical-chemistry tests under repeatability conditions (identical composition of PNA; n=five bags) and reproducibility conditions (different compositions of PNA; n=57 bags). For each manufacturing method, the filling precision and the average time for PNA bag manufacturing were evaluated starting from a PNA of identical composition and volume (n=five trials). The two manufacturing methods showed no significant difference in accuracy. The precision of both methods was within the limits generally admitted for acceptability of mass and physical-chemistry tests. However, the manufacturing time for SF was longer (five different binary admixtures in five bags) or shorter (one identical binary admixture in five bags) than the time recorded for automated AT. We show that serial manufacturing of PNA bags of identical composition by SF is faster than automated AT. Nevertheless, automated AT is faster than SF for PNA of variable composition. The choice of manufacturing method will be motivated by the nature (i.e., variable composition or not) of the manufactured bags. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  5. Setup for precise measurement of neutron lifetime by UCN storage method with inelastically scattered neutron detection

    International Nuclear Information System (INIS)

    Arzumanov, S.S; Bondarenko, L.N.; Gel'tenbort, P.; Morozov, V.I.; Nesvizhevskij, V.V.; Panin, Yu.N.; Strepetov, A.N.

    2007-01-01

    The experimental setup and the method of measuring the neutron lifetime with a precision of less than 1 s are described. The measurements will be carried out by storing ultracold neutrons (UCN) in vessels whose inner walls are coated with fluorine polymer oil, with simultaneous registration of inelastically scattered UCN leaving the storage vessels. The analysis of statistical and methodical errors is carried out. The calculated estimate of the measurement accuracy is presented [ru

  6. Precision manufacturing

    CERN Document Server

    Dornfeld, David

    2008-01-01

    Today there is a high demand for high-precision products. The manufacturing processes are now highly sophisticated and derive from a specialized genre called precision engineering. Precision Manufacturing provides an introduction to precision engineering and manufacturing with an emphasis on the design and performance of precision machines and machine tools, metrology, tooling elements, machine structures, sources of error, precision machining processes and precision process planning, as well as the critical role that precision machine design for manufacturing has played in technological developments over the last few hundred years. In addition, the influence of sustainable manufacturing requirements in precision processes is introduced. Drawing upon years of practical experience and using numerous examples and illustrative applications, David Dornfeld and Dae-Eun Lee cover precision manufacturing as it applies to: The importance of measurement and metrology in the context of Precision Manufacturing. Th...

  7. An automatic high precision registration method between large area aerial images and aerial light detection and ranging data

    Science.gov (United States)

    Du, Q.; Xie, D.; Sun, Y.

    2015-06-01

    The integration of digital aerial photogrammetry and Light Detection And Ranging (LiDAR) is an inevitable trend in the surveying and mapping field. We calculate exterior orientation elements of the images in the LiDAR coordinate frame to realize automatic high-precision registration between aerial images and LiDAR data. There are two ways to calculate the orientation elements. One is single-image spatial resection using image-matched 3D points that are registered to the LiDAR data. The other is Position and Orientation System (POS) data supported aerotriangulation, in which the high-precision registration points are selected as Ground Control Points (GCPs) instead of measuring GCPs manually. The registration experiments indicate that this method of registering aerial images to LiDAR points offers a clear advantage in automation and precision compared with manual registration.

  8. Research for developing precise tsunami evaluation methods. Probabilistic tsunami hazard analysis/numerical simulation method with dispersion and wave breaking

    International Nuclear Information System (INIS)

    2007-01-01

    The present report introduces the main results of investigations on precise tsunami evaluation methods, which were carried out from the viewpoint of safety evaluation for nuclear power facilities and deliberated by the Tsunami Evaluation Subcommittee. A framework for probabilistic tsunami hazard analysis (PTHA) based on a logic tree is proposed, and a calculation for the Pacific side of northeastern Japan is performed as a case study. Tsunami motions with dispersion and wave breaking were investigated both experimentally and numerically. The numerical simulation method is verified for its practicability by applying it to a historical tsunami. Tsunami force is also investigated, and formulae for the tsunami pressure acting on breakwaters and on buildings due to an inundating tsunami are proposed. (author)

  9. Simultaneous Detection of Genetically Modified Organisms in a Mixture by Multiplex PCR-Chip Capillary Electrophoresis.

    Science.gov (United States)

    Patwardhan, Supriya; Dasari, Srikanth; Bhagavatula, Krishna; Mueller, Steffen; Deepak, Saligrama Adavigowda; Ghosh, Sudip; Basak, Sanjay

    2015-01-01

    An efficient PCR-based method to trace genetically modified food and feed products is in demand due to regulatory requirements and contaminant issues in India. However, post-PCR detection with conventional methods has limited sensitivity in amplicon separation, which is crucial in multiplexing. The study aimed to develop a sensitive post-PCR detection method by using PCR-chip capillary electrophoresis (PCR-CCE) to detect and identify specific genetically modified organisms in their genomic DNA mixture by targeting event-specific nucleotide sequences. Using the PCR-CCE approach, novel multiplex methods were developed to detect MON531 cotton, EH 92-527-1 potato, Bt176 maize, GT73 canola, or GA21 maize simultaneously when their genomic DNAs in mixtures were amplified using their primer mixture. The repeatability RSD (RSDr) of the peak migration time was 0.06 and 3.88% for MON531 and Bt176, respectively. The RSD (RSDR) of the Cry1Ac peak ranged from 0.12 to 0.40% in multiplex methods. The method was sensitive enough to resolve amplicons differing in size by as little as 4 bp. The PCR-CCE method is suitable to detect multiple genetically modified events in a composite DNA sample by tagging their event-specific sequences.

  10. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision

    OpenAIRE

    Ender, Andreas; Mehl, Albert

    2013-01-01

    STATEMENT OF PROBLEM: A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. PURPOSE: The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. MATERIAL AND METHODS: A steel reference dentate model was fabricated and measured with a...

  11. Assessing total nitrogen in surface-water samples--precision and bias of analytical and computational methods

    Science.gov (United States)

    Rus, David L.; Patton, Charles J.; Mueller, David K.; Crawford, Charles G.

    2013-01-01

    The characterization of total-nitrogen (TN) concentrations is an important component of many surface-water-quality programs. However, three widely used methods for the determination of total nitrogen—(1) derived from the alkaline-persulfate digestion of whole-water samples (TN-A); (2) calculated as the sum of total Kjeldahl nitrogen and dissolved nitrate plus nitrite (TN-K); and (3) calculated as the sum of dissolved nitrogen and particulate nitrogen (TN-C)—all include inherent limitations. A digestion process is intended to convert multiple species of nitrogen that are present in the sample into one measureable species, but this process may introduce bias. TN-A results can be negatively biased in the presence of suspended sediment, and TN-K data can be positively biased in the presence of elevated nitrate because some nitrate is reduced to ammonia and is therefore counted twice in the computation of total nitrogen. Furthermore, TN-C may not be subject to bias but is comparatively imprecise. In this study, the effects of suspended-sediment and nitrate concentrations on the performance of these TN methods were assessed using synthetic samples developed in a laboratory as well as a series of stream samples. A 2007 laboratory experiment measured TN-A and TN-K in nutrient-fortified solutions that had been mixed with varying amounts of sediment-reference materials. This experiment identified a connection between suspended sediment and negative bias in TN-A and detected positive bias in TN-K in the presence of elevated nitrate. A 2009–10 synoptic-field study used samples from 77 stream-sampling sites to confirm that these biases were present in the field samples and evaluated the precision and bias of TN methods. The precision of TN-C and TN-K depended on the precision and relative amounts of the TN-component species used in their respective TN computations. Particulate nitrogen had an average variability (as determined by the relative standard deviation) of 13

  12. Combining within and between instrument information to estimate precision

    International Nuclear Information System (INIS)

    Jost, J.W.; Devary, J.L.; Ward, J.E.

    1980-01-01

    When two instruments, both having replicated measurements, are used to measure the same set of items, between instrument information may be used to augment the within instrument precision estimate. A method is presented which combines the within and between instrument information to obtain an unbiased and minimum variance estimate of instrument precision. The method does not assume the instruments have equal precision
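    The record does not specify the estimator, so the sketch below is only one possible illustration: pool the within-instrument replicate variances and compare them with a between-instrument estimate from paired item means on simulated data. The simple averaging at the end is an assumption, not the paper's minimum-variance combination.

    ```python
    # Simulated two-instrument study with replicates; combine two variance estimates.
    import numpy as np

    rng = np.random.default_rng(2)
    n_items, n_rep = 30, 3
    truth = rng.normal(100.0, 5.0, n_items)
    sigma_a, sigma_b = 0.8, 1.2                     # "unknown" instrument SDs

    a = truth[:, None] + rng.normal(0, sigma_a, (n_items, n_rep))
    b = truth[:, None] + rng.normal(0, sigma_b, (n_items, n_rep))

    # Within-instrument information: pooled replicate variances
    within_total = a.var(axis=1, ddof=1).mean() + b.var(axis=1, ddof=1).mean()

    # Between-instrument information: Var(mean_a - mean_b) = (sigma_a^2 + sigma_b^2) / n_rep
    between_total = (a.mean(axis=1) - b.mean(axis=1)).var(ddof=1) * n_rep

    combined = 0.5 * (within_total + between_total)  # naive combination, for illustration
    print(f"within {within_total:.2f}, between {between_total:.2f}, "
          f"combined {combined:.2f}, true {sigma_a**2 + sigma_b**2:.2f}")
    ```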

  13. Sampling plans in attribute mode with multiple levels of precision

    International Nuclear Information System (INIS)

    Franklin, M.

    1986-01-01

    This paper describes a method for deriving sampling plans for nuclear material inventory verification. The method presented is different from the classical approach which envisages two levels of measurement precision corresponding to NDA and DA. In the classical approach the precisions of the two measurement methods are taken as fixed parameters. The new approach is based on multiple levels of measurement precision. The design of the sampling plan consists of choosing the number of measurement levels, the measurement precision to be used at each level and the sample size to be used at each level

  14. Standard guide for preparing and interpreting precision and bias statements in test method standards used in the nuclear industry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1992-01-01

    1.1 This guide covers terminology useful for the preparation and interpretation of precision and bias statements. 1.2 In formulating precision and bias statements, it is important to understand the statistical concepts involved and to identify the major sources of variation that affect results. Appendix X1 provides a brief summary of these concepts. 1.3 To illustrate the statistical concepts and to demonstrate some sources of variation, a hypothetical data set has been analyzed in Appendix X2. Reference to this example is made throughout this guide. 1.4 It is difficult and at times impossible to ship nuclear materials for interlaboratory testing. Thus, precision statements for test methods relating to nuclear materials will ordinarily reflect only within-laboratory variation.

  15. A Precision-Positioning Method for a High-Acceleration Low-Load Mechanism Based on Optimal Spatial and Temporal Distribution of Inertial Energy

    Directory of Open Access Journals (Sweden)

    Xin Chen

    2015-09-01

    Full Text Available High-speed and precise positioning are fundamental requirements for high-acceleration low-load mechanisms in integrated circuit (IC) packaging equipment. In this paper, we derive the transient nonlinear dynamic-response equations of high-acceleration mechanisms, which reveal that stiffness, frequency, damping, and driving frequency are the primary factors. Therefore, we propose a new structural optimization and velocity-planning method for the precision positioning of a high-acceleration mechanism based on optimal spatial and temporal distribution of inertial energy. For structural optimization, we first review the commonly used flexible multibody dynamic optimization based on the equivalent static loads method (ESLM) and then select a modified ESLM for optimal spatial distribution of inertial energy, so that not only the stiffness but also the inertia and frequency of the real mode shapes are considered. For velocity planning, we develop a new velocity-planning method based on nonlinear dynamic-response optimization with varying motion conditions. Our method was verified on a high-acceleration die bonder. The amplitude of residual vibration could be decreased by more than 20% via structural optimization and the positioning time could be reduced by more than 40% via asymmetric variable velocity planning. This method provides effective theoretical support for the precision positioning of high-acceleration low-load mechanisms.

  16. Methods used by Elsam for monitoring precision and accuracy of analytical results

    Energy Technology Data Exchange (ETDEWEB)

    Hinnerskov Jensen, J [Soenderjyllands Hoejspaendingsvaerk, Faelleskemikerne, Aabenraa (Denmark)

    1996-12-01

    Performing round robins at regular intervals is the primary method used by Elsam for monitoring the precision and accuracy of analytical results. The first round robin was started in 1974, and today 5 round robins are running. These are focused on: boiler water and steam, lubricating oils, coal, ion chromatography and dissolved gases in transformer oils. Besides the power plant laboratories in Elsam, the participants are power plant laboratories from the rest of Denmark, industrial and commercial laboratories in Denmark, and finally foreign laboratories. The calculated standard deviations or reproducibilities are compared with acceptable values. These values originate from ISO, ASTM and the like, or from our own experience. Besides providing the laboratories with a tool to check their momentary performance, the round robins are very suitable for evaluating systematic developments on a long-term basis. By splitting up the uncertainty according to methods, sample preparation/analysis, etc., knowledge can be extracted from the round robins for use in many other situations. (au)

  17. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Science.gov (United States)

    2010-07-01

    ... criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM....584 What are the precision and accuracy criteria for approval of test methods for determining the... available gravimetric sulfur standard in the range of 1-10 ppm sulfur shall not differ from the accepted...

  18. A High-Precision DEM Extraction Method Based on InSAR Data

    Science.gov (United States)

    Wang, Xinshuang; Liu, Lingling; Shi, Xiaoliang; Huang, Xitao; Geng, Wei

    2018-04-01

    In the 13th Five-Year Plan for the geoinformatics business it is proposed that new InSAR technology be applied to surveying and mapping production, which will become an innovation driver for the geoinformatics industry. Following this new surveying and mapping outline, this study uses X-band TerraSAR/TanDEM data of Bin County in Shaanxi Province. The processing steps are as follows: first, the baseline is estimated from the orbital data; second, the interferometric SAR image pairs are accurately registered; third, the interferogram is generated; fourth, the interferometric correlation is estimated and the flat-earth phase is removed. To suppress the phase noise and phase discontinuities present in the interferogram, a GAMMA adaptive filtering method is adopted. To address the "hole" problem of missing data in low-coherence areas, interpolation with a low-coherence-area mask is used to assist phase unwrapping. Then the interferometric baseline is refined using the ground control points. Finally, a 1:50 000 DEM is generated, and existing DEM data are used to verify its accuracy through statistical analysis. The results show that the improved InSAR data processing method presented here can produce a high-precision DEM of the study area in close agreement with the topography of the reference DEM; R2 reaches 0.9648, showing a strong positive correlation.

  19. Accuracy and precision in thermoluminescence dosimetry

    International Nuclear Information System (INIS)

    Marshall, T.O.

    1984-01-01

    The question of accuracy and precision in thermoluminescent dosimetry, particularly in relation to lithium fluoride phosphor, is discussed. The more important sources of error, including those due to the detectors, the reader, annealing and dosemeter design, are identified and methods of reducing their effects on accuracy and precision to a minimum are given. Finally, the accuracy and precision achievable for three quite different applications are discussed, namely, for personal dosimetry, environmental monitoring and for the measurement of photon dose distributions in phantoms. (U.K.)

  20. Environment-assisted precision measurement

    DEFF Research Database (Denmark)

    Goldstein, G.; Cappellaro, P.; Maze, J. R.

    2011-01-01

    We describe a method to enhance the sensitivity of precision measurements that takes advantage of the environment of a quantum sensor to amplify the response of the sensor to weak external perturbations. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which...... are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas that are coupled strongly to the sensor qubit; it does not depend on the exact values of the coupling strengths and is resilient to many forms of decoherence. The method...... achieves nearly Heisenberg-limited precision measurement, using a novel class of entangled states. We discuss specific applications to improve clock sensitivity using trapped ions and magnetic sensing based on electronic spins in diamond...

  1. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    Science.gov (United States)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming and particularly difficult for the simultaneous, real-time analysis of GPS data from hundreds or thousands of ground stations. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied to GPS seismology. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs, and then displacements are obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to
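    A toy illustration of the variometric idea mentioned above (epoch-to-epoch delta positions integrated once to give displacements), using synthetic numbers rather than real carrier-phase processing; the drift-removal step is a common practical add-on, not part of the cited formulation.

    ```python
    # Synthetic variometric-style processing: integrate noisy per-epoch deltas.
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(0.0, 120.0, 1.0)                              # 1 Hz epochs
    true_disp = 0.05 * np.sin(2 * np.pi * t / 30.0) * (t > 40)  # synthetic motion (m)

    # Per-epoch position changes as a variometric solution would supply, plus noise
    delta = np.diff(true_disp, prepend=0.0) + rng.normal(0, 0.002, t.size)

    displacement = np.cumsum(delta)                          # single integration of deltas
    drift = np.polyval(np.polyfit(t, displacement, 1), t)    # remove accumulated drift
    print("peak recovered displacement (m):", np.max(np.abs(displacement - drift)))
    ```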

  2. Factors affecting the precision of bone mineral measurements

    International Nuclear Information System (INIS)

    Cormack, J.; Evil, C.A.

    1990-01-01

    This paper discusses some statistical aspects of absorptiometric bone mineral measurements. In particular, the contribution of photon counting statistics to overall precision is estimated, and methods available for carrying out statistical comparisons of bone loss and determining their precision are reviewed. The use of replicate measurements as a means of improving measurement precision is also discussed. 11 refs

  3. A rapid and specific titrimetric method for the precise determination of plutonium using redox indicator

    International Nuclear Information System (INIS)

    Chitnis, R.T.; Dubey, S.C.

    1976-01-01

    A simple and rapid method for the determination of plutonium in plutonium nitrate solution and its application to Purex process solutions is discussed. The method involves the oxidation of plutonium to Pu(VI) with argentic oxide, followed by the destruction of the excess argentic oxide by means of sulphamic acid. The determination of plutonium is completed by adding ferrous ammonium sulphate solution, which reduces Pu(VI) to Pu(IV), and titrating the excess ferrous with standard potassium dichromate solution using sodium diphenylamine sulphonate as the internal indicator. The effect of the various reagents added during the oxidation and reduction of plutonium on the final titration has been investigated. The method works satisfactorily for the analysis of plutonium in the range of 0.5 to 5 mg. The precision of the method is found to be within 0.1%. (author)

  4. Extending the precision and efficiency of the all-electron full-potential linearized augmented plane-wave density-functional theory method

    International Nuclear Information System (INIS)

    Michalicek, Gregor

    2015-01-01

    Density functional theory (DFT) is the most widely-used first-principles theory for analyzing, describing and predicting the properties of solids based on the fundamental laws of quantum mechanics. The success of the theory is a consequence of powerful approximations to the unknown exchange and correlation energy of the interacting electrons and of sophisticated electronic structure methods that enable the computation of the density functional equations on a computer. A widely used electronic structure method is the full-potential linearized augmented plane-wave (FLAPW) method, that is considered to be one of the most precise methods of its kind and often referred to as a standard. Challenged by the demand of treating chemically and structurally increasingly more complex solids, in this thesis this method is revisited and extended along two different directions: (i) precision and (ii) efficiency. In the full-potential linearized augmented plane-wave method the space of a solid is partitioned into nearly touching spheres, centered at each atom, and the remaining interstitial region between the spheres. The Kohn-Sham orbitals, which are used to construct the electron density, the essential quantity in DFT, are expanded into a linearized augmented plane-wave basis, which consists of plane waves in the interstitial region and angular momentum dependent radial functions in the spheres. In this thesis it is shown that for certain types of materials, e.g., materials with very broad electron bands or large band gaps, or materials that allow the usage of large space-filling spheres, the variational freedom of the basis in the spheres has to be extended in order to represent the Kohn-Sham orbitals with high precision over a large energy spread. Two kinds of additional radial functions confined to the spheres, so-called local orbitals, are evaluated and found to successfully eliminate this error. A new efficient basis set is developed, named linearized augmented lattice

  5. High precision 3D coordinates location technology for pellet

    International Nuclear Information System (INIS)

    Fan Yong; Zhang Jiacheng; Zhou Jingbin; Tang Jun; Xiao Decheng; Wang Chuanke; Dong Jianjun

    2010-01-01

    In inertial confinement fusion (ICF) systems, the pellet has traditionally been collimated manually, which is time-consuming and offers a low level of automation. A new method based on binocular vision is proposed, in which the viewing apparatus is placed on the shared diagnostics platform to meet the relevant engineering targets, and a high-precision two-dimensional calibration board is used. An iterative method is adopted to achieve a corner-extraction precision of 0.1 pixel. Furthermore, SVD decomposition is used to remove singular corners, and an improved Zhang calibration method is applied to raise the camera calibration precision. Experiments indicate that the RMS precision of three-dimensional coordinate measurement is 25 μm and that the maximum RMS error of distance measurement is better than 100 μm, satisfying the system requirements. (authors)

  6. Validating precision--how many measurements do we need?

    Science.gov (United States)

    Åsberg, Arne; Solem, Kristine Bodal; Mikkelsen, Gustav

    2015-10-01

    A quantitative analytical method should be sufficiently precise, i.e. the imprecision measured as a standard deviation should be less than the numerical definition of the acceptable standard deviation. We propose that the entire 90% confidence interval for the true standard deviation shall lie below the numerical definition of the acceptable standard deviation in order to assure that the analytical method is sufficiently precise. We also present power function curves to ease the decision on the number of measurements to make. Computer simulation was used to calculate the probability that the upper limit of the 90% confidence interval for the true standard deviation was equal to or exceeded the acceptable standard deviation. Power function curves were constructed for different scenarios. The probability of failure to assure that the method is sufficiently precise increases with decreasing number of measurements and with increasing standard deviation when the true standard deviation is well below the acceptable standard deviation. For instance, the probability of failure is 42% for a precision experiment of 40 repeated measurements in one analytical run and 7% for 100 repeated measurements, when the true standard deviation is 80% of the acceptable standard deviation. Compared to the CLSI guidelines, validating precision according to the proposed principle is more reliable, but demands considerably more measurements. Using power function curves may help when planning studies to validate precision.
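    The simulation described above can be sketched directly: draw sample variances for a given true SD, form the upper limit of the 90% confidence interval for the SD, and count how often it reaches the acceptable SD. With the true SD at 80% of the acceptable value, this reproduces failure probabilities close to the quoted 42% for 40 measurements and 7% for 100.

    ```python
    # Probability that the upper 90% CI limit for the SD reaches the acceptable SD.
    import numpy as np
    from scipy.stats import chi2

    def prob_failure(n, true_sd, acceptable_sd, n_sim=200_000, seed=0):
        rng = np.random.default_rng(seed)
        # Sample variances: s^2 ~ sigma^2 * chi2(n-1) / (n-1)
        s2 = true_sd ** 2 * rng.chisquare(n - 1, n_sim) / (n - 1)
        upper = np.sqrt((n - 1) * s2 / chi2.ppf(0.05, n - 1))  # upper 90% CI limit for SD
        return np.mean(upper >= acceptable_sd)

    for n in (40, 100):
        print(n, prob_failure(n, true_sd=0.8, acceptable_sd=1.0))
    ```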

  7. Precision measurement of the e+e- → π+π-(γ) cross-section with ISR method

    International Nuclear Information System (INIS)

    Wang, L.L.

    2009-05-01

    The vacuum polarization integral involves the vector spectral functions, which can be determined experimentally. As the dominant source of uncertainty in the integral, the cross section of e+e− → π+π−(γ) is measured precisely as a function of energy from the 2π threshold to 3 GeV by taking the ratio of the e+e− → π+π−(γ) cross section to the e+e− → μ+μ−(γ) cross section, both measured with BABAR data using the ISR method in a single analysis. Besides the cancellation of several systematic uncertainties when taking the ratio of the two cross sections, the acceptance differences between data and Monte Carlo are measured using the same data, and the corresponding corrections are applied to the efficiencies predicted by the Monte Carlo method, which controls the uncertainties. The final uncertainty achieved for the Born cross section of e+e− → π+π−(γ) in the ρ mass region (0.6 ∼ 0.9 GeV) is 0.54%. As a consequence of the new vacuum polarization calculation using the new precision result for the e+e− → π+π−(γ) cross section, the impact on the standard model prediction of the muon anomalous magnetic moment g−2 is presented, which is also compared with other data-based predictions and the direct measurement. (author)

  8. High-precision surface formation method and the 3-D shaded display of the brain obtained from CT images

    International Nuclear Information System (INIS)

    Niki, Noboru; Fukuda, Hiroshi

    1987-01-01

    Our aim is to display the precise 3-D appearance of the brain based on data provided by CT images. For this purpose, we have developed a method of precisely forming surfaces from brain contours. The method expresses the brain surface as the sum of several partial surfaces. Each partial surface is individually constructed from respective parts of brain contours. The brain surface is finally made up of a superposition of partial surfaces. Two surface formation algorithms based on this principle are presented. One expresses the brain surface as the sum of a brain outline surface and sulcus surfaces. The other expresses the brain surface as the sum of surfaces in the same part of the brain. The effectiveness of these algorithms is shown by evaluation of contours obtained from dog and human brain samples and CT images. The latter algorithm is shown to be superior for high-resolution CT images. Optional cut-away views of the brain constructed by these algorithms are also shown. (author)

  9. Search for transient ultralight dark matter signatures with networks of precision measurement devices using a Bayesian statistics method

    Science.gov (United States)

    Roberts, B. M.; Blewitt, G.; Dailey, C.; Derevianko, A.

    2018-04-01

    We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the global positioning system (GPS), consisting of a constellation of 32 medium-Earth-orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly stable H-maser atomic clocks. High-accuracy timing data are available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50 000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach of our method and demonstrate that it can surpass the existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks, and can also be applied to other networks of precision measurement devices.
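
    The record above describes a Bayesian search statistic for clock networks; as a rough, hedged illustration of the underlying idea, the toy sketch below computes a single-clock log-likelihood ratio for a transient of known template in white noise. It is not the authors' GPS pipeline, and the epoch count, noise level and template amplitude are arbitrary assumptions.

```python
# Toy sketch (not the authors' GPS pipeline): log-likelihood ratio for a
# transient of known template in white clock noise; a network statistic would
# sum terms like this over satellites with the appropriate signal lags.
import numpy as np

def log_likelihood_ratio(data, template, sigma):
    """log p(data | signal) - log p(data | noise only) for white Gaussian noise."""
    return (np.dot(data, template) - 0.5 * np.dot(template, template)) / sigma**2

rng = np.random.default_rng(0)
n_epochs, sigma = 600, 0.1                 # epochs and clock-noise SD (arbitrary units)
template = np.zeros(n_epochs)
template[290:310] = 0.3                    # hypothetical transient "clump" signature
noise_only = rng.normal(0.0, sigma, n_epochs)
with_signal = noise_only + template

print("noise only :", log_likelihood_ratio(noise_only, template, sigma))
print("with signal:", log_likelihood_ratio(with_signal, template, sigma))
```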

  10. An Image Analysis Method for the Precise Selection and Quantitation of Fluorescently Labeled Cellular Constituents

    Science.gov (United States)

    Agley, Chibeza C.; Velloso, Cristiana P.; Lazarus, Norman R.

    2012-01-01

    The accurate measurement of the morphological characteristics of cells with nonuniform conformations presents difficulties. We report here a straightforward method using immunofluorescent staining and the commercially available imaging program Adobe Photoshop, which allows objective and precise information to be gathered on irregularly shaped cells. We have applied this measurement technique to the analysis of human muscle cells and their immunologically marked intracellular constituents, as these cells are prone to adopting a highly branched phenotype in culture. This method can be used to overcome many of the long-standing limitations of conventional approaches for quantifying muscle cell size in vitro. In addition, wider applications of Photoshop as a quantitative and semiquantitative tool in immunocytochemistry are explored. PMID:22511600

  11. Precision medicine for psychopharmacology: a general introduction.

    Science.gov (United States)

    Shin, Cheolmin; Han, Changsu; Pae, Chi-Un; Patkar, Ashwin A

    2016-07-01

    Precision medicine is an emerging medical model that can provide accurate diagnoses and tailored therapeutic strategies for patients based on data pertaining to genes, microbiomes, environment, family history and lifestyle. Here, we provide basic information about precision medicine and newly introduced concepts, such as the precision medicine ecosystem and big data processing, and omics technologies including pharmacogenomics, pharmacometabolomics, pharmacoproteomics, pharmacoepigenomics, connectomics and exposomics. The authors review the current state of omics in psychiatry and the future direction of psychopharmacology as it moves towards precision medicine. Expert commentary: Advances in precision medicine have been facilitated by achievements in multiple fields, including large-scale biological databases, powerful methods for characterizing patients (such as genomics, proteomics, metabolomics, diverse cellular assays, and even social networks and mobile health technologies), and computer-based tools for analyzing large amounts of data.

  12. A Method for Precision Closed-Loop Irrigation Using a Modified PID Control Algorithm

    Science.gov (United States)

    Goodchild, Martin; Kühn, Karl; Jenkins, Malcolm; Burek, Kazimierz; Dutton, Andrew

    2016-04-01

    The benefits of closed-loop irrigation control have been demonstrated in grower trials, which show the potential for improved crop yields and resource usage. Managing water use by controlling irrigation in response to soil moisture changes to meet crop water demands is a popular approach, but it requires knowledge of closed-loop control practice. In theory, to obtain precise closed-loop control of a system it is necessary to characterise every component in the control loop to derive the appropriate controller parameters, i.e. the proportional, integral and derivative (PID) parameters in a classic PID controller. In practice this is often difficult to achieve, so empirical methods are employed to estimate the PID parameters by observing how the system performs under open-loop conditions. In this paper we present a modified PID controller, with a constrained integral function, that delivers excellent regulation of soil moisture by supplying the appropriate amount of water to meet the needs of the plant during the diurnal cycle. Furthermore, the modified PID controller responds quickly to changes in environmental conditions, including rainfall events, which can otherwise result in controller windup, under-watering and plant stress. The experimental work successfully demonstrates the functionality of a constrained-integral PID controller that delivers robust and precise irrigation control. Data from a coir-substrate strawberry growing trial are also presented, illustrating soil moisture control and the ability to match water delivery to solar radiation.
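
    A minimal sketch of a PID loop with a constrained (clamped) integral term is given below, in the spirit of the modified controller described above; the gains, integral bounds and soil-moisture setpoint are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a PID controller with a constrained (clamped) integral
# term; gains, bounds and the soil-moisture setpoint below are illustrative.
class ConstrainedPID:
    def __init__(self, kp, ki, kd, i_min, i_max, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = i_min, i_max          # bounds on the integral term
        self.out_min, self.out_max = out_min, out_max  # e.g. valve duty cycle limits
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        # constraining the integral keeps rainfall or sensor spikes from winding it up
        self.integral = min(max(self.integral + error * dt, self.i_min), self.i_max)
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(output, self.out_min), self.out_max)

# Example: hold volumetric soil moisture at 35% with 1-minute control steps
pid = ConstrainedPID(kp=2.0, ki=0.05, kd=0.0, i_min=0.0, i_max=20.0,
                     out_min=0.0, out_max=1.0)
irrigation_command = pid.update(setpoint=35.0, measurement=31.5, dt=60.0)
```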

  13. The Ramsey method in high-precision mass spectrometry with Penning traps Experimental results

    CERN Document Server

    George, S; Herfurth, F; Herlert, A; Kretzschmar, M; Nagy, S; Schwarz, S; Schweikhard, L; Yazidjian, C

    2007-01-01

    The highest precision in direct mass measurements is obtained with Penning trap mass spectrometry. Most experiments use the interconversion of the magnetron and cyclotron motional modes of the stored ion due to excitation by external radiofrequency-quadrupole fields. In this work a new excitation scheme, Ramsey's method of time-separated oscillatory fields, has been successfully tested. It has been shown to reduce significantly the uncertainty in the determination of the cyclotron frequency and thus of the ion mass of interest. The theoretical description of the ion motion excited with Ramsey's method in a Penning trap and subsequently the calculation of the resonance line shapes for different excitation times, pulse structures, and detunings of the quadrupole field has been carried out in a quantum mechanical framework and is discussed in detail in the preceding article in this journal by M. Kretzschmar. Here, the new excitation technique has been applied with the ISOLTRAP mass spectrometer at ISOLDE/CERN fo...

  14. Triple-Frequency GPS Precise Point Positioning Ambiguity Resolution Using Dual-Frequency Based IGS Precise Clock Products

    Directory of Open Access Journals (Sweden)

    Fei Liu

    2017-01-01

    Full Text Available With the availability of the third civil signal in the Global Positioning System, triple-frequency Precise Point Positioning (PPP) ambiguity resolution methods have drawn increasing attention due to significantly reduced convergence time. However, the corresponding triple-frequency based precise clock products are not widely available or adopted by applications. Currently, most precise products are generated based on the ionosphere-free combination of dual-frequency L1/L2 signals, which, however, is not consistent with the triple-frequency ionosphere-free carrier-phase measurements, resulting in inaccurate positioning and unstable float ambiguities. In this study, a GPS triple-frequency PPP ambiguity resolution method is developed using the widely used dual-frequency based clock products. In this method, the inter-frequency clock biases between the triple-frequency and dual-frequency ionosphere-free carrier-phase measurements are first estimated and then applied to the triple-frequency ionosphere-free carrier-phase measurements to obtain stable float ambiguities. After this, the integer property of the wide-lane L2/L5 and wide-lane L1/L2 ambiguities is recovered by estimating the satellite fractional cycle biases. A test using a sparse network is conducted to verify the effectiveness of the method. The results show that ambiguity resolution can be achieved in minutes, or even tens of seconds, and that the positioning accuracy is at the decimeter level.
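
    The sketch below shows the ionosphere-free combinations underlying the approach and one common way of forming an epoch-wise inter-frequency clock bias (IFCB) proxy as the difference between the L1/L2 and L1/L5 combinations. Whether this matches the paper's exact parametrization is an assumption; only the standard GPS carrier frequencies are taken as given.

```python
# Sketch of the ionosphere-free (IF) combinations and an epoch-wise
# inter-frequency clock bias (IFCB) proxy; whether this matches the paper's
# exact parametrization is an assumption.
import numpy as np

F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6   # GPS L1, L2, L5 frequencies [Hz]

def ionosphere_free(phi_a, phi_b, fa, fb):
    """Ionosphere-free combination of two carrier-phase observables [m]."""
    return (fa**2 * phi_a - fb**2 * phi_b) / (fa**2 - fb**2)

def ifcb_series(phi1, phi2, phi5):
    """Epoch-wise IFCB proxy for one satellite from L1/L2/L5 phase arrays [m]."""
    if12 = ionosphere_free(phi1, phi2, F1, F2)
    if15 = ionosphere_free(phi1, phi5, F1, F5)
    diff = if12 - if15
    return diff - diff.mean()   # the constant part is absorbed by the float ambiguities
```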

  15. Microhartree precision in density functional theory calculations

    Science.gov (United States)

    Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia

    2018-04-01

    To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μHa, respectively. Spectacular agreement with the MRA is also reached for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α-iron we demonstrate the capability of LAPW + lo to reach μHa/atom precision also for periodic systems, which also allows for the distinction between the numerical precision and the accuracy of a given functional.

  16. Collaborative Study of an Indirect Enzymatic Method for the Simultaneous Analysis of 3-MCPD, 2-MCPD, and Glycidyl Esters in Edible Oils.

    Science.gov (United States)

    Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Egawa, Yoshitsugu; Kido, Hirotsugu; Kitta, Tadashi; Miyashita, Takashi; Nezu, Toru; Nohara, Hidenori; Sano, Takashi; Takahashi, Yukinari; Taniguchi, Hideji; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi

    2016-07-01

    A collaborative study was conducted to evaluate an indirect enzymatic method for the analysis of fatty acid esters of 3-monochloro-1,2-propanediol (3-MCPD), 2-monochloro-1,3-propanediol (2-MCPD), and glycidol (Gly) in edible oils and fats. The method is characterized by the use of Candida rugosa lipase, which hydrolyzes the esters at room temperature in 30 min. Hydrolysis and bromination steps convert esters of 3-MCPD, 2-MCPD, and glycidol to free 3-MCPD, 2-MCPD, and 3-monobromo-1,2-propanediol, respectively, which are then derivatized with phenylboronic acid and analyzed by gas chromatography-mass spectrometry. In a collaborative study involving 13 laboratories, liquid palm, solid palm, rapeseed, and rice bran oils spiked with 0.5-4.4 mg/kg of esters of 3-MCPD, 2-MCPD, and Gly were analyzed in duplicate. The repeatability (RSDr) was evaluated for the determination of 3-MCPD, 2-MCPD, and Gly esters in edible oils.

  17. High Precision GNSS Guidance for Field Mobile Robots

    Directory of Open Access Journals (Sweden)

    Ladislav Jurišica

    2012-11-01

    Full Text Available In this paper, we discuss GNSS (Global Navigation Satellite System) guidance for field mobile robots. Several GNSS systems and receivers, as well as multiple measurement methods and principles of GNSS systems, are examined. We focus mainly on sources of errors and investigate diverse approaches for precise measurement and effective use of GNSS systems for real-time robot localization. The main body of the article compares two GNSS receivers and their measurement methods. We design, implement and evaluate several mathematical methods for precise robot localization.

  18. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    Science.gov (United States)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high-precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low-precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low-precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, to extend the IPJR paradigm to building 3D structures at micron precision, are also summarized.

  19. On Error Estimation in the Conjugate Gradient Method and why it Works in Finite Precision Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Tichý, Petr

    2002-01-01

    Roč. 13, - (2002), s. 56-80 ISSN 1068-9613 R&D Projects: GA ČR GA201/02/0595 Institutional research plan: AV0Z1030915 Keywords: conjugate gradient method * Gauss quadrature * evaluation of convergence * error bounds * finite precision arithmetic * rounding errors * loss of orthogonality Subject RIV: BA - General Mathematics Impact factor: 0.565, year: 2002 http://etna.mcs.kent.edu/volumes/2001-2010/vol13/abstract.php?vol=13&pages=56-80

  20. Accuracy and Precision of a Plane Wave Vector Flow Imaging Method in the Healthy Carotid Artery

    DEFF Research Database (Denmark)

    Jensen, Jonas; Villagómez Hoyos, Carlos Armando; Traberg, Marie Sand

    2018-01-01

    The objective of the study described here was to investigate the accuracy and precision of a plane wave 2-D vector flow imaging (VFI) method in laminar and complex blood flow conditions in the healthy carotid artery. The approach was to study (i) the accuracy for complex flow by comparing the velocity field from a computational fluid dynamics (CFD) simulation to VFI estimates obtained from the scan of an anthropomorphic flow phantom and from an in vivo scan; (ii) the accuracy for laminar unidirectional flow in vivo by comparing peak systolic velocities from VFI with magnetic resonance ...; and (iii) the precision ... of laminar flow in vivo. The precision in vivo was calculated as the mean standard deviation (SD) of estimates aligned to the heart cycle and was highest in the center of the common carotid artery (SD = 3.6% for velocity magnitudes and 4.5° for angles) and lowest in the external branch and for vortices (SD ...).

  1. Multi-GNSS high-rate RTK, PPP and novel direct phase observation processing method: application to precise dynamic displacement detection

    Science.gov (United States)

    Paziewski, Jacek; Sieradzki, Rafal; Baryla, Radoslaw

    2018-03-01

    This paper provides the methodology and performance assessment of multi-GNSS signal processing for the detection of small-scale high-rate dynamic displacements. For this purpose, we used methods of relative (RTK) and absolute positioning (PPP), and a novel direct signal processing approach. The first two methods are recognized as providing accurate information on position in many navigation and surveying applications. The latter is an innovative method for dynamic displacement determination by means of GNSS phase signal processing. This method is based on a developed functional model with parametrized epoch-wise topocentric relative coordinates derived from filtered GNSS observations. Current regular kinematic PPP positioning, as well as medium/long-range RTK, may not offer coordinate estimates with subcentimeter precision. Thus, extended processing strategies of absolute and relative GNSS positioning have been developed and applied for displacement detection. The study also aimed to comparatively analyze the developed methods, as well as to analyze the impact of combined GPS and BDS processing and the dependence of the results of the relative methods on the baseline length. All the methods were implemented with in-house developed software allowing for high-rate precise GNSS positioning and signal processing. The phase and pseudorange observations collected at a rate of 50 Hz during the field test served as the experiment's data set. The displacements at the rover station were triggered in the horizontal plane using a device which was designed and constructed to ensure a periodic motion of the GNSS antenna with an amplitude of ~3 cm and a frequency of ~4.5 Hz. Finally, the medium-range RTK, PPP, and direct phase observation processing methods demonstrated the capability of providing reliable and consistent results, with the precision of the determined dynamic displacements at the millimeter level. Specifically, the research shows that the standard deviation of
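
    As a hedged illustration of the kind of signal described above, the sketch below recovers the amplitude and frequency of a simulated ~3 cm, ~4.5 Hz periodic displacement from a 50 Hz coordinate series using an FFT; the noise level and data span are assumptions, and this is a cross-check of the motion, not the paper's GNSS processing.

```python
# Illustrative sketch: recover the amplitude and frequency of a periodic
# antenna displacement from a 50 Hz coordinate series (values are assumed).
import numpy as np

fs = 50.0                                    # sampling rate [Hz]
t = np.arange(0, 60, 1.0 / fs)               # 60 s of data
true_amp, true_freq = 0.03, 4.5              # ~3 cm amplitude, ~4.5 Hz motion
east = true_amp * np.sin(2 * np.pi * true_freq * t) \
       + np.random.default_rng(1).normal(0.0, 0.002, t.size)   # 2 mm noise

window = np.hanning(t.size)
spec = np.fft.rfft(east * window)
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak = np.argmax(np.abs(spec[1:])) + 1       # skip the DC bin
amp = 2.0 * np.abs(spec[peak]) / (t.size * 0.5)   # Hann coherent gain = 0.5
print(f"detected {freqs[peak]:.2f} Hz with amplitude {100 * amp:.2f} cm")
```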

  2. Method and system for detecting, in real time, the imbalance of the head in a high-precision rotary mechanism

    OpenAIRE

    Toro Matamoros, Raúl Mario del; Schmittdiel, Michael Charles; Haber Guerra, Rodolfo E.

    2008-01-01

    [EN] The invention relates to a method for detecting, in real time, an imbalance of the head in a high-precision rotary mechanism, and to the system for carrying out said method. The method comprises the following steps: a) the signal X(t) corresponding to the acceleration of the vibrations of the head is acquired by means of an acquisition means at a sampling rate FS; and b) it is determined, from the signal X(t) obtained, whether the head is imbalanced.

  3. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    International Nuclear Information System (INIS)

    Gleisberg, Tanju

    2008-01-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Furthermore, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving the precision and the efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  4. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    Energy Technology Data Exchange (ETDEWEB)

    Gleisberg, Tanju

    2008-07-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key to the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Furthermore, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving the precision and the efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  5. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    Directory of Open Access Journals (Sweden)

    Junhui Huang

    2016-12-01

    Full Text Available Point cloud registration is a key process in multi-view 3D measurements, and its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and high-precision registration is achieved. Simulations and experiments validate the proposed method.
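
    A minimal sketch of the weighted, SVD-based estimation of the rigid transform between matched sphere-centre pairs is given below. It illustrates the general principle only; the paper's specific weight function and outlier handling are not reproduced, and the per-pair weights here are an assumed input.

```python
# Sketch of a weighted SVD (Kabsch-type) solution for the rigid transform
# between matched sphere-centre pairs; the weight function is an assumed input.
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Return R (3x3), t (3,) minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = np.asarray(w, float) / np.sum(w)
    src_mean = np.average(src, axis=0, weights=w)
    dst_mean = np.average(dst, axis=0, weights=w)
    src_c, dst_c = src - src_mean, dst - dst_mean
    H = (w[:, None] * src_c).T @ dst_c                      # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])      # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```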

  6. Estimating maneuvers for precise relative orbit determination using GPS

    Science.gov (United States)

    Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs

    2017-01-01

    Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. In order to generate consistent and precise orbit products, a strategy for maneuver handling is mandatory in order to avoid discontinuities or precision degradation before, after and during maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy which can be used, in addition, to improve the precision of maneuver estimates by drawing upon the use of differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.

  7. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    Science.gov (United States)

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of the x, y, and z axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for producing dental arch models.
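
    The agreement statistics named above (mean difference and the Bland-Altman limits of agreement) can be computed as in the short sketch below; the paired measurements are hypothetical placeholders, not the study's data.

```python
# Sketch of the agreement statistics: mean difference (bias), Bland-Altman
# 95% limits of agreement and relative error; the paired values are placeholders.
import numpy as np

plaster = np.array([35.1, 42.3, 28.7, 51.0, 33.4, 46.2])   # mm, hypothetical
put     = np.array([35.3, 42.1, 29.0, 51.3, 33.2, 46.5])   # mm, hypothetical

diff = put - plaster
bias = diff.mean()                                  # accuracy as the mean difference
loa = (bias - 1.96 * diff.std(ddof=1),              # Bland-Altman limits of agreement
       bias + 1.96 * diff.std(ddof=1))
relative_error = 100.0 * np.abs(diff) / plaster     # relative measurement error [%]
print(f"bias = {bias:.2f} mm, LoA = ({loa[0]:.2f}, {loa[1]:.2f}) mm")
```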

  8. The economic case for precision medicine.

    Science.gov (United States)

    Gavan, Sean P; Thompson, Alexander J; Payne, Katherine

    2018-01-01

    Introduction: The advancement of precision medicine into routine clinical practice has been highlighted as an agenda for national and international health care policy. A principal barrier to this advancement is in meeting the requirements of the payer or reimbursement agency for health care. This special report aims to explain the economic case for precision medicine, by accounting for the explicit objectives defined by decision-makers responsible for the allocation of limited health care resources. Areas covered: The framework of cost-effectiveness analysis, a method of economic evaluation, is used to describe how precision medicine can, in theory, exploit identifiable patient-level heterogeneity to improve population health outcomes and the relative cost-effectiveness of health care. Four case studies are used to illustrate potential challenges when demonstrating the economic case for a precision medicine in practice. Expert commentary: The economic case for a precision medicine should be considered at an early stage during its research and development phase. Clinical and economic evidence can be generated iteratively and should be in alignment with the objectives and requirements of decision-makers. Programmes of further research, to demonstrate the economic case of a precision medicine, can be prioritized by the extent to which they reduce the uncertainty expressed by decision-makers.

  9. Non-coding RNA detection methods combined to improve usability, reproducibility and precision

    Directory of Open Access Journals (Sweden)

    Kreikemeyer Bernd

    2010-09-01

    Full Text Available Abstract Background Non-coding RNAs gain more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, the combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated into detection workflows using custom scripts, which decreases transparency and reproducibility. Results We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods, integrated by our framework, we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. Conclusions We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL), version 3, at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.

  10. Precision Oncology: Between Vaguely Right and Precisely Wrong.

    Science.gov (United States)

    Brock, Amy; Huang, Sui

    2017-12-01

    Precision Oncology seeks to identify and target the mutation that drives a tumor. Despite its straightforward rationale, concerns about its effectiveness are mounting. What is the biological explanation for the "imprecision?" First, Precision Oncology relies on indiscriminate sequencing of genomes in biopsies that barely represent the heterogeneous mix of tumor cells. Second, findings that defy the orthodoxy of oncogenic "driver mutations" are now accumulating: the ubiquitous presence of oncogenic mutations in silent premalignancies or the dynamic switching without mutations between various cell phenotypes that promote progression. Most troublesome is the observation that cancer cells that survive treatment still will have suffered cytotoxic stress and thereby enter a stem cell-like state, the seeds for recurrence. The benefit of "precision targeting" of mutations is inherently limited by this counterproductive effect. These findings confirm that there is no precise linear causal relationship between tumor genotype and phenotype, a reminder of logician Carveth Read's caution that being vaguely right may be preferable to being precisely wrong. An open-minded embrace of the latest inconvenient findings indicating nongenetic and "imprecise" phenotype dynamics of tumors as summarized in this review will be paramount if Precision Oncology is ultimately to lead to clinical benefits. Cancer Res; 77(23); 6473-9. ©2017 AACR . ©2017 American Association for Cancer Research.

  11. Precision siting of a particle accelerator

    International Nuclear Information System (INIS)

    Cintra, Jorge Pimentel

    1996-01-01

    Precise location is a specialized survey task that involves highly skilled work to avoid unrecoverable errors during project installation. As a function of the different process stages, different specifications can be applied, calling for different instruments: theodolite, measuring tape, distance meter, invar wire. This paper, based on experience obtained in the installation of particle accelerator equipment, deals with the general principles of precise location: tolerance definitions, techniques for increasing accuracy, scheduling of locations, sensitivity analysis, and quality control methods. (author)

  12. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    Directory of Open Access Journals (Sweden)

    Hendra Gunawan

    2014-06-01

    Full Text Available http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated with both methods on simple 2D synthetic models and under the assumption that the free-air anomaly consists of a topographic effect, an intracrustal effect, and an isostatic compensation. Based on the simulation results, Bouguer density estimates were then investigated for a 2005 gravity survey of the La Soufriere Volcano-Guadeloupe area (Antilles Islands). The Bouguer density based on the Parasnis approach is 2.71 g/cm3 for the whole area, except the edifice area where the average topographic density estimate is 2.21 g/cm3, whereas Bouguer density estimates from the previous gravity survey of 1975 are 2.67 g/cm3. The uncertainty of the Bouguer density at La Soufriere Volcano was estimated to be 0.1 g/cm3. For the studied area, the density deduced from refraction seismic data is coherent with the recent Bouguer density estimates. A new Bouguer anomaly map based on these Bouguer density values allows a better geological interpretation.
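
    A minimal sketch of the Nettleton-type criterion described above is given below: candidate Bouguer densities are scanned and the one whose simple Bouguer anomaly is least correlated with topography is selected. The synthetic profile and the 2.4 g/cm3 "true" density are assumptions for illustration, not the La Soufriere data.

```python
# Sketch of a Nettleton-type density scan: pick the Bouguer density whose
# simple Bouguer anomaly is least correlated with topography (synthetic data).
import numpy as np

G = 6.674e-11                                        # gravitational constant [SI]

def bouguer_anomaly(free_air_mgal, elevation_m, density_g_cm3):
    """Simple Bouguer anomaly [mGal] using an infinite-slab correction."""
    slab = 2 * np.pi * G * density_g_cm3 * 1000.0 * elevation_m * 1e5   # m/s^2 -> mGal
    return free_air_mgal - slab

def nettleton_density(free_air_mgal, elevation_m, densities):
    """Density minimizing |correlation(Bouguer anomaly, topography)|."""
    corrs = [abs(np.corrcoef(bouguer_anomaly(free_air_mgal, elevation_m, rho),
                             elevation_m)[0, 1]) for rho in densities]
    return densities[int(np.argmin(corrs))]

rng = np.random.default_rng(2)
elevation = np.linspace(0.0, 500.0, 80)              # synthetic profile [m]
free_air = 2 * np.pi * G * 2400.0 * elevation * 1e5 + rng.normal(0.0, 0.2, elevation.size)
candidates = np.arange(2.0, 3.01, 0.01)              # candidate densities [g/cm^3]
print(nettleton_density(free_air, elevation, candidates))   # close to 2.40 g/cm^3
```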

  13. [Precision Nursing: Individual-Based Knowledge Translation].

    Science.gov (United States)

    Chiang, Li-Chi; Yeh, Mei-Ling; Su, Sui-Lung

    2016-12-01

    U.S. President Obama announced a new era of precision medicine in the Precision Medicine Initiative (PMI). This initiative aims to accelerate the progress of personalized medicine in light of individual requirements for prevention and treatment in order to improve the state of individual and public health. The recent and dramatic development of large-scale biologic databases (such as the human genome sequence), powerful methods for characterizing patients (such as genomics, microbiome, diverse biomarkers, and even pharmacogenomics), and computational tools for analyzing big data are maximizing the potential benefits of precision medicine. Nursing science should follow and keep pace with this trend in order to develop empirical knowledge and expertise in the area of personalized nursing care. Nursing scientists must encourage, examine, and put into practice innovative research on precision nursing in order to provide evidence-based guidance to clinical practice. The applications in personalized precision nursing care include: explanations of personalized information such as the results of genetic testing; patient advocacy and support; anticipation of results and treatment; ongoing chronic monitoring; and support for shared decision-making throughout the disease trajectory. Further, attention must focus on the family and the ethical implications of taking a personalized approach to care. Nurses will need to embrace the paradigm shift to precision nursing and work collaboratively across disciplines to provide the optimal personalized care to patients. If realized, the full potential of precision nursing will provide the best chance for good health for all.

  14. Comparison of precise orbit determination methods of zero-difference kinematic, dynamic and reduced-dynamic of GRACE-A satellite using SHORDE software

    Science.gov (United States)

    Li, Kai; Zhou, Xuhua; Guo, Nannan; Zhao, Gang; Xu, Kexin; Lei, Weiwei

    2017-09-01

    Zero-difference kinematic, dynamic and reduced-dynamic precise orbit determination (POD) are three methods to obtain the precise orbits of Low Earth Orbit satellites (LEOs) using on-board GPS observations. Comparing the differences between these methods is of great significance for establishing the mathematical model and is useful for selecting a suitable method to determine the orbit of a satellite. Based on zero-difference GPS carrier-phase measurements, Shanghai Astronomical Observatory (SHAO) has improved the early version of SHORDE and developed it into an integrated software system, which can perform the POD of LEOs using the above three methods. In order to introduce the function of the software, we take the Gravity Recovery And Climate Experiment (GRACE) on-board GPS observations in January 2008 as an example and compute the corresponding orbits of GRACE with the SHORDE software. In order to evaluate the accuracy, we compare the orbits with the precise orbits provided by the Jet Propulsion Laboratory (JPL). The results show that: (1) If we use the dynamic POD method, and force models are used to represent the non-conservative forces, the average accuracy of the GRACE orbit is 2.40 cm, 3.91 cm, 2.34 cm and 5.17 cm in the radial (R), along-track (T), cross-track (N) and 3D directions, respectively; if we use the accelerometer observations instead of the non-conservative perturbation model, the average accuracy of the orbit is 1.82 cm, 2.51 cm, 3.48 cm and 4.68 cm in the R, T, N and 3D directions, respectively. This shows that if we use accelerometer observations instead of the non-conservative perturbation model, the accuracy of the orbit is better. (2) When we use the reduced-dynamic POD method, the average accuracy of the orbit is 0.80 cm, 1.36 cm, 2.38 cm and 2.87 cm in the R, T, N and 3D directions, respectively. This method is carried out by setting up pseudo-stochastic pulses to absorb the errors of atmospheric drag and other

  15. Precise material identification method based on a photon counting technique with correction of the beam hardening effect in X-ray spectra

    International Nuclear Information System (INIS)

    Kimoto, Natsumi; Hayashi, Hiroaki; Asahara, Takashi; Mihara, Yoshiki; Kanazawa, Yuki; Yamakawa, Tsutomu; Yamamoto, Shuichiro; Yamasaki, Masashi; Okada, Masahiro

    2017-01-01

    The aim of our study is to develop a novel material identification method based on a photon counting technique, in which the incident and penetrating X-ray spectra are analyzed. Dividing a 40 kV X-ray spectra into two energy regions, the corresponding linear attenuation coefficients are derived. We can identify the materials precisely using the relationship between atomic number and linear attenuation coefficient through the correction of the beam hardening effect of the X-ray spectra. - Highlights: • We propose a precise material identification method to be used as a photon counting system. • Beam hardening correction is important, even when the analysis is applied to the short energy regions in the X-ray spectrum. • Experiments using a single probe-type CdTe detector were performed, and Monte Carlo simulation was also carried out. • We described the applicability of our method for clinical diagnostic X-ray imaging in the near future.
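
    The basic relation behind the two-energy-region analysis is the Beer-Lambert law applied to the counts in each region of the photon-counting spectrum, as in the sketch below; the counts and thickness are hypothetical, and the paper's beam-hardening correction of the spectra is not reproduced here.

```python
# Sketch of the Beer-Lambert relation behind the two-energy-region analysis;
# counts and thickness are hypothetical, and no beam-hardening correction is applied.
import numpy as np

def linear_attenuation(counts_incident, counts_transmitted, thickness_cm):
    """Effective linear attenuation coefficient [1/cm] for one energy region."""
    return np.log(counts_incident / counts_transmitted) / thickness_cm

# Hypothetical counts summed over a low- and a high-energy region of a 40 kV spectrum
mu_low  = linear_attenuation(1.2e6, 2.9e5, thickness_cm=1.0)
mu_high = linear_attenuation(8.5e5, 3.6e5, thickness_cm=1.0)
print(f"mu_low = {mu_low:.2f} 1/cm, mu_high = {mu_high:.2f} 1/cm")
```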

  16. Assessing the optimized precision of the aircraft mass balance method for measurement of urban greenhouse gas emission rates through averaging

    Directory of Open Access Journals (Sweden)

    Alexie M. F. Heimburger

    2017-06-01

    Full Text Available To effectively address climate change, aggressive mitigation policies need to be implemented to reduce greenhouse gas emissions. Anthropogenic carbon emissions are mostly generated from urban environments, where human activities are spatially concentrated. Improvements in uncertainty determinations and precision of measurement techniques are critical to permit accurate and precise tracking of emissions changes relative to the reduction targets. As part of the INFLUX project, we quantified carbon dioxide (CO2), carbon monoxide (CO) and methane (CH4) emission rates for the city of Indianapolis by averaging results from nine aircraft-based mass balance experiments performed in November-December 2014. Our goal was to assess the achievable precision of the aircraft-based mass balance method through averaging, assuming constant CO2, CH4 and CO emissions during a three-week field campaign in late fall. The averaging method leads to an emission rate of 14,600 mol/s for CO2, assumed to be largely fossil-derived for this period of the year, and 108 mol/s for CO. The relative standard error of the mean is 17% and 16%, for CO2 and CO, respectively, at the 95% confidence level (CL), i.e. a more than 2-fold improvement from the previous estimate of ~40% for single-flight measurements for Indianapolis. For CH4, the averaged emission rate is 67 mol/s, while the standard error of the mean at 95% CL is large, i.e. ±60%. Given the results for CO2 and CO for the same flight data, we conclude that this much larger scatter in the observed CH4 emission rate is most likely due to variability of CH4 emissions, suggesting that the assumption of constant daily emissions is not correct for CH4 sources. This work shows that repeated measurements using aircraft-based mass balance methods can yield sufficient precision of the mean to inform emissions reduction efforts by detecting changes over time in urban emissions.
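
    The averaging statistics quoted above (mean emission rate over repeated flights and the relative standard error of the mean at the 95% confidence level) can be computed as in the sketch below; the flight-by-flight values are placeholders, not the INFLUX data.

```python
# Sketch of the averaging statistics: mean emission rate over repeated flights
# and the relative standard error of the mean at the 95% confidence level.
# The flight values are placeholders, not the INFLUX measurements.
import numpy as np
from scipy import stats

def mean_and_relative_uncertainty(rates, cl=0.95):
    rates = np.asarray(rates, float)
    n = rates.size
    mean = rates.mean()
    sem = rates.std(ddof=1) / np.sqrt(n)          # standard error of the mean
    t = stats.t.ppf(0.5 + cl / 2.0, n - 1)        # two-sided t multiplier
    return mean, 100.0 * t * sem / mean           # relative uncertainty [%] at cl

co2_flights = [13100, 16800, 12500, 15900, 14200, 17300, 13600, 15000, 13100]  # mol/s
mean, rel = mean_and_relative_uncertainty(co2_flights)
print(f"mean = {mean:.0f} mol/s, relative uncertainty ~ {rel:.0f}% at 95% CL")
```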

  17. Development and Validation of a Precise Method for Determination of Benzalkonium Chloride (BKC Preservative, in Pharmaceutical Formulation of Latanoprost Eye Drops

    Directory of Open Access Journals (Sweden)

    J. Mehta

    2010-01-01

    Full Text Available A simple and precise reversed-phase high performance liquid chromatographic method has been developed and validated for the quantification of the benzalkonium chloride (BKC) preservative in a pharmaceutical formulation of latanoprost eye drops. The analyte was chromatographed on a Waters Spherisorb CN (4.6 × 250 mm) column packed with 5 μm particles. The mobile phase, optimized through an experimental design, was a 40:60 (v/v) mixture of potassium dihydrogen orthophosphate buffer (pH 5.5) and acetonitrile, pumped at a flow rate of 1.0 mL/min while maintaining the column temperature at 30 °C. Maximum UV detection was achieved at 210 nm. The method was validated in terms of linearity, repeatability, intermediate precision and method accuracy. The method was shown to be robust, resistant to small deliberate changes in pH, flow rate and composition (organic ratio) of the mobile phase. The method was successfully applied for the determination of BKC in a pharmaceutical formulation of latanoprost ophthalmic solution without any interference from common excipients and the drug substance. All the validation parameters were within the acceptance range and concordant with ICH guidelines.

  18. International comparison of methods to test the validity of dead-time and pile-up corrections for high-precision γ-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Houtermans, H.; Schaerf, K.; Reichel, F. (International Atomic Energy Agency, Vienna (Austria)); Debertin, K. (Physikalisch-Technische Bundesanstalt, Braunschweig (Germany, F.R.))

    1983-02-01

    The International Atomic Energy Agency organized an international comparison of methods applied in high-precision γ-ray spectrometry for the correction of dead-time and pile-up losses. Results of this comparison are reported and discussed.

  19. High-precision solution to the moving load problem using an improved spectral element method

    Science.gov (United States)

    Wen, Shu-Rui; Wu, Zhi-Jing; Lu, Nian-Li

    2018-02-01

    In this paper, the spectral element method (SEM) is improved to solve the moving load problem. In this method, a structure with uniform geometry and material properties is considered as a spectral element, which means that the element number and the degree of freedom can be reduced significantly. Based on the variational method and the Laplace transform theory, the spectral stiffness matrix and the equivalent nodal force of the beam-column element are established. The static Green function is employed to deduce the improved function. The proposed method is applied to two typical engineering practices—the one-span bridge and the horizontal jib of the tower crane. The results have revealed the following. First, the new method can yield extremely high-precision results of the dynamic deflection, the bending moment and the shear force in the moving load problem. In most cases, the relative errors are smaller than 1%. Second, by comparing with the finite element method, one can obtain the highly accurate results using the improved SEM with smaller element numbers. Moreover, the method can be widely used for statically determinate as well as statically indeterminate structures. Third, the dynamic deflection of the twin-lift jib decreases with the increase in the moving load speed, whereas the curvature of the deflection increases. Finally, the dynamic deflection, the bending moment and the shear force of the jib will all increase as the magnitude of the moving load increases.

  20. Development and Validation of a Precise and Stability Indicating LC Method for the Determination of Benzalkonium Chloride in Pharmaceutical Formulation Using an Experimental Design

    Directory of Open Access Journals (Sweden)

    Harshal K. Trivedi

    2010-01-01

    Full Text Available A simple, precise, short-runtime and stability-indicating reverse-phase high performance liquid chromatographic method has been developed and validated for the quantification of the benzalkonium chloride (BKC) preservative in a pharmaceutical formulation of sparfloxacin eye drops. The method was successfully applied for the determination of benzalkonium chloride in various ophthalmic formulations such as latanoprost, timolol, dexamethasone, gatifloxacin, norfloxacin, the combination of moxifloxacin and dexamethasone, the combination of naphazoline HCl, zinc sulphate and chlorpheniramine maleate, the combination of tobramycin and dexamethasone, and the combination of phenylephrine HCl, naphazoline HCl, menthol and camphor. The RP-LC separation was achieved on a Purospher Star RP-18e (75 mm × 4.0 mm, 3.0 μm) column in the isocratic mode using buffer:acetonitrile (35:65, v/v) as the mobile phase at a flow rate of 1.8 mL/min. Detection was performed at 215 nm; quantification was achieved with PDA detection over the concentration range of 50 to 150 μg/mL. The method effectively separates the four homologs with good resolution, in the presence of excipients, sparfloxacin and degradation compounds of sparfloxacin and BKC, within five minutes. The method was validated and the results were compared statistically; it was found to be simple, accurate, precise and specific. The proposed method was validated in terms of specificity, precision, recovery, solution stability, linearity and range. All the validation parameters were within the acceptance range and concordant with ICH guidelines.

  1. Inter-lab comparison of precision and recommended methods for age estimation of Florida manatee (Trichechus manatus latirostris) using growth layer groups in earbones

    OpenAIRE

    Brill, Katherine; Marmontel, Miriam; Bolen-Richardson, Meghan; Stewart, Robert EA

    2016-01-01

    Manatees are routinely aged by counting Growth Layer Groups (GLGs) in periotic bones (earbones). Manatee carcasses recovered in Florida between 1974 and 2010 provided age-estimation material for three readers and formed the basis for a retrospective analysis of aging precision (repeatability). All readers were in good agreement (high precision), with the greatest apparent source of variation being the result of earbone remodelling with increasing manatee age. Over the same period, methods of sa...

  2. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    Science.gov (United States)

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical Laboratory Standard Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of the GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
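
    As a hedged illustration of the generalized-pivotal-quantity idea, the sketch below builds a GPQ-based interval for a within-device SD under a simplified one-way design (runs with replicates within runs) rather than the full EP5-A2 two-stage nested model; the simulated data set and 90% confidence level are assumptions.

```python
# Sketch of a GPQ-style interval for a within-device SD under a simplified
# one-way design (runs, with replicates within runs); not the full EP5-A2 model.
import numpy as np

def gpq_within_device_sd(data, conf=0.90, n_draws=100_000, seed=0):
    """data: 2-D array with rows = runs and columns = replicates within a run."""
    a, n = data.shape
    ms_run = n * np.var(data.mean(axis=1), ddof=1)           # between-run mean square
    ms_err = np.mean(np.var(data, axis=1, ddof=1))           # within-run mean square
    df_run, df_err = a - 1, a * (n - 1)
    rng = np.random.default_rng(seed)
    w_run = rng.chisquare(df_run, n_draws)
    w_err = rng.chisquare(df_err, n_draws)
    g_err = df_err * ms_err / w_err                          # GPQ for sigma_error^2
    g_run = np.maximum(0.0, (df_run * ms_run / w_run - g_err) / n)   # GPQ for sigma_run^2
    g_sd = np.sqrt(g_run + g_err)                            # GPQ for the within-device SD
    return tuple(np.quantile(g_sd, [(1 - conf) / 2, (1 + conf) / 2]))

rng = np.random.default_rng(1)
demo = rng.normal(10.0, 0.4, size=(20, 2)) + rng.normal(0.0, 0.3, size=(20, 1))
print(gpq_within_device_sd(demo))   # interval for a true within-device SD of 0.5
```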

  3. Sensitive method for precise measurement of endogenous angiotensins I, II and III in human plasma

    International Nuclear Information System (INIS)

    Kawamura, M.; Yoshida, K.; Akabane, S.

    1987-01-01

    We measured endogenous angiotensins (ANGs) I, II and III using a system of extraction by Sep-Pak column followed by high performance liquid chromatography (HPLC) combined with radioimmunoassay (RIA). An excellent separation of ANGs was obtained by HPLC. The recovery of ANGs I, II and III was 80-84%, when these authentic peptides were added to 6 ml of plasma. The coefficient of variation of the ANGs was 0.04-0.09 for intra-assay and 0.08-0.13 for inter-assay, thereby indicating a good reproducibility. Plasma ANGs I, II and III measured by this method in 5 normal volunteers were 51, 4.5 and 1.2 pg/ml. In the presence of captopril, ANGs II and III decreased by 84% and 77%, respectively, while ANG I increased 5.1 times. This method is therefore useful to assess the precise levels of plasma ANGs.

  4. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    Science.gov (United States)

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts is promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster and ImageJ; and (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg count negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using the second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. With regard to the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ (p ...), while the smartphone prototype and ImageJ performed with equal variance. With regard to the second aim of the study, the second smartphone system prototype had significantly better precision than the McMaster (p ...); values for the ... smartphone system were 64.51%, 21.67%, and 32.53%, respectively. The Mini-FLOTAC was significantly more accurate than the McMaster (p ...) and the smartphone system (p ...); the smartphone and McMaster counts did not have statistically different accuracies.

  5. A new method for precise determination of iron, zinc and cadmium stable isotope ratios in seawater by double-spike mass spectrometry

    International Nuclear Information System (INIS)

    Conway, Tim M.; Rosenberg, Angela D.; Adkins, Jess F.; John, Seth G.

    2013-01-01

    Graphical abstract: ‘Metal-free’ seawater doped with varying concentrations of ‘zero’ isotope standards, processed through our simultaneous method, and then analyzed by double-spike MC-ICPMS for Fe, Zn and Cd isotope ratios. All values were determined within 2σ error (error bars shown) of zero. -- Highlights: •The first simultaneous method for isotopic analysis of Fe, Zn and Cd in seawater. •Designed for 1 L samples, a 1–20 fold improvement over previous methods. •Low blanks and high precision allow measurement of low concentration samples. •Small volume and fast processing are ideal for high-resolution large-scale studies. •Will facilitate investigation of marine trace-metal isotope cycling. -- Abstract: The study of Fe, Zn and Cd stable isotopes (δ56Fe, δ66Zn and δ114Cd) in seawater is a new field, which promises to elucidate the marine cycling of these bioactive trace metals. However, the analytical challenges posed by the low concentrations of these metals in seawater have meant that previous studies have typically required large sample volumes, highly limiting data collection in the oceans. Here, we present the first simultaneous method for the determination of these three isotope systems in seawater, using Nobias PA-1 chelating resin to extract the metals from seawater, purification by anion exchange chromatography, and analysis by double-spike MC-ICPMS. This method is designed for use on only a single litre of seawater and has blanks of 0.3, 0.06 and <0.03 ng for Fe, Zn and Cd respectively, representing a 1–20 fold reduction in sample size and a 4–130 fold decrease in blank compared to previously reported methods. The procedure yields data with high precision for all three elements (typically 0.02–0.2‰; 1σ internal precision), allowing us to distinguish natural variability in the oceans, which spans 1–3‰ for all three isotope systems. Simultaneous extraction and purification of three metals makes this method ideal for high

  6. Precision Learning Assessment: An Alternative to Traditional Assessment Techniques.

    Science.gov (United States)

    Caltagirone, Paul J.; Glover, Christopher E.

    1985-01-01

    A continuous and curriculum-based assessment method, Precision Learning Assessment (PLA), which integrates precision teaching and norm-referenced techniques, was applied to a math computation curriculum for 214 third graders. The resulting districtwide learning curves defining average annual progress through the computation curriculum provided…

  7. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  8. Precision phase estimation based on weak-value amplification

    Science.gov (United States)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak-value of a polarization operator in the experimental range. Our results compete well with wide-spectrum-light phase weak measurements and outperform the standard homodyne phase detection technique.

  9. Development of Precise Point Positioning Method Using Global Positioning System Measurements

    Directory of Open Access Journals (Sweden)

    Byung-Kyu Choi

    2011-09-01

    Full Text Available Precise point positioning (PPP) is increasingly used in applications such as monitoring of crustal movement and maintaining the international terrestrial reference frame using global positioning system (GPS) measurements. The accuracy of PPP data processing has increased owing to the use of more precise satellite orbit/clock products. In this study we developed a PPP algorithm that processes data collected by a GPS receiver. Measurement error modelling, including the tropospheric error and the tidal model, was considered in the data processing to improve the positioning accuracy. An extended Kalman filter was employed to estimate the state parameters, such as the position information and the float ambiguities. For verification, we compared our results with those of an International GNSS Service analysis center. The mean errors of the estimated position in the East-West, North-South and Up-Down directions over the five days were 0.9 cm, 0.32 cm, and 1.14 cm at the 95% confidence level.
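
    To make the filtering step concrete, the sketch below shows a minimal linear Kalman predict/update cycle in Python. It is not the paper's PPP implementation: a real PPP filter carries tropospheric delay and float carrier-phase ambiguities in its state and uses a full GNSS observation model, and all matrices and noise values here are hypothetical placeholders.

      # Minimal Kalman predict/update cycle (toy example, not the PPP filter).
      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          # predict
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # update with measurement z
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # toy case: a static 3-D position observed directly with noise
      x, P = np.zeros(3), np.eye(3) * 10.0
      F, H = np.eye(3), np.eye(3)
      Q, R = np.eye(3) * 1e-6, np.eye(3) * 0.25
      rng = np.random.default_rng(0)
      for z in rng.normal([10.0, 20.0, 5.0], 0.5, size=(50, 3)):
          x, P = kalman_step(x, P, z, F, H, Q, R)
      print(np.round(x, 2))   # estimate converges toward the true coordinates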

  10. A High Precision Comprehensive Evaluation Method for Flood Disaster Loss Based on Improved Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yuliang; LU Guihua; JIN Juliang; TONG Fang; ZHOU Ping

    2006-01-01

    Precise comprehensive evaluation of flood disaster loss is significant for the prevention and mitigation of flood disasters. One of the difficulties involved is how to establish a model capable of describing the complex relation between the input and output data of the flood disaster loss system. Genetic programming (GP) solves problems by using ideas from genetic algorithms and generates computer programs automatically. In this study a new method, evaluation of the grade of flood disaster loss (EGFD) on the basis of improved genetic programming (IGP), is presented (IGP-EGFD). The flood disaster area and the direct economic loss are taken as the evaluation indexes of flood disaster loss. Clearly, the larger the evaluation index value, the larger the corresponding grade of flood disaster loss; consequently, the IGP code is designed so that the grade of flood disaster loss is an increasing function of the index value. Application of the IGP-EGFD model to Henan Province shows that a good function expression can be obtained within a larger searched function space and that the model is of high precision and considerable practical significance. Thus, IGP-EGFD can be widely used in automatic modeling and other evaluation systems.

  11. High precision detector robot arm system

    Science.gov (United States)

    Shu, Deming; Chu, Yong

    2017-01-31

    A method and high precision robot arm system are provided, for example, for X-ray nanodiffraction with an X-ray nanoprobe. The robot arm system includes duo-vertical-stages and a kinematic linkage system. A two-dimensional (2D) vertical plane ultra-precision robot arm supporting an X-ray detector provides positioning and manipulating of the X-ray detector. A vertical support for the 2D vertical plane robot arm includes spaced apart rails respectively engaging a first bearing structure and a second bearing structure carried by the 2D vertical plane robot arm.

  12. A new measurement method of actual focal spot position of an x-ray tube using a high-precision carbon-interspaced grid

    Science.gov (United States)

    Lee, H. W.; Lim, H. W.; Jeon, D. H.; Park, C. K.; Cho, H. S.; Seo, C. W.; Lee, D. Y.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Woo, T. H.; Oh, J. E.

    2018-06-01

    This study investigated the effectiveness of a new method for measuring the actual focal spot position of a diagnostic x-ray tube using a high-precision antiscatter grid and a digital x-ray detector, in which the grid magnification, which is directly related to the focal spot position, was determined from the Fourier spectrum of the acquired grid image. A systematic experiment was performed to demonstrate the viability of the proposed measurement method. The hardware system used in the experiment consisted of an x-ray tube run at 50 kVp and 1 mA, a flat-panel detector with a pixel size of 49.5 µm, and a high-precision carbon-interspaced grid with a strip density of 200 lines/inch. The results indicated that the focal spot of the x-ray tube (Jupiter 5000, Oxford Instruments) used in the experiment was located approximately 31.10 mm inside the exit flange, in good agreement with the nominal value of 31.05 mm, demonstrating the viability of the proposed measurement method. Thus, the proposed method can be utilized for system performance optimization in many x-ray imaging applications.

  13. Inter-lab comparison of precision and recommended methods for age estimation of Florida manatee (Trichechus manatus latirostris) using growth layer groups in earbones

    Directory of Open Access Journals (Sweden)

    Katherine Brill

    2016-06-01

    Full Text Available Manatees are routinely aged by counting Growth Layer Groups (GLGs) in periotic bones (earbones). Manatee carcasses recovered in Florida between 1974 and 2010 provided age-estimation material for three readers and formed the basis for a retrospective analysis of aging precision (repeatability). All readers were in good agreement (high precision), with the greatest apparent source of variation being the result of earbone remodelling with increasing manatee age. Over the same period, methods of sample preparation and of determining a final age estimate changed. We examined the effects of altering methods on ease of reading GLGs and found no statistical differences. Accurate age estimates are an important component for effective management of the species and for better models of population trends, and we summarize the currently recommended methods for estimating manatee ages using earbones.

  14. Research on Ship Trajectory Tracking with High Precision Based on LOS

    Directory of Open Access Journals (Sweden)

    Hengzhi Liu

    2018-01-01

    Full Text Available To achieve high-precision trajectory tracking based on line-of-sight (LOS) guidance, a method is proposed. The method combines the advantages of LOS (simplicity, intuitiveness, easy parameter setting and good convergence) with the features of GPC (softening, multi-step prediction, rolling optimization, and strong controllability and robustness). To verify the effectiveness of the method, it is simulated in Matlab. The simulation results show that it achieves highly precise ship trajectory tracking.

  15. Mechanics and Physics of Precise Vacuum Mechanisms

    CERN Document Server

    Deulin, E. A; Panfilov, Yu V; Nevshupa, R. A

    2010-01-01

    In this book the Russian expertise in the field of the design of precise vacuum mechanics is summarized. A wide range of physical applications of mechanism design in electronic, optical-electronic, chemical, and aerospace industries is presented in a comprehensible way. Topics treated include the method of microparticles flow regulation and its determination in vacuum equipment and mechanisms of electronics; precise mechanisms of nanoscale precision based on magnetic and electric rheology; precise harmonic rotary and not-coaxial nut-screw linear motion vacuum feedthroughs with technical parameters considered the best in the world; elastically deformed vacuum motion feedthroughs without friction couples usage; the computer system of vacuum mechanisms failure predicting. This English edition incorporates a number of features which should improve its usefulness as a textbook without changing the basic organization or the general philosophy of presentation of the subject matter of the original Russian work. Exper...

  16. The application of integrated geophysical methods composed of AMT and high-precision ground magnetic survey to the exploration of granite uranium deposits

    International Nuclear Information System (INIS)

    Qiao Yong; Shen Jingbang; Wu Yong; Wang Zexia

    2014-01-01

    Two integrated geophysical methods, AMT and high-precision ground magnetic survey, were applied to the exploration of granite uranium deposits in the Yin gongshan area in the middle part of Nei Monggol. Through method experiments and analysis of the application results, it is concluded that AMT has good vertical resolution and can reliably determine the thickness of rock masses, the position of fractures and deep conditions, and the spatial distribution features of fracture zones, etc., but it does not image rock masses and xenoliths clearly. High-precision ground magnetic survey can delineate the distribution range of rock masses and xenoliths and identify rock contact zones, fractures, etc., but it generally measures only position and does not clearly resolve occurrence or extension. Some geological structures can be resolved by using the integrated methods on the basis of their complementary advantages. Effective technological measures are thus provided for the exploration of deeply buried uranium bodies in granite uranium deposits and for extension to the outskirts of the deposits. (authors)

  17. Determination of fat, moisture, and protein in meat and meat products by using the FOSS FoodScan Near-Infrared Spectrophotometer with FOSS Artificial Neural Network Calibration Model and Associated Database: collaborative study.

    Science.gov (United States)

    Anderson, Shirley

    2007-01-01

    A collaborative study was conducted to evaluate the repeatability and reproducibility of the FOSS FoodScan near-infrared spectrophotometer with artificial neural network calibration model and database for the determination of fat, moisture, and protein in meat and meat products. Representative samples were homogenized by grinding according to AOAC Official Method 983.18. Approximately 180 g ground sample was placed in a 140 mm round sample dish, and the dish was placed in the FoodScan. The operator ID was entered, the meat product profile within the software was selected, and the scanning process was initiated by pressing the "start" button. Results were displayed for percent (g/100 g) fat, moisture, and protein. Ten blind duplicate samples were sent to 15 collaborators in the United States. The within-laboratory (repeatability) relative standard deviation (RSD(r)) ranged from 0.22 to 2.67% for fat, 0.23 to 0.92% for moisture, and 0.35 to 2.13% for protein. The between-laboratories (reproducibility) relative standard deviation (RSD(R)) ranged from 0.52 to 6.89% for fat, 0.39 to 1.55% for moisture, and 0.54 to 5.23% for protein. The method is recommended for Official First Action.
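
    To make the RSD(r)/RSD(R) terminology concrete, the sketch below shows one common way to derive repeatability and reproducibility relative standard deviations from blind-duplicate collaborative data, using the usual one-way ANOVA decomposition for duplicates. The numbers are hypothetical, not the study's data.

      # Repeatability (RSDr) and reproducibility (RSDR) from blind duplicates.
      import numpy as np

      # rows = laboratories, columns = blind duplicates of one sample (e.g. % fat)
      results = np.array([
          [12.1, 12.3],
          [12.6, 12.4],
          [11.9, 12.0],
          [12.2, 12.5],
      ])

      lab_means = results.mean(axis=1)
      grand_mean = results.mean()
      s_r2 = np.mean((results[:, 0] - results[:, 1]) ** 2) / 2.0   # within-lab variance
      s_L2 = max(lab_means.var(ddof=1) - s_r2 / 2.0, 0.0)          # between-lab component
      s_R2 = s_r2 + s_L2                                           # reproducibility variance

      print(f"RSDr = {np.sqrt(s_r2) / grand_mean * 100:.2f}%")
      print(f"RSDR = {np.sqrt(s_R2) / grand_mean * 100:.2f}%")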

  18. Determination of ametoctradin residue in fruits and vegetables by modified quick, easy, cheap, effective, rugged, and safe method using ultra-performance liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Hu, Mingfeng; Liu, Xingang; Dong, Fengshou; Xu, Jun; Li, Shasha; Xu, Hanqing; Zheng, Yongquan

    2015-05-15

    A rapid, effective and sensitive method to quantitatively determine ametoctradin residue in apple, cucumber, cabbage, tomato and grape was developed and validated using ultra-performance liquid chromatography coupled with tandem mass spectrometry (UPLC-MS/MS). The target compound was determined in less than 5.0 min using an electrospray ionisation source in positive mode (ESI+). The limit of detection was below 0.043 μg kg⁻¹, whereas the limits of quantification did not exceed 0.135 μg kg⁻¹ in all five matrices. The method showed excellent linearity (R² > 0.9969) for the target compound. Recovery studies were performed in all matrices at three spiked levels (1, 10 and 100 μg L⁻¹). The mean recoveries from five matrices ranged from 81.81% to 100.1%, with intra-day relative standard deviations (RSDr) in the range of 0.65-7.88% for the test compound. This method will be useful for the quick and routine detection of ametoctradin residues in potato, grape, cucumber, apple and tomato. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Long-term in-vitro precision of direct digital X-ray radiogrammetry

    International Nuclear Information System (INIS)

    Dhainaut, Alvilde; Hoff, Mari; Kaelvesten, Johan; Lydersen, Stian; Forslind, Kristina; Haugeberg, Glenn

    2011-01-01

    Digital X-ray radiogrammetry (DXR) calculates peripheral bone mineral density (BMD) from hand radiographs. The short-term precision for direct DXR has been reported to be highly satisfactory. However, long-term precision for this method has not been examined. Thus, the aim of this study was to examine the long-term in-vitro precision for the new direct digital version of DXR. The in-vitro precision for direct DXR was tested with cadaver phantoms on four different X-ray systems at baseline, 3 months, 6 months, and in one machine also at 12 months. At each time point, 31 measurements were performed. The in-vitro longitudinal precision for the four radiographic systems ranged from 0.22 to 0.43% expressed as coefficient of variation (CV%). The smallest detectable difference (SDD) ranged from 0.0034 to 0.0054 g/cm². The in-vitro long-term precision for direct DXR was comparable to the previously reported short-term in-vitro precision for all tested X-ray systems. These data show that DXR is a stable method for detecting small changes in bone density during 6-12 months of follow-up. (orig.)

  20. Precise Plan in the analysis of volume precision in Synergy™ conebeam CT image

    International Nuclear Information System (INIS)

    Bai Sen; Xu Qingfeng; Zhong Renming; Jiang Xiaoqin; Jiang Qingfeng; Xu Feng

    2007-01-01

    Objective: To establish a method for checking the volume precision of Synergy™ conebeam CT images. Methods: Known phantoms (large, medium and small spheres, cubes and a cuneiform cavity) were scanned with conebeam CT at different positions (the CBCT centre and 5, 8 and 10 cm off-centre along the accelerator G-T direction), and the phantom volumes were measured on the reconstructed images. The volumes measured with Synergy™ conebeam CT were then compared with the fanbeam CT results and the nominal values. Results: For the medium spheres, the discrepancy between nominal values and measured average values was 1.5% at the CBCT centre and at 5 and 8 cm off-centre along the accelerator G-T direction. At the CBCT centre and at 5, 8 and 10 cm off-centre along the accelerator G-T direction, the discrepancy was 8.1% for the small spheres, 0.8% for the large cube and 2.9% for the small cube. Conclusion: Within the valid scan range of Synergy™ conebeam CT, the reconstruction precision is independent of the distance from the centre. (authors)

  1. PRECISE - pregabalin in addition to usual care: Statistical analysis plan

    NARCIS (Netherlands)

    S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)

    2016-01-01

    textabstractBackground: Sciatica is a severe, disabling condition that lacks high quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double

  2. A flexible fluorescence correlation spectroscopy based method for quantification of the DNA double labeling efficiency with precision control

    International Nuclear Information System (INIS)

    Hou, Sen; Tabaka, Marcin; Sun, Lili; Trochimczyk, Piotr; Kaminski, Tomasz S; Kalwarczyk, Tomasz; Zhang, Xuzhu; Holyst, Robert

    2014-01-01

    We developed a laser-based method to quantify the double labeling efficiency of double-stranded DNA (dsDNA) in a fluorescent dsDNA pool with fluorescence correlation spectroscopy (FCS). Although accurate measurement of this parameter is of critical importance for quantitative biochemistry, before our work it was almost impossible to quantify what percentage of DNA is doubly labeled with the same dye. The dsDNA is produced by annealing complementary single-stranded DNA (ssDNA) labeled with the same dye at the 5′ end. Due to imperfect ssDNA labeling, the resulting dsDNA is a mixture of doubly labeled dsDNA, singly labeled dsDNA and unlabeled dsDNA. Our method allows the percentage of doubly labeled dsDNA in the total fluorescent dsDNA pool to be measured. In this method, we excite the imperfectly labeled dsDNA sample in a focal volume of <1 fL with a laser beam and correlate the fluctuations of the fluorescence signal to get the FCS autocorrelation curves; we express the amplitudes of the autocorrelation function as a function of the DNA labeling efficiency; we perform a comparative analysis of a dsDNA sample and a reference dsDNA sample, which is prepared by increasing the total dsDNA concentration c (c > 1) times by adding unlabeled ssDNA during the annealing process. The method is flexible in that it allows for the selection of the reference sample and the c value can be adjusted as needed for a specific study. We express the precision of the method as a function of the ssDNA labeling efficiency or the dsDNA double labeling efficiency. The measurement precision can be controlled by changing the c value. (letter)
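
    A back-of-the-envelope relation added for illustration (it assumes the two complementary strands are labeled independently with efficiency p and anneal completely, a simplification of the mixture described above): the annealed pool then contains fractions p² doubly labeled, 2p(1-p) singly labeled and (1-p)² unlabeled dsDNA, so the doubly labeled share of the fluorescent pool is

      f_{2} = \frac{p^{2}}{p^{2} + 2p(1-p)} = \frac{p}{2-p}

    for example, an ssDNA labeling efficiency of p = 0.8 would give f₂ ≈ 0.67.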

  3. Precision medicine in myasthenia gravis: begin from the data precision

    Science.gov (United States)

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. The data on MG are far from individually precise at present, partly due to the rarity and heterogeneity of this disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  4. Precision muon physics

    Science.gov (United States)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ /μp, lepton mass ratio mμ /me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  5. Utilization of special computerized tomography and nuclear medicine techniques for quality control and for the optimization of combined precision chemotherapy and precision radiation therapy

    International Nuclear Information System (INIS)

    Wiley, A.L. Jr.; Wirtanen, G.W.; Chien, I.-C.

    1984-01-01

    A combination of precision (selective, intra-arterial) chemotherapy and precision radiotherapy can be used for advanced pancreatic, biliary tract, and sarcomatous malignancies. There were some remarkable responses, but also a few poor responses and even some morbidity. Accordingly, methods were developed for pre-selecting those patients whose tumors are likely to respond to such therapy, as well as for improving the therapeutic ratio by the rational optimization of combined therapy. Specifically, clinical tumor blood flow characteristics (monitored with nuclear medicine techniques) may provide useful criteria for such selection. The authors also qualitatively evaluate the drug distribution, or exposure space, with specialized color-coded computerized tomography images, which demonstrate spatially dependent enhancement of intra-arterial contrast in tumor and in adjacent normal tissues. Such clinical data can improve the quality control aspects of intra-arterial chemotherapy administration, as well as the possibility of achieving a significant therapeutic ratio by integrating precision chemotherapy and precision radiation therapy. (Auth.)

  6. Application of precise MPD & pressure balance cementing technology

    Directory of Open Access Journals (Sweden)

    Yong Ma

    2018-03-01

    Full Text Available The precise managed pressure drilling (MPD) technology is mainly used to deal with the difficulties encountered when oil and gas open-hole sections with multiple pressure systems and strata with a narrow safety density window are drilled through. If liner cementing is carried out according to the conventional method, lost circulation is inevitable during cementing even when the displacement efficiency of small-clearance liner cementing is satisfied. If the positive and inverse injection technology is adopted, the cementing quality cannot meet the requirements of the later well test engineering of ultradeep wells. In this paper, the cementing operation of the Ø114.3 mm liner in Well Longgang 70, which was drilled in the Jiange structure of the Sichuan Basin, was taken as an example to explore the application of the cementing technology based on precise MPD and the pressure balancing method to the cementing of long open-hole sections (as long as 859 m) with both high and low pressures running through multiple reservoirs. On the one hand, technical measures were taken specifically to ensure the annulus filling efficiency of the slurry and the pressure balance throughout the cementing process. On the other hand, the annulus pressure balance was precisely controlled by means of precise MPD devices and by injecting heavy-weight drilling fluids through the central pipes, so that the wellbore pressure was kept steady throughout cementing in strata with a narrow safety density window. The Ø114.3 mm liner cementing in this well is shown to be good, with qualified pressure tests and no channeling at the funnel during the staged density reduction. It is concluded that this method can enhance the liner cementing quality of complex ultradeep gas wells and improve the wellbore conditions for the later safe well tests of high-pressure gas wells. Keywords: Ultradeep well, Liner cementing, Narrow safety density window, Precise

  7. Precision machining commercialization

    International Nuclear Information System (INIS)

    1978-01-01

    To accelerate precision machining development so as to realize more of the potential savings within the next few years of known Department of Defense (DOD) part procurement, the Air Force Materials Laboratory (AFML) is sponsoring the Precision Machining Commercialization Project (PMC). PMC is part of the Tri-Service Precision Machine Tool Program of the DOD Manufacturing Technology Five-Year Plan. The technical resources supporting PMC are provided under sponsorship of the Department of Energy (DOE). The goal of PMC is to minimize precision machining development time and cost risk for interested vendors. PMC will do this by making available the high precision machining technology as developed in two DOE contractor facilities, the Lawrence Livermore Laboratory of the University of California and the Union Carbide Corporation, Nuclear Division, Y-12 Plant, at Oak Ridge, Tennessee

  8. A new method for precise determination of iron, zinc and cadmium stable isotope ratios in seawater by double-spike mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Conway, Tim M., E-mail: conway.tm@gmail.com [Department of Earth and Ocean Sciences, University of South Carolina, Columbia, SC 29208 (United States); Rosenberg, Angela D. [Department of Earth and Ocean Sciences, University of South Carolina, Columbia, SC 29208 (United States); Adkins, Jess F. [California Institute of Technology, Division of Geological and Planetary Sciences, Pasadena, CA 91125 (United States); John, Seth G. [Department of Earth and Ocean Sciences, University of South Carolina, Columbia, SC 29208 (United States)

    2013-09-02

    Graphical abstract: ‘Metal-free’ seawater doped with varying concentrations of ‘zero’ isotope standards, processed through our simultaneous method, and then analyzed by double spike MC-ICPMS for Fe, Zn and Cd isotope ratios. All values were determined within 2σ error (error bars shown) of zero. -- Highlights: •The first simultaneous method for isotopic analysis of Fe, Zn and Cd in seawater. •Designed for 1 L samples, a 1–20 fold improvement over previous methods. •Low blanks and high precision allow measurement of low concentration samples. •Small volume and fast processing are ideal for high-resolution large-scale studies. •Will facilitate investigation of marine trace-metal isotope cycling. -- Abstract: The study of Fe, Zn and Cd stable isotopes (δ⁵⁶Fe, δ⁶⁶Zn and δ¹¹⁴Cd) in seawater is a new field, which promises to elucidate the marine cycling of these bioactive trace metals. However, the analytical challenges posed by the low concentration of these metals in seawater have meant that previous studies have typically required large sample volumes, highly limiting data collection in the oceans. Here, we present the first simultaneous method for the determination of these three isotope systems in seawater, using Nobias PA-1 chelating resin to extract metals from seawater, purification by anion exchange chromatography, and analysis by double spike MC-ICPMS. This method is designed for use on only a single litre of seawater and has blanks of 0.3, 0.06 and <0.03 ng for Fe, Zn and Cd respectively, representing a 1–20 fold reduction in sample size and a 4–130 fold decrease in blank compared to previously reported methods. The procedure yields data with high precision for all three elements (typically 0.02–0.2‰; 1σ internal precision), allowing us to distinguish natural variability in the oceans, which spans 1–3‰ for all three isotope systems. Simultaneous extraction and purification of three metals makes this method ideal

  9. GTCBio's Precision Medicine Conference (July 7-8, 2016 - Boston, Massachusetts, USA).

    Science.gov (United States)

    Cole, P

    2016-09-01

    GTCBio's Precision Medicine Conference met this year to outline the many steps forward that precision medicine and individualized genomics has made and the challenges it still faces in technological, modeling, and standards development, interoperability and compatibility advancements, and methods of economic and societal adoption. The conference was split into four sections, 'Overcoming Challenges in the Commercialization of Precision Medicine', 'Implementation of Precision Medicine: Strategies & Technologies', 'Integrating & Interpreting Personal Genomics, Big Data, & Bioinformatics' and 'Incentivizing Precision Medicine: Regulation & Reimbursement', with this report focusing on the final two subjects. Copyright 2016 Prous Science, S.A.U. or its licensors. All rights reserved.

  10. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    Science.gov (United States)

    Wittig, Alexander N.

    The well established concept of Taylor Models is introduced, which offer highly accurate C⁰ enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by

  11. Ontology-based coupled optimisation design method using state-space analysis for the spindle box system of large ultra-precision optical grinding machine

    Science.gov (United States)

    Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian

    2017-08-01

    With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, as various factors and different stages inevitably become coupled during the design process. Management of massive information or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. To overcome these difficulties in the design of the spindle box system of a large ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by the simulation results of the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.

  12. Precise Calculation of Complex Radioactive Decay Chains

    National Research Council Canada - National Science Library

    Harr, Logan J

    2007-01-01

    ...). An application of the exponential moments function is used with a transmutation matrix in the calculation of complex radioactive decay chains to achieve greater precision than can be attained through current methods...

  13. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  14. Why precision?

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes

    2012-05-15

    Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given on recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of the Elementary Particles and their application in current high energy collider data analyses.

  15. Why precision?

    International Nuclear Information System (INIS)

    Bluemlein, Johannes

    2012-05-01

    Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given on recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of the Elementary Particles and their application in current high energy collider data analyses.

  16. Characterisation of surface roughness for ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Li Huifen; Cheung, C F; Lee, W B; To, S; Jiang, X Q

    2005-01-01

    Ultra-precision freeform surfaces are widely used in many advanced optics applications which demand surface roughness down to the nanometer range. Although a lot of research work has been reported on surface generation, reconstruction and surface characterization, such as MOTIF and fractal analysis, most of it has focused on axially symmetric surfaces such as aspheric surfaces. Relatively little research work has addressed the characterization of surface roughness of ultra-precision freeform surfaces. In this paper, a novel Robust Gaussian Filtering (RGF) method is proposed for the characterisation of surface roughness of ultra-precision freeform surfaces with a known mathematical model or a cloud of discrete points. A series of computer simulation and measurement experiments were conducted to verify the capability of the proposed method. The experimental results were found to agree well with the theoretical results.
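
    For context, the sketch below illustrates the standard (non-robust) Gaussian profile filter commonly used in roughness evaluation: low-pass the measured profile to obtain the waviness, take the residual as the roughness, and compute Ra and Rq. The robust Gaussian filter proposed in the paper additionally down-weights outliers and handles freeform geometry; that refinement, and all numbers below, are illustrative assumptions rather than the paper's procedure.

      # Standard Gaussian profile filtering and roughness parameters (sketch).
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      dx = 0.5e-6                      # sampling step, 0.5 um (hypothetical)
      cutoff = 80e-6                   # cut-off wavelength, 80 um (hypothetical)
      x = np.arange(0, 4e-3, dx)
      profile = (50e-9 * np.sin(2 * np.pi * x / 200e-6)
                 + np.random.normal(0, 10e-9, x.size))

      # Gaussian weighting function: sigma = cutoff * sqrt(ln 2 / (2 pi^2))
      sigma = cutoff * np.sqrt(np.log(2) / (2 * np.pi ** 2)) / dx
      waviness = gaussian_filter1d(profile, sigma)
      roughness = profile - waviness

      Ra = np.mean(np.abs(roughness))
      Rq = np.sqrt(np.mean(roughness ** 2))
      print(f"Ra = {Ra * 1e9:.1f} nm, Rq = {Rq * 1e9:.1f} nm")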

  17. Design and algorithm research of high precision airborne infrared touch screen

    Science.gov (United States)

    Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan

    2016-10-01

    Infrared touch screens suffer from low precision, touch jitter, and a sharp decrease in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, while the impeded state is recorded as 1. Then, an oblique scanning method is used in which the light of one emitting tube is received by five receiving tubes, and the impeded-state information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated by arithmetic averaging. The extended-axis positioning algorithm is characterized by high precision, and the failure of an individual infrared tube affects the precision only slightly. The experimental results show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. It is concluded that the algorithm based on the extended axis has the advantages of high precision, little impact when an individual infrared tube fails, and ease of use.
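
    The arithmetic-averaging step can be illustrated with a few lines of Python. This is a simplified sketch only: the paper's algorithm additionally uses oblique scanning (one emitting tube read by five receiving tubes) and the extended-axis construction, which are not reproduced here, and the pitch value is hypothetical.

      # Touch position as the arithmetic mean of blocked-beam coordinates.
      import numpy as np

      D = 5.0                                   # emitter pitch in mm (hypothetical)
      x_coords = np.arange(32) * D              # emitter/receiver positions along X
      blocked = np.zeros(32, dtype=int)         # 1 = beam impeded by the touch object
      blocked[10:14] = 1                        # beams 10..13 are blocked

      touch_x = x_coords[blocked == 1].mean()   # arithmetic-average estimate
      print(f"estimated X = {touch_x:.2f} mm")  # midpoint of the blocked span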

  18. Precision mechatronics based on high-precision measuring and positioning systems and machines

    Science.gov (United States)

    Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert

    2007-06-01

    Precision mechatronics is defined in the paper as the science and engineering of a new generation of high precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with uncertainties as small as possible is discussed. The integration of several optical and tactile nanoprobes makes the 3D-nanopositioning machine suitable for various tasks, such as long range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free form surface measurement as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.

  19. Joint Estimation of Multiple Precision Matrices with Common Structures.

    Science.gov (United States)

    Lee, Wonyul; Liu, Yufeng

    Estimation of inverse covariance matrices, known as precision matrices, is important in various areas of statistical analysis. In this article, we consider estimation of multiple precision matrices sharing some common structures. In this setting, estimating each precision matrix separately can be suboptimal as it ignores potential common structures. This article proposes a new approach to parameterize each precision matrix as a sum of common and unique components and estimate multiple precision matrices in a constrained ℓ1 minimization framework. We establish both estimation and selection consistency of the proposed estimator in the high dimensional setting. The proposed estimator achieves a faster convergence rate for the common structure in certain cases. Our numerical examples demonstrate that our new estimator can perform better than several existing methods in terms of the entropy loss and Frobenius loss. An application to a glioblastoma cancer data set reveals some interesting gene networks across multiple cancer subtypes.

  20. About the problems and perspectives of making precision compressor blades

    Directory of Open Access Journals (Sweden)

    V. E. Galiev

    2014-01-01

    Full Text Available The problems of manufacturing blades with high-precision profile geometry are considered in the article. The variant of the technology under development rules out the use of mechanical processing methods for the blade airfoil. The article consists of an introduction and six short sections. The introduction sets out the requirements for modern aircraft engines, lists the problems arising in their manufacture, and notes the relevance of the work. The first section analyzes the existing technology for precision blades; an illustration reflects the stages of the process, and their advantages and disadvantages are noted. The second section provides an illustration showing the basing systems of the blades used in the manufacturing process and a model of the workpiece for the technology being developed; an analysis of each basing scheme is presented. The third section lists the existing methods for controlling the geometrical parameters of the blade airfoil and presents the measurement error data of the devices; special attention is paid to the impossibility of controlling the accuracy of the geometrical parameters of precision blades. The fourth section presents the advantages of the electrochemical machining method, with consistent vibration of the tool-electrode and pulsed technological current, over the traditional method. The article presents the airfoil accuracy and surface roughness achieved owing to precision electrochemical machining, and illustrates machines that implement the given method of processing and components manufactured on them. The fifth section describes the steps of the developed process with justification for the use of the proposed operations. Based on the analysis, the author argues that the application of the proposed process to manufacture precision compressor blades ensures production of items that meet the requirements of the drawing.

  1. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. © 2016 The Authors.

  2. High Precision Fast Projective Synchronization for Chaotic Systems with Unknown Parameters

    Science.gov (United States)

    Nian, Fuzhong; Wang, Xingyuan; Lin, Da; Niu, Yujun

    2013-08-01

    A high-precision fast projective synchronization method for chaotic systems with unknown parameters is proposed by introducing an optimal matrix. Numerical simulations indicate that the precision is improved by about three orders of magnitude compared with other common methods under the same software and hardware conditions. Moreover, when the average error is less than 10⁻³, the synchronization speed is 6500 times that of common methods, requiring only 4 iterations. The unknown parameters are also identified rapidly. Theoretical analysis and proof are also given.

  3. The Analysis of Height System Definition and the High Precision GNSS Replacing Leveling Method

    Directory of Open Access Journals (Sweden)

    ZHANG Chuanyin

    2017-08-01

    Full Text Available Based on the definition of the height system, the gravitational equipotential property of the height datum surface is discussed in this paper, and differences between the heights of ground points defined in different height systems are tested and analyzed as well. A new method for replacing leveling using GNSS is proposed to ensure consistency between GNSS replacing leveling and spirit leveling at the mm accuracy level. The main conclusions include: ① For determining normal height at the centimeter accuracy level, the datum surface of normal height should be the geoid. The 1985 national height datum of China adopts the normal height system; its datum surface is the geoid passing through the Qingdao zero point. ② The surface of equal orthometric height in near-Earth space is parallel to the geoid. The combination of GNSS precise positioning and a geoid model can be used directly for orthometric height determination; however, the normal height system is more advantageous for describing terrain and relief. ③ With the proposed method of GNSS replacing leveling, errors in the geodetic height affect the normal height result more than errors in the geoid model; the contribution of the former is about 1.5 times that of the latter.
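
    For orientation, the standard relations that GNSS replacing leveling rests on (textbook geodesy, not results of this paper) connect the GNSS-derived geodetic (ellipsoidal) height h to the physical heights through a geoid or quasi-geoid model:

      H_{\mathrm{ortho}} = h - N, \qquad H_{\mathrm{normal}} = h - \zeta

    where N is the geoid undulation and ζ the height anomaly; if the two error sources are independent, the uncertainty of the derived height is roughly \sigma_H^2 \approx \sigma_h^2 + \sigma_N^2, which is the kind of error budget the third conclusion above refers to.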

  4. Longitudinal interfacility precision in single-energy quantitative CT

    International Nuclear Information System (INIS)

    Morin, R.L.; Gray, J.E.; Wahner, H.W.; Weekes, R.G.

    1987-01-01

    The authors investigated the precision of single-energy quantitative CT measurements between two facilities over 3 months. An anthropomorphic phantom with calcium hydroxyapatite inserts (60, 100, and 160 mg/cc) was used with the Cann-Gennant method to measure bone mineral density. The same model CT scanner, anthropomorphic phantom, quantitative CT standard and analysis package were utilized at each facility. Acquisition and analysis techniques were identical to those used in patient studies. At one facility, 28 measurements yielded an average precision of 6.1% (5.0%-8.5%). The average precision for 39 measurements at the other facility was 4.3% (3.2%-8.1%). Successive scans with phantom repositioning between scans yielded an average precision of about 3% (1%-4% without repositioning). Despite differences in personnel, scanners, standards, and phantoms, the variation between facilities was about 2%, which was within the intrafacility variation of about 5% at each location.

  5. The theory precision analyse of RFM localization of satellite remote sensing imagery

    Science.gov (United States)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of the Rational Function Model (RFM) is to use a large number of check points and to calculate the mean square error by comparing the computed coordinates with the known coordinates. This approach comes from probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be considered to approach its true value when enough samples are available. This paper instead starts from the perspective of survey adjustment, takes the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line array imagery is then used as experimental data, and the results of the traditional method and of the method described in the paper are compared, which confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey adjustment perspective.
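
    For reference, the law of propagation of error invoked above takes the usual first-order form: if the derived ground coordinates are Y = F(X) for parameters X with covariance matrix Σ_XX, then

      \Sigma_{YY} = J\,\Sigma_{XX}\,J^{\mathsf{T}}, \qquad J = \frac{\partial F}{\partial X}

    so the theoretical precision of the RFM-derived coordinates follows from the covariance of the rational-function coefficients and of the image measurements. The specific Jacobian and covariances used in the paper are not given in the abstract.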

  6. Precision shape modification of nanodevices with a low-energy electron beam

    Science.gov (United States)

    Zettl, Alex; Yuzvinsky, Thomas David; Fennimore, Adam

    2010-03-09

    Methods of shape modifying a nanodevice by contacting it with a low-energy focused electron beam are disclosed here. In one embodiment, a nanodevice may be permanently reformed to a different geometry through an application of a deforming force and a low-energy focused electron beam. With the addition of an assist gas, material may be removed from the nanodevice through application of the low-energy focused electron beam. The independent methods of shape modification and material removal may be used either individually or simultaneously. Precision cuts with accuracies as high as 10 nm may be achieved through the use of precision low-energy Scanning Electron Microscope scan beams. These methods may be used in an automated system to produce nanodevices of very precise dimensions. These methods may be used to produce nanodevices of carbon-based, silicon-based, or other compositions by varying the assist gas.

  7. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    Science.gov (United States)

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
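
    The selection idea can be sketched in a few lines of Python using SciPy's differential_evolution: each gene in [0, 1] toggles a spectral channel on or off, and the fitness is the calibration residual of a least-squares fit restricted to the selected channels, plus a small penalty on the number of channels. Synthetic two-component spectra stand in for real THz absorption data; this is an illustration of DE-based wavelength selection in general, not the paper's exact procedure.

      # DE-based channel (wavelength) selection on synthetic mixture spectra.
      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(0)
      n_channels, n_samples = 60, 25
      freqs = np.linspace(0.2, 2.0, n_channels)            # THz axis (arbitrary)
      pure_a = np.exp(-(freqs - 0.8) ** 2 / 0.02)          # component A band
      pure_b = np.exp(-(freqs - 1.4) ** 2 / 0.05)          # component B band
      conc = rng.uniform(0, 1, size=(n_samples, 2))        # known mixture ratios
      spectra = conc @ np.vstack([pure_a, pure_b])
      spectra += rng.normal(0, 0.05, spectra.shape)        # scattering / noise

      def fitness(genes):
          mask = genes > 0.5
          if mask.sum() < 2:                               # need at least 2 channels
              return 1e6
          X = spectra[:, mask]
          beta, *_ = np.linalg.lstsq(X, conc, rcond=None)  # calibrate on selection
          resid = conc - X @ beta
          return np.sqrt(np.mean(resid ** 2)) + 0.001 * mask.sum()

      result = differential_evolution(fitness, bounds=[(0, 1)] * n_channels,
                                      maxiter=30, popsize=10, seed=1, polish=False)
      selected = np.where(result.x > 0.5)[0]
      print("selected channels (THz):", np.round(freqs[selected], 2))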

  8. Computer-determined assay time based on preset precision

    International Nuclear Information System (INIS)

    Foster, L.A.; Hagan, R.; Martin, E.R.; Wachter, J.R.; Bonner, C.A.; Malcom, J.E.

    1994-01-01

    Most current assay systems for special nuclear materials (SNM) operate on the principle of a fixed assay time which provides acceptable measurement precision without sacrificing the required throughput of the instrument. Waste items to be assayed for SNM content can contain a wide range of nuclear material. Counting all items for the same preset assay time results in a wide range of measurement precision and wastes time at the upper end of the calibration range. A short count taken at the beginning of the assay could instead be used to optimize the analysis time on the basis of the required measurement precision. To illustrate the technique of automatically determining the assay time, measurements were made with a segmented gamma scanner at the Plutonium Facility of Los Alamos National Laboratory, with the assay time for each segment determined by the counting statistics in that segment. Segments with very little SNM were quickly determined to be below the lower limit of the measurement range and the measurement was stopped. Segments with significant SNM were optimally assayed to the preset precision. With this method the total assay time for each item is determined by the desired preset precision. This report describes the precision-based algorithm and presents the results of measurements made to test its validity.
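
    The counting-statistics idea behind such a precision-based stopping rule can be sketched as follows (a simplified illustration assuming pure Poisson counting with background neglected; the instrument's actual algorithm also handles background subtraction and below-range termination, and all numbers are hypothetical).

      # Assay time needed to reach a preset relative precision from a pre-count.
      def required_time(precount_counts, precount_time, target_rel_precision,
                        max_time=600.0):
          """Time (s) so that 1/sqrt(N) <= target relative precision."""
          rate = precount_counts / precount_time            # counts per second
          if rate == 0:
              return max_time                               # nothing seen: count to limit
          needed_counts = 1.0 / target_rel_precision ** 2   # sqrt(N)/N = target
          return min(needed_counts / rate, max_time)

      # a segment giving 900 counts in a 10 s pre-count, 1% precision requested
      print(f"{required_time(900, 10.0, 0.01):.0f} s")      # about 111 s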

  9. High-precision reflectivity measurements: improvements in the calibration procedure

    Science.gov (United States)

    Jupe, Marco; Grossmann, Florian; Starke, Kai; Ristau, Detlev

    2003-05-01

    The development of high quality optical components depends heavily on precise characterization procedures. The reflectance and transmittance of laser components are the most important parameters for advanced laser applications. In the industrial fabrication of optical coatings, quality management is generally ensured by spectral photometric methods according to ISO/DIS 15386 at a medium level of accuracy. Especially for high reflecting mirrors, a severe discrepancy in the determination of the absolute reflectivity can be found for spectral photometric procedures. In the first part of the CHOCLAB project, a method for measuring reflectance and transmittance with enhanced precision was developed, which is described in ISO/WD 13697. In the second part of the CHOCLAB project, the evaluation and optimization of the presented method is scheduled. Within this framework an international Round-Robin experiment is currently in progress. During this Round-Robin experiment, distinct deviations could be observed between the results of the high precision measurement facilities of different partners. Based on the extended experiments, the inhomogeneity of the sample reflectivity was identified as one important origin of the deviations. Consequently, this inhomogeneity also influences the calibration procedure. Therefore, a method was developed that allows the calibration of the chopper blade always using the same position on the reference mirror. During the investigations, the homogeneity of several samples was characterized by a surface mapping procedure at 1064 nm. The measurement facility was extended to the additional wavelength of 532 nm and a similar set-up was assembled at 10.6 μm. The high precision reflectivity procedure at the mentioned wavelengths is demonstrated for exemplary measurements.

  10. High-precision quadruple isotope dilution method for simultaneous determination of nitrite and nitrate in seawater by GCMS after derivatization with triethyloxonium tetrafluoroborate

    Energy Technology Data Exchange (ETDEWEB)

    Pagliano, Enea, E-mail: enea.pagliano@nrc-cnrc.gc.ca; Meija, Juris; Mester, Zoltán

    2014-05-01

    Highlights: • High-precision determination of nitrite and nitrate in seawater. • Use of quadruple isotope dilution. • Aqueous Et₃O⁺BF₄⁻ derivatization chemistry for GCMS analysis of nitrite and nitrate. Abstract: Quadruple isotope dilution mass spectrometry (ID⁴MS) has been applied for simultaneous determination of nitrite and nitrate in seawater. ID⁴MS allows high-precision measurements and entails the use of isotopic internal standards (¹⁸O-nitrite and ¹⁵N-nitrate). We include a tutorial on ID⁴MS outlining optimal experimental design which generates results with low uncertainties and obviates the need for direct (separate) evaluation of the procedural blank. Nitrite and nitrate detection was achieved using a headspace GCMS procedure based on single-step aqueous derivatization with triethyloxonium tetrafluoroborate at room temperature. In this paper the sample preparation was revised and fundamental aspects of this chemistry are presented. The proposed method has detection limits in the low parts-per-billion for both analytes, is reliable, precise, and has been validated using a seawater certified reference material (MOOS-2). Simplicity of the experimental design, low detection limits, and the use of quadruple isotope dilution make the present method superior to the state-of-the-art for determination of nitrite and nitrate, and an ideal candidate for reference measurements of these analytes in seawater.

  11. Spike timing precision of neuronal circuits.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2018-04-17

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
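
    The synaptic integration result quoted above is essentially the variance rule for a sample average. As a sketch (assuming the input phase fluctuations are uncorrelated with common variance $\sigma^{2}$, which is an idealisation of the paper's setting):

    $$\mathrm{Var}\!\left(\frac{1}{N}\sum_{i=1}^{N}\phi_i\right)=\frac{1}{N^{2}}\sum_{i=1}^{N}\mathrm{Var}(\phi_i)=\frac{\sigma^{2}}{N},$$

    so the output timing jitter (standard deviation) shrinks like $1/\sqrt{N}$ as the number of independent synaptic inputs grows.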

  12. Practical precision measurement

    International Nuclear Information System (INIS)

    Kwak, Ho Chan; Lee, Hui Jun

    1999-01-01

    This book introduces the basics of precision measurement: measurement of length, precision measurement of minor diameters, measurement of angles, measurement of surface roughness, three-dimensional measurement, measurement of locations and shapes, measurement of screw threads, gear testing, cutting tool testing, rolling bearing testing, and digitalisation of measurement. It covers the height gauge, surface roughness testing, measurement of flatness and straightness, external and internal thread testing, gear tooth measurement, milling cutters, taps, rotation precision measurement, and optical transducers.

  13. Evaluation of precision and accuracy of neutron activation analysis method of environmental samples analysis

    International Nuclear Information System (INIS)

    Wardani, Sri; Rina M, Th.; L, Dyah

    2000-01-01

    The precision and accuracy of the Neutron Activation Analysis (NAA) method used by P2TRR were evaluated by analyzing standard reference materials from the National Institute for Environmental Studies (NIES) of Japan (CRM No. 10, rice flour) and the National Bureau of Standards (NBS) of the USA (SRM 1573a, tomato leaves). Qualitative NAA identified the following multi-element contents: Br, Ca, Co, Cl, Cs, Gd, I, K, La, Mg, Mn, Na, Pa, Sb, Sm, Sr, Ta, Th, and Zn (19 elements) for SRM 1573a; As, Br, Cr, Cl, Ce, Co, Cs, Fe, Ga, Hg, K, Mn, Mg, Mo, Na, Ni, Pb, Rb, Sr, Se, Sc, Sb, Ti, and Zn (25 elements) for CRM No. 10a; Ag, As, Br, Cr, Cl, Ce, Cd, Co, Cs, Eu, Fe, Ga, Hg, K, Mg, Mn, Mo, Na, Nb, Pb, Rb, Sb, Sc, Th, Tl, and Zn (26 elements) for CRM No. 10b; and As, Br, Co, Cl, Ce, Cd, Ga, Hg, K, Mn, Mg, Mo, Na, Nb, Pb, Rb, Sb, Se, Tl, and Zn (20 elements) for CRM No. 10c. In the quantitative analysis, only some of the elements could be determined, namely As, Co, Cd, Mo, Mn, and Zn. Compared with the NIES or NBS values, the results agreed within deviations of 3% to 15%. Overall, the results show that the method and facilities have good capability, but the irradiation facility and the gamma-ray spectrometry software still require development and further research.

  14. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Full Text Available Assembly precision optimization of complex products can greatly improve product quality. Because many deviation sources are coupled, the target of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, a sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of the assembly dimension variation to the deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences and locating scheme, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created from the deviation transmission paths, and the corresponding scalar equations for each dimension are established. Then, the deviation source sensitivity is calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
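
    The sensitivity definition above can be written compactly; the notation below is assumed for illustration rather than taken from the paper. If an assembly dimension $Y=f(x_1,\dots,x_n)$ depends on the deviation source dimensions $x_i$, a first-order Taylor expansion gives

    $$\Delta Y \approx \sum_{i=1}^{n}\frac{\partial f}{\partial x_i}\,\Delta x_i, \qquad S_i=\left.\frac{\Delta Y}{\Delta x_i}\right|_{\Delta x_j=0,\ j\neq i}=\frac{\partial f}{\partial x_i},$$

    so each sensitivity $S_i$ is simply the partial derivative of the assembly dimension with respect to that deviation source, evaluated from the scalarised vector-loop equations.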

  15. Precise fabrication of X-band accelerating structure

    International Nuclear Information System (INIS)

    Higo, T.; Sakai, H.; Higashi, Y.; Koike, S.; Takatomi, T.

    1994-01-01

    An accelerating structure with a/λ = 0.16 is being fabricated to study a precise fabrication method. A frequency control of each cell at a level better than 10⁻⁴ is required to realize a detuned structure. The present machining level corresponds to a relative frequency error of nearly 1 MHz at 11.4 GHz, which just satisfies the above requirement. To keep this machining precision, the diffusion bonding technique is found preferable for joining the cells. Various diffusion conditions were tried. The frequency change can be kept below 1 MHz at 11.4 GHz and can be controlled to well within that level. (author)

  16. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    Science.gov (United States)

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique.

  17. Elevation data fitting and precision analysis of Google Earth in road survey

    Science.gov (United States)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on several different fitting or interpolation methods to improve the data precision, in order to meet, as far as possible, the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and a neural network method are used in this paper to process Google Earth elevation data, and internal conformity, external conformity and the cross-correlation coefficient are used as evaluation indexes to assess the data processing effect. Results: There is no fitting residual at the fitting points when using the V4 interpolation method; its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but its fit is better where the elevation differences are larger. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be used as the main method, with the neural network method as an auxiliary method in the case of larger elevation differences. Conclusions: Cubic polynomial surface fitting method can obviously
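
    As an illustration of the correction idea described above, here is a minimal sketch (function names, coordinates and the choice of a full cubic basis are assumptions, not taken from the paper) of a cubic polynomial surface fit of the elevation difference at public points, which is then subtracted from Google Earth elevations elsewhere:

```python
# Minimal sketch: fit a cubic polynomial surface to the elevation difference
# (Google Earth elevation minus known elevation) at public control points,
# then subtract the predicted difference from Google Earth data elsewhere.
import numpy as np

def cubic_terms(x, y):
    """Design matrix of a full cubic polynomial surface in (x, y)."""
    return np.column_stack([
        np.ones_like(x), x, y,
        x**2, x*y, y**2,
        x**3, x**2*y, x*y**2, y**3,
    ])

def fit_difference_surface(x, y, ge_elev, known_elev):
    """Least-squares coefficients of the elevation-difference surface."""
    diff = ge_elev - known_elev
    coeffs, *_ = np.linalg.lstsq(cubic_terms(x, y), diff, rcond=None)
    return coeffs

def corrected_elevation(coeffs, x, y, ge_elev):
    """Google Earth elevation minus the fitted difference."""
    return ge_elev - cubic_terms(x, y) @ coeffs
```

    Internal conformity can then be checked from the residuals at the control points, and external conformity from points withheld from the fit.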

  18. A comprehensive method for evaluating precision of transfer alignment on a moving base

    Science.gov (United States)

    Yin, Hongliang; Xu, Bo; Liu, Dezheng

    2017-09-01

    In this study, we propose the use of the Degree of Alignment (DOA) in engineering applications for evaluating the precision of and identifying the transfer alignment on a moving base. First, we derive the statistical formula on the basis of estimations. Next, we design a scheme for evaluating the transfer alignment on a moving base, for which the attitude error cannot be directly measured. Then, we build a mathematical estimation model and discuss Fixed Point Smoothing (FPS), Rauch-Tung-Striebel (RTS), Inverted Sequence Recursive Estimation (ISRE), and Kalman filter estimation methods, which can be used when evaluating alignment accuracy. Our theoretical calculations and simulated analyses show that the DOA reflects not only the alignment time and accuracy but also differences in the maneuver schemes, and is suitable for use as an integrated evaluation index. Furthermore, all four of these algorithms can be used to identify the transfer alignment and evaluate its accuracy. We recommend RTS in particular for engineering applications. Generalized DOAs should be calculated according to the tactical requirements.

  19. Precision luminosity measurement at LHCb with beam-gas imaging

    CERN Document Server

    Barschel, Colin

    The luminosity is the physical quantity which relates the cross-section to the production rate in collider experiments. The cross-section being the particle physics observable of interest, a precise determination of the luminosity is required. This work presents the absolute luminosity calibration results obtained at the Large Hadron Collider beauty (LHCb) experiment at CERN using a novel method based on beam-gas interactions, with data acquired at center-of-mass energies $\sqrt{s}=8$ TeV and $\sqrt{s}=2.76$ TeV. Reconstructed beam-gas interaction vertices in LHCb are used to measure the beam profiles, thus making it possible to determine the beams' overlap integral. An important element of this work was to install and use a neon gas injection system to increase the beam-gas interaction rate. The precision reached with the beam-gas imaging method relies on the two-dimensional beam shape determination developed in this work. For such precision, the interaction vertex resolution is an important ingredient. There...

  20. Precision medicine for cancer with next-generation functional diagnostics.

    Science.gov (United States)

    Friedman, Adam A; Letai, Anthony; Fisher, David E; Flaherty, Keith T

    2015-12-01

    Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.

  1. French Meteor Network for High Precision Orbits of Meteoroids

    Science.gov (United States)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with a precise semi-major axis are available from the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits of these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and the temporal resolution at the same time, and computational problems in finding the meteor position, are illustrated.

  2. Precision half-life measurement of ¹¹C: The most precise mirror transition Ft value

    Science.gov (United States)

    Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.

    2018-03-01

    Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of V_ud in nuclear β decays. ¹¹C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the ¹¹C Ft value is the half-life. Purpose: A high-precision measurement of the ¹¹C half-life was performed, and a new world average half-life was calculated. Method: ¹¹C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t₁/₂ = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t₁/₂(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new ¹¹C world average half-life allows the calculation of an Ft(mirror) value that is now the most precise for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow for the determination of V_ud from this decay.
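
    A world-average half-life of this kind is usually an inverse-variance weighted mean of the individual measurements $t_i\pm\sigma_i$ (a generic sketch of the standard procedure, not necessarily the exact prescription used in the paper, which may also apply a scale factor when $\chi^{2}/\nu>1$):

    $$\bar t_{1/2}=\frac{\sum_i t_i/\sigma_i^{2}}{\sum_i 1/\sigma_i^{2}},\qquad \sigma(\bar t_{1/2})=\left(\sum_i \frac{1}{\sigma_i^{2}}\right)^{-1/2}.$$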

  3. Precise Point Positioning with Partial Ambiguity Fixing.

    Science.gov (United States)

    Li, Pan; Zhang, Xiaohong

    2015-06-10

    Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which an elevation and standard deviation criterion are first used to remove the low-precision ambiguity estimates for AR. Subsequently the success rate and ratio-test are simultaneously used in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities which can be fixed with high confidence. One can apply the proposed PAR method to try to achieve an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. Besides, using the PAR method, the average fixing rate can be increased from 83.5% to 98.2% for static PPP, from 80.1% to 95.2% for kinematic PPP respectively. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to a higher fixing rate.
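
    The selection logic can be pictured with a short sketch. Everything below is an illustrative simplification under stated assumptions (diagonal treatment of the ambiguity covariance, rounding instead of a LAMBDA integer search, made-up thresholds); it is not the authors' implementation:

```python
# Illustrative partial ambiguity resolution (PAR) loop: discard float
# ambiguities with low elevation or large standard deviation, then shrink the
# candidate subset until a bootstrapping success-rate bound and a simple ratio
# test both pass. Correlations in Q are ignored here for brevity.
import numpy as np
from scipy.stats import norm

def bootstrap_success_rate(sigma):
    """Success-rate bound from per-ambiguity standard deviations (cycles)."""
    return float(np.prod(2.0 * norm.cdf(0.5 / sigma) - 1.0))

def partial_fix(a_float, Q, elev_deg,
                min_elev=15.0, max_sigma=0.25, min_success=0.99, min_ratio=2.0):
    sigma = np.sqrt(np.diag(Q))
    keep = np.where((elev_deg >= min_elev) & (sigma <= max_sigma))[0]
    order = keep[np.argsort(sigma[keep])]          # most precise first
    for n in range(len(order), 1, -1):
        idx = order[:n]
        a, s = a_float[idx], sigma[idx]
        if bootstrap_success_rate(s) < min_success:
            continue
        frac = np.abs(a - np.round(a))
        best = np.sum((frac / s) ** 2)             # cost of nearest integers
        flip = ((1.0 - frac) / s) ** 2 - (frac / s) ** 2
        second = best + np.min(flip)               # cheapest single-component flip
        if second / max(best, 1e-12) >= min_ratio:
            return idx, np.round(a)                # subset fixed to integers
    return np.array([], dtype=int), np.array([])   # fall back to the float solution
```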

  4. A new method for precise determination of iron, zinc and cadmium stable isotope ratios in seawater by double-spike mass spectrometry.

    Science.gov (United States)

    Conway, Tim M; Rosenberg, Angela D; Adkins, Jess F; John, Seth G

    2013-09-02

    The study of Fe, Zn and Cd stable isotopes (δ⁵⁶Fe, δ⁶⁶Zn and δ¹¹⁴Cd) in seawater is a new field, which promises to elucidate the marine cycling of these bioactive trace metals. However, the analytical challenges posed by the low concentrations of these metals in seawater have meant that previous studies typically required large sample volumes, highly limiting data collection in the oceans. Here, we present the first simultaneous method for the determination of these three isotope systems in seawater, using Nobias PA-1 chelating resin to extract metals from seawater, purification by anion exchange chromatography, and analysis by double-spike MC-ICPMS. This method is designed for use on only a single litre of seawater and has blanks of 0.3, 0.06 and <0.03 ng for Fe, Zn and Cd respectively, representing a 1-20 fold reduction in sample size and a 4-130 fold decrease in blank compared to previously reported methods. The procedure yields data with high precision for all three elements (typically 0.02-0.2‰; 1σ internal precision), allowing us to distinguish natural variability in the oceans, which spans 1-3‰ for all three isotope systems. Simultaneous extraction and purification of the three metals makes this method ideal for high-resolution, large-scale endeavours such as the GEOTRACES program. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Approximate Methods for the Generation of Dark Matter Halo Catalogs in the Age of Precision Cosmology

    Directory of Open Access Journals (Sweden)

    Pierluigi Monaco

    2016-10-01

    Full Text Available Precision cosmology has recently triggered new attention on the topic of approximate methods for the clustering of matter on large scales, whose foundations date back to the period from the late 1960s to early 1990s. Indeed, although the prospect of reaching sub-percent accuracy in the measurement of clustering poses a challenge even to full N-body simulations, an accurate estimation of the covariance matrix of clustering statistics, not to mention the sampling of parameter space, requires the use of a large number (hundreds in the most favourable cases) of simulated (mock) galaxy catalogs. The combination of a few N-body simulations with a large number of realizations performed with approximate methods gives the most promising approach to solve these problems with a reasonable amount of resources. In this paper I review this topic, starting from the foundations of the methods, then going through the pioneering efforts of the 1990s, and finally presenting the latest extensions and a few codes that are now being used in present-generation surveys and thoroughly tested to assess their performance in the context of future surveys.

  6. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    Science.gov (United States)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false-precision, those bits and bytes beyond the meaningful precision of the data.Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit-Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable.Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
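
    A minimal sketch of the underlying idea, bit shaving of the float32 mantissa (the actual Bit Grooming algorithm alternates shaving and setting trailing bits to stay statistically unbiased, and maps a requested number of significant digits to a bit count; the fixed keep_bits value here is an assumption for illustration):

```python
# Zero all but the leading `keep_bits` of the 23-bit float32 mantissa so that
# the trailing "false precision" bits become long runs of zeros, which a
# lossless compressor such as DEFLATE can then squeeze much harder.
import numpy as np

def shave_mantissa(values, keep_bits):
    shift = 23 - keep_bits                                   # mantissa bits to clear
    mask = np.uint32((0xFFFFFFFF << shift) & 0xFFFFFFFF)
    bits = np.asarray(values, dtype=np.float32).view(np.uint32)
    return (bits & mask).view(np.float32)

data = np.array([3.14159265, 2.71828183, 1.41421356], dtype=np.float32)
print(shave_mantissa(data, keep_bits=12))   # roughly 3-4 significant decimal digits kept
```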

  7. Precision tests of CPT invariance with single trapped antiprotons

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, Stefan [RIKEN, Ulmer Initiative Research Unit, Wako, Saitama (Japan); Collaboration: BASE-Collaboration

    2015-07-01

    The reason for the striking imbalance of matter and antimatter in our Universe has yet to be understood. This is the motivation and inspiration to conduct high-precision experiments comparing the fundamental properties of matter and antimatter equivalents at the lowest energies and with the greatest precision. According to theory, the most sensitive tests of CPT invariance are measurements of the antihydrogen ground-state hyperfine splitting as well as comparisons of the proton and antiproton magnetic moments. Within the BASE collaboration we target the latter. Using a double Penning trap, we recently performed the first direct high-precision measurement of the proton magnetic moment. The achieved fractional precision of 3.3 ppb improves the currently accepted literature value by a factor of 2.5. Application of the method to a single trapped antiproton will improve the precision of the particle's magnetic moment by more than a factor of 1000, thus providing one of the most stringent tests of CPT invariance. In my talk I report on the status and future perspectives of our efforts.

  8. A modified time-of-flight method for precise determination of high speed ratios in molecular beams

    Energy Technology Data Exchange (ETDEWEB)

    Salvador Palau, A.; Eder, S. D., E-mail: sabrina.eder@uib.no; Kaltenbacher, T.; Samelin, B.; Holst, B. [Department of Physics and Technology, University of Bergen, Allégaten 55, 5007 Bergen (Norway); Bracco, G. [Department of Physics and Technology, University of Bergen, Allégaten 55, 5007 Bergen (Norway); CNR-IMEM, Department of Physics, University of Genova, V. Dodecaneso 33, 16146 Genova (Italy)

    2016-02-15

    Time-of-flight (TOF) is a standard experimental technique for determining, among others, the speed ratio S (velocity spread) of a molecular beam. The speed ratio is a measure for the monochromaticity of the beam and an accurate determination of S is crucial for various applications, for example, for characterising chromatic aberrations in focussing experiments related to helium microscopy or for precise measurements of surface phonons and surface structures in molecular beam scattering experiments. For both of these applications, it is desirable to have as high a speed ratio as possible. Molecular beam TOF measurements are typically performed by chopping the beam using a rotating chopper with one or more slit openings. The TOF spectra are evaluated using a standard deconvolution method. However, for higher speed ratios, this method is very sensitive to errors related to the determination of the slit width and the beam diameter. The exact sensitivity depends on the beam diameter, the number of slits, the chopper radius, and the chopper rotation frequency. We present a modified method suitable for the evaluation of TOF measurements of high speed ratio beams. The modified method is based on a systematic variation of the chopper convolution parameters so that a set of independent measurements that can be fitted with an appropriate function are obtained. We show that with this modified method, it is possible to reduce the error by typically one order of magnitude compared to the standard method.

  9. An improved gravity compensation method for high-precision free-INS based on MEC–BP–AdaBoost

    International Nuclear Information System (INIS)

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Li, Jing

    2016-01-01

    In recent years, with the rapid improvement of inertial sensors (accelerometers and gyroscopes), gravity compensation has become more important for improving navigation accuracy in inertial navigation systems (INS), especially for high-precision INS. This paper proposes a mind evolutionary computation (MEC) back propagation (BP) AdaBoost algorithm neural-network-based gravity compensation method that estimates the gravity disturbance on the track based on measured gravity data. A MEC–BP–AdaBoost network-based gravity compensation algorithm used in the training process to establish the prediction model takes the carrier position (longitude and latitude) provided by INS as the input data and the gravity disturbance as the output data, and then compensates the obtained gravity disturbance into the INS’s error equations to restrain the position error propagation. The MEC–BP–AdaBoost algorithm can not only effectively avoid BP neural networks being trapped in local extrema, but also perfectly solve the nonlinearity between the input and output data that cannot be solved by traditional interpolation methods, such as least-square collocation (LSC) interpolation. The accuracy and feasibility of the proposed interpolation method are verified through numerical tests. A comparison of several other compensation methods applied in field experiments, including LSC interpolation and traditional BP interpolation, highlights the superior performance of the proposed method. The field experiment results show that the maximum value of the position error can reduce by 28% with the proposed gravity compensation method. (paper)
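
    A minimal sketch of the interpolation step only, using a plain feed-forward network in place of the MEC-BP-AdaBoost ensemble described above (the survey grid, units and hyper-parameters are assumptions for illustration):

```python
# Train a small neural network to map INS position (latitude, longitude) to
# gravity disturbance using surveyed gravity data, then predict the disturbance
# along the navigation track so it can be fed into the INS error equations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
train_pos = rng.uniform([30.0, 110.0], [31.0, 111.0], size=(500, 2))   # degrees (synthetic)
train_dist = rng.normal(0.0, 30.0, size=500)                           # mGal (synthetic)

scaler = StandardScaler().fit(train_pos)
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(scaler.transform(train_pos), train_dist)

track_pos = rng.uniform([30.0, 110.0], [31.0, 111.0], size=(10, 2))
track_disturbance = net.predict(scaler.transform(track_pos))           # compensate the INS with this
```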

  10. An Empirical Study of Precise Interprocedural Array Analysis

    Directory of Open Access Journals (Sweden)

    Michael Hind

    1994-01-01

    Full Text Available In this article we examine the role played by the interprocedural analysis of array accesses in the automatic parallelization of Fortran programs. We use the PTRAN system to provide measurements of several benchmarks to compare different methods of representing interprocedurally accessed arrays. We examine issues concerning the effectiveness of automatic parallelization using these methods and the efficiency of a precise summarization method.

  11. Design and control of the precise tracking bed based on complex electromechanical design theory

    Science.gov (United States)

    Ren, Changzhi; Liu, Zhao; Wu, Liao; Chen, Ken

    2010-05-01

    Precise tracking technology is widely used in astronomical instruments, satellite tracking and aeronautic test beds. The precise ultra-low-speed tracking drive system is a highly integrated electromechanical system, for which a complex electromechanical design method is adopted to improve the efficiency, reliability and quality of the system over the design and manufacturing cycle. The precise tracking bed is an ultra-exact, ultra-low-speed, high-precision and high-inertia instrument, whose mechanisms and ultra-low-speed operating environment differ from those of general technology. This paper explores the design process based on complex electromechanical optimization design theory; a non-PID control method with CMAC feedforward compensation is used in the servo system of the precise tracking bed, and some simulation results are discussed.

  12. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, there is no existing similarity learning method proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function as the combination of the losses of the relevant images ranked behind the top-ranked irrelevant image, and the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.

  14. Validation of a multi-residue method for the determination of several antibiotic groups in honey by LC-MS/MS.

    Science.gov (United States)

    Bohm, Detlef A; Stachel, Carolin S; Gowik, Petra

    2012-07-01

    The presented multi-method was developed for the confirmation of 37 antibiotic substances from six antibiotic groups: macrolides, lincosamides, quinolones, tetracyclines, pleuromutilines and diaminopyrimidine derivatives. All substances were analysed simultaneously in a single analytical run with the same procedure, including an extraction with buffer, a clean-up by solid-phase extraction, and measurement by liquid chromatography tandem mass spectrometry in ESI+ mode. The method was validated on the basis of an in-house validation concept with factorial design by combination of seven factors to check the robustness in a concentration range of 5-50 μg kg⁻¹. The honeys used were of different types with regard to colour and origin. The values calculated for the validation parameters (decision limit CCα: range 7.5-12.9 μg kg⁻¹; detection capability CCβ: range 9.4-19.9 μg kg⁻¹; within-laboratory reproducibility RSD(wR): at most 21.4%, for tylvalosin; repeatability RSD(r): at most 21.1%, for tylvalosin; recovery: range 92-106%) were acceptable and in agreement with the criteria of Commission Decision 2002/657/EC. The validation results showed that the method was applicable for the residue analysis of antibiotics in honey for substances with and without recommended concentrations, although some changes had been tested during validation to determine the robustness of the method.

  15. Study of the precision and trueness of the Brazilian method for ethanol and gasoline determination; Estudo da precisao e exatidao do metodo brasileiro para determinacao de etanol e gasolina

    Energy Technology Data Exchange (ETDEWEB)

    Zucchini, Ricardo R.; Hinata, Patricia [Instituto de Pesquisas Tecnologicas (IPT), Sao Paulo, SP (Brazil); Gioseffi, Carla S.; Franco, Joao B.S. [Instituto Brasileiro de Petroleo, Gas e Biocombustiveis (IBP), Rio de Janeiro, RJ (Brazil); Nascimento, Cristina R.; Torres, Eduardo S. [Agencia Nacional do Petroleo, Gas Natural e Biocombustiveis (ANP), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    The determination of the repeatability and reproducibility standard deviations of an analytical method, s_r and s_R, obtained through an interlaboratory program, makes it possible to calculate many kinds of precision limits of the method, which are needed in every laboratory's routine result comparisons and also in between-laboratory comparisons. This paper presents the results of the first interlaboratory trial in the Brazilian petroleum sector performed to define the trueness and precision of the Brazilian standard method for the determination of the anhydrous ethyl alcohol content in gasoline, carried out by 34 experienced laboratories. The r and R values were 0.7 and 2.3, and the main factors that would improve and optimize the method are presented. (author)
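
    A minimal sketch of how s_r and s_R are typically obtained from a balanced interlaboratory data set via one-way ANOVA (the laboratory results below are invented for illustration; the real trial used 34 laboratories):

```python
import numpy as np

# rows = laboratories, columns = replicate results for one material (synthetic numbers)
results = np.array([
    [24.9, 25.1],
    [25.4, 25.2],
    [24.6, 24.8],
    [25.0, 25.3],
])
p, n = results.shape                           # p labs, n replicates each
lab_means = results.mean(axis=1)
grand_mean = results.mean()

ms_within = ((results - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
ms_between = n * ((lab_means - grand_mean) ** 2).sum() / (p - 1)

s_r2 = ms_within                               # repeatability variance
s_L2 = max((ms_between - ms_within) / n, 0.0)  # between-laboratory variance
s_R2 = s_r2 + s_L2                             # reproducibility variance

s_r, s_R = np.sqrt(s_r2), np.sqrt(s_R2)
r, R = 2.8 * s_r, 2.8 * s_R                    # repeatability and reproducibility limits
print(f"s_r={s_r:.3f}  s_R={s_R:.3f}  r={r:.3f}  R={R:.3f}")
```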

  16. Artificial intelligence, physiological genomics, and precision medicine.

    Science.gov (United States)

    Williams, Anna Marie; Liu, Yong; Regner, Kevin R; Jotterand, Fabrice; Liu, Pengyuan; Liang, Mingyu

    2018-04-01

    Big data are a major driver in the development of precision medicine. Efficient analysis methods are needed to transform big data into clinically-actionable knowledge. To accomplish this, many researchers are turning toward machine learning (ML), an approach of artificial intelligence (AI) that utilizes modern algorithms to give computers the ability to learn. Much of the effort to advance ML for precision medicine has been focused on the development and implementation of algorithms and the generation of ever larger quantities of genomic sequence data and electronic health records. However, relevance and accuracy of the data are as important as quantity of data in the advancement of ML for precision medicine. For common diseases, physiological genomic readouts in disease-applicable tissues may be an effective surrogate to measure the effect of genetic and environmental factors and their interactions that underlie disease development and progression. Disease-applicable tissue may be difficult to obtain, but there are important exceptions such as kidney needle biopsy specimens. As AI continues to advance, new analytical approaches, including those that go beyond data correlation, need to be developed and ethical issues of AI need to be addressed. Physiological genomic readouts in disease-relevant tissues, combined with advanced AI, can be a powerful approach for precision medicine for common diseases.

  17. Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis

    Science.gov (United States)

    Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro

    2017-04-01

    The 2011 Great East Japan Earthquake (GEJE) showed that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating standing land structures and the damage to them requires a highly precise evaluation of three-dimensional fluid motion, which is an expensive process. Our research goals were thus to develop a coupling of STOC-CADMAS (Arikawa and Tomita, 2016) with structure analysis (Arikawa et al., 2009) to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of the breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates the pressure as hydrostatic and calculates the wave profiles based on an equation of continuity, thereby lowering the calculation cost; it mainly covers the region from the epicenter to shallow water. STOC-IC solves the pressure from a Poisson equation to account for shallower, more complex topography, but slightly reduces the computation cost by setting the water surface from an equation of continuity, and is used to calculate the area near a port. CS3D solves the Navier-Stokes equations and sets the water surface by VOF to deal with the runup area, with its complex surfaces of overflows and bores. STR performs the structure analysis, including the geotechnical analysis, based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments performed by Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay. Almost all breakwaters were washed away, which is similar to the actual damage at Kamaishi Bay. REFERENCES T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup

  18. Distributed Control Architectures for Precision Spacecraft Formations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — LaunchPoint Technologies, Inc. (LaunchPoint) proposes to develop synthesis methods and design architectures for distributed control systems in precision spacecraft...

  19. Precise shape reconstruction by active pattern in total-internal-reflection-based tactile sensor.

    Science.gov (United States)

    Saga, Satoshi; Taira, Ryosuke; Deguchi, Koichiro

    2014-03-01

    We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.

  20. Precision grinding of microarray lens molding die with 4-axes controlled microwheel

    Directory of Open Access Journals (Sweden)

    Yuji Yamamoto, Hirofumi Suzuki, Takashi Onishi, Tadashi Okino and Toshimichi Moriwaki

    2007-01-01

    Full Text Available This paper deals with the precision grinding of a microarray lens (fly-eye) molding die using a resinoid-bonded diamond wheel. An ultra-precision grinding system for microarray lens molding dies and a new truing method for the resinoid-bonded diamond wheel were developed. In this system, the grinding wheel is controlled four-dimensionally with 1 nm resolution by a linear-scale feedback system and scanned over the workpiece surface. A new truing method using a vanadium alloy tool was developed, and high precision with low wheel wear was obtained. Finally, microarray lens molding dies of fine-grain tungsten carbide (WC) were ground with the resinoid-bonded diamond wheel to evaluate the grinding performance.

  1. Fiber Scrambling for High Precision Spectrographs

    Science.gov (United States)

    Kaplan, Zachary; Spronck, J. F. P.; Fischer, D.

    2011-05-01

    The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called "super-Earths" (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the Point Spread Function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs for the severest guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the "personality" of individual fibers. Personality describes differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.

  2. [Precision and personalized medicine].

    Science.gov (United States)

    Sipka, Sándor

    2016-10-01

    The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms of "phenotype", "endotype" and "biomarker" in order to characterize more precisely the various diseases. Using "biomarkers" the homogeneous type of a disease (a "phenotype") can be divided into subgroups called "endotypes" requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation with allergic and autoimmune diseases. The application of this new way of thinking is going to be necessary in Hungary, too, in the near future for participants, controllers and financing boards of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.

  3. Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision

    Science.gov (United States)

    Yang, Bingwei; Xie, Xinhao; Li, Duan

    2018-01-01

    Time of flight (TOF) based light detection and ranging (LiDAR) is a technology for calculating distance between start/stop signals of time of flight. In lab-built LiDAR, two ranging systems for measuring flying time between start/stop signals include time-to-digital converter (TDC) that counts time between trigger signals and analog-to-digital converter (ADC) that processes the sampled start/stop pulses waveform for time estimation. We study the influence of waveform characteristics on range accuracy and precision of two kinds of ranging system. Comparing waveform based ranging (WR) with analog discrete return system based ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of less execution time, high ranging accuracy, and stable precision. Based on a novel statistic mathematical method maximal information coefficient (MIC), WR-PK precision has a high linear relationship with the received pulse width standard deviation. Thus keeping the received pulse width of measuring a constant distance as stable as possible can improve ranging precision. PMID:29642639
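
    A minimal sketch of the peak-detection (WR-PK style) ranging step on a digitised waveform; the sampling rate, pulse shapes and thresholds are assumptions for illustration, and no sub-sample interpolation is applied:

```python
# Locate the start and stop pulse peaks in the sampled waveform and convert the
# peak separation to range via the speed of light.
import numpy as np
from scipy.signal import find_peaks

C = 299_792_458.0          # speed of light, m/s
FS = 1.0e9                 # ADC sampling rate, 1 GS/s (assumed)

t = np.arange(4000) / FS
waveform = (np.exp(-((t - 0.20e-6) / 4e-9) ** 2)          # start pulse
            + 0.6 * np.exp(-((t - 0.86e-6) / 4e-9) ** 2)  # stop (return) pulse
            + 0.02 * np.random.default_rng(1).normal(size=t.size))

peaks, _ = find_peaks(waveform, height=0.3, distance=50)
t_start, t_stop = t[peaks[0]], t[peaks[1]]
range_m = C * (t_stop - t_start) / 2.0
print(f"estimated range: {range_m:.2f} m")   # ~98.9 m for the assumed pulse delays
```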

  4. Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision

    Directory of Open Access Journals (Sweden)

    Xiaolu Li

    2018-04-01

    Full Text Available Time of flight (TOF based light detection and ranging (LiDAR is a technology for calculating distance between start/stop signals of time of flight. In lab-built LiDAR, two ranging systems for measuring flying time between start/stop signals include time-to-digital converter (TDC that counts time between trigger signals and analog-to-digital converter (ADC that processes the sampled start/stop pulses waveform for time estimation. We study the influence of waveform characteristics on range accuracy and precision of two kinds of ranging system. Comparing waveform based ranging (WR with analog discrete return system based ranging (AR, a peak detection method (WR-PK shows the best ranging performance because of less execution time, high ranging accuracy, and stable precision. Based on a novel statistic mathematical method maximal information coefficient (MIC, WR-PK precision has a high linear relationship with the received pulse width standard deviation. Thus keeping the received pulse width of measuring a constant distance as stable as possible can improve ranging precision.

  5. Towards an Open Software Platform for Field Robots in Precision Agriculture

    Directory of Open Access Journals (Sweden)

    Kjeld Jensen

    2014-06-01

    Full Text Available Robotics in precision agriculture has the potential to improve competitiveness and increase sustainability compared to current crop production methods and has become an increasingly active area of research. Tractor guidance systems for supervised navigation and implement control have reached the market, and prototypes of field robots performing precision agriculture tasks without human intervention also exist. But research in advanced cognitive perception and behaviour that is required to enable a more efficient, reliable and safe autonomy becomes increasingly demanding due to the growing software complexity. A lack of collaboration between research groups contributes to the problem. Scientific publications describe methods and results from the work, but little field robot software is released and documented for others to use. We hypothesize that a common open software platform tailored to field robots in precision agriculture will significantly decrease development time and resources required to perform experiments due to efficient reuse of existing work across projects and robot platforms. In this work we present the FroboMind software platform and evaluate the performance when applied to precision agriculture tasks.

  6. The Lanczos and Conjugate Gradient Algorithms in Finite Precision Arithmetic

    Czech Academy of Sciences Publication Activity Database

    Meurant, G.; Strakoš, Zdeněk

    2006-01-01

    Roč. 15, - (2006), s. 471-542 ISSN 0962-4929 R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : Lanczos method * conjugate gradient method * finite precision arithmetic * numerical stability * iterative methods Subject RIV: BA - General Mathematics

  7. The newest precision measurement

    International Nuclear Information System (INIS)

    Lee, Jing Gu; Lee, Jong Dae

    1974-05-01

    This book introduces basic of precision measurement, measurement of length, limit gauge, measurement of angles, measurement of surface roughness, measurement of shapes and locations, measurement of outline, measurement of external and internal thread, gear testing, accuracy inspection of machine tools, three dimension coordinate measuring machine, digitalisation of precision measurement, automation of precision measurement, measurement of cutting tools, measurement using laser, and point of choosing length measuring instrument.

  8. Enabling Precision Cardiology Through Multiscale Biology and Systems Medicine

    Directory of Open Access Journals (Sweden)

    Kipp W. Johnson, BS

    2017-06-01

    Full Text Available Summary: The traditional paradigm of cardiovascular disease research derives insight from large-scale, broadly inclusive clinical studies of well-characterized pathologies. These insights are then put into practice according to standardized clinical guidelines. However, stagnation in the development of new cardiovascular therapies and variability in therapeutic response implies that this paradigm is insufficient for reducing the cardiovascular disease burden. In this state-of-the-art review, we examine 3 interconnected ideas we put forth as key concepts for enabling a transition to precision cardiology: 1) precision characterization of cardiovascular disease with machine learning methods; 2) the application of network models of disease to embrace disease complexity; and 3) using insights from the previous 2 ideas to enable pharmacology and polypharmacology systems for more precise drug-to-patient matching and patient-disease stratification. We conclude by exploring the challenges of applying a precision approach to cardiology, which arise from a deficit of the required resources and infrastructure, and emerging evidence for the clinical effectiveness of this nascent approach. Key Words: cardiology, clinical informatics, multi-omics, precision medicine, translational bioinformatics

  9. Fast and sensitive detection of indels induced by precise gene targeting

    DEFF Research Database (Denmark)

    Yang, Zhang; Steentoft, Catharina; Hauge, Camilla

    2015-01-01

    The nuclease-based gene editing tools are rapidly transforming capabilities for altering the genome of cells and organisms with great precision and in high throughput studies. A major limitation in application of precise gene editing lies in lack of sensitive and fast methods to detect...... and characterize the induced DNA changes. Precise gene editing induces double-stranded DNA breaks that are repaired by error-prone non-homologous end joining leading to introduction of insertions and deletions (indels) at the target site. These indels are often small and difficult and laborious to detect...

  10. Study of the nanoporous CHAP photoluminiscence for developing the precise methods of early caries detection

    Science.gov (United States)

    Goloshchapov, D.; Seredin, P.; Minakov, D.; Domashevskaya, E.

    2018-02-01

    This paper deals with the luminescence characteristics of an analogue of the mineral component of dental enamel, nanocrystalline B-type carbonate-substituted hydroxyapatite (CHAP), with 3D defects (i.e. nanopores of ∼2-5 nm) on the nanocrystal surface. The laser-induced luminescence of the synthesized CHAP samples was in the range of ∼515 nm (∼2.4 eV) and is due to CO₃ groups replacing the PO₄ group. It was found that the intensity of the luminescence of the CHAP is caused by structurally incorporated CO₃ groups in the HAP structure. Furthermore, the intensity of the luminescence decreases as the number of the above intracentre defects (CO₃) in the apatite structure declines. These results are potentially promising for developing the foundations of precise methods for the early detection of caries in hard human dental tissue.

  11. Applicability of the DPPH assay for evaluating the antioxidant capacity of food additives - inter-laboratory evaluation study -.

    Science.gov (United States)

    Shimamura, Tomoko; Sumikura, Yoshihiro; Yamazaki, Takeshi; Tada, Atsuko; Kashiwagi, Takehiro; Ishikawa, Hiroya; Matsui, Toshiro; Sugimoto, Naoki; Akiyama, Hiroshi; Ukeda, Hiroyuki

    2014-01-01

    An inter-laboratory evaluation study was conducted in order to evaluate the antioxidant capacity of food additives by using a 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay. Four antioxidants used as existing food additives (i.e., tea extract, grape seed extract, enju extract, and d-α-tocopherol) and 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox) were used as analytical samples, and 14 laboratories participated in this study. The repeatability relative standard deviation (RSD(r)) of the IC50 of Trolox, four antioxidants, and the Trolox equivalent antioxidant capacity (TEAC) were 1.8-2.2%, 2.2-2.9%, and 2.1-2.5%, respectively. Thus, the proposed DPPH assay showed good performance within the same laboratory. The reproducibility relative standard deviation (RSD(R)) of IC50 of Trolox, four antioxidants, and TEAC were 4.0-7.9%, 6.0-11%, and 3.7-9.3%, respectively. The RSD(R)/RSD(r) values of TEAC were lower than, or nearly equal to, those of IC50 of the four antioxidants, suggesting that the use of TEAC was effective for reducing the variance among the laboratories. These results showed that the proposed DPPH assay could be used as a standard method to evaluate the antioxidant capacity of food additives.
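
    For reference, the Trolox equivalent antioxidant capacity in an IC50-based DPPH protocol is commonly computed as the ratio of the two IC50 values (this is the usual convention, assumed here rather than quoted from the paper):

    $$\mathrm{TEAC}=\frac{\mathrm{IC}_{50}(\text{Trolox})}{\mathrm{IC}_{50}(\text{sample})},$$

    which normalises each laboratory's result against its own Trolox measurement and is consistent with the lower between-laboratory spread reported for TEAC than for the raw IC50 values.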

  12. Determination of campesterol, stigmasterol, and beta-sitosterol in saw palmetto raw materials and dietary supplements by gas chromatography: collaborative study.

    Science.gov (United States)

    Sorenson, Wendy R; Sullivan, Darryl

    2007-01-01

    An interlaboratory study was conducted to evaluate a method for the determination of campesterol, stigmasterol, and beta-sitosterol in saw palmetto raw materials and dietary supplements at levels >1.00 mg/100 g based on a 2-3 g sample. Test samples were saponified at high temperature with ethanolic KOH solution. The unsaponifiable fraction containing phytosterols (campesterol, stigmasterol, and beta-sitosterol) was extracted with toluene. Phytosterols were derivatized to trimethylsilyl ethers and then quantified by gas chromatography with hydrogen flame ionization detection. Twelve blind duplicates, one of which was fortified, were successfully analyzed by 10 collaborators. Recoveries were obtained for the sample that was fortified. The results were 99.8, 111, and 111% for campesterol, stigmasterol, and beta-sitosterol, respectively. For repeatability, the relative standard deviation (RSDr) ranged from 3.93 to 17.3% for campesterol, 3.56 to 22.7% for stigmasterol, and 3.70 to 43.9% for beta-sitosterol. For reproducibility, the RSDR ranged from 7.97 to 22.6%, 0 to 26.7%, and 5.27 to 43.9% for campesterol, stigmasterol, and beta-sitosterol, respectively. Overall, the Study Director approved 5 materials with acceptable HorRat values for campesterol, stigmasterol, and beta-sitosterol ranging from 1.02 to 2.16.

  13. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal‐Noise Reliability Measure Reflect This Precision?

    Science.gov (United States)

    Cramer, Emily

    2016-01-01

    Abstract Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598

  14. Agricultural experts’ attitude towards precision agriculture: Evidence from Guilan Agricultural Organization, Northern Iran

    OpenAIRE

    Mohammad Sadegh Allahyari; Masoumeh Mohammadzadeh; Stefanos A. Nastis

    2016-01-01

    Identifying factors that influence the attitudes of agricultural experts regarding precision agriculture plays an important role in developing, promoting and establishing precision agriculture. The aim of this study was to identify factors affecting the attitudes of agricultural experts regarding the implementation of precision agriculture. A descriptive research design was employed as the research method. A research-made questionnaire was used to examine the agricultural experts’ attitude to...

  15. A continuous flow isotope ratio mass spectrometry method for high precision determination of dissolved gas ratios and isotopic composition

    DEFF Research Database (Denmark)

    Charoenpong, C. N.; Bristow, L. A.; Altabet, M. A.

    2014-01-01

    ratio mass spectrometer (IRMS). A continuous flow of He carrier gas completely degasses the sample, and passes through the preparation and purification system before entering the IRMS for analysis. The use of this continuous He carrier permits short analysis times (less than 8 min per sample ...) as compared with current high-precision methods. In addition to reference gases, calibration is achieved using air-equilibrated water standards of known temperature and salinity. Assessment of reference gas injections, air equilibrated standards, as well as samples collected in the field shows the accuracy...

  16. High Precision Edge Detection Algorithm for Mechanical Parts

    Science.gov (United States)

    Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui

    2018-04-01

    High precision and high efficiency measurement is becoming an imperative requirement for a lot of mechanical parts. So in this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step edge normal section line Gaussian integral model of the backlight image is constructed, combined with the point spread function and the single step model. Then gray value of discrete points on the normal section line of pixel edge is calculated by surface interpolation, and the coordinate as well as gray information affected by noise is fitted in accordance with the Gaussian integral model. Therefore, a precise location of a subpixel edge was determined by searching the mean point. Finally, a gear tooth was measured by M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that the local edge fluctuation is reduced effectively by the proposed method in comparison with the existing subpixel edge detection algorithms. The subpixel edge location accuracy and computation speed are improved. And the maximum error of gear tooth profile total deviation is 1.9 μm compared with measurement result with gear measurement center. It indicates that the method has high reliability to meet the requirement of high precision measurement.
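
    The idea of locating an edge at subpixel resolution by fitting a Gaussian-integral (error-function) profile to gray values sampled along the edge normal can be sketched as follows. This is a generic illustration of the approach, not the paper's implementation; the sample positions, gray values and noise level are invented.

    ```python
    # Generic sketch of subpixel edge localization along one normal section line:
    # a step blurred by a Gaussian point-spread function has an error-function
    # profile, so fitting erf gives the edge position with subpixel resolution.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def edge_profile(x, a, b, x0, sigma):
        """Gaussian-integral (erf) step model: background b, amplitude a,
        edge centre x0, blur width sigma."""
        return b + 0.5 * a * (1 + erf((x - x0) / (np.sqrt(2) * sigma)))

    # Sample positions (pixels) along the normal line and simulated noisy gray values
    x = np.arange(-5, 6, 1.0)
    rng = np.random.default_rng(0)
    gray = edge_profile(x, a=180, b=30, x0=0.37, sigma=1.2) + rng.normal(0, 2, x.size)

    popt, _ = curve_fit(edge_profile, x, gray, p0=[np.ptp(gray), gray.min(), 0.0, 1.0])
    print(f"subpixel edge location: {popt[2]:.3f} px")
    ```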

  17. Precision medicine: opportunities, possibilities, and challenges for patients and providers.

    Science.gov (United States)

    Adams, Samantha A; Petersen, Carolyn

    2016-07-01

    Precision medicine approaches disease treatment and prevention by taking patients' individual variability in genes, environment, and lifestyle into account. Although the ideas underlying precision medicine are not new, opportunities for its more widespread use in practice have been enhanced by the development of large-scale databases, new methods for categorizing and representing patients, and computational tools for analyzing large datasets. New research methods may create uncertainty for both healthcare professionals and patients. In such situations, frameworks that address ethical, legal, and social challenges can be instrumental for facilitating trust between patients and providers, but must protect patients while not stifling progress or overburdening healthcare professionals. In this perspective, we outline several ethical, legal, and social issues related to the Precision Medicine Initiative's proposed changes to current institutions, values, and frameworks. This piece is not an exhaustive overview, but is intended to highlight areas meriting further study and action, so that precision medicine's goal of facilitating systematic learning and research at the point of care does not overshadow healthcare's goal of providing care to patients. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Quantification of LSD in illicit samples by high performance liquid chromatography

    Directory of Open Access Journals (Sweden)

    Pablo Alves Marinho

    2010-12-01

    Full Text Available In the present study, a method using high performance liquid chromatography to quantify LSD in blotter papers seized in Minas Gerais was optimized and validated. Linearity, precision, recovery, limits of detection and quantification, and selectivity were the parameters used to evaluate performance. The samples were extracted with methanol:water (1:1) in an ultrasound bath. Linearity was observed between 0.05 and 20.00 μg/mL (0.5 and 200.0 μg of LSD/blotter), with satisfactory mean intra- and inter-assay precision (RSDr = 4.4% and RSDR = 6.4%, respectively) and with mean recoveries of 83.4% and 84.9% at the levels of 1.00 and 20.00 μg/mL (10 and 200 μg LSD/blotter). The limits of detection and quantification were 0.01 and 0.05 μg/mL, respectively (0.1 and 0.5 μg of LSD/blotter). The blotter samples (n = 22) were analyzed and a mean value of 67.55 μg of LSD/blotter (RSD = 27.5%) was found. Thus, the method showed satisfactory analytical performance and proved suitable as an analytical tool for LSD determination in illicit samples seized by police forces.
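
    For context, the linearity and the detection/quantification limits reported in validations like this one are often derived from a calibration line and its residual scatter. The sketch below shows one common (ICH-style) way to do that; the calibration points are invented placeholders and the procedure is not necessarily the one used by the authors.

    ```python
    # Generic validation sketch: fit a calibration line and estimate LOD/LOQ from
    # the residual standard deviation and slope (LOD = 3.3*s/slope, LOQ = 10*s/slope).
    # Calibration data are made-up placeholders.
    import numpy as np

    conc = np.array([0.05, 0.5, 1.0, 5.0, 10.0, 20.0])        # micrograms/mL
    area = np.array([1.1, 10.4, 20.9, 104.7, 209.0, 421.3])   # detector response

    slope, intercept = np.polyfit(conc, area, 1)
    pred = slope * conc + intercept
    residual_sd = np.sqrt(np.sum((area - pred) ** 2) / (conc.size - 2))

    r = np.corrcoef(conc, area)[0, 1]
    lod = 3.3 * residual_sd / slope
    loq = 10.0 * residual_sd / slope
    print(f"r = {r:.4f}, LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
    ```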

  19. Precision Medicine in Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Yan Liu

    2017-02-01

    Full Text Available Since President Obama announced the Precision Medicine Initiative in the United States, more and more attention has been paid to precision medicine. However, clinicians have already used it to treat conditions such as cancer. Many cardiovascular diseases have a familial presentation, and genetic variants are associated with the prevention, diagnosis, and treatment of cardiovascular diseases, which are the basis for providing precise care to patients with cardiovascular diseases. Large-scale cohorts and multiomics are critical components of precision medicine. Here we summarize the application of precision medicine to cardiovascular diseases based on cohort and omic studies, and hope to elicit discussion about future health care.

  20. Budget impact and cost-effectiveness: can we afford precision medicine in oncology?

    Science.gov (United States)

    Doble, Brett

    2016-01-01

    Over the past decade there have been remarkable advancements in the understanding of the molecular underpinnings of malignancy. Methods of testing capable of elucidating patients' molecular profiles are now readily available and there is an increased desire to incorporate the information derived from such tests into treatment selection for cancer patients. This has led to more appropriate application of existing treatments as well as the development of a number of innovative and highly effective treatments or what is known collectively as precision medicine. The impact that precision medicine will have on health outcomes is uncertain, as are the costs it will incur. There is, therefore, a need to develop economic evidence and appropriate methods of evaluation to support its implementation to ensure the resources allocated to these approaches are affordable and offer value for money. The market for precision medicine in oncology continues to rapidly expand, placing an increased pressure on reimbursement decision-makers to consider the value and opportunity cost of funding such approaches to care. The benefits of molecular testing can be complex and difficult to evaluate given currently available economic methods, potentially causing a distorted appreciation of their value. Funding decisions of precision medicine will also have far-reaching implications, requiring the consideration of both patient and public perspectives in decision-making. Recommendations to improve the value proposition of precision medicine are, therefore, provided with the hopes of facilitating a better understanding of its impact on outcomes and the overall health budget.

  1. Precision production: enabling deterministic throughput for precision aspheres with MRF

    Science.gov (United States)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  2. Contacting nanowires and nanotubes with atomic precision for electronic transport

    KAUST Repository

    Qin, Shengyong; Hellstrom, Sondra; Bao, Zhenan; Boyanov, Boyan; Li, An-Ping

    2012-01-01

    Making contacts to nanostructures with atomic precision is an important process in the bottom-up fabrication and characterization of electronic nanodevices. Existing contacting techniques use top-down lithography and chemical etching, but lack atomic precision and introduce the possibility of contamination. Here, we report that a field-induced emission process can be used to make local contacts onto individual nanowires and nanotubes with atomic spatial precision. The gold nano-islands are deposited onto nanostructures precisely by using a scanning tunneling microscope tip, which provides a clean and controllable method to ensure both electrically conductive and mechanically reliable contacts. To demonstrate the wide applicability of the technique, nano-contacts are fabricated on silicide atomic wires, carbon nanotubes, and copper nanowires. The electrical transport measurements are performed in situ by utilizing the nanocontacts to bridge the nanostructures to the transport probes. © 2012 American Institute of Physics.

  3. Precise positional measurement system in transcranial magnetic stimulation

    International Nuclear Information System (INIS)

    Inoue, Tomonori; Mishima, Yukuo; Hiwaki, Osamu

    2006-01-01

    Transcranial magnetic stimulation (TMS) is a method for noninvasive stimulation of the cerebral cortex, and it has contributed to clinical and basic research on brain function. In order to estimate the stimulated points of the cortex accurately in TMS, precise measurement of the positions of the subject's head and the stimulating coil is necessary. In this study, we developed a positioning TMS system with a three-dimensional (3-D) digitizer and a multi-articular system. We proposed a method for the accurate measurement of a subject's head and cortex, in which the location data of the subject's face surface captured by a 3-D digitizer were superimposed on the magnetic resonance imaging (MRI) data of the same surface. Using this system, precise estimation of the stimulated sites of the cortex in TMS was achieved. The validity of the system was verified by an experiment on TMS of the motor cortex. (author)

  4. Platinum clusters with precise numbers of atoms for preparative-scale catalysis.

    Science.gov (United States)

    Imaoka, Takane; Akanuma, Yuki; Haruta, Naoki; Tsuchiya, Shogo; Ishihara, Kentaro; Okayasu, Takeshi; Chun, Wang-Jae; Takahashi, Masaki; Yamamoto, Kimihisa

    2017-09-25

    Subnanometer noble metal clusters have enormous potential, mainly for catalytic applications. Because a difference of only one atom may cause significant changes in their reactivity, a preparation method with atomic-level precision is essential. Although such a precision with enough scalability has been achieved by gas-phase synthesis, large-scale preparation is still at the frontier, hampering practical applications. We now show the atom-precise and fully scalable synthesis of platinum clusters on a milligram scale from tiara-like platinum complexes with various ring numbers (n = 5-13). Low-temperature calcination of the complexes on a carbon support under hydrogen stream affords monodispersed platinum clusters, whose atomicity is equivalent to that of the precursor complex. One of the clusters (Pt10) exhibits high catalytic activity in the hydrogenation of styrene compared to that of the other clusters. This method opens an avenue for the application of these clusters to preparative-scale catalysis. The catalytic activity of a noble metal nanocluster is tied to its atomicity. Here, the authors report an atom-precise, fully scalable synthesis of platinum clusters from molecular ring precursors, and show that a variation of only one atom can dramatically change a cluster's reactivity.

  5. Accurate and precise DNA quantification in the presence of different amplification efficiencies using an improved Cy0 method.

    Science.gov (United States)

    Guescini, Michele; Sisti, Davide; Rocchi, Marco B L; Panebianco, Renato; Tibollo, Pasquale; Stocchi, Vilberto

    2013-01-01

    Quantitative real-time PCR represents a highly sensitive and powerful technology for the quantification of DNA. Although real-time PCR is well accepted as the gold standard in nucleic acid quantification, there is a largely unexplored area of experimental conditions that limit the application of the Ct method. As an alternative, our research team has recently proposed the Cy0 method, which can compensate for small amplification variations among the samples being compared. However, when there is a marked decrease in amplification efficiency, the Cy0 is impaired, hence determining reaction efficiency is essential to achieve a reliable quantification. The proposed improvement in Cy0 is based on the use of the kinetic parameters calculated in the curve inflection point to compensate for efficiency variations. Three experimental models were used: inhibition of primer extension, non-optimal primer annealing and a very small biological sample. In all these models, the improved Cy0 method increased quantification accuracy up to about 500% without affecting precision. Furthermore, the stability of this procedure was enhanced integrating it with the SOD method. In short, the improved Cy0 method represents a simple yet powerful approach for reliable DNA quantification even in the presence of marked efficiency variations.
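
    The tangent-at-inflection idea behind Cy0 can be illustrated with a simplified model: fit a sigmoid to the amplification curve, take the tangent at its inflection point, and report the cycle at which that tangent reaches the baseline. The sketch below uses a 4-parameter logistic in place of the Richards function of the original method, and the fluorescence data are simulated placeholders.

    ```python
    # Simplified sketch of a Cy0-style quantification cycle: fit a sigmoid to the
    # amplification curve, take the tangent at the inflection point, and report the
    # cycle where that tangent crosses the baseline. The original Cy0 method uses a
    # Richards function; a 4-parameter logistic is used here purely for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, baseline, amplitude, c, k):
        """4-parameter logistic: inflection at cycle c, steepness 1/k."""
        return baseline + amplitude / (1.0 + np.exp(-(x - c) / k))

    cycles = np.arange(1, 41, dtype=float)
    rng = np.random.default_rng(1)
    fluor = logistic(cycles, 0.05, 1.0, 24.0, 1.6) + rng.normal(0, 0.01, cycles.size)

    (baseline, amplitude, c, k), _ = curve_fit(
        logistic, cycles, fluor, p0=[0.0, 1.0, 20.0, 2.0]
    )

    # Tangent at the inflection point (x = c) has slope amplitude / (4 k);
    # setting the tangent equal to the baseline gives Cy0 = c - 2 k.
    cy0 = c - 2.0 * k
    print(f"Cy0 (tangent-baseline crossing) = {cy0:.2f} cycles")
    ```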

  6. Validation of an analytical method for simultaneous high-precision measurements of greenhouse gas emissions from wastewater treatment plants using a gas chromatography-barrier discharge detector system.

    Science.gov (United States)

    Pascale, Raffaella; Caivano, Marianna; Buchicchio, Alessandro; Mancini, Ignazio M; Bianco, Giuliana; Caniani, Donatella

    2017-01-13

    Wastewater treatment plants (WWTPs) emit CO₂ and N₂O, which may lead to climate change and global warming. Over the last few years, awareness of greenhouse gas (GHG) emissions from WWTPs has increased. Moreover, the development of valid, reliable, and high-throughput analytical methods for simultaneous gas analysis is an essential requirement for environmental applications. In the present study, an analytical method based on a gas chromatograph (GC) equipped with a barrier ionization discharge (BID) detector was developed for the first time. This new method simultaneously analyses CO₂ and N₂O and has a precision, measured in terms of relative standard deviation (RSD%), equal to or less than 6.6% and 5.1%, respectively. The method's detection limits are 5.3 ppmv for CO₂ and 62.0 ppbv for N₂O. The method's selectivity, linearity, accuracy, repeatability, intermediate precision, limit of detection and limit of quantification were good at trace concentration levels. After validation, the method was applied to a real case of N₂O and CO₂ emissions from a WWTP, confirming its suitability as a standard procedure for simultaneous GHG analysis in environmental samples containing CO₂ levels less than 12,000 mg/L. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Fast and Precise Beam Energy Measurement using Compton Backscattering at e+e- Colliders

    CERN Document Server

    Kaminskiy, V V; Muchnoi, N Yu; Zhilich, V N

    2017-01-01

    The report describes a method for a fast and precise beam energy measurement in the beam energy range 0.5-2 GeV and its application at various e+e- colliders. Low-energy laser photons interact head-on with the electron or positron beam and produce Compton backscattered photons whose energy is precisely measured by an HPGe detector. The method allows measuring the beam energy with a relative accuracy of ∼(2-5)×10⁻⁵. The method was successfully applied at VEPP-4M, VEPP-3, VEPP-2000 (BINP, Russia) and BEPC-II (IHEP, China).
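
    The beam energy follows from the measured Compton-edge energy through simple head-on Compton kinematics. The sketch below evaluates that relation; it assumes an exactly head-on, ultra-relativistic collision, and the CO2-laser photon energy and edge energy used as inputs are illustrative values, not results from the cited measurements.

    ```python
    # Minimal numerical sketch of the head-on Compton kinematics behind this method:
    # the electron beam energy E follows from the laser photon energy w0 and the
    # measured Compton-edge (maximum backscattered photon) energy w_max.
    # Assumes an exactly head-on, ultra-relativistic collision; the example numbers
    # (a CO2 laser and a ~6.4 MeV edge) are illustrative placeholders.
    import math

    M_E = 0.511e6  # electron mass, eV

    def beam_energy_from_edge(w_max_ev: float, w0_ev: float) -> float:
        """Beam energy (eV) from the Compton-edge energy for a head-on collision:
        E = (w_max / 2) * (1 + sqrt(1 + m_e^2 / (w0 * w_max)))."""
        return 0.5 * w_max_ev * (1.0 + math.sqrt(1.0 + M_E**2 / (w0_ev * w_max_ev)))

    w0 = 0.117          # CO2 laser photon energy, eV
    w_max = 6.38e6      # Compton edge read off the HPGe spectrum, eV
    print(f"beam energy ~ {beam_energy_from_edge(w_max, w0) / 1e9:.3f} GeV")
    ```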

  8. Is digital photography an accurate and precise method for measuring range of motion of the shoulder and elbow?

    Science.gov (United States)

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2018-03-01

    Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder

  9. Precision evaluation of dual X-ray absorptiometry (iDXA) measurements

    International Nuclear Information System (INIS)

    Yu Wei; Lin Qiang; Yu Xiaobo; Yao Jinpeng

    2009-01-01

    Objective: To evaluate the precision of the iDXA measurements for lumbar spine, proximal femur and whole body bone density as well as body composition (lean and fat). Methods: The study randomly recruited 30 volunteers. Each subject was scanned by iDXA twice on the same day. Measurement sites included lumbar spine, proximal femur and whole body. Precision errors were expressed as root mean square of CV (RMS-CV). Results: Mean precision errors of bone density measurements at lumbar spine, femoral neck, Ward's triangle, greater trochanter and total femur ranged from 0.8% to 2.0%, with the lowest of 0.8% at both lumbar spine and total femur, and the highest of 2.0% at Ward's triangle; mean precision errors of bone density measurements at whole body and its individual sites ranged from 0.7% to 2.0%, with the lowest of 0.7% for the whole body measurement; mean precision errors of lean measurements at whole body and its individual sites ranged from 0.6% to 2.1%, with the lowest of 0.6% for the whole body lean measurement; mean precision errors of fat measurements at whole body and its individual sites ranged from 1.0% to 3.2%, with the lowest of 1.0% for the whole body fat measurement. Conclusion: Measurement precision of iDXA at lumbar spine, proximal femur and whole body bone density could meet clinical needs; precision values of the measurements of whole body and its individual composition may be helpful for future clinical use. (authors)
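
    The RMS-CV precision errors quoted here are straightforward to compute from paired same-day scans. The sketch below shows the usual duplicate-scan calculation with invented BMD values; it illustrates the metric only, not the authors' analysis.

    ```python
    # Short-term precision sketch: root-mean-square coefficient of variation (RMS-CV)
    # from paired same-day scans, as commonly used in DXA precision studies.
    # BMD values below are made-up placeholders.
    import numpy as np

    # scan1, scan2: same-site BMD (g/cm^2) for each volunteer, measured twice
    scan1 = np.array([1.012, 0.934, 1.105, 0.876, 0.998])
    scan2 = np.array([1.020, 0.941, 1.098, 0.869, 1.005])

    means = (scan1 + scan2) / 2.0
    # For duplicate measurements, each subject's SD is |difference| / sqrt(2)
    sds = np.abs(scan1 - scan2) / np.sqrt(2.0)
    cvs = sds / means

    rms_cv = 100.0 * np.sqrt(np.mean(cvs ** 2))
    print(f"RMS-CV = {rms_cv:.2f}%")
    ```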

  10. Usefulness of Models in Precision Nutrient Management

    DEFF Research Database (Denmark)

    Plauborg, Finn; Manevski, Kiril; Zhenjiang, Zhou

    Modern agriculture increasingly applies new methods and technologies to increase production and nutrient use efficiencies and at the same time reduce leaching of nutrients and greenhouse gas emissions. GPS based ECa-measurement equipment, ER or EM instrumentations, are used to spatially characterise ... and mineral composition. Mapping of crop status and the spatial-temporal variability within fields with red-infrared reflection are used to support decision on split fertilisation and more precise dosing. The interpretation and use of these various data in precise nutrient management is not straightforward ... of mineralisation. However, whether the crop would benefit from this depended to a large extent on soil hydraulic conductivity within the range of natural variation when testing the model. In addition the initialisation of the distribution of soil total carbon and nitrogen into conceptual model compartments...

  11. Precision Airdrop (Largage de precision)

    Science.gov (United States)

    2005-12-01

    Navigation to a Precision Airdrop Overview (RTO-AG-300-V24): ... the point from various compass headings. As the tests are conducted, the resultant ... rate. This approach avoids including a magnetic compass for the heading reference, which has difficulties due to local changes in the magnetic field...

  12. Loss-induced limits to phase measurement precision with maximally entangled states

    International Nuclear Information System (INIS)

    Rubin, Mark A.; Kaushik, Sumanth

    2007-01-01

    The presence of loss limits the precision of an approach to phase measurement using maximally entangled states, also referred to as NOON states. A calculation using a simple beam-splitter model of loss shows that, for all nonzero values L of the loss, phase measurement precision degrades with increasing number N of entangled photons for N sufficiently large. For L above a critical value of approximately 0.785, phase measurement precision degrades with increasing N for all values of N. For L near zero, phase measurement precision improves with increasing N down to a limiting precision of approximately 1.018L radians, attained at N approximately equal to 2.218/L, and degrades as N increases beyond this value. Phase measurement precision with multiple measurements and a fixed total number of photons N_T is also examined. For L above a critical value of approximately 0.586, the ratio of phase measurement precision attainable with NOON states to that attainable by conventional methods using unentangled coherent states degrades with increasing N, the number of entangled photons employed in a single measurement, for all values of N. For L near zero this ratio is optimized by using approximately N=1.279/L entangled photons in each measurement, yielding a precision of approximately 1.340√(L/N_T) radians.

  13. Precise MRI-based stereotaxic surgery in large animal models

    DEFF Research Database (Denmark)

    Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura

    BACKGROUND: Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as desired anatomical target regions are becoming smaller. Individually calculated coordinates are necessary in large animal models with cortical ... and subcortical anatomical differences. NEW METHOD: We present a convenient method to make an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulphate solution or MRI-visible paste from a commercially available ... cranial head marker. The screw fiducials were inserted in the animal skulls and T1-weighted MRI was performed allowing identification of the inserted skull marker. RESULTS: Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in the stereotaxic space. COMPARISON...

  14. Precision of INR measured with a patient operated whole blood coagulometer

    DEFF Research Database (Denmark)

    Attermann, Jørn; Andersen, Niels Trolle; Korsgaard, Helle

    2003-01-01

    INTRODUCTION: The objective of the present study was to evaluate the precision of a portable whole blood coagulometer (CoaguChek S) in the hands of self-managing patients on oral anticoagulant therapy (OAT). MATERIALS AND METHODS: Fifteen patients on self-managed OAT performed measurements of INR ... and between patients was 15.0% and 14.7%, respectively. CONCLUSION: The precision of CoaguChek S is satisfactory.

  15. Precision medicine for nurses: 101.

    Science.gov (United States)

    Lemoine, Colleen

    2014-05-01

    To introduce the key concepts and terms associated with precision medicine and support understanding of future developments in the field by providing an overview and history of precision medicine, related ethical considerations, and nursing implications. Current nursing, medical and basic science literature. Rapid progress in understanding the oncogenic drivers associated with cancer is leading to a shift toward precision medicine, where treatment is based on targeting specific genetic and epigenetic alterations associated with a particular cancer. Nurses will need to embrace the paradigm shift to precision medicine, expend the effort necessary to learn the essential terminology, concepts and principles, and work collaboratively with physician colleagues to best position our patients to maximize the potential that precision medicine can offer. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Precision luminosity measurements at LHCb

    CERN Document Server

    Aaij, Roel; Adinolfi, Marco; Affolder, Anthony; Ajaltouni, Ziad; Akar, Simon; Albrecht, Johannes; Alessio, Federico; Alexander, Michael; Ali, Suvayu; Alkhazov, Georgy; Alvarez Cartelle, Paula; Alves Jr, Antonio Augusto; Amato, Sandra; Amerio, Silvia; Amhis, Yasmine; An, Liupan; Anderlini, Lucio; Anderson, Jonathan; Andreassen, Rolf; Andreotti, Mirco; Andrews, Jason; Appleby, Robert; Aquines Gutierrez, Osvaldo; Archilli, Flavio; Artamonov, Alexander; Artuso, Marina; Aslanides, Elie; Auriemma, Giulio; Baalouch, Marouen; Bachmann, Sebastian; Back, John; Badalov, Alexey; Baesso, Clarissa; Baldini, Wander; Barlow, Roger; Barschel, Colin; Barsuk, Sergey; Barter, William; Batozskaya, Varvara; Battista, Vincenzo; Bay, Aurelio; Beaucourt, Leo; Beddow, John; Bedeschi, Franco; Bediaga, Ignacio; Belogurov, Sergey; Belous, Konstantin; Belyaev, Ivan; Ben-Haim, Eli; Bencivenni, Giovanni; Benson, Sean; Benton, Jack; Berezhnoy, Alexander; Bernet, Roland; Bettler, Marc-Olivier; van Beuzekom, Martinus; Bien, Alexander; Bifani, Simone; Bird, Thomas; Bizzeti, Andrea; Bjørnstad, Pål Marius; Blake, Thomas; Blanc, Frédéric; Blouw, Johan; Blusk, Steven; Bocci, Valerio; Bondar, Alexander; Bondar, Nikolay; Bonivento, Walter; Borghi, Silvia; Borgia, Alessandra; Borsato, Martino; Bowcock, Themistocles; Bowen, Espen Eie; Bozzi, Concezio; Brambach, Tobias; Bressieux, Joël; Brett, David; Britsch, Markward; Britton, Thomas; Brodzicka, Jolanta; Brook, Nicholas; Brown, Henry; Bursche, Albert; Buytaert, Jan; Cadeddu, Sandro; Calabrese, Roberto; Calvi, Marta; Calvo Gomez, Miriam; Campana, Pierluigi; Campora Perez, Daniel; Carbone, Angelo; Carboni, Giovanni; Cardinale, Roberta; Cardini, Alessandro; Carson, Laurence; Carvalho Akiba, Kazuyoshi; Casse, Gianluigi; Cassina, Lorenzo; Castillo Garcia, Lucia; Cattaneo, Marco; Cauet, Christophe; Cenci, Riccardo; Charles, Matthew; Charpentier, Philippe; Chefdeville, Maximilien; Chen, Shanzhen; Cheung, Shu-Faye; Chiapolini, Nicola; Chrzaszcz, Marcin; Ciba, Krzystof; Cid Vidal, Xabier; Ciezarek, Gregory; Clarke, Peter; Clemencic, Marco; Cliff, Harry; Closier, Joel; Coco, Victor; Cogan, Julien; Cogneras, Eric; Cojocariu, Lucian; Collazuol, Gianmaria; Collins, Paula; Comerma-Montells, Albert; Contu, Andrea; Cook, Andrew; Coombes, Matthew; Coquereau, Samuel; Corti, Gloria; Corvo, Marco; Counts, Ian; Couturier, Benjamin; Cowan, Greig; Craik, Daniel Charles; Cruz Torres, Melissa Maria; Cunliffe, Samuel; Currie, Robert; D'Ambrosio, Carmelo; Dalseno, Jeremy; David, Pascal; David, Pieter; Davis, Adam; De Bruyn, Kristof; De Capua, Stefano; De Cian, Michel; De Miranda, Jussara; De Paula, Leandro; De Silva, Weeraddana; De Simone, Patrizia; Dean, Cameron Thomas; Decamp, Daniel; Deckenhoff, Mirko; Del Buono, Luigi; Déléage, Nicolas; Derkach, Denis; Deschamps, Olivier; Dettori, Francesco; Di Canto, Angelo; Dijkstra, Hans; Donleavy, Stephanie; Dordei, Francesca; Dorigo, Mirco; Dosil Suárez, Alvaro; Dossett, David; Dovbnya, Anatoliy; Dreimanis, Karlis; Dujany, Giulio; Dupertuis, Frederic; Durante, Paolo; Dzhelyadin, Rustem; Dziurda, Agnieszka; Dzyuba, Alexey; Easo, Sajan; Egede, Ulrik; Egorychev, Victor; Eidelman, Semen; Eisenhardt, Stephan; Eitschberger, Ulrich; Ekelhof, Robert; Eklund, Lars; El Rifai, Ibrahim; Elsasser, Christian; Ely, Scott; Esen, Sevda; Evans, Hannah Mary; Evans, Timothy; Falabella, Antonio; Färber, Christian; Farinelli, Chiara; Farley, Nathanael; Farry, Stephen; Fay, Robert; Ferguson, Dianne; Fernandez Albor, Victor; Ferreira Rodrigues, Fernando; Ferro-Luzzi, Massimiliano; 
Filippov, Sergey; Fiore, Marco; Fiorini, Massimiliano; Firlej, Miroslaw; Fitzpatrick, Conor; Fiutowski, Tomasz; Fol, Philip; Fontana, Marianna; Fontanelli, Flavio; Forty, Roger; Francisco, Oscar; Frank, Markus; Frei, Christoph; Frosini, Maddalena; Fu, Jinlin; Furfaro, Emiliano; Gallas Torreira, Abraham; Galli, Domenico; Gallorini, Stefano; Gambetta, Silvia; Gandelman, Miriam; Gandini, Paolo; Gao, Yuanning; García Pardiñas, Julián; Garofoli, Justin; Garra Tico, Jordi; Garrido, Lluis; Gascon, David; Gaspar, Clara; Gauld, Rhorry; Gavardi, Laura; Geraci, Angelo; Gersabeck, Evelina; Gersabeck, Marco; Gershon, Timothy; Ghez, Philippe; Gianelle, Alessio; Gianì, Sebastiana; Gibson, Valerie; Giubega, Lavinia-Helena; Gligorov, V.V.; Göbel, Carla; Golubkov, Dmitry; Golutvin, Andrey; Gomes, Alvaro; Gotti, Claudio; Grabalosa Gándara, Marc; Graciani Diaz, Ricardo; Granado Cardoso, Luis Alberto; Graugés, Eugeni; Graziani, Giacomo; Grecu, Alexandru; Greening, Edward; Gregson, Sam; Griffith, Peter; Grillo, Lucia; Grünberg, Oliver; Gui, Bin; Gushchin, Evgeny; Guz, Yury; Gys, Thierry; Hadjivasiliou, Christos; Haefeli, Guido; Haen, Christophe; Haines, Susan; Hall, Samuel; Hamilton, Brian; Hampson, Thomas; Han, Xiaoxue; Hansmann-Menzemer, Stephanie; Harnew, Neville; Harnew, Samuel; Harrison, Jonathan; He, Jibo; Head, Timothy; Heijne, Veerle; Hennessy, Karol; Henrard, Pierre; Henry, Louis; Hernando Morata, Jose Angel; van Herwijnen, Eric; Heß, Miriam; Hicheur, Adlène; Hill, Donal; Hoballah, Mostafa; Hombach, Christoph; Hulsbergen, Wouter; Hunt, Philip; Hussain, Nazim; Hutchcroft, David; Hynds, Daniel; Idzik, Marek; Ilten, Philip; Jacobsson, Richard; Jaeger, Andreas; Jalocha, Pawel; Jans, Eddy; Jaton, Pierre; Jawahery, Abolhassan; Jing, Fanfan; John, Malcolm; Johnson, Daniel; Jones, Christopher; Joram, Christian; Jost, Beat; Jurik, Nathan; Kandybei, Sergii; Kanso, Walaa; Karacson, Matthias; Karbach, Moritz; Karodia, Sarah; Kelsey, Matthew; Kenyon, Ian; Ketel, Tjeerd; Khanji, Basem; Khurewathanakul, Chitsanu; Klaver, Suzanne; Klimaszewski, Konrad; Kochebina, Olga; Kolpin, Michael; Komarov, Ilya; Koopman, Rose; Koppenburg, Patrick; Korolev, Mikhail; Kozlinskiy, Alexandr; Kravchuk, Leonid; Kreplin, Katharina; Kreps, Michal; Krocker, Georg; Krokovny, Pavel; Kruse, Florian; Kucewicz, Wojciech; Kucharczyk, Marcin; Kudryavtsev, Vasily; Kurek, Krzysztof; Kvaratskheliya, Tengiz; La Thi, Viet Nga; Lacarrere, Daniel; Lafferty, George; Lai, Adriano; Lambert, Dean; Lambert, Robert W; Lanfranchi, Gaia; Langenbruch, Christoph; Langhans, Benedikt; Latham, Thomas; Lazzeroni, Cristina; Le Gac, Renaud; van Leerdam, Jeroen; Lees, Jean-Pierre; Lefèvre, Regis; Leflat, Alexander; Lefrançois, Jacques; Leo, Sabato; Leroy, Olivier; Lesiak, Tadeusz; Leverington, Blake; Li, Yiming; Likhomanenko, Tatiana; Liles, Myfanwy; Lindner, Rolf; Linn, Christian; Lionetto, Federica; Liu, Bo; Lohn, Stefan; Longstaff, Iain; Lopes, Jose; Lopez-March, Neus; Lowdon, Peter; Lu, Haiting; Lucchesi, Donatella; Luo, Haofei; Lupato, Anna; Luppi, Eleonora; Lupton, Oliver; Machefert, Frederic; Machikhiliyan, Irina V; Maciuc, Florin; Maev, Oleg; Malde, Sneha; Malinin, Alexander; Manca, Giulia; Mancinelli, Giampiero; Mapelli, Alessandro; Maratas, Jan; Marchand, Jean François; Marconi, Umberto; Marin Benito, Carla; Marino, Pietro; Märki, Raphael; Marks, Jörg; Martellotti, Giuseppe; Martens, Aurelien; Martín Sánchez, Alexandra; Martinelli, Maurizio; Martinez Santos, Diego; Martinez Vidal, Fernando; Martins Tostes, Danielle; Massafferri, André; Matev, Rosen; Mathe, 
Zoltan; Matteuzzi, Clara; Maurin, Brice; Mazurov, Alexander; McCann, Michael; McCarthy, James; McNab, Andrew; McNulty, Ronan; McSkelly, Ben; Meadows, Brian; Meier, Frank; Meissner, Marco; Merk, Marcel; Milanes, Diego Alejandro; Minard, Marie-Noelle; Moggi, Niccolò; Molina Rodriguez, Josue; Monteil, Stephane; Morandin, Mauro; Morawski, Piotr; Mordà, Alessandro; Morello, Michael Joseph; Moron, Jakub; Morris, Adam Benjamin; Mountain, Raymond; Muheim, Franz; Müller, Katharina; Mussini, Manuel; Muster, Bastien; Naik, Paras; Nakada, Tatsuya; Nandakumar, Raja; Nasteva, Irina; Needham, Matthew; Neri, Nicola; Neubert, Sebastian; Neufeld, Niko; Neuner, Max; Nguyen, Anh Duc; Nguyen, Thi-Dung; Nguyen-Mau, Chung; Nicol, Michelle; Niess, Valentin; Niet, Ramon; Nikitin, Nikolay; Nikodem, Thomas; Novoselov, Alexey; O'Hanlon, Daniel Patrick; Oblakowska-Mucha, Agnieszka; Obraztsov, Vladimir; Oggero, Serena; Ogilvy, Stephen; Okhrimenko, Oleksandr; Oldeman, Rudolf; Onderwater, Gerco; Orlandea, Marius; Otalora Goicochea, Juan Martin; Owen, Patrick; Oyanguren, Maria Arantza; Pal, Bilas Kanti; Palano, Antimo; Palombo, Fernando; Palutan, Matteo; Panman, Jacob; Papanestis, Antonios; Pappagallo, Marco; Pappalardo, Luciano; Parkes, Christopher; Parkinson, Christopher John; Passaleva, Giovanni; Patel, Girish; Patel, Mitesh; Patrignani, Claudia; Pearce, Alex; Pellegrino, Antonio; Pepe Altarelli, Monica; Perazzini, Stefano; Perret, Pascal; Perrin-Terrin, Mathieu; Pescatore, Luca; Pesen, Erhan; Pessina, Gianluigi; Petridis, Konstantin; Petrolini, Alessandro; Picatoste Olloqui, Eduardo; Pietrzyk, Boleslaw; Pilař, Tomas; Pinci, Davide; Pistone, Alessandro; Playfer, Stephen; Plo Casasus, Maximo; Polci, Francesco; Poluektov, Anton; Polycarpo, Erica; Popov, Alexander; Popov, Dmitry; Popovici, Bogdan; Potterat, Cédric; Price, Eugenia; Price, Joseph David; Prisciandaro, Jessica; Pritchard, Adrian; Prouve, Claire; Pugatch, Valery; Puig Navarro, Albert; Punzi, Giovanni; Qian, Wenbin; Rachwal, Bartolomiej; Rademacker, Jonas; Rakotomiaramanana, Barinjaka; Rama, Matteo; Rangel, Murilo; Raniuk, Iurii; Rauschmayr, Nathalie; Raven, Gerhard; Redi, Federico; Reichert, Stefanie; Reid, Matthew; dos Reis, Alberto; Ricciardi, Stefania; Richards, Sophie; Rihl, Mariana; Rinnert, Kurt; Rives Molina, Vincente; Robbe, Patrick; Rodrigues, Ana Barbara; Rodrigues, Eduardo; Rodriguez Perez, Pablo; Roiser, Stefan; Romanovsky, Vladimir; Romero Vidal, Antonio; Rotondo, Marcello; Rouvinet, Julien; Ruf, Thomas; Ruiz, Hugo; Ruiz Valls, Pablo; Saborido Silva, Juan Jose; Sagidova, Naylya; Sail, Paul; Saitta, Biagio; Salustino Guimaraes, Valdir; Sanchez Mayordomo, Carlos; Sanmartin Sedes, Brais; Santacesaria, Roberta; Santamarina Rios, Cibran; Santovetti, Emanuele; Sarti, Alessio; Satriano, Celestina; Satta, Alessia; Saunders, Daniel Martin; Savrina, Darya; Schiller, Manuel; Schindler, Heinrich; Schlupp, Maximilian; Schmelling, Michael; Schmidt, Burkhard; Schneider, Olivier; Schopper, Andreas; Schubiger, Maxime; Schune, Marie Helene; Schwemmer, Rainer; Sciascia, Barbara; Sciubba, Adalberto; Semennikov, Alexander; Sepp, Indrek; Serra, Nicola; Serrano, Justine; Sestini, Lorenzo; Seyfert, Paul; Shapkin, Mikhail; Shapoval, Illya; Shcheglov, Yury; Shears, Tara; Shekhtman, Lev; Shevchenko, Vladimir; Shires, Alexander; Silva Coutinho, Rafael; Simi, Gabriele; Sirendi, Marek; Skidmore, Nicola; Skwarnicki, Tomasz; Smith, Anthony; Smith, Edmund; Smith, Eluned; Smith, Jackson; Smith, Mark; Snoek, Hella; Sokoloff, Michael; Soler, Paul; Soomro, Fatima; Souza, Daniel; 
Souza De Paula, Bruno; Spaan, Bernhard; Sparkes, Ailsa; Spradlin, Patrick; Sridharan, Srikanth; Stagni, Federico; Stahl, Marian; Stahl, Sascha; Steinkamp, Olaf; Stenyakin, Oleg; Stevenson, Scott; Stoica, Sabin; Stone, Sheldon; Storaci, Barbara; Stracka, Simone; Straticiuc, Mihai; Straumann, Ulrich; Stroili, Roberto; Subbiah, Vijay Kartik; Sun, Liang; Sutcliffe, William; Swientek, Krzysztof; Swientek, Stefan; Syropoulos, Vasileios; Szczekowski, Marek; Szczypka, Paul; Szumlak, Tomasz; T'Jampens, Stephane; Teklishyn, Maksym; Tellarini, Giulia; Teubert, Frederic; Thomas, Christopher; Thomas, Eric; van Tilburg, Jeroen; Tisserand, Vincent; Tobin, Mark; Tolk, Siim; Tomassetti, Luca; Tonelli, Diego; Topp-Joergensen, Stig; Torr, Nicholas; Tournefier, Edwige; Tourneur, Stephane; Tran, Minh Tâm; Tresch, Marco; Trisovic, Ana; Tsaregorodtsev, Andrei; Tsopelas, Panagiotis; Tuning, Niels; Ubeda Garcia, Mario; Ukleja, Artur; Ustyuzhanin, Andrey; Uwer, Ulrich; Vacca, Claudia; Vagnoni, Vincenzo; Valenti, Giovanni; Vallier, Alexis; Vazquez Gomez, Ricardo; Vazquez Regueiro, Pablo; Vázquez Sierra, Carlos; Vecchi, Stefania; Velthuis, Jaap; Veltri, Michele; Veneziano, Giovanni; Vesterinen, Mika; Viaud, Benoit; Vieira, Daniel; Vieites Diaz, Maria; Vilasis-Cardona, Xavier; Vollhardt, Achim; Volyanskyy, Dmytro; Voong, David; Vorobyev, Alexey; Vorobyev, Vitaly; Voß, Christian; de Vries, Jacco; Waldi, Roland; Wallace, Charlotte; Wallace, Ronan; Walsh, John; Wandernoth, Sebastian; Wang, Jianchun; Ward, David; Watson, Nigel; Websdale, David; Whitehead, Mark; Wicht, Jean; Wiedner, Dirk; Wilkinson, Guy; Williams, Matthew; Williams, Mike; Wilschut, Hans; Wilson, Fergus; Wimberley, Jack; Wishahi, Julian; Wislicki, Wojciech; Witek, Mariusz; Wormser, Guy; Wotton, Stephen; Wright, Simon; Wyllie, Kenneth; Xie, Yuehong; Xing, Zhou; Xu, Zhirui; Yang, Zhenwei; Yuan, Xuhao; Yushchenko, Oleg; Zangoli, Maria; Zavertyaev, Mikhail; Zhang, Liming; Zhang, Wen Chao; Zhang, Yanxi; Zhelezov, Alexey; Zhokhov, Anatoly; Zhong, Liang; Zvyagin, Alexander

    2014-12-05

    Measuring cross-sections at the LHC requires the luminosity to be determined accurately at each centre-of-mass energy $\sqrt{s}$. In this paper results are reported from the luminosity calibrations carried out at the LHC interaction point 8 with the LHCb detector for $\sqrt{s}$ = 2.76, 7 and 8 TeV (proton-proton collisions) and for $\sqrt{s_{NN}}$ = 5 TeV (proton-lead collisions). Both the "van der Meer scan" and "beam-gas imaging" luminosity calibration methods were employed. It is observed that the beam density profile cannot always be described by a function that is factorizable in the two transverse coordinates. The introduction of a two-dimensional description of the beams improves significantly the consistency of the results. For proton-proton interactions at $\sqrt{s}$ = 8 TeV a relative precision of the luminosity calibration of 1.47% is obtained using van der Meer scans and 1.43% using beam-gas imaging, resulting in a combined precision of 1.12%. Applying the calibration to the full data set determin...

  17. Precision Cosmology

    Science.gov (United States)

    Jones, Bernard J. T.

    2017-04-01

    Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.

  18. Applications of an automated stem measurer for precision forestry

    Science.gov (United States)

    N. Clark

    2001-01-01

    Accurate stem measurements are required for the determination of many silvicultural prescriptions, i.e., what are we going to do with a stand of trees. This would only be amplified in a precision forestry context. Many methods have been proposed for optimal ways to evaluate stems for a variety of characteristics. These methods usually involve the acquisition of total...

  19. Advanced bioanalytics for precision medicine.

    Science.gov (United States)

    Roda, Aldo; Michelini, Elisa; Caliceti, Cristiana; Guardigli, Massimo; Mirasoli, Mara; Simoni, Patrizia

    2018-01-01

    Precision medicine is a new paradigm that combines diagnostic, imaging, and analytical tools to produce accurate diagnoses and therapeutic interventions tailored to the individual patient. This approach stands in contrast to the traditional "one size fits all" concept, according to which researchers develop disease treatments and preventions for an "average" patient without considering individual differences. The "one size fits all" concept has led to many ineffective or inappropriate treatments, especially for pathologies such as Alzheimer's disease and cancer. Now, precision medicine is receiving massive funding in many countries, thanks to its social and economic potential in terms of improved disease prevention, diagnosis, and therapy. Bioanalytical chemistry is critical to precision medicine. This is because identifying an appropriate tailored therapy requires researchers to collect and analyze information on each patient's specific molecular biomarkers (e.g., proteins, nucleic acids, and metabolites). In other words, precision diagnostics is not possible without precise bioanalytical chemistry. This Trend article highlights some of the most recent advances, including massive analysis of multilayer omics, and new imaging technique applications suitable for implementing precision medicine. Graphical abstract Precision medicine combines bioanalytical chemistry, molecular diagnostics, and imaging tools for performing accurate diagnoses and selecting optimal therapies for each patient.

  20. High Precision Edge Detection Algorithm for Mechanical Parts

    Directory of Open Access Journals (Sweden)

    Duan Zhenyun

    2018-04-01

    Full Text Available High precision and high efficiency measurement is becoming an imperative requirement for a lot of mechanical parts. So in this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step edge normal section line Gaussian integral model of the backlight image is constructed, combined with the point spread function and the single step model. Then gray value of discrete points on the normal section line of pixel edge is calculated by surface interpolation, and the coordinate as well as gray information affected by noise is fitted in accordance with the Gaussian integral model. Therefore, a precise location of a subpixel edge was determined by searching the mean point. Finally, a gear tooth was measured by M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that the local edge fluctuation is reduced effectively by the proposed method in comparison with the existing subpixel edge detection algorithms. The subpixel edge location accuracy and computation speed are improved. And the maximum error of gear tooth profile total deviation is 1.9 μm compared with measurement result with gear measurement center. It indicates that the method has high reliability to meet the requirement of high precision measurement.

  1. High-precision relative position and attitude measurement for on-orbit maintenance of spacecraft

    Science.gov (United States)

    Zhu, Bing; Chen, Feng; Li, Dongdong; Wang, Ying

    2018-02-01

    In order to realize long-term on-orbit operation of spacecraft such as satellites and space stations, the life of the spacecraft can be extended not only through the long-life design of devices but also through on-orbit servicing and maintenance. Therefore, it is necessary to carry out precise and detailed maintenance of key components. In this paper, a high-precision relative position and attitude measurement method used in the maintenance of key components is given. This method mainly considers the design of the passive cooperative marker, the light-emitting device and the high-resolution camera in the presence of spatial stray light and noise. By using a series of algorithms, such as background elimination, feature extraction and position and attitude calculation, the high-precision relative pose parameters between the key operation parts and the maintenance equipment are obtained as the input to the control system. The simulation results show that the algorithm is accurate and effective, satisfying the requirements of the precision operation technique.
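
    The position and attitude calculation step can be illustrated with a generic perspective-n-point (PnP) solution: given the known geometry of the cooperative marker and its detected image points, a PnP solver returns the relative pose. The sketch below uses OpenCV's solvePnP as a stand-in; the camera intrinsics, marker geometry and image points are invented, and this is not the authors' algorithm.

    ```python
    # Generic illustration of the position-and-attitude step: known 3D marker
    # geometry plus detected image points -> relative pose via a PnP solver.
    # Camera parameters and points are placeholders, not values from the paper.
    import numpy as np
    import cv2

    # Known marker feature points in the marker frame (metres)
    object_points = np.array([
        [0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.1, 0.1, 0.0], [0.0, 0.1, 0.0],
    ], dtype=np.float64)

    # Corresponding detected image points (pixels), e.g. after background
    # elimination and centroid extraction
    image_points = np.array([
        [512.3, 384.1], [612.8, 386.4], [610.2, 486.9], [509.7, 484.5],
    ], dtype=np.float64)

    # Assumed pinhole camera intrinsics and zero lens distortion
    K = np.array([[1000.0, 0.0, 512.0],
                  [0.0, 1000.0, 384.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    if ok:
        R, _ = cv2.Rodrigues(rvec)   # rotation matrix (relative attitude)
        print("relative position (m):", tvec.ravel())
    ```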

  2. Precise measurement of cat patellofemoral joint surface geometry with multistation digital photogrammetry.

    Science.gov (United States)

    Ronsky, J L; Boyd, S K; Lichti, D D; Chapman, M A; Salkauskas, K

    1999-04-01

    Three-dimensional joint models are important tools for investigating mechanisms related to normal and pathological joints. Often these models necessitate accurate three-dimensional joint surface geometric data so that reliable model results can be obtained; however, in models based on small joints, this is often problematic due to limitations of the present techniques. These limitations include insufficient measurement precision, the requirement of contact for the measurement process, and the lack of entire joint description. This study presents a new non-contact method for precise determination of entire joint surfaces using multistation digital photogrammetry (MDPG), which is demonstrated by determining the cartilage and subchondral bone surfaces of the cat patellofemoral (PF) joint. The digital camera-lens setup was precisely calibrated using 16 photographs arranged to achieve highly convergent geometry to estimate interior and distortion parameters of the camera-lens setup. Subsequently, six photographs of each joint surface were acquired for surface measurement. The digital images were directly imported to a computer and newly introduced semi-automatic computer algorithms were used to precisely determine the image coordinates. Finally, a rigorous mathematical procedure known as bundle adjustment was used to determine the three-dimensional coordinates of the joint surfaces and to estimate the precision of the coordinates. These estimations were validated by comparing the MDPG measurements of a cylinder and plane to an analytical model. The joint surfaces were successfully measured using the MDPG method with mean precision estimates in the least favorable coordinate direction being 10.3 microns for subchondral bone and 17.9 microns for cartilage. The difference in measurement precision for bone and cartilage primarily reflects differences in the translucent properties of the surfaces.

  3. Evaluating the precision of passive sampling methods using ...

    Science.gov (United States)

    To assess these models, four different thicknesses of low-density polyethylene (LDPE) passive samplers were co-deployed for 28 days in the water column at three sites in New Bedford Harbor, MA, USA. Each sampler was pre-loaded with six PCB performance reference compounds (PRCs) to assess equilibrium status, such that the percent of PRC lost would range depending on PRC and LDPE thickness. These data allow subsequent Cfree comparisons to be made in two ways: (1) comparing Cfree derived from one thickness using different models and (2) comparing Cfree derived from the same model using different thicknesses of LDPE. Following the deployments, the percent of PRC lost ranged from 0 to 100%. As expected, fractional equilibrium decreased with increasing PRC molecular weight as well as sampler thickness. Overall, a total of 27 PCBs (log KOW ranging from 5.07 to 8.09) were measured at Cfree concentrations varying from 0.05 pg/L (PCB 206) to about 200 ng/L (PCB 28) on a single LDPE sampler. Relative standard deviations (RSDs) for total PCB measurements using the same thickness and varying model types ranged from 0.04 to 12% and increased with sampler thickness. Total PCB RSD for measurements using the same model and varying thickness ranged from 6 to 30%. No RSD trends between models were observed but RSD did increase as Cfree decreased. These findings indicate that existing models yield precise and reproducible results when using LDPE and PRCs to measure Cfree. This work in...
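
    One widely used way to turn PRC loss into a freely dissolved concentration is to treat the fraction of PRC lost as the sampler's fractional approach to equilibrium and divide the polymer concentration by the partition coefficient times that fraction. The sketch below shows that simplified correction; it is not one of the specific models compared in the study, and all numbers are invented.

    ```python
    # Simplified sketch of a PRC-based correction for non-equilibrium passive
    # sampling: fraction of PRC lost ~ fractional approach to equilibrium, and
    # Cfree = C_PE / (K_PEw * f_eq). A generic illustration only; not the models
    # compared in the study, and all numbers are placeholders.

    def cfree_from_prc(c_pe_ng_per_g: float, log_k_pew: float,
                       prc_initial: float, prc_final: float) -> float:
        """Estimate Cfree (ng/L) from the PCB concentration accumulated in LDPE
        (ng/g polymer), its LDPE-water partition coefficient (log K, L/kg), and
        the PRC amounts before and after deployment."""
        f_eq = 1.0 - prc_final / prc_initial      # fractional equilibrium from PRC loss
        k_pew = 10.0 ** log_k_pew                 # L/kg polymer
        c_pe = c_pe_ng_per_g * 1000.0             # ng/kg polymer
        return c_pe / (k_pew * f_eq)              # ng/L

    # Example: 45 ng/g PCB in LDPE, log K_PEw = 6.2, 60% of the PRC lost
    print(f"Cfree ~ {cfree_from_prc(45.0, 6.2, 100.0, 40.0):.3f} ng/L")
    ```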

  4. Calibration of the precision high voltage dividers of the KATRIN experiment

    Energy Technology Data Exchange (ETDEWEB)

    Rest, Oliver [Institut fuer Kernphysik, Westfaelische Wilhelms-Universitaet Muenster (Germany); Collaboration: KATRIN-Collaboration

    2016-07-01

    The KATRIN (KArlsruhe TRItium Neutrino) experiment will measure the endpoint region of the tritium β decay spectrum to determine the neutrino mass with a sensitivity of 200 meV/c². To achieve this sub-eV sensitivity the energy of the decay electrons will be analyzed using a MAC-E type spectrometer. The retarding potential of the MAC-E-filter (up to -35 kV) has to be monitored with a relative precision of 3 × 10⁻⁶. For this purpose the potential will be measured directly via two custom made precision high voltage dividers, which were developed and constructed in cooperation with the Physikalisch-Technische Bundesanstalt Braunschweig. In order to determine the absolute values and the stability of the scale factors of the voltage dividers, regular calibration measurements are essential. Such measurements have been performed during the last years using several different methods. The poster gives an overview of the methods and results of the calibration of the precision high voltage dividers.

  5. Precise Documentation: The Key to Better Software

    Science.gov (United States)

    Parnas, David Lorge

    The prime cause of the sorry “state of the art” in software development is our failure to produce good design documentation. Poor documentation is the cause of many errors and reduces efficiency in every phase of a software product's development and use. Most software developers believe that “documentation” refers to a collection of wordy, unstructured, introductory descriptions, thousands of pages that nobody wanted to write and nobody trusts. In contrast, Engineers in more traditional disciplines think of precise blueprints, circuit diagrams, and mathematical specifications of component properties. Software developers do not know how to produce precise documents for software. Software developers also think that documentation is something written after the software has been developed. In other fields of Engineering much of the documentation is written before and during the development. It represents forethought, not afterthought. Among the benefits of better documentation would be: easier reuse of old designs, better communication about requirements, more useful design reviews, easier integration of separately written modules, more effective code inspection, more effective testing, and more efficient corrections and improvements. This paper explains how to produce and use precise software documentation and illustrates the methods with several examples.

  6. New Insights of High-precision Asteroseismology: Acoustic Radius and χ2-matching Method for Solar-like Oscillator KIC 6225718

    Directory of Open Access Journals (Sweden)

    Wu Tao

    2017-01-01

    parameters. In the present work, we adopt the χ2-minimization method but only use the observed high-precision seismic observations (i.e., oscillation frequencies) to constrain theoretical models for analyzing the solar-like oscillator KIC 6225718. Finally, we find the acoustic radius τ0 is the only global parameter that can be accurately measured by the χ2-matching method between observed frequencies and theoretical model calculations for a pure p-mode oscillation star. We obtain τ0 = 4601.5 (+4.4/−8.3) seconds for KIC 6225718. This means that the mass and radius of the CMMs are degenerate with each other. In addition, we find that the distribution range of the acoustic radius is slightly enlarged by some extreme cases, which possess both a larger mass and a higher (or lower) metal abundance, at the lower acoustic radius end.
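
    Two quantities mentioned here, the frequency-matching χ2 and the acoustic radius, can be illustrated with a short sketch: χ2 compares observed and model frequencies weighted by their errors, and the acoustic radius is roughly the inverse of twice the mean large frequency separation. The frequencies below are invented placeholders, not the KIC 6225718 data.

    ```python
    # Small sketch of a seismic chi^2 match and of the approximate relation between
    # acoustic radius and mean large frequency separation, tau_0 ~ 1 / (2 * delta_nu).
    # Frequencies and errors are made-up placeholders.
    import numpy as np

    nu_obs = np.array([1900.4, 2005.1, 2110.3, 2215.8, 2320.9])   # observed, muHz
    nu_err = np.array([0.3, 0.3, 0.4, 0.4, 0.5])                  # errors, muHz
    nu_mod = np.array([1900.9, 2005.0, 2110.8, 2215.4, 2321.5])   # one model, muHz

    # Frequency-matching chi^2 between observations and this model
    chi2 = np.mean(((nu_obs - nu_mod) / nu_err) ** 2)

    # Mean large separation from consecutive radial orders, then acoustic radius
    delta_nu = np.mean(np.diff(nu_obs))                 # muHz
    tau0 = 1.0 / (2.0 * delta_nu * 1e-6)                # seconds
    print(f"chi^2 = {chi2:.2f}, delta_nu = {delta_nu:.1f} muHz, tau0 ~ {tau0:.0f} s")
    ```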

  7. A unified modeling and control design for precision transmission system with friction and backlash

    Directory of Open Access Journals (Sweden)

    Xiulan Bao

    2016-05-01

    Full Text Available The structural flexibility, nonlinear friction, and backlash are the major factors limiting the control performance of precision transmission systems. If uncompensated, these factors compromise the positioning and tracking accuracy of precision transmission systems and even cause limit cycles and oscillation. In this article, a framework for integrated design from dynamic modeling to controller design is proposed. A multi-state dynamic model is presented, which can unify the modeling for a multi-state, discontinuous system including the motor state, the motion state, the mechanical contact state, and the friction state. Then, a control design method related to the dynamic modeling using perturbation separation of the model parameters is presented. Using the proposed modeling method, a continuous dynamic model is established to include all different partition models. The model comprehensively describes the mechanical and electrical characteristics of the precision transmission system. A robust controller is designed using the proposed control method. Experimental results demonstrate that the proposed modeling method is accurate and the proposed control method significantly improves accuracy and robustness of the controller compared to traditional control methods.

  8. PSYCHE CPMG-HSQMBC: An NMR Spectroscopic Method for Precise and Simple Measurement of Long-Range Heteronuclear Coupling Constants.

    Science.gov (United States)

    Timári, István; Szilágyi, László; Kövér, Katalin E

    2015-09-28

    Among the NMR spectroscopic parameters, long-range heteronuclear coupling constants convey invaluable information on torsion angles relevant to glycosidic linkages of carbohydrates. A broadband homonuclear decoupled PSYCHE CPMG-HSQMBC method for the precise and direct measurement of multiple-bond heteronuclear couplings is presented. The PSYCHE scheme built into the pulse sequence efficiently eliminates unwanted proton-proton splittings from the heteronuclear multiplets so that the desired heteronuclear couplings can be determined simply by measuring frequency differences between peak maxima of pure antiphase doublets. Moreover, PSYCHE CPMG-HSQMBC can provide significant improvement in sensitivity as compared to an earlier Zangger-Sterk-based method. Applications of the proposed pulse sequence are demonstrated for the extraction of (n)J((1)H,(77)Se) and (n)J((1)H,(13)C) values, respectively, in carbohydrates; further extensions can be envisioned in any J-based structural and conformational studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. A comparison of manual anthropometric measurements with Kinect-based scanned measurements in terms of precision and reliability.

    Science.gov (United States)

    Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina

    2018-01-01

    Collecting anthropometric data for real-life applications demands a high degree of precision and reliability. It is important to test new equipment that will be used for data collection. OBJECTIVE: To compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data was collected using a measuring tape and a Kinect-based 3D body scanner. It was evaluated in terms of precision by considering the regular and relative Technical Error of Measurement and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables, however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners, hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.
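
    The abstract relies on the regular and relative Technical Error of Measurement (TEM) as its precision statistics. A minimal Python sketch of these two quantities for a test-retest design is given below; the repeated girth measurements are hypothetical and only illustrate the calculation.

```python
import numpy as np

def technical_error_of_measurement(m1, m2):
    """Absolute TEM for two repeated measurements of the same subjects:
    TEM = sqrt(sum(d^2) / (2N)), with d the within-subject differences."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

def relative_tem(m1, m2):
    """Relative TEM (%) = TEM / grand mean * 100."""
    tem = technical_error_of_measurement(m1, m2)
    grand_mean = np.mean(np.concatenate([np.asarray(m1, float), np.asarray(m2, float)]))
    return 100.0 * tem / grand_mean

# Hypothetical repeated waist-girth measurements (cm) for five subjects
trial_1 = [72.1, 85.4, 90.2, 65.8, 78.0]
trial_2 = [72.5, 85.0, 89.8, 66.3, 78.4]
print(f"TEM = {technical_error_of_measurement(trial_1, trial_2):.2f} cm")
print(f"relative TEM = {relative_tem(trial_1, trial_2):.2f} %")
```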

  10. Improving precision in gel electrophoresis by stepwisely decreasing variance components.

    Science.gov (United States)

    Schröder, Simone; Brandmüller, Asita; Deng, Xi; Ahmed, Aftab; Wätzig, Hermann

    2009-10-15

    Many methods have been developed in order to increase selectivity and sensitivity in proteome research. However, gel electrophoresis (GE) which is one of the major techniques in this area, is still known for its often unsatisfactory precision. Percental relative standard deviations (RSD%) up to 60% have been reported. In this case the improvement of precision and sensitivity is absolutely essential, particularly for the quality control of biopharmaceuticals. Our work reflects the remarkable and completely irregular changes of the background signal from gel to gel. This irregularity was identified as one of the governing error sources. These background changes can be strongly reduced by using a signal detection in the near-infrared (NIR) range. This particular detection method provides the most sensitive approach for conventional CCB (Colloidal Coomassie Blue) stained gels, which is reflected in a total error of just 5% (RSD%). In order to further investigate variance components in GE, an experimental Plackett-Burman screening design was performed. The influence of seven potential factors on the precision was investigated using 10 proteins with different properties analyzed by NIR detection. The results emphasized the individuality of the proteins. Completely different factors were identified to be significant for each protein. However, out of seven investigated parameters, just four showed a significant effect on some proteins, namely the parameters of: destaining time, staining temperature, changes of detergent additives (SDS and LDS) in the sample buffer, and the age of the gels. As a result, precision can only be improved individually for each protein or protein classes. Further understanding of the unique properties of proteins should enable us to improve the precision in gel electrophoresis.

  11. SpineAnalyzer™ is an accurate and precise method of vertebral fracture detection and classification on dual-energy lateral vertebral assessment scans

    International Nuclear Information System (INIS)

    Birch, C.; Knapp, K.; Hopkins, S.; Gallimore, S.; Rock, B.

    2015-01-01

    Osteoporotic fractures of the spine are associated with significant morbidity, are highly predictive of hip fractures, but frequently do not present clinically. When there is a low to moderate clinical suspicion of vertebral fracture, which would not justify acquisition of a radiograph, vertebral fracture assessment (VFA) using Dual-energy X-ray Absorptiometry (DXA) offers a low-dose opportunity for diagnosis. Different approaches to the classification of vertebral fractures have been documented. The aim of this study was to measure the precision and accuracy of SpineAnalyzer™, a quantitative morphometry software program. Lateral vertebral assessment images of 64 men were analysed using SpineAnalyzer™ and standard GE Lunar software. The images were also analysed by two expert readers using a semi-quantitative approach. Agreement between groups ranged from 95.99% to 98.60%. The intra-rater precision for the application of SpineAnalyzer™ to vertebrae was poor in the upper thoracic regions, but good elsewhere. SpineAnalyzer™ is a reproducible and accurate method for measuring vertebral height and quantifying vertebral fractures from VFA scans. - Highlights: • Vertebral fracture assessment (VFA) using Dual-energy X-ray Absorptiometry (DXA) offers a low-dose opportunity for diagnosis. • Agreement between VFA software (SpineAnalyzer™) and expert readers is high. • Intra-rater precision of SpineAnalyzer™ applied to upper thoracic vertebrae is poor, but good elsewhere. • SpineAnalyzer™ is reproducible and accurate for vertebral height measurement and fracture quantification from VFA scans

  12. Impact of PET/CT system, reconstruction protocol, data analysis method, and repositioning on PET/CT precision: An experimental evaluation using an oncology and brain phantom.

    Science.gov (United States)

    Mansor, Syahir; Pfaehler, Elisabeth; Heijtel, Dennis; Lodge, Martin A; Boellaard, Ronald; Yaqub, Maqsood

    2017-12-01

    In longitudinal oncological and brain PET/CT studies, it is important to understand the repeatability of quantitative PET metrics in order to assess change in tracer uptake. The present studies were performed in order to assess precision as function of PET/CT system, reconstruction protocol, analysis method, scan duration (or image noise), and repositioning in the field of view. Multiple (repeated) scans have been performed using a NEMA image quality (IQ) phantom and a 3D Hoffman brain phantom filled with 18 F solutions on two systems. Studies were performed with and without randomly repositioning the phantom in the field of view; precision was affected most strongly in the case of smaller spheres. The precision of PET metrics depends on the combination of reconstruction protocol, data analysis methods and scan duration (scan statistics). Moreover, precision was also affected by phantom repositioning but its impact depended on the data analysis method in combination with the reconstructed voxel size (tissue fraction effect). This study suggests that for oncological PET studies the use of SUV peak may be preferred over SUV max because SUV peak is less sensitive to patient repositioning/tumor sampling. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
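
    The comparison of SUVmax and SUVpeak mentioned in the conclusion can be illustrated with a simplified sketch: SUVmax is the hottest voxel, while SUVpeak is the highest mean over an approximately 1 cm³ spherical neighbourhood. The Python below is a generic illustration under that simplified definition (voxel size, sphere diameter, and the synthetic uptake volume are assumptions, not the authors' processing chain).

```python
import numpy as np
from scipy import ndimage

def suv_max(img):
    """Hottest single voxel in the volume of interest."""
    return float(img.max())

def suv_peak(img, voxel_size_mm=4.0, sphere_diameter_mm=12.0):
    """Simplified SUVpeak: highest mean over an ~1 cm^3 spherical
    neighbourhood, evaluated at every voxel position."""
    r_vox = (sphere_diameter_mm / 2.0) / voxel_size_mm
    span = int(np.ceil(r_vox))
    zz, yy, xx = np.mgrid[-span:span + 1, -span:span + 1, -span:span + 1]
    kernel = (zz**2 + yy**2 + xx**2 <= r_vox**2).astype(float)
    kernel /= kernel.sum()
    local_means = ndimage.convolve(img, kernel, mode="constant", cval=0.0)
    return float(local_means.max())

# Hypothetical noisy "uptake" volume containing one warm cube
rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.1, (40, 40, 40))
img[18:23, 18:23, 18:23] += 4.0
print(suv_max(img), suv_peak(img))
```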

  13. Statistical precision of delayed-neutron nondestructive assay techniques

    International Nuclear Information System (INIS)

    Bayne, C.K.; McNeany, S.R.

    1979-02-01

    A theoretical analysis of the statistical precision of delayed-neutron nondestructive assay instruments is presented. Such instruments measure the fissile content of nuclear fuel samples by neutron irradiation and delayed-neutron detection. The precision of these techniques is limited by the statistical nature of the nuclear decay process, but the precision can be optimized by proper selection of system operating parameters. Our method is a three-part analysis. We first present differential--difference equations describing the fundamental physics of the measurements. We then derive and present complete analytical solutions to these equations. Final equations governing the expected number and variance of delayed-neutron counts were computer programmed to calculate the relative statistical precision of specific system operating parameters. Our results show that Poisson statistics do not govern the number of counts accumulated in multiple irradiation-count cycles and that, in general, maximum count precision does not correspond with maximum count as first expected. Covariance between the counts of individual cycles must be considered in determining the optimum number of irradiation-count cycles and the optimum irradiation-to-count time ratio. For the assay system in use at ORNL, covariance effects are small, but for systems with short irradiation-to-count transition times, covariance effects force the optimum number of irradiation-count cycles to be half those giving maximum count. We conclude that the equations governing the expected value and variance of delayed-neutron counts have been derived in closed form. These have been computerized and can be used to select optimum operating parameters for delayed-neutron assay devices

  14. Development and simulation of microfluidic Wheatstone bridge for high-precision sensor

    International Nuclear Information System (INIS)

    Shipulya, N D; Konakov, S A; Krzhizhanovskaya, V V

    2016-01-01

    In this work we present the results of analytical modeling and 3D computer simulation of microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of a bridge balancing process by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures a precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves and other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of microchannels was selected based on the analytical estimations. A detailed 3D numerical model was based on Navier-Stokes equations for a laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems. (paper)
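
    The microfluidic bridge is the hydraulic analogue of the electrical Wheatstone bridge: no flow passes through the bridge channel when the arm resistances satisfy R1/R2 = R3/R4. The sketch below assumes circular channels with Hagen-Poiseuille resistances and hypothetical dimensions; the authors' actual channel geometry and balancing-by-etching procedure are not reproduced.

```python
import math

def hydraulic_resistance(length_m, radius_m, viscosity_pa_s=1.0e-3):
    """Hagen-Poiseuille resistance of a circular microchannel: R = 8*mu*L / (pi*r^4)."""
    return 8.0 * viscosity_pa_s * length_m / (math.pi * radius_m**4)

# Hypothetical arm geometries (lengths and radii in metres)
R1 = hydraulic_resistance(5e-3, 50e-6)
R2 = hydraulic_resistance(5e-3, 50e-6)
R3 = hydraulic_resistance(4e-3, 45e-6)
R4 = hydraulic_resistance(4e-3, 45e-6)

# Analogue of the electrical balance condition: no flow through the
# bridge channel when R1/R2 == R3/R4.
print("balanced" if math.isclose(R1 / R2, R3 / R4, rel_tol=1e-9) else "unbalanced")
```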

  15. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    Science.gov (United States)

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

    Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.

  16. Ultra-wideband ranging precision and accuracy

    International Nuclear Information System (INIS)

    MacGougan, Glenn; O'Keefe, Kyle; Klukas, Richard

    2009-01-01

    This paper provides an overview of ultra-wideband (UWB) in the context of ranging applications and assesses the precision and accuracy of UWB ranging from both a theoretical perspective and a practical perspective using real data. The paper begins with a brief history of UWB technology and the most current definition of what constitutes an UWB signal. The potential precision of UWB ranging is assessed using Cramer–Rao lower bound analysis. UWB ranging methods are described and potential error sources are discussed. Two types of commercially available UWB ranging radios are introduced which are used in testing. Actual ranging accuracy is assessed from line-of-sight testing under benign signal conditions by comparison to high-accuracy electronic distance measurements and to ranges derived from GPS real-time kinematic positioning. Range measurements obtained in outdoor testing with line-of-sight obstructions and strong reflection sources are compared to ranges derived from classically surveyed positions. The paper concludes with a discussion of the potential applications for UWB ranging
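
    The Cramer-Rao analysis mentioned in the abstract bounds the achievable ranging precision by the signal-to-noise ratio and the effective bandwidth. One standard form of the time-of-arrival bound is sketched below with hypothetical UWB numbers; it illustrates the type of calculation, not the paper's exact analysis.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def toa_ranging_crlb(snr_db, effective_bandwidth_hz):
    """Standard Cramer-Rao lower bound on the std. dev. of a TOA-based
    range estimate: sigma_d >= c / (2*sqrt(2)*pi*sqrt(SNR)*beta)."""
    snr = 10.0 ** (snr_db / 10.0)
    return C / (2.0 * math.sqrt(2.0) * math.pi * math.sqrt(snr) * effective_bandwidth_hz)

# Hypothetical UWB pulse: 500 MHz effective bandwidth at 10 dB SNR
print(f"CRLB ~ {toa_ranging_crlb(10.0, 500e6) * 100:.2f} cm")
```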

  17. Precision of hyaline cartilage thickness measurements

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, K.; Buckwalter, K.; Helvie, M.; Niklason, L.; Martel, W. (Univ. of Michigan Hospitals, Ann Arbor, MI (United States). Dept. of Radiology)

    1992-05-01

    Measurement of cartilage thickness in vivo is an important indicator of the status of a joint as the various degenerative and inflammatory arthritides directly affect the condition of the cartilage. In order to assess the precision of thickness measurements of hyaline articular cartilage, we undertook a pilot study using MR imaging, plain radiography, and ultrasonography (US). We measured the cartilage of the hip and knee joints in 10 persons (4 healthy volunteers and 6 patients). The joints in each patient were examined on two separate occasions using each modality. In the hips as well as the knee joints, the most precise measuring method was plain film radiography. For radiographs of the knees obtained in the standing position, the coefficient of variation was 6.5%; in the hips this figure was 6.34%. US of the knees and MR imaging of the hips were the second best modalities in the measurement of cartilage thickness. In addition, MR imaging enabled the most complete visualization of the joint cartilage. (orig.).

  18. An assessment of the precision and confidence of aquatic eddy correlation measurements

    DEFF Research Database (Denmark)

    Donis, Daphne; Holtappels, Moritz; Noss, Christian

    2015-01-01

    facility with well-constrained hydrodynamics. These observations are used to review data processing procedures and to recommend improved deployment methods, thus improving the precision, reliability, and confidence of EC measurements. Specifically, this study demonstrates that 1) the alignment of the time...... series based on maximum cross correlation improved the precision of EC flux estimations; 2) an oxygen sensor with a response time of

  19. Laser precision microfabrication

    CERN Document Server

    Sugioka, Koji; Pique, Alberto

    2010-01-01

    Miniaturization and high precision are rapidly becoming a requirement for many industrial processes and products. As a result, there is greater interest in the use of laser microfabrication technology to achieve these goals. This book, composed of 16 chapters, covers all the topics of laser precision processing, from fundamental aspects to industrial applications, for both inorganic and biological materials. It reviews the state of the art of research and technological development in the area of laser processing.

  20. High precision electrostatic potential calculations for cylindrically symmetric lenses

    International Nuclear Information System (INIS)

    Edwards, David Jr.

    2007-01-01

    A method is developed for a potential calculation within cylindrically symmetric electrostatic lenses using mesh relaxation techniques, and it is capable of considerably higher accuracies than currently available. The method involves (i) creating very high order algorithms (orders of 6, 8, and 10) for determining the potentials at points in the net using surrounding point values, (ii) eliminating the effect of the large errors caused by singular points, and (iii) reducing gradients in the high gradient regions of the geometry, thereby allowing the algorithms used in these regions to achieve greater precisions--(ii) and (iii) achieved by the use of telescopic multiregions. In addition, an algorithm for points one unit from a metal surface is developed, allowing general mesh point algorithms to be used in these situations, thereby taking advantage of the enhanced precision of the latter. A maximum error function dependent on a sixth order gradient of the potential is defined. With this the single point algorithmic errors are able to be viewed over the entire net. Finally, it is demonstrated that by utilizing the above concepts and procedures, the potential of a point in a reasonably high gradient region of a test geometry can realize a precision of less than 10{sup -10}
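
    The paper's sixth- to tenth-order algorithms and telescopic multiregions are not reproduced here; as a generic illustration of mesh relaxation for the cylindrically symmetric Laplace equation (including the on-axis symmetry point), a standard second-order Jacobi sketch with a hypothetical two-electrode geometry is shown below.

```python
import numpy as np

def relax_cylindrical(v, fixed, h, n_iter=500):
    """Second-order Jacobi relaxation of Laplace's equation in (r, z)
    cylindrical coordinates (axis at r-index 0).  'fixed' marks electrode
    (Dirichlet) nodes whose potential is never updated."""
    nr, nz = v.shape
    for _ in range(n_iter):
        new = v.copy()
        for i in range(nr - 1):
            r = i * h
            for j in range(1, nz - 1):
                if fixed[i, j]:
                    continue
                if i == 0:  # on-axis limit of the cylindrical Laplacian
                    new[i, j] = (4.0 * v[1, j] + v[0, j + 1] + v[0, j - 1]) / 6.0
                else:
                    new[i, j] = (v[i + 1, j] + v[i - 1, j] + v[i, j + 1] + v[i, j - 1]
                                 + (h / (2.0 * r)) * (v[i + 1, j] - v[i - 1, j])) / 4.0
        v = new
    return v

# Tiny hypothetical gap: 0 V at z=0, 1 kV at z=end, grounded outer wall
nr, nz, h = 21, 41, 1e-3
v = np.zeros((nr, nz)); fixed = np.zeros((nr, nz), bool)
v[:, -1] = 1000.0; fixed[:, -1] = True; fixed[:, 0] = True; fixed[-1, :] = True
v = relax_cylindrical(v, fixed, h)
print(f"potential at mid-axis: {v[0, nz // 2]:.1f} V")
```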

  1. Precision enhancement of pavement roughness localization with connected vehicles

    International Nuclear Information System (INIS)

    Bridgelall, R; Huang, Y; Zhang, Z; Deng, F

    2016-01-01

    Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers. (paper)
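
    The position-tagging step described in the abstract amounts to anchoring the inertial samples at the reference marker and integrating the speed signal away from it. A minimal sketch of that bookkeeping is given below; the sampling rate, speeds, and marker position are hypothetical.

```python
def tag_positions(speeds_mps, dt_s, ref_index, ref_position_m):
    """Assign a path distance to every inertial sample by integrating the
    speed signal forwards and backwards from a reference inertial marker
    (e.g., a speed bump at a surveyed position)."""
    n = len(speeds_mps)
    positions = [0.0] * n
    positions[ref_index] = ref_position_m
    for i in range(ref_index + 1, n):          # forward from the marker
        positions[i] = positions[i - 1] + speeds_mps[i] * dt_s
    for i in range(ref_index - 1, -1, -1):     # backward from the marker
        positions[i] = positions[i + 1] - speeds_mps[i + 1] * dt_s
    return positions

# Hypothetical 10 Hz speed trace with the bump detected at sample 3
speeds = [12.0, 12.2, 12.1, 12.0, 11.9, 12.3, 12.4]
print(tag_positions(speeds, dt_s=0.1, ref_index=3, ref_position_m=1500.0))
```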

  2. Effects of X-ray tube parameters on thickness measure precision in X-ray profile gauge

    International Nuclear Information System (INIS)

    Miao Jichen; Wu Zhifang; Xing Guilai

    2011-01-01

    Instantaneous profile gauge technology has been widely used in the metallurgy industry because it can obtain the profile of steel strip on-line. It offers high measurement precision and a wide measurement range, but the X-ray tube parameters can only be set to a few discrete values during measurement. The relations between thickness measurement precision and X-ray tube current and voltage were analyzed. The results show that the X-ray tube current affects the thickness measurement precision and the X-ray tube voltage determines the thickness measurement range. A method of estimating the required X-ray tube current from the desired thickness measurement precision is provided. This method is the basis for X-ray source selection and X-ray source parameter setting in the instantaneous profile gauge. (authors)
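
    The link between tube current and thickness precision can be illustrated with a back-of-the-envelope photon-statistics argument: for Beer-Lambert attenuation the thickness noise scales as 1/(μ√N), and the detected photon number N grows with tube current at fixed voltage. The sketch below uses a hypothetical attenuation coefficient and photon counts and is not the authors' estimation method.

```python
import math

def thickness_mm(i_measured, i_open_beam, mu_per_mm):
    """Thickness from the Beer-Lambert law: t = -ln(I/I0) / mu."""
    return -math.log(i_measured / i_open_beam) / mu_per_mm

def thickness_noise_mm(detected_photons, mu_per_mm):
    """Photon-statistics estimate of the thickness noise, sigma_t ~ 1/(mu*sqrt(N));
    N scales roughly linearly with tube current at fixed voltage."""
    return 1.0 / (mu_per_mm * math.sqrt(detected_photons))

mu = 0.05                    # hypothetical effective attenuation coefficient, 1/mm
print(thickness_mm(0.368, 1.0, mu))            # ~20 mm strip for I/I0 = 0.368
n_low, n_high = 1e5, 1e6                       # detected photons at low vs. high tube current
print(thickness_noise_mm(n_low, mu), thickness_noise_mm(n_high, mu))
```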

  3. Calibration apparatus for precise barometers and vacuum gauges

    International Nuclear Information System (INIS)

    Woo, S.Y.; Choi, I.M.; Lee, Y.J.; Hong, S.S.; Chung, K.H.

    2004-01-01

    In order to calibrate highly accurate absolute pressure gauges, such as barometers and vacuum gauges, laser or ultrasonic mercury manometers have been used. However, the complexity and cost of manometers have greatly reduced the use of this method in most calibration laboratories. As a substitute, a gas-operated pressure balance is used to calibrate precise gauges. In such cases, many commercially available pressure balances are unsuitable because the necessary exposure of the piston, cylinder, and masses to the atmosphere causes contamination problems and allows dust particles into the gap between the piston and cylinder. In this article, a weight-loading device is described that allows masses to be changed in situ without breaking the vacuum. This device makes it possible to add or remove weights easily during the calibration, thereby greatly reducing the time between observations. Using this device, we efficiently calibrated a precise quartz resonance barometer (Paroscientific, model 760-16B) over a pressure range of 940-1050 hPa and a precise vacuum gauge (MKS, CDG 100 Torr) over a pressure range of 0-100 hPa

  4. Precise Mapping Of A Spatially Distributed Radioactive Source

    International Nuclear Information System (INIS)

    Beck, A.; Caras, I.; Piestum, S.; Sheli, E.; Melamud, Y.; Berant, S.; Kadmon, Y.; Tirosh, D.

    1999-01-01

    Spatial distribution measurement of radioactive sources is a routine task in the nuclear industry. The precision of each measurement depends upon the specific application. However, the technological edge of this precision is motivated by the production of standards for calibration. Within this definition, the most demanding field is the calibration of standards for medical equipment. In this paper, a semi-empirical method for controlling the measurement precision is demonstrated, using a relatively simple laboratory apparatus. The spatial distribution of the source radioactivity is measured as part of the quality assurance tests, during the production of flood sources. These sources are further used in calibration of medical gamma cameras. A typical flood source is a 40 x 60 cm{sup 2} plate with an activity of 10 mCi (or more) of {sup 57}Co isotope. The measurement set-up is based on a single NaI(Tl) scintillator with a photomultiplier tube, moving on an X-Y table which scans the flood source. In this application the source is required to have a uniform activity distribution over its surface

  5. Precision luminosity measurement at LHCb with beam-gas imaging

    International Nuclear Information System (INIS)

    Barschel, Colin

    2014-01-01

    The luminosity is the physical quantity which relates the cross-section to the production rate in collider experiments. The cross-section being the particle physics observable of interest, a precise determination of the luminosity is required. This work presents the absolute luminosity calibration results performed at the Large Hadron Collider beauty (LHCb) experiment at CERN using a novel method based on beam-gas interactions with data acquired at a center of mass energy √(s)=8 TeV and √(s)=2.76 TeV. Reconstructed beam-gas interaction vertices in LHCb are used to measure the beam profiles, thus making it possible to determine the beams' overlap integral. An important element of this work was to install and use a neon gas injection system to increase the beam-gas interaction rate. The precision reached with the beam-gas imaging method relies on the two-dimensional beam shape determination developed in this work. For such precision, the interaction vertex resolution is an important ingredient. Therefore, a new method has been developed using all reconstructed vertices in order to improve the understanding of the vertex resolution. In addition to the overlap integral, the knowledge of the colliding bunch populations is required to measure the luminosity. The determination of the bunch populations relies on LHC instruments to measure the bunch population fractions and the total beam intensity. Studies performed as part of this work resulted in a reduction of the bunch current normalization uncertainty from ±2.7% to ±0.2%, making it possible to achieve precision luminosity measurements at all LHC experiments. Furthermore, information on beam-gas interactions not originating from nominally filled bunches was analyzed to determine the charge fraction not participating in bunch collisions. The knowledge of this fraction is required to correct the total beam intensity. The reference cross-section of pp interactions with at least two tracks in the vertex detector
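
    For head-on collisions of Gaussian bunches, the overlap integral reduces to the familiar single-bunch-pair expression L = N1·N2·f / (2π·Σx·Σy), where the Σ's combine the beam widths that beam-gas imaging measures. The sketch below evaluates this formula with hypothetical round numbers, purely to show where the measured beam profiles and bunch populations enter.

```python
import math

def bunch_luminosity(n1, n2, f_rev_hz, sx1, sy1, sx2, sy2):
    """Single-bunch-pair luminosity for head-on collisions of Gaussian beams:
    L = N1*N2*f / (2*pi*Sigma_x*Sigma_y), with Sigma = sqrt(s1^2 + s2^2)."""
    sigma_x = math.sqrt(sx1**2 + sx2**2)
    sigma_y = math.sqrt(sy1**2 + sy2**2)
    return n1 * n2 * f_rev_hz / (2.0 * math.pi * sigma_x * sigma_y)

# Hypothetical LHC-like numbers: 1e11 protons per bunch, 11245 Hz revolution
# frequency, ~60 um transverse beam sizes (metres in, m^-2 s^-1 out)
L = bunch_luminosity(1e11, 1e11, 11245.0, 60e-6, 60e-6, 60e-6, 60e-6)
print(f"L ~ {L / 1e4:.2e} cm^-2 s^-1")
```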

  6. Precision Medicine and Men's Health.

    Science.gov (United States)

    Mata, Douglas A; Katchi, Farhan M; Ramasamy, Ranjith

    2017-07-01

    Precision medicine can greatly benefit men's health by helping to prevent, diagnose, and treat prostate cancer, benign prostatic hyperplasia, infertility, hypogonadism, and erectile dysfunction. For example, precision medicine can facilitate the selection of men at high risk for prostate cancer for targeted prostate-specific antigen screening and chemoprevention administration, as well as assist in identifying men who are resistant to medical therapy for prostatic hyperplasia, who may instead require surgery. Precision medicine-trained clinicians can also let couples know whether their specific cause of infertility should be bypassed by sperm extraction and in vitro fertilization to prevent abnormalities in their offspring. Though precision medicine's role in the management of hypogonadism has yet to be defined, it could be used to identify biomarkers associated with individual patients' responses to treatment so that appropriate therapy can be prescribed. Last, precision medicine can improve erectile dysfunction treatment by identifying genetic polymorphisms that regulate response to medical therapies and by aiding in the selection of patients for further cardiovascular disease screening.

  7. Precision muonium spectroscopy

    International Nuclear Information System (INIS)

    Jungmann, Klaus P.

    2016-01-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s–2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium–antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter. (author)

  8. New Insights of High-precision Asteroseismology: Acoustic Radius and χ2-matching Method for Solar-like Oscillator KIC 6225718

    Science.gov (United States)

    Wu, Tao; Li, Yan

    2017-10-01

    Asteroseismology is a powerful tool for probing stellar interiors and determining stellar fundamental parameters. In the present work, we adopt the χ2-minimization method but only use the observed high-precision seismic observations (i.e., oscillation frequencies) to constrain theoretical models for analyzing the solar-like oscillator KIC 6225718. Finally, we find the acoustic radius τ0 is the only global parameter that can be accurately measured by the χ2-matching method between observed frequencies and theoretical model calculations for a pure p-mode oscillation star. We obtain τ0 = 4601.5 (+4.4/−8.3) seconds for KIC 6225718. This leads to the mass and radius of the CMMs being degenerate with each other. In addition, we find that the distribution range of acoustic radius is slightly enlarged by some extreme cases, which possess both a larger mass and a higher (or lower) metal abundance, at the lower acoustic radius end.
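
    A minimal sketch of the kind of χ2-matching described here is shown below: each candidate stellar model is scored by the weighted misfit between its computed p-mode frequencies and the observed ones, and the lowest-χ2 model is retained. The convention (mean of squared normalized residuals) and the frequency values are illustrative assumptions, not the authors' model grid or data.

```python
import numpy as np

def chi2_frequency_match(obs_freqs, obs_errors, model_freqs):
    """chi^2 between observed and model oscillation frequencies:
    chi2 = (1/N) * sum_i ((f_obs,i - f_mod,i) / sigma_i)^2  (one common convention)."""
    obs = np.asarray(obs_freqs, float)
    err = np.asarray(obs_errors, float)
    mod = np.asarray(model_freqs, float)
    return float(np.mean(((obs - mod) / err) ** 2))

# Hypothetical p-mode frequencies (muHz) and a tiny grid of candidate models
f_obs = [1200.4, 1254.9, 1309.7, 1364.2]
f_err = [0.3, 0.3, 0.4, 0.4]
models = {"model_A": [1200.1, 1255.2, 1309.5, 1364.8],
          "model_B": [1198.0, 1252.5, 1307.0, 1362.0]}
best = min(models, key=lambda m: chi2_frequency_match(f_obs, f_err, models[m]))
print("best-matching model:", best)
```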

  9. Development of new methods in modern selective organic synthesis: preparation of functionalized molecules with atomic precision

    International Nuclear Information System (INIS)

    Ananikov, V P; Khemchyan, L L; Ivanova, Yu V; Dilman, A D; Levin, V V; Bukhtiyarov, V I; Sorokin, A M; Prosvirin, I P; Romanenko, A V; Simonov, P A; Vatsadze, S Z; Medved'ko, A V; Nuriev, V N; Nenajdenko, V G; Shmatova, O I; Muzalevskiy, V M; Koptyug, I V; Kovtunov, K V; Zhivonitko, V V; Likholobov, V A

    2014-01-01

    The challenges of the modern society and the growing demand of high-technology sectors of industrial production bring about a new phase in the development of organic synthesis. A cutting edge of modern synthetic methods is introduction of functional groups and more complex structural units into organic molecules with unprecedented control over the course of chemical transformation. Analysis of the state-of-the-art achievements in selective organic synthesis indicates the appearance of a new trend — the synthesis of organic molecules, biologically active compounds, pharmaceutical substances and smart materials with absolute selectivity. Most advanced approaches to organic synthesis anticipated in the near future can be defined as 'atomic precision' in chemical reactions. The present review considers selective methods of organic synthesis suitable for transformation of complex functionalized molecules under mild conditions. Selected key trends in the modern organic synthesis are considered including the preparation of organofluorine compounds, catalytic cross-coupling and oxidative cross-coupling reactions, atom-economic addition reactions, metathesis processes, oxidation and reduction reactions, synthesis of heterocyclic compounds, design of new homogeneous and heterogeneous catalytic systems, application of photocatalysis, scaling up synthetic procedures to industrial level and development of new approaches to investigation of mechanisms of catalytic reactions. The bibliography includes 840 references

  10. Precision engineering: an evolutionary perspective.

    Science.gov (United States)

    Evans, Chris J

    2012-08-28

    Precision engineering is a relatively new name for a technology with roots going back over a thousand years; those roots span astronomy, metrology, fundamental standards, manufacturing and money-making (literally). Throughout that history, precision engineers have created links across disparate disciplines to generate innovative responses to society's needs and wants. This review combines historical and technological perspectives to illuminate precision engineering's current character and directions. It first provides a working definition of precision engineering and then reviews the subject's roots. Examples will be given showing the contributions of the technology to society, while simultaneously showing the creative tension between the technological convergence that spurs new directions and the vertical disintegration that optimizes manufacturing economics.

  11. High precision analysis of trace lithium isotope by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Tang Lei; Liu Xuemei; Long Kaiming; Liu Zhao; Yang Tianli

    2010-01-01

    A high precision analysis method for ng-level lithium by thermal ionization mass spectrometry has been developed. By double-filament measurement, a phosphine acid ion enhancer, and a sample pre-baking technique, the precision of trace lithium analysis is improved. For a 100 ng lithium isotope standard sample, the relative standard deviation is better than 0.086%; for a 10 ng lithium isotope standard sample, the relative standard deviation is better than 0.90%. (authors)

  12. An Assessment of Imaging Informatics for Precision Medicine in Cancer.

    Science.gov (United States)

    Chennubhotla, C; Clarke, L P; Fedorov, A; Foran, D; Harris, G; Helton, E; Nordstrom, R; Prior, F; Rubin, D; Saltz, J H; Shalley, E; Sharma, A

    2017-08-01

    Objectives: Precision medicine requires the measurement, quantification, and cataloging of medical characteristics to identify the most effective medical intervention. However, the amount of available data exceeds our current capacity to extract meaningful information. We examine the informatics needs to achieve precision medicine from the perspective of quantitative imaging and oncology. Methods: The National Cancer Institute (NCI) organized several workshops on the topic of medical imaging and precision medicine. The observations and recommendations are summarized herein. Results: Recommendations include: use of standards in data collection and clinical correlates to promote interoperability; data sharing and validation of imaging tools; clinician's feedback in all phases of research and development; use of open-source architecture to encourage reproducibility and reusability; use of challenges which simulate real-world situations to incentivize innovation; partnership with industry to facilitate commercialization; and education in academic communities regarding the challenges involved with translation of technology from the research domain to clinical utility and the benefits of doing so. Conclusions: This article provides a survey of the role and priorities for imaging informatics to help advance quantitative imaging in the era of precision medicine. While these recommendations were drawn from oncology, they are relevant and applicable to other clinical domains where imaging aids precision medicine. Georg Thieme Verlag KG Stuttgart.

  13. Antihydrogen production and precision experiments

    International Nuclear Information System (INIS)

    Nieto, M.M.; Goldman, T.; Holzscheiter, M.H.

    1996-01-01

    The study of CPT invariance with the highest achievable precision in all particle sectors is of fundamental importance for physics. Equally important is the question of the gravitational acceleration of antimatter. In recent years, impressive progress has been achieved in capturing antiprotons in specially designed Penning traps, in cooling them to energies of a few milli-electron volts, and in storing them for hours in a small volume of space. Positrons have been accumulated in large numbers in similar traps, and low energy positron or positronium beams have been generated. Finally, steady progress has been made in trapping and cooling neutral atoms. Thus the ingredients to form antihydrogen at rest are at hand. Once antihydrogen atoms have been captured at low energy, spectroscopic methods can be applied to interrogate their atomic structure with extremely high precision and compare it to its normal matter counterpart, the hydrogen atom. Especially the 1S-2S transition, with a lifetime of the excited state of 122 msec and thereby a natural linewidth of 5 parts in 10{sup 16}, offers in principle the possibility to directly compare matter and antimatter properties at a level of 1 part in 10{sup 16}

  14. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.

    Science.gov (United States)

    Gardner, Brian; Grüning, André

    2016-01-01

    Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there still lack rules that have a theoretical basis, and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one of which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other one relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.

  15. Toward precision medicine in Alzheimer's disease.

    Science.gov (United States)

    Reitz, Christiane

    2016-03-01

    In Western societies, Alzheimer's disease (AD) is the most common form of dementia and the sixth leading cause of death. In recent years, the concept of precision medicine, an approach for disease prevention and treatment that is personalized to an individual's specific pattern of genetic variability, environment and lifestyle factors, has emerged. While for some diseases, in particular select cancers and a few monogenetic disorders such as cystic fibrosis, significant advances in precision medicine have been made over the past years, for most other diseases precision medicine is only in its beginning. To advance the application of precision medicine to a wider spectrum of disorders, governments around the world are starting to launch Precision Medicine Initiatives, major efforts to generate the extensive scientific knowledge needed to integrate the model of precision medicine into every day clinical practice. In this article we summarize the state of precision medicine in AD, review major obstacles in its development, and discuss its benefits in this highly prevalent, clinically and pathologically complex disease.

  16. High precision capacitive beam phase probe for KHIMA project

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ji-Gwang, E-mail: windy206@hanmail.net [Korea Institute of Radiological and Medical Sciences, 215–4, Gongneung-dong, Nowon-t, Seoul 139–706 (Korea, Republic of); Yang, Tae-Keun [Korea Institute of Radiological and Medical Sciences, 215–4, Gongneung-dong, Nowon-t, Seoul 139–706 (Korea, Republic of); Forck, Peter [GSI Helmholtz Centre for Ion Research, Darmstadt 64291, German (Germany)

    2016-11-21

    In the medium energy beam transport (MEBT) line of the KHIMA project, a high precision beam phase probe monitor is required for precise tuning of the RF phase and amplitude of the Radio Frequency Quadrupole (RFQ) accelerator and the IH-DTL linac. It is also used for measuring the kinetic energy of the ion beam by the time-of-flight (TOF) method using two phase probes. A capacitive beam phase probe has been developed. The electromagnetic design of the high precision phase probe was performed to satisfy a phase resolution of 1° (@200 MHz), which was confirmed by tests using a wire test bench. The measured phase accuracy of the fabricated phase probe is 1.19 ps. Pre-amplifier electronics with a 0.125 ∼ 1.61 GHz broad band were designed and fabricated to amplify the signal strength. The results of RF frequency and beam energy measurements using a proton beam from the cyclotron at KIRAMS are presented.
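
    The time-of-flight energy measurement with two phase probes reduces to β = L/(c·Δt) followed by the relativistic kinetic-energy relation. The sketch below shows that conversion per nucleon; the probe separation and measured TOF are hypothetical values, and bunch-ambiguity resolution at 200 MHz is ignored.

```python
import math

C = 299_792_458.0          # speed of light, m/s
AMU_MEV = 931.494          # atomic mass unit in MeV/c^2

def kinetic_energy_per_nucleon(distance_m, tof_s):
    """Kinetic energy per nucleon (MeV/u) from the time of flight between
    two phase probes: beta = L/(c*t), E_k = (gamma - 1)*u*c^2."""
    beta = distance_m / (C * tof_s)
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * AMU_MEV

# Hypothetical MEBT geometry: probes 1.2 m apart, measured TOF of 32.8 ns
print(f"{kinetic_energy_per_nucleon(1.2, 32.8e-9):.2f} MeV/u")   # ~7 MeV/u
```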

  17. Applications of Laser Precisely Processing Technology in Solar Cells

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    According to the design method of the laser resonator cavity, we optimized the primary parameters of the resonator and utilized a symmetric LD-array pumping scheme to implement high-brightness laser output in our laser cutter. The cutter was then applied to precisely cutting the conductive film of CuInSe2 solar cells, the electrode grooves of buried-contact silicon solar cells, and perforating wafers used for emitter-wrap-through silicon solar cells. The laser processing precision was less than 40 μm, the results met the requirements of solar cell fabrication technology, and the buried-contact cells' conversion efficiency was finally improved from 18% to 21%.

  18. Improving GLONASS Precise Orbit Determination through Data Connection

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-12-01

    Full Text Available In order to improve the precision of GLONASS orbits, this paper presents a method to connect the data segments of a single station-satellite pair to increase the observation continuity and, consequently, the strength of the precise orbit determination (POD solution. In this method, for each GLONASS station-satellite pair, the wide-lane ambiguities derived from the Melbourne–Wübbena combination are statistically tested and corrected for phase integer offsets and then the same is carried out for the narrow-lane ambiguities calculated from the POD solution. An experimental validation was carried out using one-month GNSS data of a global network with 175 IGS stations. The result shows that, on average, 27.1% of the GLONASS station-satellite pairs with multiple data segments could be connected to a single long observation arc and, thus, only one ambiguity parameter was estimated. Using the connected data, the GLONASS orbit overlapping RMS at the day boundaries could be reduced by 19.2% in ideal cases with an averaged reduction of about 6.3%.
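
    The wide-lane ambiguities referred to in the abstract come from the standard Melbourne-Wübbena combination of dual-frequency phase and code observables. A minimal sketch of that combination is given below; the GLONASS frequencies shown are for channel k = 0 and the observable values are arbitrary placeholders, so the printed number has no physical meaning.

```python
C = 299_792_458.0  # speed of light, m/s

def widelane_ambiguity(phi1_m, phi2_m, p1_m, p2_m, f1_hz, f2_hz):
    """Melbourne-Wuebbena combination (all observables in metres):
    L_MW = (f1*phi1 - f2*phi2)/(f1 - f2) - (f1*P1 + f2*P2)/(f1 + f2).
    Dividing by the wide-lane wavelength gives the float wide-lane ambiguity."""
    l_mw = (f1_hz * phi1_m - f2_hz * phi2_m) / (f1_hz - f2_hz) \
         - (f1_hz * p1_m + f2_hz * p2_m) / (f1_hz + f2_hz)
    lambda_wl = C / (f1_hz - f2_hz)
    return l_mw / lambda_wl

# Hypothetical GLONASS observables for frequency channel k = 0
f1 = 1602.0e6   # L1 centre frequency, Hz
f2 = 1246.0e6   # L2 centre frequency, Hz
n_wl = widelane_ambiguity(2.1003e7, 2.1003e7 + 0.35, 2.1003e7 + 1.2, 2.1003e7 + 1.3, f1, f2)
print(f"float wide-lane ambiguity: {n_wl:.2f} cycles")
```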

  19. A precise, efficient radiometric assay for bacterial growth

    International Nuclear Information System (INIS)

    Boonkitticharoen, V.; Ehrhardt, C.; Kirchner, P.T.

    1984-01-01

    The two-compartment radiometric assay for bacterial growth promised major advantages over systems in clinical use, but poor reproducibility and counting efficiency limited its application. In this method, 14-CO/sub 2/ produced by bacterial metabolism of C-14-glucose is trapped and counted on filter paper impregnated with NaOH and fluors. The authors sought to improve assay efficiency and precision through a systematic study of relevant physical and chemical factors. Improvements in efficiency (88% vs. 10%) and in precision (relative S.D. 5% vs. 40%) were produced by a) reversing growth medium and scintillator chambers to permit vigorous agitation, b) increasing NaOH quantity and using a supersaturated PPO solution and c) adding detergent to improve uniformity of NaOH-PPO mixture. Inoculum size, substrate concentration and O/sub 2/ transfer rate affected assay sensitivity but not bacterial growth rate. The authors' assay reliably detects bacterial growth for inocula of 10,000 organisms in 1 hour and for 25 organisms within 4 1/2 hours, thus surpassing other existing clinical and research methods

  20. Development of sensor guided precision sprayers

    NARCIS (Netherlands)

    Nieuwenhuizen, A.T.; Zande, van de J.C.

    2013-01-01

    Sensor guided precision sprayers were developed to automate the spray process with a focus on emission reduction and identical or increased efficacy, with the precision agriculture concept in mind. Within the project “Innovations2” sensor guided precision sprayers were introduced to leek,

  1. Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining

    International Nuclear Information System (INIS)

    Li, C G; Zhang, Q R; Cao, C G; Zhao, S L

    2006-01-01

    Numerical control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the character curve of an aspheric surface. Its algorithm and process are also proposed and simulated with Matlab 7.0 software. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The results verify that this method ensures smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality obtained by cubic NURBS interpolation is higher than that obtained by linear interpolation. The algorithm thus helps increase the surface form precision of workpieces in ultra-precision machining
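
    For reference, evaluating a point on a cubic NURBS curve only needs the Cox-de Boor basis recursion and the rational (weighted) combination of control points. The sketch below shows that evaluation step with hypothetical control points and weights; it is not the paper's full conformal interpolation algorithm.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl_pts, weights, knots, p=3):
    """Point on a degree-p NURBS curve:
    C(u) = sum_i N_{i,p}(u)*w_i*P_i / sum_i N_{i,p}(u)*w_i."""
    ctrl = np.asarray(ctrl_pts, float)
    w = np.asarray(weights, float)
    basis = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    return (basis * w) @ ctrl / np.sum(basis * w)

# Hypothetical character-curve control points (x, z) for an aspheric section
ctrl = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.45), (3.0, 1.05), (4.0, 1.9)]
wts = [1.0, 1.0, 1.2, 1.0, 1.0]
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]   # clamped cubic knot vector for 5 control points
print(nurbs_point(0.25, ctrl, wts, knots))
```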

  2. Proceedings of the International Conference on Applications of High Precision Atomic and Nuclear Methods

    International Nuclear Information System (INIS)

    Olariu, Agata; Stenstroem, Kristina; Hellborg, Ragnar

    2005-01-01

    This volume presents the Proceedings of the International Conference on Applications of High Precision Atomic and Nuclear Methods, held in Neptun, Romania from 2nd to 6th of September 2002. The conference was organized by The Center of Excellence of the European Commission: Inter-Disciplinary Research and Applications based on Nuclear and Atomic Physics (IDRANAP) from Horia Hulubei National Institute for Physics and Nuclear Engineering, IFIN-HH, Bucharest-Magurele, Romania. The meeting gathered 66 participants from 25 different laboratories in 11 countries, namely: Belgium, Bulgaria, France, Germany, Hungary, Poland, Portugal, Romania, Slovakia and Sweden. A non-European delegate came from Japan. The topics covered by the conference were as follows: - Environment: air, water and soil pollution, pollution with heavy elements and with radioisotopes, bio-monitoring (10 papers); - Radionuclide metrology (10 papers); - Ion beam based techniques for characterization of materials surface, ERDA, PIXE, PIGE, computer simulations, materials modifications, wear, corrosion (10 papers); - Accelerator Mass Spectrometry and applications in environment, archaeology, and medicine (7 papers); - Application of neutron spectrometry in condensed matter (1 paper); - Advanced techniques, facilities and applications (11 papers). Seventeen invited speakers covered through overview talks the main parts of these topics. The book contains the overview talks, oral contributions and poster contributions

  3. Precision of anterior and posterior corneal curvature measurements taken with the Oculus Pentacam

    Directory of Open Access Journals (Sweden)

    Elizabeth Chetty

    2016-06-01

    Full Text Available In the era of rapid advances in technology, new ophthalmic instruments are constantly influencing health sciences and necessitating investigations of the accuracy and precision of the new technology. The Oculus Pentacam (70700) has been available for some time now and numerous studies have investigated the precision of some of the parameters that the Pentacam is capable of measuring. Unfortunately some of these studies fall short by confusing the meanings of accuracy and precision and by not analysing the data correctly or completely. The aim of this study was to investigate the precision of the anterior and posterior corneal curvature measurements taken with the Oculus Pentacam (70700) holistically, with sound multivariate statistical methods. Twenty successive Pentacam measurements were taken over three different measuring sessions on one subject. Keratometric data for both the anterior and posterior corneal surfaces were analysed using multivariate statistics to determine the precision of the Oculus Pentacam. This instrument was found to have good precision both clinically and statistically for anterior corneal measurements but only good clinical precision for the posterior corneal surface. Key words: Oculus Pentacam; keratometric variation; corneal curvature; multivariate statistics

  4. Precision of dual energy X-ray absorptiometry for body composition measurements in cats

    International Nuclear Information System (INIS)

    Borges, N.C.; Vasconcellos, R.S.; Canola, J.C.; Carciofi, A.C.; Pereira, G.T.; Paula, F.J.A.

    2008-01-01

    Short-term precision errors of the individual subject and of the DEXA technique, including the effect of repositioning the cat on the examination table, were established. Four neutered adult cats (BW=4342 g) and three females (BW=3459 g) were submitted to five repeated scans with and without repositioning between them. Precision was estimated from the mean of the five measurements and expressed by the individual coefficient of variation (CV). The precision error of the technique was estimated by the variance of the scan pool (n=35) and expressed as the CV for the technique (CVt). The degrees of freedom and confidence intervals were determined to avoid underestimation of precision errors. Bone mineral content (BMC), lean mass (LM), and fat mass (FM) averages were higher (P<0.05) when animals were repositioned. The CVt was significantly higher (P<0.05) for bone mineral density (BMD), LM, and FM when the animals were repositioned. For short-term precision measurements, the repositioning of the animal was important to establish the precision of the technique. The dual energy X-ray absorptiometry method provided precision for body composition measurements in adult cats. (author)

  5. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    Science.gov (United States)

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
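
    The 'limits of agreement' procedure recommended in the abstract boils down to the mean of the paired differences plus or minus 1.96 times their standard deviation. A minimal sketch with hypothetical paired readings is shown below.

```python
import numpy as np

def limits_of_agreement(method_a, method_b):
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96*SD of
    the paired differences between two clinical tests."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical axial-length readings (mm) from two biometers on eight eyes
a = [23.10, 24.52, 22.98, 25.01, 23.75, 24.10, 23.40, 24.88]
b = [23.05, 24.60, 23.02, 24.95, 23.70, 24.22, 23.35, 24.80]
bias, lo, hi = limits_of_agreement(a, b)
print(f"bias = {bias:+.3f} mm, LoA = [{lo:+.3f}, {hi:+.3f}] mm")
```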

  6. The High Road to Astronomical Photometric Precision : Differential Photometry

    NARCIS (Netherlands)

    Milone, E. F.; Pel, Jan Willem

    2011-01-01

    Differential photometry offers the most precise method for measuring the brightness of astronomical objects. We attempt to demonstrate why this should be the case, and then describe how well it has been done through a review of the application of differential techniques from the earliest visual

  7. Novel precision enhancement algorithm with reduced image noise in cosmic muon tomography applications

    Directory of Open Access Journals (Sweden)

    Lee Sangkyu

    2016-01-01

    Full Text Available In this paper, we present a new algorithm that improves muon-based generated tomography images with increased precision and reduced image noise applicable to the detection of nuclear materials. Cosmic muon tomography is an interrogation-based imaging technique that, over the last decade, has been frequently employed for the detection of high-Z materials. This technique exploits the magnitude of cosmic muon scattering angles in order to construct an image. The scattering angles of the muons striking the geometry of interest are non-uniform, as cosmic muons vary in energy. The randomness of the scattering angles leads to significant noise in the muon tomography image. GEANT4 is used to numerically create data on the momenta and positions of scattered muons in a predefined geometry that includes high-Z materials. The numerically generated information is then processed with the point of closest approach reconstruction method to construct a muon tomography image; statistical filters are then developed to refine the point of closest approach reconstructed images. The filtered images exhibit reduced noise and enhanced precision when attempting to identify the presence of high-Z materials. The average precision from the point of closest approach reconstruction method is 13 %; for the integrated method, 88 %. The filtered image, therefore, results in a seven-fold improvement in precision compared to the point of closest approach reconstructed image.
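
    The point of closest approach (PoCA) reconstruction named in the abstract treats the incoming and outgoing muon tracks as straight lines and assigns the scattering event to the midpoint of their closest approach. A generic sketch of that geometry step is given below with a hypothetical muon; the statistical filtering developed in the paper is not reproduced.

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between the incoming and outgoing muon
    tracks (each given as a point and a direction), plus the scattering angle."""
    p1, d1 = np.asarray(p_in, float), np.asarray(d_in, float)
    p2, d2 = np.asarray(p_out, float), np.asarray(d_out, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                  # ~0 for (nearly) parallel tracks
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    closest_in, closest_out = p1 + t * d1, p2 + s * d2
    angle = np.arccos(np.clip(b / np.sqrt(a * c), -1.0, 1.0))
    return 0.5 * (closest_in + closest_out), angle

# Hypothetical muon crossing a detector volume (coordinates in cm):
# enters straight down the z-axis, scatters near the origin, exits deflected.
point, theta = poca([0, 0, 100], [0, 0, -1.0],
                    [2.0, 1.0, -100], [0.02, 0.01, -1.0])
print("PoCA:", point.round(2), " scattering angle [rad]:", round(float(theta), 4))
```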

  8. Precise numerical results for limit cycles in the quantum three-body problem

    International Nuclear Information System (INIS)

    Mohr, R.F.; Furnstahl, R.J.; Hammer, H.-W.; Perry, R.J.; Wilson, K.G.

    2006-01-01

    The study of the three-body problem with short-range attractive two-body forces has a rich history going back to the 1930s. Recent applications of effective field theory methods to atomic and nuclear physics have produced a much improved understanding of this problem, and we elucidate some of the issues using renormalization group ideas applied to precise nonperturbative calculations. These calculations provide 11-12 digits of precision for the binding energies in the infinite cutoff limit. The method starts with this limit as an approximation to an effective theory and allows cutoff dependence to be systematically computed as an expansion in powers of inverse cutoffs and logarithms of the cutoff. Renormalization of three-body bound states requires a short range three-body interaction, with a coupling that is governed by a precisely mapped limit cycle of the renormalization group. Additional three-body irrelevant interactions must be determined to control subleading dependence on the cutoff and this control is essential for an effective field theory since the continuum limit is not likely to match physical systems (e.g., few-nucleon bound and scattering states at low energy). Leading order calculations precise to 11-12 digits allow clear identification of subleading corrections, but these corrections have not been computed

  9. Precise Localization and Formation Control of Swarm Robots via Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Han Wu

    2014-01-01

    Full Text Available Precise localization and formation control are among the key technologies for achieving coordination and control of swarm robots, and are currently a bottleneck for practical applications of swarm robotic systems. Aiming at overcoming the limited individual perception and the difficulty of achieving precise localization and formation, a localization approach combining dead reckoning (DR) with wireless sensor network- (WSN-) based methods is proposed in this paper. Two kinds of WSN localization technologies are adopted in this paper, that is, ZigBee-based RSSI (received signal strength indication) global localization and electronic tag floors for calibration of local positioning. First, the DR localization information is combined with the ZigBee-based RSSI position information using the Kalman filter method to achieve precise global localization and maintain the robot formation. Then the electronic tag floors provide the robots with their precise coordinates in some local areas and enable the robot swarm to calibrate its formation by reducing the accumulated position errors. Hence, the overall performance of localization and formation control of the swarm robotic system is improved. Both simulation results and experimental results on a real schematic system are given to demonstrate the success of the proposed approach.
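
    The fusion step described above can be illustrated with a very small linear Kalman filter in which dead-reckoning displacements drive the prediction and the coarse RSSI fix provides the update; this is a minimal sketch, not the authors' implementation, and all noise values are assumed.

    ```python
    # Minimal sketch of fusing dead-reckoning predictions with noisy RSSI position
    # fixes through a linear Kalman filter, reduced to one planar robot with a
    # position-only state.
    import numpy as np

    x = np.zeros(2)                 # state: estimated (x, y) position
    P = np.eye(2) * 1.0             # state covariance
    Q = np.eye(2) * 0.01            # DR process noise (odometry drift per step)
    R = np.eye(2) * 0.25            # RSSI measurement noise (coarse global fix)

    def step(x, P, dr_displacement, rssi_fix):
        # Predict: dead reckoning moves the estimate, uncertainty grows.
        x = x + dr_displacement
        P = P + Q
        # Update: blend in the ZigBee RSSI position fix (H = identity).
        K = P @ np.linalg.inv(P + R)          # Kalman gain
        x = x + K @ (rssi_fix - x)
        P = (np.eye(2) - K) @ P
        return x, P

    x, P = step(x, P, np.array([0.10, 0.02]), np.array([0.12, 0.05]))
    print(x, np.diag(P))
    ```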

  10. Local high precision 3D measurement based on line laser measuring instrument

    Science.gov (United States)

    Zhang, Renwei; Liu, Wei; Lu, Yongkang; Zhang, Yang; Ma, Jianwei; Jia, Zhenyuan

    2018-03-01

    In order to realize precision machining and assembly of parts, the geometrical dimensions of local assembly surfaces need to be strictly guaranteed. In this paper, a local high-precision three-dimensional measurement method based on a line laser measuring instrument is proposed to achieve highly accurate three-dimensional reconstruction of the surface. To address the fact that a two-dimensional line laser measuring instrument lacks high-precision information in one dimension, a local three-dimensional profile measuring system based on an accurate single-axis controller is proposed. First, a three-dimensional data compensation method based on a spatial multi-angle line laser measuring instrument is proposed to achieve high-precision measurement along the missing axis. Through pre-processing of the 3D point cloud information, the measurement points can be recovered accurately. Finally, a target spherical surface is scanned in local three-dimensional measurements for accuracy verification. The experimental results show that this scheme can obtain the local three-dimensional information of the target quickly and accurately, compensates the error in the laser scanner information, and improves the local measurement accuracy.
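
    The accuracy check against a target sphere mentioned above typically amounts to fitting a sphere to the measured point cloud and inspecting the radius and residuals. The sketch below uses a standard algebraic least-squares sphere fit on synthetic data; the nominal radius and noise level are assumptions for illustration.

    ```python
    # Minimal sketch of verifying measurement accuracy against a reference ball:
    # fit a sphere to the scanned points and compare radius and residuals with
    # the nominal value.  Data below are synthetic.
    import numpy as np

    def fit_sphere(points):
        """Algebraic least-squares sphere fit; returns (center, radius)."""
        A = np.c_[2.0 * points, np.ones(len(points))]
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, k = sol[:3], sol[3]
        radius = np.sqrt(k + center @ center)
        return center, radius

    # Synthetic scan of a 12.7 mm radius target sphere with 5 micrometre noise.
    rng = np.random.default_rng(0)
    phi, theta = rng.uniform(0, np.pi, 500), rng.uniform(0, 2 * np.pi, 500)
    r = 12.7 + rng.normal(0, 0.005, 500)
    pts = np.c_[r * np.sin(phi) * np.cos(theta),
                r * np.sin(phi) * np.sin(theta),
                r * np.cos(phi)] + np.array([10.0, -4.0, 55.0])

    center, radius = fit_sphere(pts)
    residuals = np.linalg.norm(pts - center, axis=1) - radius
    print(radius, residuals.std())
    ```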

  11. Precision half-life measurement of 17F

    Science.gov (United States)

    Brodeur, M.; Nicoloff, C.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Gupta, Y. K.; Hall, M. R.; Hall, O.; Hu, J.; Kelly, J. M.; Kolata, J. J.; Long, J.; O'Malley, P.; Schultz, B. E.

    2016-02-01

    Background: The precise determination of ft values for superallowed mixed transitions between mirror nuclides is gaining attention, as such values could provide an avenue to test the theoretical corrections used to extract the Vud matrix element from superallowed pure Fermi transitions. The 17F decay is particularly interesting as it proceeds completely to the ground state of 17O, removing the need for branching ratio measurements. The dominant uncertainty on the ft value of the 17F mirror transition stems from a number of conflicting half-life measurements. Purpose: A precision half-life measurement of 17F was performed and compared to previous results. Methods: The half-life was determined from β counting of 17F implanted on a Ta foil that was removed from the beam for counting. The 17F beam was produced by a transfer reaction and separated by the TwinSol facility of the Nuclear Science Laboratory of the University of Notre Dame. Results: The measured value of t1/2(new) = 64.402(42) s is in agreement with several past measurements and represents one of the most precise measurements to date. In anticipation of future measurements of the correlation parameters for the decay, and using the new world average t1/2(world) = 64.398(61) s, we present a new estimate of the mixing ratio ρ for the mixed transition as well as the correlation parameters obtained by assuming Standard Model validity. Conclusions: The relative uncertainty on the new world average for the half-life is dominated by the large χ2=31 of the existing measurements. More precise measurements with different systematics are needed to remedy the situation.
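
    Extracting a half-life from β-counting data of this kind usually comes down to fitting an exponential decay plus background to the binned count rate. The sketch below does this on simulated data (not the TwinSol measurement); the initial activity, background and true half-life are assumptions chosen only to be of the right order.

    ```python
    # Minimal sketch of a half-life extraction from binned beta-counting data by
    # fitting an exponential decay plus constant background.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(0, 600, 5.0)                       # bin start times (s)
    rate = lambda t, A, lam, bkg: A * np.exp(-lam * t) + bkg

    true_t_half = 64.4                               # s, roughly the 17F value
    truth = rate(t, 5000.0, np.log(2) / true_t_half, 20.0)
    counts = np.random.default_rng(1).poisson(truth)

    popt, pcov = curve_fit(rate, t, counts, p0=(4000.0, 0.01, 10.0),
                           sigma=np.sqrt(np.clip(counts, 1, None)))
    lam, lam_err = popt[1], np.sqrt(pcov[1, 1])
    print(f"t1/2 = {np.log(2)/lam:.2f} +/- {np.log(2)*lam_err/lam**2:.2f} s")
    ```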

  12. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  13. What is precision medicine?

    Science.gov (United States)

    König, Inke R; Fuchs, Oliver; Hansen, Gesine; von Mutius, Erika; Kopp, Matthias V

    2017-10-01

    The term "precision medicine" has become very popular over recent years, fuelled by scientific as well as political perspectives. Despite its popularity, its exact meaning, and how it is different from other popular terms such as "stratified medicine", "targeted therapy" or "deep phenotyping" remains unclear. Commonly applied definitions focus on the stratification of patients, sometimes referred to as a novel taxonomy, and this is derived using large-scale data including clinical, lifestyle, genetic and further biomarker information, thus going beyond the classical "signs-and-symptoms" approach. While these aspects are relevant, this description leaves open a number of questions. For example, when does precision medicine begin? In which way does the stratification of patients translate into better healthcare? And can precision medicine be viewed as the end-point of a novel stratification of patients, as implied, or is it rather a greater whole? To clarify this, the aim of this paper is to provide a more comprehensive definition that focuses on precision medicine as a process. It will be shown that this proposed framework incorporates the derivation of novel taxonomies and their role in healthcare as part of the cycle, but also covers related terms. Copyright ©ERS 2017.

  14. Towards an Open Software Platform for Field Robots in Precision Agriculture

    DEFF Research Database (Denmark)

    Jensen, Kjeld; Larsen, Morten; Nielsen, Søren H

    2014-01-01

    Robotics in precision agriculture has the potential to improve competitiveness and increase sustainability compared to current crop production methods and has become an increasingly active area of research. Tractor guidance systems for supervised navigation and implement control have reached the market, and prototypes of field robots performing precision agriculture tasks without human intervention also exist. But research in advanced cognitive perception and behaviour that is required to enable a more efficient, reliable and safe autonomy becomes increasingly demanding due to the growing software complexity. A lack of collaboration between research groups contributes to the problem. Scientific publications describe methods and results from the work, but little field robot software is released and documented for others to use. We hypothesize that a common open software platform tailored...

  15. Precision Medicine in Cancer Treatment

    Science.gov (United States)

    Precision medicine helps doctors select cancer treatments that are most likely to help patients based on a genetic understanding of their disease. Learn about the promise of precision medicine and the role it plays in cancer treatment.

  16. Precision electron polarimetry

    International Nuclear Information System (INIS)

    Chudakov, E.

    2013-01-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  17. Accuracy and precision of 3 intraoral scanners and accuracy of conventional impressions: A novel in vivo analysis method.

    Science.gov (United States)

    Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A

    2018-02-01

    To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR were analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner-groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
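
    The best-fit alignment and deviation analysis mentioned above can be illustrated with a rigid-body (Kabsch) alignment of corresponding reference-body points followed by a per-point deviation. The point sets below are hypothetical, and the sketch is only analogous in spirit to the ATOS-based comparison described in the paper.

    ```python
    # Minimal sketch of a best-fit rigid alignment followed by a deviation analysis.
    import numpy as np

    def best_fit_rigid(source, target):
        """Kabsch algorithm: rotation R and translation t minimizing |R*source+t - target|."""
        cs, ct = source.mean(axis=0), target.mean(axis=0)
        H = (source - cs).T @ (target - ct)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        return R, ct - R @ cs

    ref = np.array([[0.0, 0.0, 0.0], [8.0, 1.0, 0.5], [17.0, 3.0, 1.0], [26.0, 2.0, 0.0]])
    ios = ref + np.array([[0.01, -0.02, 0.00], [0.03, 0.01, -0.01],
                          [-0.02, 0.02, 0.02], [0.04, -0.01, 0.01]])  # scanner output (mm)

    R, t = best_fit_rigid(ios, ref)
    aligned = ios @ R.T + t
    deviation = np.linalg.norm(aligned - ref, axis=1)
    print(deviation, deviation.mean())
    ```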

  18. Superior Intraparietal Sulcus Controls the Variability of Visual Working Memory Precision.

    Science.gov (United States)

    Galeano Weber, Elena M; Peters, Benjamin; Hahn, Tim; Bledowski, Christoph; Fiebach, Christian J

    2016-05-18

    Limitations of working memory (WM) capacity depend strongly on the cognitive resources that are available for maintaining WM contents in an activated state. Increasing the number of items to be maintained in WM was shown to reduce the precision of WM and to increase the variability of WM precision over time. Although WM precision was recently associated with neural codes particularly in early sensory cortex, we have so far no understanding of the neural bases underlying the variability of WM precision, and how WM precision is preserved under high load. To fill this gap, we combined human fMRI with computational modeling of behavioral performance in a delayed color-estimation WM task. Behavioral results replicate a reduction of WM precision and an increase of precision variability under high loads (5 > 3 > 1 colors). Load-dependent BOLD signals in primary visual cortex (V1) and superior intraparietal sulcus (IPS), measured during the WM task at 2-4 s after sample onset, were modulated by individual differences in load-related changes in the variability of WM precision. Although stronger load-related BOLD increase in superior IPS was related to lower increases in precision variability, thus stabilizing WM performance, the reverse was observed for V1. Finally, the detrimental effect of load on behavioral precision and precision variability was accompanied by a load-related decline in the accuracy of decoding the memory stimuli (colors) from left superior IPS. We suggest that the superior IPS may contribute to stabilizing visual WM performance by reducing the variability of memory precision in the face of higher load. This study investigates the neural bases of capacity limitations in visual working memory by combining fMRI with cognitive modeling of behavioral performance, in human participants. It provides evidence that the superior intraparietal sulcus (IPS) is a critical brain region that influences the variability of visual working memory precision between and

  19. Precision Medicine in Gastrointestinal Pathology.

    Science.gov (United States)

    Wang, David H; Park, Jason Y

    2016-05-01

    -Precision medicine is the promise of individualized therapy and management of patients based on their personal biology. There are now multiple global initiatives to perform whole-genome sequencing on millions of individuals. In the United States, an early program was the Million Veteran Program, and a more recent proposal in 2015 by the president of the United States is the Precision Medicine Initiative. To implement precision medicine in routine oncology care, genetic variants present in tumors need to be matched with effective clinical therapeutics. When we focus on the current state of precision medicine for gastrointestinal malignancies, it becomes apparent that there is a mixed history of success and failure. -To present the current state of precision medicine using gastrointestinal oncology as a model. We will present currently available targeted therapeutics, promising new findings in clinical genomic oncology, remaining quality issues in genomic testing, and emerging oncology clinical trial designs. -Review of the literature including clinical genomic studies on gastrointestinal malignancies, clinical oncology trials on therapeutics targeted to molecular alterations, and emerging clinical oncology study designs. -Translating our ability to sequence thousands of genes into meaningful improvements in patient survival will be the challenge for the next decade.

  20. [Some reflections on evidenced-based medicine, precision medicine, and big data-based research].

    Science.gov (United States)

    Tang, J L; Li, L M

    2018-01-10

    Evidence-based medicine remains the best paradigm for medical practice. However, evidence alone is not decisions; decisions must also consider resources available and the values of people. Evidence shows that most of those treated with blood pressure-lowering, cholesterol-lowering, glucose-lowering and anti-cancer drugs do not benefit from preventing severe complications such as cardiovascular events and deaths. This implies that diagnosis and treatment in modern medicine in many circumstances is imprecise. It has become a dream to identify and treat only those few who can respond to the treatment. Precision medicine has thus come into being. Precision medicine is however not a new idea and cannot rely solely on gene sequencing as it was initially proposed. Neither is the large cohort and multi-factorial approach a new idea; in fact it has been used widely since 1950s. Since its very beginning, medicine has never stopped in searching for more precise diagnostic and therapeutic methods and already made achievements at various levels of our understanding and knowledge, such as vaccine, blood transfusion, imaging, and cataract surgery. Genetic biotechnology is not the only path to precision but merely a new method. Most genes are found only weakly associated with disease and are thus unlikely to lead to great improvement in diagnostic and therapeutic precision. The traditional multi-factorial approach by embracing big data and incorporating genetic factors is probably the most realistic way ahead for precision medicine. Big data boasts of possession of the total population and large sample size and claims correlation can displace causation. They are serious misleading concepts. Science has never had to observe the totality in order to draw a valid conclusion; a large sample size is required only when the anticipated effect is small and clinically less meaningful; emphasis on correlation over causation is equivalent to rejection of the scientific principles and methods

  1. Hardware accuracy counters for application precision and quality feedback

    Science.gov (United States)

    de Paula Rosa Piga, Leonardo; Majumdar, Abhinandan; Paul, Indrani; Huang, Wei; Arora, Manish; Greathouse, Joseph L.

    2018-06-05

    Methods, devices, and systems for capturing an accuracy of an instruction executing on a processor. An instruction may be executed on the processor, and the accuracy of the instruction may be captured using a hardware counter circuit. The accuracy of the instruction may be captured by analyzing bits of at least one value of the instruction to determine a minimum or maximum precision datatype for representing the field, and determining whether to adjust a value of the hardware counter circuit accordingly. The representation may be output to a debugger or logfile for use by a developer, or may be output to a runtime or virtual machine to automatically adjust instruction precision or gating of portions of the processor datapath.
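
    A software analogue of the idea above (not the patented hardware counters) is to inspect the bits of each value and tally the narrowest datatype that could represent it without loss; such tallies are the kind of precision feedback a runtime or debugger could consume. The thresholds and categories in this sketch are illustrative assumptions.

    ```python
    # Minimal sketch of per-value precision analysis feeding simple "accuracy counters".
    import numpy as np

    def minimal_int_width(value: int) -> int:
        """Smallest power-of-two signed integer width that can hold `value`."""
        for width in (8, 16, 32, 64):
            if -(1 << (width - 1)) <= value < (1 << (width - 1)):
                return width
        raise ValueError("value does not fit in 64 bits")

    def fits_in_float32(value: float) -> bool:
        """True if rounding to float32 loses nothing relative to float64."""
        return float(np.float32(value)) == value

    counters = {"int8": 0, "int16": 0, "int32": 0, "int64": 0, "f32_ok": 0, "f64_needed": 0}
    for v in (7, -1300, 2**20, 0.5, 0.1):
        if isinstance(v, int):
            counters[f"int{minimal_int_width(v)}"] += 1
        else:
            counters["f32_ok" if fits_in_float32(v) else "f64_needed"] += 1
    print(counters)
    ```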

  2. A precise clock distribution network for MRPC-based experiments

    International Nuclear Information System (INIS)

    Wang, S.; Cao, P.; Shang, L.; An, Q.

    2016-01-01

    In high energy physics experiments, the MRPC (Multi-Gap Resistive Plate Chamber) detectors are widely used recently which can provide higher-resolution measurement for particle identification. However, the application of MRPC detectors leads to a series of challenges in electronics design with large number of front-end electronic channels, especially for distributing clock precisely. To deal with these challenges, this paper presents a universal scheme of clock transmission network for MRPC-based experiments with advantages of both precise clock distribution and global command synchronization. For precise clock distributing, the clock network is designed into a tree architecture with two stages: the first one has a point-to-multipoint long range bidirectional distribution with optical channels and the second one has a fan-out structure with copper link inside readout crates. To guarantee the precision of clock frequency or phase, the r-PTP (reduced Precision Time Protocol) and the DDMTD (digital Dual Mixer Time Difference) methods are used for frequency synthesis, phase measurement and adjustment, which is implemented by FPGA (Field Programmable Gate Array) in real-time. In addition, to synchronize global command execution, based upon this clock distribution network, synchronous signals are coded with clock for transmission. With technique of encoding/decoding and clock data recovery, signals such as global triggers or system control commands, can be distributed to all front-end channels synchronously, which greatly simplifies the system design. The experimental results show that both the clock jitter (RMS) and the clock skew can be less than 100 ps.
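
    The DDMTD principle referred to above can be illustrated with an idealized, jitter-free simulation: sampling two clocks of the same frequency with a helper clock at f·N/(N+1) produces beat waveforms whose edge offset is the input skew magnified by (N+1). The frequency, N and skew below are assumed values, not those of the described system.

    ```python
    # Minimal sketch of the DDMTD (digital dual mixer time difference) idea.
    import numpy as np

    f_clk = 125e6                      # frequency of the clocks being compared (assumed)
    N = 4096                           # DDMTD magnification parameter
    f_help = f_clk * N / (N + 1)       # helper clock, slightly below f_clk
    true_skew = 37e-12                 # 37 ps offset between the two clocks

    m = np.arange(3 * N)               # helper-clock sample indices (a few beat periods)
    t = m / f_help
    beat_a = (np.mod(t * f_clk, 1.0) < 0.5).astype(int)                 # sampled clock A
    beat_b = (np.mod((t - true_skew) * f_clk, 1.0) < 0.5).astype(int)   # sampled clock B

    def rising_edge_time(sig, t):
        i = np.flatnonzero((sig[1:] == 1) & (sig[:-1] == 0))[0] + 1
        return t[i]

    T_beat = (N + 1) / f_clk                                   # beat period
    dt = rising_edge_time(beat_b, t) - rising_edge_time(beat_a, t)
    dt = (dt + 0.5 * T_beat) % T_beat - 0.5 * T_beat           # wrap into one beat period
    print(f"estimated skew magnitude ~ {abs(dt) / (N + 1) * 1e12:.1f} ps")
    ```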

  3. Impacts of the precision agricultural technologies in Iran: An analysis experts' perception & their determinants

    Directory of Open Access Journals (Sweden)

    Somayeh Tohidyan Far

    2018-03-01

    Full Text Available Nowadays agricultural methods developments that are productively, economically, environmentally and socially sustainable are required immediately. The concept of precision agriculture is becoming an attractive idea for managing natural resources and realizing modern sustainable agricultural development. The purpose of this study was to investigate factors influencing impacts of precision agriculture from the viewpoints of Boushehr Province experts. The research method was a cross sectional survey and multi-stage random sampling was used to collect data from 115 experts in Boushehr province. According to the results, experts found underground and surface waters conservation, rural areas development, increase of productivity and increasing income as the most important impacts of precision agricultural technologies. Experts’ attitudes indicate their positive view toward these kinds of impacts. Also behavioral attitude has the most effect on impacts.

  4. [Progress in precision medicine: a scientific perspective].

    Science.gov (United States)

    Wang, B; Li, L M

    2017-01-10

    Precision medicine is a new strategy for disease prevention and treatment by taking into account differences in genetics, environment and lifestyles among individuals and making precise diseases classification and diagnosis, which can provide patients with personalized, targeted prevention and treatment. Large-scale population cohort studies are fundamental for precision medicine research, and could produce best evidence for precision medicine practices. Current criticisms on precision medicine mainly focus on the very small proportion of benefited patients, the neglect of social determinants for health, and the possible waste of limited medical resources. In spite of this, precision medicine is still a most hopeful research area, and would become a health care practice model in the future.

  5. Agricultural experts’ attitude towards precision agriculture: Evidence from Guilan Agricultural Organization, Northern Iran

    Directory of Open Access Journals (Sweden)

    Mohammad Sadegh Allahyari

    2016-09-01

    Full Text Available Identifying factors that influence the attitudes of agricultural experts regarding precision agriculture plays an important role in developing, promoting and establishing precision agriculture. The aim of this study was to identify factors affecting the attitudes of agricultural experts regarding the implementation of precision agriculture. A descriptive research design was employed as the research method. A research-made questionnaire was used to examine the agricultural experts’ attitude toward precision agriculture. Internal consistency was demonstrated with a coefficient alpha of 0.87, and the content and face validity of the instrument was confirmed by a panel of experts. The results show that technical, economic and accessibility factors accounted for 55% of the changes in attitudes towards precision agriculture. The findings revealed that there were no significant differences between participants in terms of gender, field of study, extension education, age, experience, organizational position and attitudes, while education levels had a significant effect on the respondent’s attitudes.

  6. MRPC-PET: A new technique for high precision time and position measurements

    International Nuclear Information System (INIS)

    Doroud, K.; Hatzifotiadou, D.; Li, S.; Williams, M.C.S.; Zichichi, A.; Zuyeuski, R.

    2011-01-01

    The purpose of this paper is to consider a new technology for medical diagnosis: the MRPC-PET. This technology allows excellent time resolution together with 2-D position information thus providing a fundamental step in this field. The principle of this method is based on the Multigap Resistive Plate Chamber (MRPC) capable of high precision time measurements. We have previously found that the route to precise timing is differential readout (this requires matching anode and cathode strips); thus crossed strip readout schemes traditionally used for 2-D readout cannot be exploited. In this paper we consider the time difference from the two ends of the strip to provide a high precision measurement along the strip; the average time gives precise timing. The MRPC-PET thus provides a basic step in the field of medical technology: excellent time resolution together with 2-D position measurement.
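
    The strip-readout arithmetic described above is simple: with arrival times t1 and t2 at the two ends of a strip of length L and signal propagation speed v, the time difference gives the hit position along the strip and the average gives a position-independent event time. The numbers in this sketch are assumed.

    ```python
    # Minimal sketch of position and time reconstruction from two-end strip readout.
    v = 0.5 * 3.0e8          # signal propagation speed along the strip (m/s), assumed
    L = 0.30                 # strip length (m)

    t1, t2 = 2.153e-9, 1.847e-9          # times measured at the two ends (s)

    position = 0.5 * v * (t1 - t2)       # hit position relative to strip centre (m)
    event_time = 0.5 * (t1 + t2) - L / (2 * v)   # time corrected for propagation

    print(f"x = {position*100:.1f} cm from centre, t = {event_time*1e9:.3f} ns")
    ```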

  7. Modeling and control of precision actuators

    CERN Document Server

    Kiong, Tan Kok

    2013-01-01

    IntroductionGrowing Interest in Precise ActuatorsTypes of Precise ActuatorsApplications of Precise ActuatorsNonlinear Dynamics and ModelingHysteresisCreepFrictionForce RipplesIdentification and Compensation of Preisach Hysteresis in Piezoelectric ActuatorsSVD-Based Identification and Compensation of Preisach HysteresisHigh-Bandwidth Identification and Compensation of Hysteretic Dynamics in Piezoelectric ActuatorsConcluding RemarksIdentification and Compensation of Frict

  8. Development of a Method to Assess the Precision Of the z-axis X-ray Beam Collimation in a CT Scanner

    Science.gov (United States)

    Kim, Yon-Min

    2018-05-01

    Generally X-ray equipment specifies the beam collimator for the accuracy measurement as a quality control item, but the computed tomography (CT) scanner with high dose has no collimator accuracy measurement item. If the radiation dose is to be reduced, an important step is to check if the beam precisely collimates at the body part for CT scan. However, few ways are available to assess how precisely the X-ray beam is collimated. In this regard, this paper provides a way to assess the precision of z-axis X-ray beam collimation in a CT scanner. After the image plate cassette had been exposed to the X-ray beam, the exposed width was automatically detected by using a computer program developed by the research team to calculate the difference between the exposed width and the imaged width (at isocenter). The result for the precision of z-axis X-ray beam collimation showed that the exposed width was 3.8 mm and the overexposure was high at 304% when a narrow beam of a 1.25 mm imaged width was used. In this study, the precision of the beam collimation of the CT scanner, which is frequently used for medical services, was measured in a convenient way by using the image plate (IP) cassette.
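
    The figure of merit quoted above is simply the measured (exposed) z-axis beam width relative to the nominal imaged width at isocentre, expressed as a percentage; the sketch below reproduces the 304% value reported for the 1.25 mm beam.

    ```python
    # Minimal sketch of the overexposure percentage used above.
    def overexposure_percent(exposed_width_mm: float, imaged_width_mm: float) -> float:
        return 100.0 * exposed_width_mm / imaged_width_mm

    print(overexposure_percent(3.8, 1.25))   # ~304 %
    ```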

  9. High-precision thickness measurements using beta backscatter

    International Nuclear Information System (INIS)

    Heckman, R.V.

    1978-11-01

    A two-axis, automated fixture for use with a high-intensity Pm-147 source and a photomultiplier-scintillation beta-backscatter probe for making thickness measurements has been designed and built. A custom interface was built to connect the system to a minicomputer, and software was written to position the tables, control the probe, and make the measurements. Measurements can be made in less time with much greater precision than by the method previously used

  10. Study on Maritime Logistics Warehousing Center Model and Precision Marketing Strategy Optimization Based on Fuzzy Method and Neural Network Model

    Directory of Open Access Journals (Sweden)

    Xiao Kefeng

    2017-08-01

    Full Text Available Bulk commodities, unlike retail goods, are unique in terms of location selection, the choice of transportation program and the decision objectives. How to make optimal decisions on facility location, demand distribution, shipping methods and route selection, and how to establish an effective distribution system that reduces cost, has become a pressing issue for e-commerce logistics that deserves a thorough and systematic solution. In this paper, a logistics warehousing center model and precision marketing strategy optimization based on a fuzzy method and a neural network model are proposed to solve this problem. Because of the complexity of the proposed model, we also design solution principles based on the fuzzy method and the neural network model. Finally, we solve numerous examples and compare the results from Lingo and Matlab, which are used to check the results and to illustrate the numerical example; the results show that the multi-objective model increases logistics costs while improving the efficiency of distribution time.

  11. Analysis of the accuracy of certain methods used for measuring very low reactivities; Analyse de la precision de certaines methodes de mesure de tres basses reactivites

    Energy Technology Data Exchange (ETDEWEB)

    Valat, J; Stern, T E

    1964-07-01

    The rapid measurement of anti-reactivities, in particular very low ones (i.e. a few tens of β), appears to be an interesting approach for the automatic start-up of a reactor and its optimisation. With this in view, the present report explores the various methods studied, essentially from the point of view of the time required to make the measurement with a given relative statistical accuracy, especially as far as very low reactivities are concerned. The statistical analysis is applied in turn to: the natural background-noise methods (auto-correlation and spectral density); the sinusoidal excitation methods for the reactivity or the source, with synchronous detection; and the periodic source excitation method using pulsed neutrons. Finally, the statistical analysis leads to the suggestion of a new method of source excitation using random neutronic square waves combined with a cross-correlation between the random excitation and the resulting output. (authors)

  12. Validated TLC-densitometric analysis for determination of carotenoids in fancy carp (Cyprinus carpio serum and the application for pharmacokinetic parameter assessment

    Directory of Open Access Journals (Sweden)

    Bundit Yuangsoi

    2008-09-01

    Full Text Available A densitometric thin-layer chromatographic (TLC) method for carotenoids such as astaxanthin, lutein, and β-carotene has been established and validated for the quantitative determination of carotenoids in fancy carp serum. This study can be used in the evaluation of pharmacokinetic parameters of carotenoids in fancy carp serum. Analyses of carotenoids were performed on TLC glass plates pre-coated with silica gel 60 as the stationary phase. Linear ascending development was carried out in a twin-trough glass chamber saturated with a mobile phase consisting of petroleum ether-diethyl ether-acetone (75:15:10, v/v/v) at a temperature of 25±2°C. A TLC scanner was used for spectrodensitometric scanning and analysis in absorbance mode at 450 nm. The system was found to give compact spots for astaxanthin, lutein, and β-carotene (Rf values of 0.21, 0.17 and 0.97, respectively). The method was validated for linearity, precision, accuracy, LOD, LOQ and HORRAT value. The linear regression data of astaxanthin, lutein, and β-carotene for the calibration plots showed a good linear relationship, with r2 = 0.999, 0.998 and 0.998, respectively, in a concentration range of 0.01-6.50 μg/spot with respect to the peak area. Precision (% RSDr) of astaxanthin, lutein, and β-carotene was 2.93, 3.34, and 2.61, respectively. The limit of detection (LOD) was 0.011, 0.023 and 0.026 μg/spot, respectively, and the limit of quantitation (LOQ) was 0.036, 0.075 and 0.085 μg/spot, respectively. The recoveries of astaxanthin, lutein, and β-carotene spiked into blank samples averaged 91.70% for astaxanthin (0.3-2.0 μg/ml), 90.47% for lutein (0.2-3.0 μg/ml), and 102.25% for β-carotene (0.1-1.0 μg/ml). For all carotenoids, the HORRAT values were below the critical value. Therefore, this method enables simple, rapid, economical and precise quantitative determination of carotenoids in fancy carp serum for the evaluation of pharmacokinetic parameters.
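
    The validation statistics quoted above (repeatability RSDr, recovery, HORRAT) can be computed as in the sketch below. The replicate values and spike level are illustrative only, and the Horwitz prediction with the conventional 2/3 factor for repeatability is an assumption about the HORRAT convention used, not taken from the paper.

    ```python
    # Minimal sketch of repeatability RSDr, spike recovery and a HORRAT-type ratio.
    import numpy as np

    replicates = np.array([1.02, 0.98, 1.01, 0.99, 1.03, 0.97])  # repeat analyses (ug/ml)
    spiked, blank, added = 1.85, 0.00, 2.00                      # recovery check (ug/ml)

    rsd_r = 100.0 * replicates.std(ddof=1) / replicates.mean()   # repeatability RSD (%)
    recovery = 100.0 * (spiked - blank) / added                  # spike recovery (%)

    # Horwitz predicted reproducibility RSD for mass fraction C (dimensionless);
    # HORRAT_r conventionally compares RSDr with about 2/3 of that prediction.
    C = 1.0e-6                                                   # ~1 ug/g as a fraction
    prsd_R = 2.0 ** (1.0 - 0.5 * np.log10(C))
    horrat_r = rsd_r / (0.66 * prsd_R)

    print(f"RSDr = {rsd_r:.2f}%, recovery = {recovery:.1f}%, HORRAT_r = {horrat_r:.2f}")
    ```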

  13. Precision alignment of permanent-magnet drift tubes

    International Nuclear Information System (INIS)

    Liska, D.J.; Dauelsberg, L.B.; Spalek, G.

    1986-01-01

    The Lawrence Berkeley National Laboratory (LBNL) technique of drift-tube alignment has been resurrected at Los Alamos for the precision alignment of 1-cm-bore drift tubes that carry high-gradient rare-earth-cobalt quadrupole. Because the quadrupole cannot be switched off, this technique is not applicable to a drift-tube assembly, but tests indicate that individual magnetic centers can be detected with a precision of +- 0.003 mm. Methods of transferring this information to machined alignment flats on the sides of the drift-tube body are discussed. With measurements of drift tubes designed for a 100-mA. 425-MHz drift-tube linac, we have detected offsets between the geometric and magnetic axes of up to +- 0.05 mm following final assembly and welding. This degree of offset is serious if not accommodated, because it represents the entire alignment tolerance for the 40-cell tank. The measurement equipment and technique are described

  14. Observing exoplanet populations with high-precision astrometry

    Science.gov (United States)

    Sahlmann, Johannes

    2012-06-01

    This thesis deals with the application of the astrometry technique, consisting in measuring the position of a star in the plane of the sky, for the discovery and characterisation of extra-solar planets. It is feasible only with a very high measurement precision, which motivates the use of space observatories, the development of new ground-based astronomical instrumentation and of innovative data analysis methods: The study of Sun-like stars with substellar companions using CORALIE radial velocities and HIPPARCOS astrometry leads to the determination of the frequency of close brown dwarf companions and to the discovery of a dividing line between massive planets and brown dwarf companions; An observation campaign employing optical imaging with a very large telescope demonstrates sufficient astrometric precision to detect planets around ultra-cool dwarf stars and the first results of the survey are presented; Finally, the design and initial astrometric performance of PRIMA, a new dual-feed near-infrared interferometric observing facility for relative astrometry is presented.

  15. Elimination of chloride ions in the analytical method for the precise determination of plutonium or uranium using titanous ions as reductant

    International Nuclear Information System (INIS)

    Nicol-Rostaing, C.; Wagner, J.F.

    1991-01-01

    The Corpel and Regnaud procedure for the precise determination of uranium and plutonium, using titanous(III) chloride as reductant, has been modified to be compatible with effluent discharge standards in nuclear plants. The removal of the chloride reagents has been studied; in the original method there are two, titanous chloride and ferric chloride. We propose titanous sulphate and ferric nitrate as substitute reagents. As titanous sulphate is not commercially available, a simple preparation procedure has been developed and is described together with storage conditions; the experimental conditions have been optimized and adapted for production on a laboratory scale.

  16. Occurrence of aflatoxins in peanuts and peanut products determined by liquid chromatography with fluorescence detection

    Directory of Open Access Journals (Sweden)

    Stojanovska-Dimzoska Biljana

    2013-01-01

    Full Text Available Liquid chromatography with fluorescence detection using immunoaffinity column clean-up is described as a method for the determination of aflatoxins (AFB1, AFB2, AFG1 and AFG2) in peanuts and peanut-based products. The procedure was validated. Good coefficients of correlation were found for all aflatoxins, in the range 0.9993-0.9999. The limit of detection (LOD) and limit of quantification (LOQ) ranged from 0.003-0.005 μg/kg and 0.009-0.023 μg/kg, respectively, which was acceptable. The mean recovery for total aflatoxins was 88.21%. The method also showed acceptable precision values in the range of 0.171-2.626% at the proposed concentration levels for all four aflatoxins. RSDR values (within-laboratory reproducibility) calculated from the results showed good agreement between the two analysts for all aflatoxins and ranged from 4.93-11.87%. The developed method was applied to the determination of aflatoxins in 27 samples of peanuts and peanut-based products. The results showed that 21 peanut samples (77.7%) were below the LOD of the method. Three samples had positive results above the MRL. One extreme value was recorded for total aflatoxins in a peanut sample (289.2 μg/kg), and two peanut-based products, a peanut snack and peanuts, had total aflatoxin contents of 16.3 μg/kg and 8.0 μg/kg, respectively. The results demonstrate that the procedure is suitable for the determination of aflatoxins in peanuts and peanut-based products and could be implemented for routine analysis.
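
    LOD and LOQ figures of the kind quoted above are commonly estimated from the calibration line as 3.3·σ/S and 10·σ/S, with σ the residual standard deviation and S the slope; the sketch below uses invented calibration points purely for illustration.

    ```python
    # Minimal sketch of calibration-based LOD/LOQ estimation.
    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # standards (ug/kg)
    area = np.array([10.2, 20.5, 40.1, 81.3, 160.8])    # detector response

    slope, intercept = np.polyfit(conc, area, 1)
    residuals = area - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)                        # residual SD (2 fitted params)

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"LOD ~ {lod:.3f} ug/kg, LOQ ~ {loq:.3f} ug/kg")
    ```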

  17. Precision wildlife medicine: applications of the human-centred precision medicine revolution to species conservation.

    Science.gov (United States)

    Whilde, Jenny; Martindale, Mark Q; Duffy, David J

    2017-05-01

    The current species extinction crisis is being exacerbated by an increased rate of emergence of epizootic disease. Human-induced factors including habitat degradation, loss of biodiversity and wildlife population reductions resulting in reduced genetic variation are accelerating disease emergence. Novel, efficient and effective approaches are required to combat these epizootic events. Here, we present the case for the application of human precision medicine approaches to wildlife medicine in order to enhance species conservation efforts. We consider how the precision medicine revolution, coupled with the advances made in genomics, may provide a powerful and feasible approach to identifying and treating wildlife diseases in a targeted, effective and streamlined manner. A number of case studies of threatened species are presented which demonstrate the applicability of precision medicine to wildlife conservation, including sea turtles, amphibians and Tasmanian devils. These examples show how species conservation could be improved by using precision medicine techniques to determine novel treatments and management strategies for the specific medical conditions hampering efforts to restore population levels. Additionally, a precision medicine approach to wildlife health has in turn the potential to provide deeper insights into human health and the possibility of stemming and alleviating the impacts of zoonotic diseases. The integration of the currently emerging Precision Medicine Initiative with the concepts of EcoHealth (aiming for sustainable health of people, animals and ecosystems through transdisciplinary action research) and One Health (recognizing the intimate connection of humans, animal and ecosystem health and addressing a wide range of risks at the animal-human-ecosystem interface through a coordinated, collaborative, interdisciplinary approach) has great potential to deliver a deeper and broader interdisciplinary-based understanding of both wildlife and human

  18. Nanomaterials for Cancer Precision Medicine.

    Science.gov (United States)

    Wang, Yilong; Sun, Shuyang; Zhang, Zhiyuan; Shi, Donglu

    2018-04-01

    Medical science has recently advanced to the point where diagnosis and therapeutics can be carried out with high precision, even at the molecular level. A new field of "precision medicine" has consequently emerged with specific clinical implications and challenges that can be well-addressed by newly developed nanomaterials. Here, a nanoscience approach to precision medicine is provided, with a focus on cancer therapy, based on a new concept of "molecularly-defined cancers." "Next-generation sequencing" is introduced to identify the oncogene that is responsible for a class of cancers. This new approach is fundamentally different from all conventional cancer therapies that rely on diagnosis of the anatomic origins where the tumors are found. To treat cancers at molecular level, a recently developed "microRNA replacement therapy" is applied, utilizing nanocarriers, in order to regulate the driver oncogene, which is the core of cancer precision therapeutics. Furthermore, the outcome of the nanomediated oncogenic regulation has to be accurately assessed by the genetically characterized, patient-derived xenograft models. Cancer therapy in this fashion is a quintessential example of precision medicine, presenting many challenges to the materials communities with new issues in structural design, surface functionalization, gene/drug storage and delivery, cell targeting, and medical imaging. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. An aberrant precision account of autism.

    Directory of Open Access Journals (Sweden)

    Rebecca P Lawson

    2014-05-01

    Full Text Available Autism is a neurodevelopmental disorder characterised by problems with social-communication, restricted interests and repetitive behaviour. A recent and controversial article presented a compelling normative explanation for the perceptual symptoms of autism in terms of a failure of Bayesian inference (Pellicano and Burr, 2012). In response, we suggested that when Bayesian inference is grounded in its neural instantiation – namely, predictive coding – many features of autistic perception can be attributed to aberrant precision (or beliefs about precision) within the context of hierarchical message passing in the brain (Friston et al., 2013). Here, we unpack the aberrant precision account of autism. Specifically, we consider how empirical findings – that speak directly or indirectly to neurobiological mechanisms – are consistent with the aberrant encoding of precision in autism; in particular, an imbalance of the precision ascribed to sensory evidence relative to prior beliefs.

  20. NCI and the Precision Medicine Initiative®

    Science.gov (United States)

    NCI's activities related to precision medicine focuses on new and expanded precision medicine clinical trials; mechanisms to overcome drug resistance to cancer treatments; and developing a shared digital repository of precision medicine trials data.

  1. Improved Visual Hook Capturing and Tracking for Precision Hoisting of Tower Crane

    Directory of Open Access Journals (Sweden)

    Yanming Li

    2013-01-01

    Full Text Available To maintain safe operation of the tower crane, it is important to monitor the activities of the hook system. Visual monitoring and image recognition are the optimum methods for crane hook tracking and precision hoisting. High real-time performance and low computation requirements are required for a tower crane hook capturing and tracking system implemented on an embedded Advanced RISC Machines (ARM) processor or Microcontrol Unit (MCU). Using the lift rope of a tower crane as the target object, a new high-performance hook tracking method suitable for ARM processor or MCU applications is presented. The features of the lifting process are analyzed, and an improved progressive probabilistic Hough transform (IPPHT) algorithm is proposed which can reduce capturing time by up to 80%. Combining a color histogram with a binary search algorithm, an adaptive zooming method for precise hoisting is presented. Using this method the optimum zoom scale can be reached within a few iterations.
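
    A baseline for the rope-detection step (not the authors' improved IPPHT) is the probabilistic Hough transform built into OpenCV; the sketch below keeps near-vertical line segments as lift-rope candidates, with `frame.png` standing in as a placeholder image path and all thresholds assumed.

    ```python
    # Minimal sketch of near-vertical lift-rope candidate detection with OpenCV.
    import cv2
    import numpy as np

    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
    edges = cv2.Canny(frame, 50, 150)

    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)

    rope_candidates = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if 80.0 <= angle <= 100.0:          # keep only near-vertical segments
                rope_candidates.append((x1, y1, x2, y2))
    print(len(rope_candidates), "near-vertical segments found")
    ```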

  2. Application of Taguchi method to optimization of surface roughness during precise turning of NiTi shape memory alloy

    Science.gov (United States)

    Kowalczyk, M.

    2017-08-01

    This paper describes the results of research into surface quality after precise turning of the NiTi shape memory alloy (Nitinol) with tools whose edges are made of polycrystalline diamond (PCD). Nitinol, a nearly equiatomic nickel-titanium shape memory alloy, has wide applications in the arms industry, the military, medicine, the aerospace industry, and industrial robots. Due to their specific properties, NiTi alloys are known to be difficult-to-machine materials, particularly when using conventional techniques. In the research trials, three independent parameters (vc, f, ap) affecting the surface roughness were analyzed. The parameter configurations were chosen by factorial design using the orthogonal L9 plan developed by G. Taguchi, with three control factors varied over three levels. S/N ratio and ANOVA analyses were performed to identify the cutting parameters with the strongest influence on surface roughness.
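
    The S/N ratio used in Taguchi analyses of surface roughness is typically the "smaller-is-better" form, SN = -10·log10(mean(y²)) per trial; the Ra values in this sketch are illustrative, not the measured Nitinol data.

    ```python
    # Minimal sketch of the Taguchi smaller-is-better S/N ratio for an L9 plan.
    import numpy as np

    # One row per L9 trial: repeated Ra measurements (micrometres) for that setting.
    ra = np.array([
        [0.42, 0.45], [0.38, 0.40], [0.55, 0.52],
        [0.33, 0.35], [0.47, 0.44], [0.60, 0.58],
        [0.29, 0.31], [0.50, 0.53], [0.41, 0.39],
    ])

    sn = -10.0 * np.log10((ra ** 2).mean(axis=1))   # smaller-is-better S/N per trial
    best_trial = int(np.argmax(sn))                 # highest S/N = least roughness
    print(np.round(sn, 2), "best trial:", best_trial + 1)
    ```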

  3. Precise measurement of tau lifetime in ALEPH experiment at the LEP; Mesure precise de la duree de vie du tau dans l`experience ALEPH au LEP

    Energy Technology Data Exchange (ETDEWEB)

    Park, I

    1995-02-01

    A new method is presented for the measurement of the tau lifetime using tau decays to hadrons. Precise measurements (σ ≈ 20 μm) of the impact parameters (d0 and z0) of charged tracks using the full vertex detector information allow the reconstruction of the 3-dimensional point of closest approach of the track to the beam axis. On the other hand, it is shown that an axis perpendicular to the tau axis can be precisely determined (σ ≈ 10 mrad) in hadronic-hadronic τ+τ- decay events using kinematics and the back-to-back nature of tau pairs in e+e- colliders. Combining both quantities yields a generalized IPS relation in 3D space which is affected neither by the beam size nor by the tau direction uncertainty. The experimental resolution can be fitted together with the lifetime owing to the small smearing. The method therefore allows a self-consistent and self-calibrating analysis of the tau lifetime, and has good stability against systematic uncertainties such as tracking resolution, non-gaussian tails, etc. The method has been applied to the data collected by the ALEPH detector at LEP in 1992. From 2840 τ+τ- → hadron + hadron (1-1) decay events and 794 hadron + 3 hadrons (1-3) decay events, tau lifetimes of 292.9 ± 5.9 ± 2.7 fs and 284.6 ± 11.9 ± 5.1 fs are obtained, respectively. The combined τ lifetime is 290.8 ± 5.3 ± 2.7 fs. The statistical uncertainty corresponds to 1.1/√N_τ × τ. This result has a low statistical correlation with other precision methods. (author). 70 refs., 80 figs., 21 tabs., 7 ann.

  4. Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

    Science.gov (United States)

    Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.

    2007-09-01

    We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements by heterodyning intensity-modulated illumination with a gain modulation intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m has been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
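
    One common way to resolve the range ambiguity of a phase-based ranger of this kind is to combine two modulation frequencies: the low frequency gives a coarse unambiguous range that selects the correct wrap count for the precise high-frequency phase. The frequencies and range in the sketch below are assumed, not those of the instrument described above.

    ```python
    # Minimal sketch of two-frequency ambiguity resolution for phase-based ranging.
    import numpy as np

    c = 299792458.0
    f_hi, f_lo = 80e6, 5e6                     # modulation frequencies (Hz), assumed

    def phase(dist, f):                        # measured phase in [0, 2*pi)
        return (4 * np.pi * f * dist / c) % (2 * np.pi)

    true_range = 11.37                         # metres (beyond the ~1.87 m ambiguity at 80 MHz)
    phi_hi, phi_lo = phase(true_range, f_hi), phase(true_range, f_lo)

    amb_hi = c / (2 * f_hi)                    # ambiguity interval of the fine measurement
    coarse = phi_lo * c / (4 * np.pi * f_lo)   # unambiguous out to c/(2*f_lo) = 30 m
    n = np.round((coarse - phi_hi * c / (4 * np.pi * f_hi)) / amb_hi)
    fine = n * amb_hi + phi_hi * c / (4 * np.pi * f_hi)
    print(f"coarse = {coarse:.3f} m, resolved = {fine:.3f} m")
    ```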

  5. Ethical considerations of neuro-oncology trial design in the era of precision medicine.

    Science.gov (United States)

    Gupta, Saksham; Smith, Timothy R; Broekman, Marike L

    2017-08-01

    The field of oncology is currently undergoing a paradigm shift. Advances in the understanding of tumor biology and in tumor sequencing technology have contributed to the shift towards precision medicine, the therapeutic framework of targeting the individual oncogenic changes each tumor harbors. The success of precision medicine therapies, such as targeted kinase inhibitors and immunotherapies, in other cancers have motivated studies in brain cancers. The high specificity and cost of these therapies also encourage a shift in clinical trial design away from randomized control trials towards smaller, more exclusive early phase clinical trials. While these new trials advance the clinical application of increasingly precise and individualized therapies, their design brings ethical challenges . We review the pertinent ethical considerations for clinical trials of precision medicine in neuro-oncology and discuss methods to protect patients in this new era of trial design.

  6. Precision tests of quantum chromodynamics and the standard model

    International Nuclear Information System (INIS)

    Brodsky, S.J.; Lu, H.J.

    1995-06-01

    The authors discuss three topics relevant to testing the Standard Model to high precision: commensurate scale relations, which relate observables to each other in perturbation theory without renormalization scale or scheme ambiguity, the relationship of compositeness to anomalous moments, and new methods for measuring the anomalous magnetic and quadrupole moments of the W and Z

  7. Machine vision for high-precision volume measurement applied to levitated containerless material processing

    International Nuclear Information System (INIS)

    Bradshaw, R.C.; Schmidt, D.P.; Rogers, J.R.; Kelton, K.F.; Hyers, R.W.

    2005-01-01

    By combining the best practices in optical dilatometry with numerical methods, a high-speed and high-precision technique has been developed to measure the volume of levitated, containerlessly processed samples with subpixel resolution. Containerless processing provides the ability to study highly reactive materials without the possibility of contamination affecting thermophysical properties. Levitation is a common technique used to isolate a sample as it is being processed. Noncontact optical measurement of thermophysical properties is very important as traditional measuring methods cannot be used. Modern, digitally recorded images require advanced numerical routines to recover the subpixel locations of sample edges and, in turn, produce high-precision measurements

  8. Elekta Precise Table characteristics of IGRT remote table positioning

    International Nuclear Information System (INIS)

    Riis, Hans L.; Zimmermann, Sune J.

    2009-01-01

    Cone beam CT is a powerful tool to ensure an optimum patient positioning in radiotherapy. When cone beam CT scan of a patient is acquired, scan data of the patient are compared and evaluated against a reference image set and patient position offset is calculated. Via the linac control system, the patient is moved to correct for position offset and treatment starts. This procedure requires a reliable system for movement of patient. In this work we present a new method to characterize the reproducibility, linearity and accuracy in table positioning. The method applies to all treatment tables used in radiotherapy. Material and methods. The table characteristics are investigated on our two recent Elekta Synergy Platforms equipped with Precise Table installed in a shallow pit concrete cavity. Remote positioning of the table uses the auto set-up (ASU) feature in the linac control system software Desktop Pro R6.1. The ASU is used clinically to correct for patient positioning offset calculated via cone beam CT (XVI)-software. High precision steel rulers and a USB-microscope has been used to detect the relative table position in vertical, lateral and longitudinal direction. The effect of patient is simulated by applying external load on the iBEAM table top. For each table position an image is exposed of the ruler and display values of actual table position in the linac control system is read out. The table is moved in full range in lateral direction (50 cm) and longitudinal direction (100 cm) while in vertical direction a limited range is used (40 cm). Results and discussion. Our results show a linear relation between linac control system read out and measured position. Effects of imperfect calibration are seen. A reproducibility within a standard deviation of 0.22 mm in lateral and longitudinal directions while within 0.43 mm in vertical direction has been observed. The usage of XVI requires knowledge of the characteristics of remote table positioning. It is our opinion

  9. Examination of tumor diameter measurement precision by RECIST

    International Nuclear Information System (INIS)

    Goto, Masami; Ino, Kenji; Akahane, Masaaki

    2007-01-01

    Image evaluation with the Response Evaluation Criteria in Solid Tumors (RECIST) evaluates the change in a measurable lesion as determined by ruler or micrometer caliper. However, there is no definition of the conditions thought to influence the precision of measurement. We therefore examined the effects on measurement precision of changing image magnification, window width (WW), window level (WL), and time phase. Moreover, to determine response rate, one-dimensional evaluation with RECIST was compared with the two-dimensional evaluation of the World Health Organization (WHO) for a hepatocellular carcinoma. The variation of the measured values for the target lesion was as follows: (4.92±1.94)% at 1× magnification/WW 150/WL 100, (4.42±1.70)% at 1× magnification/WW 350/WL 75, (2.52±0.82)% at 4× magnification/WW 150/WL 100, and (2.83±1.10)% at 4× magnification/WW 350/WL 75. When the image was enlarged to 4 times, precision doubled. There was no difference between RECIST and WHO in terms of response rate. Thus the best method was considered to be RECIST because of its convenience. (author)

  10. Numerical precision control and GRACE

    International Nuclear Information System (INIS)

    Fujimoto, J.; Hamaguchi, N.; Ishikawa, T.; Kaneko, T.; Morita, H.; Perret-Gallix, D.; Tokura, A.; Shimizu, Y.

    2006-01-01

    The control of the numerical precision of large-scale computations like those generated by the GRACE system for automatic Feynman diagram calculations has become an intrinsic part of those packages. Recently, Hitachi Ltd. has developed in FORTRAN a new library HMLIB for quadruple and octuple precision arithmetic where the number of lost-bits is made available. This library has been tested with success on the 1-loop radiative correction to e⁺e⁻ → e⁺e⁻τ⁺τ⁻. It is shown that the approach followed by HMLIB provides an efficient way to track down the source of numerical significance losses and to deliver high-precision results yet minimizing computing time

  11. A High-Precision RF Time-of-Flight Measurement Method based on Vernier Effect for Localization of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sang-il KO

    2011-12-01

    Full Text Available This paper presents the fundamental principles of a high-precision RF time-of-flight (ToF measurement method based on the vernier effect, which enables the improvement of time measurement resolution, for accurate distance measurement between sensor nodes in wireless sensor networks. Similar to the two scales of the vernier caliper, two heterogeneous clocks are employed to induce a new virtual time resolution that is much finer than clocks’ intrinsic time resolution. Consecutive RF signal transmission and sensing using two heterogeneous clocks generates a unique sensing pattern for the RF ToF, so that the size of the RF ToF can be estimated by comparing the measured sensing pattern with the predetermined sensing patterns for the RF ToF. RF ToF measurement experiments using this heterogeneous clock system, which has low operating frequencies of several megahertz, certify the proposed RF ToF measurement method through the evaluation of the measured sensing patterns with respect to an RF round-trip time of several nanoseconds.
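
    The basic vernier idea behind such measurements can be illustrated with two clocks of slightly different period: the stop clock gradually catches up with the start clock, and the number of cycles to coincidence encodes the interval with a resolution equal to the period difference. The sketch below uses assumed clock periods and a simplified coincidence test; it is a generic illustration of the vernier principle, not the paper's sensing-pattern scheme.

```python
# Generic vernier time-interval sketch; clock periods are assumed values,
# not the hardware described in the paper.
T_START = 100.0e-9   # start clock period (10 MHz)
T_STOP = 98.0e-9     # stop clock period, slightly shorter
# virtual resolution = T_START - T_STOP = 2 ns, far finer than either clock period

def vernier_measure(interval, max_cycles=10_000):
    """Estimate a time interval by counting clock cycles until coincidence.

    The start clock begins at t = 0, the stop clock at t = interval; their
    edges coincide after N cycles, so interval ≈ N * (T_START - T_STOP).
    """
    step = T_START - T_STOP
    for n in range(1, max_cycles):
        start_edge = n * T_START
        stop_edge = interval + n * T_STOP
        if abs(start_edge - stop_edge) < step / 2:
            return n * step
    raise RuntimeError("no coincidence found; interval outside measurable range")

print(vernier_measure(36.8e-9))   # ~3.6e-08 s: 36.8 ns quantised to the 2 ns grid
```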

  12. Thorium spectrophotometric analysis with high precision

    International Nuclear Information System (INIS)

    Palmieri, H.E.L.

    1983-06-01

    An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of processed thorium. After an extensive literature search on the subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution and alizarin S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added and the titration was completed with less than 5 ml of 0.0025 M EDTA solution. Instead of locating the end-point graphically, as is usual, by plotting added titrant versus absorbance, a non-linear least-squares fit was used, employing the Fletcher and Powell minimization method and a computer program. (author)

  13. Precision oncology: origins, optimism, and potential.

    Science.gov (United States)

    Prasad, Vinay; Fojo, Tito; Brada, Michael

    2016-02-01

    Imatinib, the first and arguably the best targeted therapy, became the springboard for developing drugs aimed at molecular targets deemed crucial to tumours. As this development unfolded, a revolution in the speed and cost of genetic sequencing occurred. The result--an armamentarium of drugs and an array of molecular targets--set the stage for precision oncology, a hypothesis that cancer treatment could be markedly improved if therapies were guided by a tumour's genomic alterations. Drawing lessons from the biological basis of cancer and recent empirical investigations, we take a more measured view of precision oncology's promise. Ultimately, the promise is not our concern, but the threshold at which we declare success. We review reports of precision oncology alongside those of precision diagnostics and novel radiotherapy approaches. Although confirmatory evidence is scarce, these interventions have been widely endorsed. We conclude that the current path will probably not be successful or, at a minimum, will have to undergo substantive adjustments before it can be successful. For the sake of patients with cancer, we hope one form of precision oncology will deliver on its promise. However, until confirmatory studies are completed, precision oncology remains unproven, and as such, a hypothesis in need of rigorous testing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Precision and accuracy of β gauge for aerosol mass determinations

    International Nuclear Information System (INIS)

    Courtney, W.J.; Shaw, R.W.; Dzabay, T.G.

    1982-01-01

    Results of an experimental determination of the precision and the accuracy of a β-ray attenuation method for measurement of aerosol mass are presented. The instrumental precision for a short-term experiment was 25 μg for a 6.5-cm² deposit collected on approximately 1 mg/cm² Teflon filters; for a longer-term experiment the precision was 27 μg. The precision of the gravimetric determinations of aerosol deposits was 22 μg for Teflon filters weighed to 1 μg. Filter reorientation and air density changes that could adversely affect the β-ray attenuation results are discussed. β-ray attenuation results are in good agreement with gravimetric measurements on the same filter-collected aerosols. Using dichotomous samplers in Durham, NC, we collected 136 aerosol samples on Teflon filters in two size ranges. A regression line was calculated implicitly assuming errors in both measurements of mass. The 90% confidence intervals lay within 21 μg of the regression line for mean fine fraction aerosol mass loadings of 536 μg and within 19 μg of the regression line for mean coarse fraction aerosol mass loadings of 349 μg. Any bias between gravimetric and β-gauge mass measurements was found to be less than 5%
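
    The underlying conversion from β-ray attenuation to deposit mass follows the exponential attenuation law I = I0·exp(−μx), where x is the areal mass density. A minimal sketch with assumed, illustrative values for the mass attenuation coefficient and count rates:

```python
import math

# Minimal sketch of beta-attenuation mass determination (illustrative values only).
mu = 0.28          # assumed mass attenuation coefficient, cm^2/mg
area = 6.5         # deposit area, cm^2
I0 = 120_000       # counts through the blank (unloaded) filter
I = 111_500        # counts through the loaded filter

areal_density = math.log(I0 / I) / mu      # mg/cm^2, from I = I0 * exp(-mu * x)
mass_ug = areal_density * area * 1000.0    # total deposit mass in micrograms
print(f"areal density = {areal_density:.4f} mg/cm^2, mass = {mass_ug:.0f} ug")
```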

  15. AMCP Partnership Forum: Managing Care in the Wave of Precision Medicine.

    Science.gov (United States)

    2018-05-23

    Precision medicine, the customization of health care to an individual's genetic profile while accounting for biomarkers and lifestyle, has increasingly been adopted by health care stakeholders to guide the development of treatment options, improve treatment decision making, provide more patient-centered care, and better inform coverage and reimbursement decisions. Despite these benefits, key challenges prevent its broader use and adoption. On December 7-8, 2017, the Academy of Managed Care Pharmacy convened a group of stakeholders to discuss these challenges and provide recommendations to facilitate broader adoption and use of precision medicine across health care settings. These stakeholders represented the pharmaceutical industry, clinicians, patient advocacy, private payers, device manufacturers, health analytics, information technology, academia, and government agencies. Throughout the 2-day forum, participants discussed evidence requirements for precision medicine, including consistent ways to measure the utility and validity of precision medicine tests and therapies, limitations of traditional clinical trial designs, and limitations of value assessment framework methods. They also highlighted the challenges with evidence collection and data silos in precision medicine. Interoperability within and across health systems is hindering clinical advancements. Current medical coding systems also cannot account for the heterogeneity of many diseases, preventing health systems from having a complete understanding of their patient population to inform resource allocation. Challenges faced by payers, such as evidence limitations, to inform coverage and reimbursement decisions in precision medicine, as well as legal and regulatory barriers that inhibit more widespread data sharing, were also identified. While a broad range of perspectives was shared throughout the forum, participants reached consensus across 2 overarching areas. First, there is a greater need for common

  16. MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…

  17. Assessment of Sr-90 in water samples: precision and accuracy

    International Nuclear Information System (INIS)

    Nisti, Marcelo B.; Saueia, Cátia H.R.; Castilho, Bruna; Mazzilli, Barbara P.

    2017-01-01

    The study of the dispersion of artificial radionuclides into the environment is very important for the control of nuclear waste discharges, nuclear accidents and nuclear weapons testing. The accidents at the Fukushima Daiichi and Chernobyl Nuclear Power Plants released several radionuclides into the environment by aerial deposition and liquid discharge, with various levels of radioactivity. ⁹⁰Sr was one of the elements released into the environment. ⁹⁰Sr is produced by nuclear fission and has a physical half-life of 28.79 years and a decay energy of 0.546 MeV. The aims of this study are to evaluate the precision and accuracy of three methodologies for the determination of ⁹⁰Sr in water samples: Cerenkov counting, the direct LSC method, and LSC with radiochemical separation. The performance of the methodologies was evaluated by using two scintillation counters (Quantulus and Hidex). The parameters Minimum Detectable Activity (MDA) and Figure Of Merit (FOM) were determined for each method, and the precision and accuracy were checked using ⁹⁰Sr standard solutions. (author)

  18. Precise optical observation of 0.5-GPa shock waves in condensed materials

    Science.gov (United States)

    Nagayama, Kunihito; Mori, Yasuhito

    1999-06-01

    A precision optical observation method was developed to study impact-generated high-pressure shock waves in condensed materials. The method makes it possible to sensitively detect shock waves at the relatively low shock stress of around 0.5 GPa. Its principle is based on the use of total internal reflection by triangular prisms placed on the free surface of a target assembly. When a plane shock wave arrives at the free surface, the light reflected from the prisms is extinguished instantaneously, because after shock arrival the total internal reflection changes to a reflection governed by the micron-scale roughness of the free surface. The shock arrival at the bottom face of the prisms can be detected by two kinds of methods, i.e., a photographic method and a gauge method. The photographic method is an inclined-prism method using a high-speed streak camera; the shock velocity and the shock tilt angle can be estimated accurately from the resulting streak photograph. In the gauge method, an in-material PVDF stress gauge is combined with an optical prism-pin. The PVDF gauge records electrically the stress profile behind the shock-wave front, and Hugoniot data can be precisely measured by combining the prism-pin with the PVDF gauge.

  19. Precise measurement of tau lifetime in ALEPH experiment at the LEP

    International Nuclear Information System (INIS)

    Park, I.

    1995-02-01

    A new method is presented for the measurement of the tau lifetime using tau decays to hadrons. Precise measurements (σ ∼ 20 μm) of the impact parameters (d₀ and z₀) of charged tracks using the full vertex detector information allow the reconstruction of the 3-dimensional point of minimum approach of the track to the beam axis. On the other hand, it is shown that an axis perpendicular to the tau axis can be precisely determined (σ ∼ 10 mrad) in hadronic-hadronic τ⁺τ⁻ decay events using kinematics and the back-to-back nature of tau pairs in e⁺e⁻ colliders. Combination of both quantities yields a generalized IPS relation in 3D space which is affected neither by the beam size nor by the tau direction uncertainty. The experimental resolution can be fitted together with the lifetime owing to the small smearing. The method therefore allows a self-consistent and self-calibrating analysis of the tau lifetime. The method has good stability against systematic uncertainties such as tracking resolution, non-Gaussian tails, etc. The method has been applied to the data collected by the ALEPH detector at LEP in 1992. From 2840 τ⁺τ⁻ → hadron + hadron (1-1) decay events and 794 hadron + 3 hadrons (1-3) decay events, tau lifetimes of 292.9 ± 5.9 ± 2.7 fs and 284.6 ± 11.9 ± 5.1 fs are obtained, respectively. The combined τ lifetime is 290.8 ± 5.3 ± 2.7 fs. The statistical uncertainty corresponds to 1.1/√N_ττ. This result has a low statistical correlation with other precision methods. (author). 70 refs., 80 figs., 21 tabs., 7 ann

  20. Accuracy and precision of protein–ligand interaction kinetics determined from chemical shift titrations

    International Nuclear Information System (INIS)

    Markin, Craig J.; Spyracopoulos, Leo

    2012-01-01

    NMR-monitored chemical shift titrations for the study of weak protein–ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K_D) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K_D value of a 1:1 protein–ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53: 125–138, 2012). In this study, we demonstrate that classical line shape analysis applied to a single set of ¹H–¹⁵N 2D HSQC NMR spectra acquired using precise protein–ligand chemical shift titration methods we developed, produces accurate and precise kinetic parameters such as the off-rate (k_off). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k_off ∼ 3,000 s⁻¹ in this work, the accuracy of classical line shape analysis was determined to be better than 5 % by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k_off from line shape analysis of NMR spectra was determined to be 13 %, in agreement with the theoretical precision of 12 % from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k_off values over a wide range, from 100 to 15,000 s⁻¹. The validity of line shape analysis for k_off values approaching intermediate exchange (∼100 s⁻¹) may be facilitated by more accurate K_D measurements from NMR

  1. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations.

    Science.gov (United States)

    Markin, Craig J; Spyracopoulos, Leo

    2012-12-01

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K_D) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K_D value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53: 125-138, 2012). In this study, we demonstrate that classical line shape analysis applied to a single set of ¹H-¹⁵N 2D HSQC NMR spectra acquired using precise protein-ligand chemical shift titration methods we developed, produces accurate and precise kinetic parameters such as the off-rate (k_off). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k_off ~ 3,000 s⁻¹ in this work, the accuracy of classical line shape analysis was determined to be better than 5 % by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k_off from line shape analysis of NMR spectra was determined to be 13 %, in agreement with the theoretical precision of 12 % from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k_off values over a wide range, from 100 to 15,000 s⁻¹. The validity of line shape analysis for k_off values approaching intermediate exchange (~100 s⁻¹), may be facilitated by more accurate K_D measurements

  2. Precision Experiments at LEP

    CERN Document Server

    de Boer, Wim

    2015-01-01

    The Large Electron Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results was the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergences of the SM and predicts the Hig...

  3. Precision Health Economics and Outcomes Research to Support Precision Medicine: Big Data Meets Patient Heterogeneity on the Road to Value

    Directory of Open Access Journals (Sweden)

    Yixi Chen

    2016-11-01

    Full Text Available The “big data” era represents an exciting opportunity to utilize powerful new sources of information to reduce clinical and health economic uncertainty on an individual patient level. In turn, health economic outcomes research (HEOR practices will need to evolve to accommodate individual patient–level HEOR analyses. We propose the concept of “precision HEOR”, which utilizes a combination of costs and outcomes derived from big data to inform healthcare decision-making that is tailored to highly specific patient clusters or individuals. To explore this concept, we discuss the current and future roles of HEOR in health sector decision-making, big data and predictive analytics, and several key HEOR contexts in which big data and predictive analytics might transform traditional HEOR into precision HEOR. The guidance document addresses issues related to the transition from traditional to precision HEOR practices, the evaluation of patient similarity analysis and its appropriateness for precision HEOR analysis, and future challenges to precision HEOR adoption. Precision HEOR should make precision medicine more realizable by aiding and adapting healthcare resource allocation. The combined hopes for precision medicine and precision HEOR are that individual patients receive the best possible medical care while overall healthcare costs remain manageable or become more cost-efficient.

  4. Technological advances in precision medicine and drug development.

    Science.gov (United States)

    Maggi, Elaine; Patterson, Nicole E; Montagna, Cristina

    New technologies are rapidly becoming available to expand the arsenal of tools accessible for precision medicine and to support the development of new therapeutics. Advances in liquid biopsies, which analyze cells, DNA, RNA, proteins, or vesicles isolated from the blood, have gained particular interest for their uses in acquiring information reflecting the biology of tumors and metastatic tissues. Through advancements in DNA sequencing that have merged unprecedented accuracy with affordable cost, personalized treatments based on genetic variations are becoming a real possibility. Extraordinary progress has been achieved in the development of biological therapies aimed to even further advance personalized treatments. We provide a summary of current and future applications of blood based liquid biopsies and how new technologies are utilized for the development of biological therapeutic treatments. We discuss current and future sequencing methods with an emphasis on how technological advances will support the progress in the field of precision medicine.

  5. A novel approach for pulse width measurements with a high precision (8 ps RMS) TDC in an FPGA

    International Nuclear Information System (INIS)

    Ugur, C.; Linev, S.; Schweitzer, T.; Traxler, M.; Michel, J.

    2016-01-01

    High precision time measurements are a crucial element in particle identification experiments, which likewise require pulse width information for Time-over-Threshold (ToT) measurements and charge measurements (correlated with pulse width). In almost all FPGA-based TDC applications, pulse width measurements are implemented using two of the TDC channels for the leading and trailing edge time measurements individually. This method, however, requires twice the number of resources. In this paper we present the latest precision improvements to the high-precision TDC (8 ps RMS) developed previously [1], as well as a novel way of measuring ToT using a single TDC channel while still achieving high precision (as low as 11.7 ps RMS). The effect of the supply voltage, generated by a DC-DC converter, on the precision is also discussed. Finally, the effect of temperature changes on the pulse width measurement is shown and a correction method is suggested to limit the degradation

  6. Precision Medicine, Cardiovascular Disease and Hunting Elephants.

    Science.gov (United States)

    Joyner, Michael J

    2016-01-01

    Precision medicine postulates improved prediction, prevention, diagnosis and treatment of disease based on patient specific factors especially DNA sequence (i.e., gene) variants. Ideas related to precision medicine stem from the much anticipated "genetic revolution in medicine" arising seamlessly from the human genome project (HGP). In this essay I deconstruct the concept of precision medicine and raise questions about the validity of the paradigm in general and its application to cardiovascular disease. Thus far precision medicine has underperformed based on the vision promulgated by enthusiasts. While niche successes for precision medicine are likely, the promises of broad based transformation should be viewed with skepticism. Open discussion and debate related to precision medicine are urgently needed to avoid misapplication of resources, hype, iatrogenic interventions, and distraction from established approaches with ongoing utility. Failure to engage in such debate will lead to negative unintended consequences from a revolution that might never come. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. A precision nutrient variability study of an experimental plot in ...

    African Journals Online (AJOL)

    DR F O ADEKAYODE

    reported (Sadeghi et al., 2006; Shah et al., 2013). The objective of the research was to use the GIS kriging technique to produce precision soil nutrient concentration and fertility maps of a 2.5-ha experimental land in Mukono Agricultural Research and Development. Institute Mukono, Uganda. MATERIALS AND METHODS.

  8. Morphologies of precise polyethylene-based acid copolymers and ionomers

    Science.gov (United States)

    Buitrago, C. Francisco

    identified for precise acid copolymers and ionomers at room temperature: (1) liquid-like order of aggregates dispersed throughout an amorphous PE matrix, (2) one-dimensional long-range order of aggregates in layers coexisting with PE crystals, and (3) three-dimensional periodicity of aggregates in cubic lattices in a PE matrix featuring defective packing. The liquid-like morphology is a result of high content of acid or ionic substituents deterring PE crystallinity due to steric hindrance. The layered morphology occurs when the content of pendants is low and the PE segments are long enough to crystallize. The cubic morphologies occur in precise copolymers with geminal substitution of phosphonic acid (PA) groups and long, flexible PE segments. At temperatures above the thermal transitions of the PE matrix, all but one material present a liquid-like morphology. Those conditions are ideal to study the evolution of the interaggregate spacing (d*) in X-ray scattering as a function of PE segment length between pendants, pendant type and pendant architecture (specifically, mono or geminal substitution). Also at elevated temperatures, the morphologies of precise acrylic acid (AA) copolymers and ionomers were investigated further via atomistic molecular dynamics (MD) simulations. The simulations complement X-ray scattering by providing real space visualization of the aggregates, demonstrating the occurrence of isolated, string-like and even percolated aggregate structures. This is the first dissertation completely devoted to the morphology of precise acid copolymers and precise ionomers. The complete analysis of the morphologies in these novel materials provides new insights into the shapes of aggregates in acid copolymers and ionomers in general. A key aspect of this thesis is the complementary use of experimental and simulation methods to unlock a wealth of new understanding.

  9. -Omic and Electronic Health Records Big Data Analytics for Precision Medicine

    Science.gov (United States)

    Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D.; Venugopalan, Janani; Hoffman, Ryan; Wang, May D.

    2017-01-01

    Objective Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of health care. Methods In this article, we present -omic and EHR data characteristics, associated challenges, and data analytics including data pre-processing, mining, and modeling. Results To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Conclusion Big data analytics is able to address –omic and EHR data challenges for paradigm shift towards precision medicine. Significance Big data analytics makes sense of –omic and EHR data to improve healthcare outcome. It has long lasting societal impact. PMID:27740470

  10. Adobe photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    Science.gov (United States)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point counting, and image analysis. In this article, however, an Adobe Photoshop-based analysis method is recommended for quantifying rock textural data and porosities. The Adobe Photoshop system provides versatile tools for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Adobe Photoshop can therefore be used to rapidly quantify textural components such as the content of grains, cements, and porosities, including total porosity and porosities of different genetic types. This method was named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method is illustrated using oolitic dolomite samples from the Triassic Feixianguan Formation, northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is close agreement between the "standard" percentages and those determined by the PSQ method, with small counting and operator errors, small standard deviations, and high confidence levels. The porosities quantified by PSQ were evaluated against those determined by the whole-rock helium gas expansion method to test the specimen errors. Results show that the porosities quantified by PSQ correlate well with the porosities determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosities, which lead to a systematic underestimation of about 2%, and/or by macroporosities, which cause underestimation or overestimation in different cases. Adobe Photoshop can thus be used to quantify rock textural components and porosities; the method has been shown to be precise and accurate, and it is time-saving compared with the usual methods.
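
    The quantitative core of such pixel-count methods is simply the ratio of selected pixels to total image pixels. A minimal sketch, with a hypothetical threshold standing in for an interactive Photoshop selection:

```python
import numpy as np

def area_percentage(mask):
    """Area percentage of a selection, given a boolean pixel mask."""
    return 100.0 * np.count_nonzero(mask) / mask.size

# Hypothetical example: a greyscale thin-section image in which pores are dark.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512))      # stand-in for a real image
pore_mask = image < 40                             # stand-in for an interactive selection
print(f"porosity ≈ {area_percentage(pore_mask):.1f} %")
```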

  11. Principles of precision medicine in stroke.

    Science.gov (United States)

    Hinman, Jason D; Rost, Natalia S; Leung, Thomas W; Montaner, Joan; Muir, Keith W; Brown, Scott; Arenillas, Juan F; Feldmann, Edward; Liebeskind, David S

    2017-01-01

    The era of precision medicine has arrived and conveys tremendous potential, particularly for stroke neurology. The diagnosis of stroke, its underlying aetiology, theranostic strategies, recurrence risk and path to recovery are populated by a series of highly individualised questions. Moreover, the phenotypic complexity of a clinical diagnosis of stroke makes a simple genetic risk assessment only partially informative on an individual basis. The guiding principles of precision medicine in stroke underscore the need to identify, value, organise and analyse the multitude of variables obtained from each individual to generate a precise approach to optimise cerebrovascular health. Existing data may be leveraged with novel technologies, informatics and practical clinical paradigms to apply these principles in stroke and realise the promise of precision medicine. Importantly, precision medicine in stroke will only be realised once efforts to collect, value and synthesise the wealth of data collected in clinical trials and routine care starts. Stroke theranostics, the ultimate vision of synchronising tailored therapeutic strategies based on specific diagnostic data, demand cerebrovascular expertise on big data approaches to clinically relevant paradigms. This review considers such challenges and delineates the principles on a roadmap for rational application of precision medicine to stroke and cerebrovascular health. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  12. Precision medicine needs pioneering clinical bioinformaticians.

    Science.gov (United States)

    Gómez-López, Gonzalo; Dopazo, Joaquín; Cigudosa, Juan C; Valencia, Alfonso; Al-Shahrour, Fátima

    2017-10-25

    Success in precision medicine depends on accessing high-quality genetic and molecular data from large, well-annotated patient cohorts that couple biological samples to comprehensive clinical data, which in conjunction can lead to effective therapies. From such a scenario emerges the need for a new professional profile, an expert bioinformatician with training in clinical areas who can make sense of multi-omics data to improve therapeutic interventions in patients, and the design of optimized basket trials. In this review, we first describe the main policies and international initiatives that focus on precision medicine. Secondly, we review the currently ongoing clinical trials in precision medicine, introducing the concept of 'precision bioinformatics', and we describe current pioneering bioinformatics efforts aimed at implementing tools and computational infrastructures for precision medicine in health institutions around the world. Thirdly, we discuss the challenges related to the clinical training of bioinformaticians, and the urgent need for computational specialists capable of assimilating medical terminologies and protocols to address real clinical questions. We also propose some skills required to carry out common tasks in clinical bioinformatics and some tips for emergent groups. Finally, we explore the future perspectives and the challenges faced by precision medicine bioinformatics. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Precision validation of MIPAS-Envisat products

    Directory of Open Access Journals (Sweden)

    C. Piccolo

    2007-01-01

    Full Text Available This paper discusses the variation and validation of the precision, or estimated random error, associated with the ESA Level 2 products from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS. This quantity represents the propagation of the radiometric noise from the spectra through the retrieval process into the Level 2 profile values. The noise itself varies with time, steadily rising between ice decontamination events, but the Level 2 precision has a greater variation due to the atmospheric temperature which controls the total radiance received. Hence, for all species, the precision varies latitudinally/seasonally with temperature, with a small superimposed temporal structure determined by the degree of ice contamination on the detectors. The precision validation involves comparing two MIPAS retrievals at the intersections of ascending/descending orbits. For 5 days per month of full resolution MIPAS operation, the standard deviation of the matching profile pairs is computed and compared with the precision given in the MIPAS Level 2 data, except for NO2 since it has a large diurnal variation between ascending/descending intersections. Even taking into account the propagation of the pressure-temperature retrieval errors into the VMR retrieval, the standard deviation of the matching pairs is usually a factor 1–2 larger than the precision. This is thought to be due to effects such as horizontal inhomogeneity of the atmosphere and instability of the retrieval.
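
    The consistency check described here can be summarized in a few lines: if the quoted precisions are realistic, the scatter of the matched-pair differences should match the quadrature sum of the two single-profile precisions. The sketch below uses made-up arrays standing in for matched MIPAS profiles and their quoted precisions:

```python
import numpy as np

def precision_validation(profile_a, profile_b, precision_a, precision_b):
    """Compare the scatter of matched profile pairs with the quoted precision.

    Returns the ratio of the observed standard deviation of the differences
    to the expected one; values near 1 mean the quoted precision is realistic,
    values of 1-2 suggest extra variability (e.g. atmospheric inhomogeneity).
    """
    diff = profile_a - profile_b
    expected_sd = np.sqrt(np.mean(precision_a**2 + precision_b**2))
    return np.std(diff) / expected_sd

# Made-up example: 200 matched pairs with a true precision of 0.1 ppmv.
rng = np.random.default_rng(1)
truth = 5.0 + rng.normal(0.0, 0.3, 200)            # shared true values
a = truth + rng.normal(0.0, 0.1, 200)
b = truth + rng.normal(0.0, 0.1, 200)
prec = np.full(200, 0.1)
print(precision_validation(a, b, prec, prec))      # close to 1
```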

  14. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early target detection and high-precision target tracking are two important performance indicators that need to be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound camera arrangement. The system uses a large field of view to search for the target; after the target has been found and confirmed, it switches to a small field of view for target tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This preliminary-and-precise two-stage combination can extend the search range and improve target tracking accuracy, and the method has practical value.

  15. A Precision Measurement of sin²θ_W from Semileptonic Neutrino Scattering

    CERN Document Server

    Wotschack, Jorg

    1987-01-01

    There is considerable interest in measuring the electroweak mixing parameter sin²θ_W of the Glashow-Salam-Weinberg theory [1] as precisely as possible: first, its value may be predicted by models of Grand Unification [2]; second, precise measurements of sin²θ_W from different processes would test the validity of electroweak radiative corrections [3]. Different methods have been used to determine sin²θ_W over a large range of Q² values. Figure 1 gives a compilation of sin²θ_W values, with remarkable agreement between the results. At present, it is most precisely determined in semileptonic neutrino-nucleon scattering from the ratio of neutral current (NC) to charged current (CC) cross sections, and in proton-antiproton collisions from the W boson mass [10,11].

  16. Precise modelling of the eye for proton therapy of intra-ocular tumours

    Energy Technology Data Exchange (ETDEWEB)

    Dobler, B.; Bendl, R. [Medizinische Physik, Deutsches Krebsforschungszentrum, INF 280, 69120 Heidelberg (Germany)]. E-mails: b.dobler@dkfz-heidelberg.de; barbara.dobler@gmx.de; r.bendl@dkfz-heidelberg.de

    2002-02-21

    A new method is described that allows precise modelling of organs at risk and target volume for radiation therapy of intra-ocular tumours. The aim is to optimize the dose distribution and thus to reduce normal tissue complication probability. A geometrical 3D model based on elliptic shapes was developed that can be used for multimodal model-based segmentation of 3D patient data. The tumour volume cannot be clearly identified in CT and MR data, whereas the tumour outline can be discriminated very precisely in fundus photographs. Therefore, a multimodal 2D fundus diagram was developed, which allows us to correlate and display simultaneously information extracted from the eye model, 3D data and the fundus photograph. Thus, the connection of fundus diagram and 3D data is well-defined and the 3D volume can be calculated directly from the tumour outline drawn onto the fundus photograph and the tumour height measured by ultrasound. The method allows the calculation of a precise 3D eye model of the patient, including the different structures of the eye as well as the tumour volume. The method was developed as part of the new 3D treatment planning system OCTOPUS for proton therapy of ocular tumours within a national research project together with the Hahn-Meitner-Institut Berlin. (author)

  17. Precise measurement of velocity dependent friction in rotational motion

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Junaid; Hassan, Hafsa; Shamim, Sohaib; Mahmood, Waqas; Anwar, Muhammad Sabieh, E-mail: sabieh@lums.edu.pk [School of Science and Engineering, Lahore University of Management Sciences (LUMS), Opposite Sector U, D.H.A, Lahore 54792 (Pakistan)

    2011-09-15

    Frictional losses are experimentally determined for a uniform circular disc exhibiting rotational motion. The clockwise and anticlockwise rotations of the disc, which result when a hanger tied to a thread is released from a certain height, give rise to vertical oscillations of the hanger as the thread winds and unwinds over a pulley attached to the disc. It is thus observed how the maximum height achieved by the hanger decreases with every bounce. From these decrements, the rotational frictional losses are measured. The precision is enhanced by correlating the vertical motion with the angular motion; this leads to a substantial improvement in precision. Furthermore, the frictional torque is shown to be proportional to the angular speed. The experiment has been successfully employed in the undergraduate lab setting.

  18. DESIGN OF ROBUST NAVIGATION AND STABILIZATION LOOPS OF PRECISION ATTITUDE AND HEADING REFERENCE SYSTEM

    Directory of Open Access Journals (Sweden)

    Olha Sushchenko

    2017-11-01

    Full Text Available Purpose: The paper focuses on the design of robust precision attitude and heading reference systems, which can be applied to the navigation of marine vehicles. The main goal is to create optimization procedures for the design of the navigation and stabilization loops of a multimode gimballed system. The optimization procedure for the navigation loop design is based on parametric robust H2/H∞ optimization, while the procedure for the stabilization loop design is based on robust structural H∞ synthesis. Methods: To solve the given problem, methods of robust control theory and optimization methods are used. Results: The kinematical scheme of the precision gimballed attitude and heading reference system is presented. A parametric optimization algorithm that takes into consideration the features of the researched system is given. The mixed-sensitivity method as applied to the design of the researched system is analyzed. The coefficients of the control laws of the navigation loops are obtained from an optimization procedure providing a compromise between accuracy and robustness. The robust controller of the stabilization loop was developed by robust structural synthesis using the mixed-sensitivity method. Simulation of the navigation and stabilization processes is carried out. Conclusions: The presented results prove the efficiency of the proposed procedures, which can be useful for the design of precision navigation systems for moving vehicles.

  19. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
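
    The failure mode and its remedy are easy to reproduce: adding a tiny increment to a large accumulator in low precision rounds the increment to zero, whereas keeping only the small, slowly accumulating part in higher precision preserves the drift. A minimal illustration with a toy drift (not the authors' land surface model):

```python
import numpy as np

# A slow process: a quantity drifting by 1e-5 per step for 100,000 steps (1.0 total).
n_steps, increment = 100_000, np.float32(1e-5)
start = np.float32(300.0)

# Naive single-precision accumulation: 300 + 1e-5 rounds back to 300 every step.
t_single = start
for _ in range(n_steps):
    t_single = np.float32(t_single + increment)

# Split formulation: keep only the small, slowly accumulating part in higher precision.
drift = np.float64(0.0)
for _ in range(n_steps):
    drift += np.float64(increment)
t_split = np.float32(start + np.float32(drift))

print(t_single)   # 300.0 -- the slow drift is completely lost
print(t_split)    # ~301.0 -- the drift is preserved
```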

  20. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained employing properly distributed benchmarks having GNSS and leveling observations using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning based computing algorithms: artificial neural networks (ANNs), adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach in geoid surface approximation. These algorithms were developed parallel to advances in computer technologies and recently have been used for solving complex nonlinear problems of many applications. However, they are rather new in dealing with precise modeling problem of the Earth gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion the ANFIS and WNN revealed higher prediction accuracies compared to ANN and MPRE methods. Beside the prediction capabilities, these methods were also compared and discussed from the practical point of view in conclusions.
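
    The MPRE baseline amounts to fitting a low-order polynomial surface to the geoid undulation N = h(GNSS) − H(levelling) at the benchmarks and evaluating it at new points. A minimal sketch with synthetic benchmark data (not the Istanbul network):

```python
import numpy as np

def fit_polynomial_geoid(x, y, n_values, degree=2):
    """Least-squares fit of a 2-D polynomial surface to geoid undulations."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, n_values, rcond=None)
    return lambda xq, yq: sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, terms))

# Synthetic benchmarks: normalized local coordinates and undulations in metres.
rng = np.random.default_rng(2)
x, y = rng.uniform(-1, 1, 80), rng.uniform(-1, 1, 80)
n = 36.0 + 0.8 * x - 0.5 * y + 0.2 * x * y + rng.normal(0, 0.02, 80)

geoid = fit_polynomial_geoid(x, y, n)
print(geoid(0.3, -0.4))   # predicted undulation at a new point, ≈ 36.4 m
```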

  1. MITP Workshop on Low-Energy Precision Physics

    CERN Document Server

    2013-01-01

    The scientific program will be focussed on the theory of low-energy precision physics relevant to the MESA and TRIGA initiatives. Topics include searches for TeV-scale physics beyond the Standard Model via ultra-precise measurements of parity-violating electron scattering asymmetries, determinations of neutron decay parameters via precision measurements of its lifetime and decay asymmetries, and searches for EDMs of nucleons, nuclei and atoms. The necessary high-precision theoretical tools to analyse these experiments, which include advanced calculations of radiative corrections, will be explored and developed.

  2. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.

    Science.gov (United States)

    Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan

    2015-08-14

    High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.
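
    As an illustration of the underlying idea, and not of this paper's specific configuration or formulas, angular acceleration about one axis can be recovered from tangential accelerations measured at several known radii, since for a rigid platform a_t = a_0 + α·r; a least-squares fit of that line averages down the sensor noise. A hypothetical sketch:

```python
import numpy as np

def angular_acceleration(radii, tangential_acc):
    """Least-squares estimate of angular acceleration about a single axis.

    For a rigid platform, the tangential acceleration at radius r is
    a_t = a_0 + alpha * r, so alpha is the slope of a_t versus r.
    """
    A = np.column_stack([np.ones_like(radii), radii])
    (_, alpha), *_ = np.linalg.lstsq(A, tangential_acc, rcond=None)
    return alpha

# Hypothetical sensor layout: four accelerometers at different radii (m),
# true alpha = 0.5 rad/s^2, common-mode linear acceleration 0.02 m/s^2, plus noise.
rng = np.random.default_rng(3)
r = np.array([0.10, 0.18, 0.25, 0.32])
a_t = 0.02 + 0.5 * r + rng.normal(0.0, 1e-3, r.size)
print(angular_acceleration(r, a_t))   # close to 0.5 rad/s^2
```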

  3. DIFFERENCES IN TECHNICAL MOVEMENT PRECISION WITH BALL TO NEW AGES SOCCER PLAYERS

    Directory of Open Access Journals (Sweden)

    Sami Sermaxhaj

    2015-05-01

    Full Text Available The primary goal of this research is to compare the precision of ball-striking between players of the two age groups, U-17 and U-19. The research was conducted on a sample of 100 young Kosovo soccer players divided into two groups: the first group comprised 50 young U-17 players and the second group comprised 50 young U-19 players. To assess the precision of striking the ball, all players underwent technical demonstration testing in four tests (T-JUGGL, T-PASI, T-KROS, T-SHOOT). The T-test method showed differences in favour of the U-19 players over the U-17 players in all tests of the precision of technical movements with the ball, but statistically significant differences appeared in the shooting-at-goal test (T-SHOOT) and the ball-juggling test (T-JUGGL). The results show that the more experienced U-19 players have better precision in demonstrating technical movements with the ball than the U-17 players. This suggests that the training process is very important for learning, mastering, and acquiring precise technical movements with the ball, since peak levels of precision are very significant.

  4. Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms

    Directory of Open Access Journals (Sweden)

    Qianqian Wu

    2015-08-01

    Full Text Available High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.

  5. High precision NC lathe feeding system rigid-flexible coupling model reduction technology

    Science.gov (United States)

    Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai

    2017-08-01

    This paper proposes the use of the dynamic substructure method of model-order reduction to achieve an effective reduction of the rigid-flexible coupling model of a high-precision NC lathe feed system. ADAMS is used to establish the rigid-flexible coupling simulation model of the high-precision NC lathe. Using the FD 3D damper, the reduction of the multi-degree-of-freedom model of the bolted connections of the feed system proves very effective, and the vibration simulation calculation becomes more accurate and faster.

  6. Precision of Points Computed from Intersections of Lines or Planes

    DEFF Research Database (Denmark)

    Cederholm, Jens Peter

    2004-01-01

    estimates the precision of the points. When using laser scanning a similar problem appears. A laser scanner captures a 3-D point cloud, not the points of real interest. The suggested method can be used to compute three-dimensional coordinates of the intersection of three planes estimated from the point...
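
    A simple instance of the computation described here is the intersection of three planes, each written as n·x = d, obtained by solving a 3×3 linear system; the precision of the point then follows from propagating the uncertainties of the estimated plane parameters. A minimal sketch of the intersection step with illustrative planes (not the paper's laser-scanning data):

```python
import numpy as np

def intersect_three_planes(normals, offsets):
    """Intersection point of three planes n_i . x = d_i (normals given as rows)."""
    N = np.asarray(normals, dtype=float)
    d = np.asarray(offsets, dtype=float)
    if abs(np.linalg.det(N)) < 1e-12:
        raise ValueError("planes are (nearly) parallel; no unique intersection")
    return np.linalg.solve(N, d)

# Illustrative example: a floor and two walls meeting at the point (2, 3, 0).
normals = [[0.0, 0.0, 1.0],   # floor:  z = 0
           [1.0, 0.0, 0.0],   # wall:   x = 2
           [0.0, 1.0, 0.0]]   # wall:   y = 3
offsets = [0.0, 2.0, 3.0]
print(intersect_three_planes(normals, offsets))   # [2. 3. 0.]
```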

  7. A Demonstration of Improved Precision of Word Recognition Scores

    Science.gov (United States)

    Schlauch, Robert S.; Anderson, Elizabeth S.; Micheyl, Christophe

    2014-01-01

    Purpose: The purpose of this study was to demonstrate improved precision of word recognition scores (WRSs) by increasing list length and analyzing phonemic errors. Method: Pure-tone thresholds (frequencies between 0.25 and 8.0 kHz) and WRSs were measured in 3 levels of speech-shaped noise (50, 52, and 54 dB HL) for 24 listeners with normal…
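
    The statistical rationale for longer lists is that a word recognition score behaves like a binomial proportion, so its standard error shrinks with the square root of the number of scored items. A short, generic illustration (binomial arithmetic only, not the study's analysis):

```python
import math

def score_standard_error(p_correct, n_items):
    """Standard error of a word recognition score treated as a binomial proportion."""
    return math.sqrt(p_correct * (1.0 - p_correct) / n_items)

# Larger n (more words, or phonemes scored instead of whole words) steadily
# narrows the uncertainty of an 80% score.
for n in (25, 50, 150):
    print(n, f"{100 * score_standard_error(0.8, n):.1f} percentage points")
```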

  8. Quantization and training of object detection networks with low-precision weights and activations

    Science.gov (United States)

    Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie

    2018-01-01

    As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by efficient fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Evaluated on the tiny you-only-look-once (tiny YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
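
    The key operation in such schemes is mapping floating-point weights to a small set of integer levels, with the clipping range and step size chosen from the estimated weight distribution. The sketch below shows symmetric uniform quantization with a Gaussian-derived clipping range; it illustrates the general idea rather than the paper's exact piecewise-Gaussian procedure.

```python
import numpy as np

def quantize_uniform(w, n_bits=4, n_sigma=3.0):
    """Symmetric uniform quantization of a weight tensor.

    The clipping range is set from the estimated (Gaussian) spread of the
    weights; values are rounded to the nearest of the available integer levels.
    """
    clip = n_sigma * np.std(w)                 # range chosen from the distribution
    n_levels = 2 ** (n_bits - 1) - 1           # e.g. 7 positive levels for 4 bits
    step = clip / n_levels
    q = np.clip(np.round(w / step), -n_levels, n_levels)
    return q * step                            # de-quantized (fake-quantized) weights

# Example: quantizing a synthetic convolution kernel to 4 bits.
rng = np.random.default_rng(4)
w = rng.normal(0.0, 0.05, size=(3, 3, 64, 64))
w_q = quantize_uniform(w, n_bits=4)
print("max abs error:", np.abs(w - w_q).max())
```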

  9. Environmental Testing for Precision Parts and Instruments

    International Nuclear Information System (INIS)

    Choi, Man Yong; Park, Jeong Hak; Yun, Kyu Tek

    2001-01-01

    Precision parts and instruments are tested to evaluate their performance during the development process and at the product stage, in order to prevent potential defects due to faulty design. In this paper, environmental test technology, which is the basis of reliability analysis, is introduced with examples of test criteria and test methods for products (an encoder and a traffic signal controller) and for measuring instruments. Recently, as the importance of environmental test technology has come to be recognised, it is proposed that the training of test technicians and the technology of jig design and failure analysis are essential

  10. Precision surveying the principles and geomatics practice

    CERN Document Server

    Ogundare, John Olusegun

    2016-01-01

    A comprehensive overview of high precision surveying, including recent developments in geomatics and their applications This book covers advanced precision surveying techniques, their proper use in engineering and geoscience projects, and their importance in the detailed analysis and evaluation of surveying projects. The early chapters review the fundamentals of precision surveying: the types of surveys; survey observations; standards and specifications; and accuracy assessments for angle, distance and position difference measurement systems. The book also covers network design and 3-D coordinating systems before discussing specialized topics such as structural and ground deformation monitoring techniques and analysis, mining surveys, tunneling surveys, and alignment surveys. Precision Surveying: The Principles and Geomatics Practice: * Covers structural and ground deformation monitoring analysis, advanced techniques in mining and tunneling surveys, and high precision alignment of engineering structures *...

  11. Precision medicine at the crossroads.

    Science.gov (United States)

    Olson, Maynard V

    2017-10-11

    There are bioethical, institutional, economic, legal, and cultural obstacles to creating the robust-precompetitive-data resource that will be required to advance the vision of "precision medicine," the ability to use molecular data to target therapies to patients for whom they offer the most benefit at the least risk. Creation of such an "information commons" was the central recommendation of the 2011 report Toward Precision Medicine issued by a committee of the National Research Council of the USA (Committee on a Framework for Development of a New Taxonomy of Disease; National Research Council. Toward precision medicine: building a knowledge network for biomedical research and a new taxonomy of disease. 2011). In this commentary, I review the rationale for creating an information commons and the obstacles to doing so; then, I endorse a path forward based on the dynamic consent of research subjects interacting with researchers through trusted mediators. I assert that the advantages of the proposed system overwhelm alternative ways of handling data on the phenotypes, genotypes, and environmental exposures of individual humans; hence, I argue that its creation should be the central policy objective of early efforts to make precision medicine a reality.

  12. Fine structures of atomic excited states: precision atomic spectroscopy and electron-ion collision process

    International Nuclear Information System (INIS)

    Gao Xiang; Cheng Cheng; Li Jiaming

    2011-01-01

    Scientific research fields for future energies, such as inertial confinement fusion research, and astrophysics studies, especially with satellite observatories, are advancing into the stage of precision physics. The relevant atomic data are not only enormous in volume but must also meet the required accuracy, especially for energy levels and collision data. The fine structure of highly excited states of atoms and ions can be measured by precision spectroscopy. Such precision measurements can provide not only knowledge about the detailed dynamics of electron-ion interactions but also a benchmark examination of the accuracy of electron-ion collision data, especially when incorporating theoretical computations. We illustrate this by using theoretical calculation methods that treat the bound states and the adjacent continua on an equal footing. Precision spectroscopic measurements of excited fine structures can thus serve as stringent tests of electron-ion collision data. (authors)

  13. Precision medicine for advanced prostate cancer.

    Science.gov (United States)

    Mullane, Stephanie A; Van Allen, Eliezer M

    2016-05-01

    Precision cancer medicine, the use of genomic profiling of patient tumors at the point of care to inform treatment decisions, is rapidly changing treatment strategies across cancer types. Precision medicine for advanced prostate cancer may identify new treatment strategies and change clinical practice. In this review, we discuss the potential and challenges of precision medicine in advanced prostate cancer. Although primary prostate cancers do not harbor highly recurrent targetable genomic alterations, recent reports on the genomics of metastatic castration-resistant prostate cancer have shown multiple targetable alterations in castration-resistant prostate cancer metastatic biopsies. Therapeutic implications include targeting prevalent DNA repair pathway alterations with PARP-1 inhibition in genomically defined subsets of patients, among other genomically stratified targets. In addition, multiple recent efforts have demonstrated the promise of liquid tumor profiling (e.g., profiling circulating tumor cells or cell-free tumor DNA) and highlighted the necessary steps to scale these approaches in prostate cancer. Although still in the initial phase of precision medicine for prostate cancer, there is extraordinary potential for clinical impact. Efforts to overcome current scientific and clinical barriers will enable widespread use of precision medicine approaches for advanced prostate cancer patients.

  14. Precision forging technology for aluminum alloy

    Science.gov (United States)

    Deng, Lei; Wang, Xinyun; Jin, Junsong; Xia, Juchen

    2018-03-01

    Aluminum alloy is a preferred metal material for lightweight part manufacturing in the aerospace, automobile, and weapon industries due to its good physical properties, such as low density, high specific strength, and good corrosion resistance. However, during forging processes, underfilling, folding, broken streamlines, cracks, coarse grains, and other macro- or microdefects are easily generated because of the deformation characteristics of aluminum alloys, including a narrow forgeable temperature region, fast heat dissipation to dies, strong adhesion, high strain rate sensitivity, and large flow resistance. This severely restricts the ability of forged parts to achieve precise shape and enhanced properties. In this paper, progress in precision forging technologies for aluminum alloy parts is reviewed. Several advanced precision forging technologies have been developed, including closed die forging, isothermal die forging, local loading forging, metal flow forging with relief cavity, auxiliary force or vibration loading, casting-forging hybrid forming, and stamping-forging hybrid forming. High-precision aluminum alloy parts can be realized by controlling the forging processes and parameters or by combining precision forging technologies with other forming technologies. The development of these technologies helps promote the application of aluminum alloys in the manufacturing of lightweight parts.

  15. Apparatus for precision micromachining with lasers

    Science.gov (United States)

    Chang, J.J.; Dragon, E.P.; Warner, B.E.

    1998-04-28

    A new material processing apparatus using a short-pulsed, high-repetition-rate visible laser for precision micromachining utilizes a near diffraction limited laser, a high-speed precision two-axis tilt-mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with size varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron scale) recast layer and heat affected zone. 1 fig.

  16. Equity and Value in 'Precision Medicine'.

    Science.gov (United States)

    Gray, Muir; Lagerberg, Tyra; Dombrádi, Viktor

    2017-04-01

    Precision medicine carries huge potential in the treatment of many diseases, particularly those with high-penetrance monogenic underpinnings. However, precision medicine through genomic technologies also has ethical implications. We will define allocative, personal, and technical value ('triple value') in healthcare and how this relates to equity. Equity is here taken to be implicit in the concept of triple value in countries that have publicly funded healthcare systems. It will be argued that precision medicine risks concentrating resources to those that already experience greater access to healthcare and power in society, nationally as well as globally. Healthcare payers, clinicians, and patients must all be involved in optimising the potential of precision medicine, without reducing equity. Throughout, the discussion will refer to the NHS RightCare Programme, which is a national initiative aiming to improve value and equity in the context of NHS England.

  17. Precision and reproducibility in AMS radiocarbon measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Hotchkis, M.A.; Fink, D.; Hua, Q.; Jacobsen, G.E.; Lawson, E. M.; Smith, A.M.; Tuniz, C. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1996-12-31

    Accelerator Mass Spectrometry (AMS) is a technique by which rare radioisotopes such as {sup 14}C can be measured at environmental levels with high efficiency. Instead of detecting radioactivity, which is very weak for long-lived environmental radioisotopes, atoms are counted directly. The sample is placed in an ion source, from which a negative ion beam of the atoms of interest is extracted, mass analysed, and injected into a tandem accelerator. After stripping to positive charge states in the accelerator HV terminal, the ions are further accelerated, analysed with magnetic and electrostatic devices and counted in a detector. An isotopic ratio is derived from the number of radioisotope atoms counted in a given time and the beam current of a stable isotope of the same element, measured after the accelerator. For radiocarbon, {sup 14}C/{sup 13}C ratios are usually measured, and the ratio of an unknown sample is compared to that of a standard. The achievable precision for such ratio measurements is limited primarily by {sup 14}C counting statistics and also by a variety of factors related to accelerator and ion source stability. At the ANTARES AMS facility at Lucas Heights Research Laboratories we are currently able to measure {sup 14}C with 0.5% precision. In the two years since becoming operational, more than 1000 {sup 14}C samples have been measured. Recent improvements in precision for {sup 14}C have been achieved with the commissioning of a 59 sample ion source. The measurement system, from sample changing to data acquisition, is under common computer control. These developments have allowed a new regime of automated multi-sample processing which has impacted both on the system throughput and the measurement precision. We have developed data evaluation methods at ANTARES which cross-check the self-consistency of the statistical analysis of our data. Rigorous data evaluation is invaluable in assessing the true reproducibility of the measurement system and aids in
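
    Since the quoted 0.5% precision is limited primarily by {sup 14}C counting statistics, the relative uncertainty of the isotopic ratio scales roughly as 1/sqrt(N). The sketch below (an illustration, not ANTARES software) estimates the counts and, for an assumed count rate, the measurement time needed for a target precision.

```python
# Counting-statistics estimate; the count rate is an assumed value.
import math

def counts_for_precision(rel_precision):
    """Counts N such that the Poisson relative error 1/sqrt(N) equals rel_precision."""
    return math.ceil(1.0 / rel_precision ** 2)

target = 0.005                       # 0.5% relative precision
n = counts_for_precision(target)     # -> 40000 counts of 14C
rate_cps = 50.0                      # assumed 14C count rate (counts per second)
print(f"need {n} counts, i.e. about {n / rate_cps / 60:.0f} min at {rate_cps} cps")
```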

  19. LASL lens design procedure: simple, fast, precise, versatile

    International Nuclear Information System (INIS)

    Brixner, B.

    1978-11-01

    The Los Alamos Scientific Laboratory general-purpose lens design procedure optimizes specific lens prescriptions to obtain the smallest possible image spots and therefore near-spherical wave fronts of light converging on all images in the field of view. Optical image errors are analyzed in much the same way that they are measured on the optical bench. This lens design method is made possible by using the full capabilities of large electronic computers. First, the performance of the whole lens is sampled with many precisely traced skew rays. Next, lens performance is analyzed with spot diagrams generated by the many rays. Third, lens performance is optimized with a least squares system aimed at reducing all image errors to zero. This statistical approach to lens design uses skew rays and precisely measured ray deviations from ideal image points to achieve greater accuracy than was possible with the classical procedure, which is based on approximate expressions derived from simplified ray traces developed for pencil-and-paper calculations
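
    The structure of such a least-squares optimization — the residuals are the transverse deviations of many traced rays from the ideal image point, and the optimizer drives them toward zero — can be sketched with a toy merit function. The aberration model and its coefficients below are invented stand-ins for a real ray trace; only the overall shape of the procedure is meant to be illustrative.

```python
# Toy least-squares "lens design": minimize spot-diagram residuals over
# two hypothetical lens parameters (the aberration model is invented).
import numpy as np
from scipy.optimize import least_squares

ray_heights = np.linspace(-1.0, 1.0, 21)   # normalized pupil heights of traced rays

def spot_residuals(params, h=ray_heights):
    bending, defocus = params
    spherical = (0.8 * bending**2 - 0.5 * bending + 0.1) * h**3
    return spherical + defocus * h          # transverse deviation per ray

fit = least_squares(spot_residuals, x0=[0.0, 0.0])
rms_spot = np.sqrt(np.mean(spot_residuals(fit.x) ** 2))
print("optimal parameters:", fit.x, " RMS spot residual:", rms_spot)
```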

  20. Defending the Decimals: Why Foolishly False Precision Might Strengthen Social Science

    Directory of Open Access Journals (Sweden)

    Jeremy Freese

    2014-12-01

    Full Text Available Social scientists often report regression coefficients using more significant figures than are meaningful given measurement precision and sample size. Common sense says we should not do this. Yet, as normative practice, eliminating these extra digits introduces a more serious scientific problem when accompanied by other ascendant reporting practices intended to reduce social science’s long-standing emphasis on null hypothesis significance testing. Coefficient p-values can no longer be recovered to the degree of precision that p-values have been abundantly demonstrated to influence actual research practice. Developing methods for detecting and addressing systematically exaggerated effect sizes across collections of studies cannot be done effectively if p-values are hidden. Regarding what is preferable for scientific literature versus an individual study, the costs of false precision are therefore innocuous compared to alternatives that either encourage the continuation of practices known to exaggerate causal effects or thwart assessment of how much such exaggeration occurs.
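
    The point about unrecoverable p-values can be checked directly from the normal approximation p = 2(1 − Φ(|b/SE|)): rounding a coefficient and its standard error to two significant figures can move the recovered p-value across the 0.05 threshold. The numbers below are invented for illustration.

```python
# Recover a two-sided p-value from a coefficient and its standard error,
# before and after rounding to 2 significant figures (illustrative numbers).
from math import floor, log10
from scipy.stats import norm

def p_value(b, se):
    return 2 * norm.sf(abs(b / se))        # two-sided, normal approximation

def round_sig(x, sig=2):
    return round(x, -int(floor(log10(abs(x)))) + sig - 1)

b, se = 0.0214, 0.0109                     # hypothetical estimate and standard error
print("full precision p =", p_value(b, se))                        # ~0.050 (just under)
print("2-sig-fig      p =", p_value(round_sig(b), round_sig(se)))  # ~0.056 (crosses 0.05)
```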

  1. High-precision measurement of the 19Ne half-life and implications for right-handed weak currents.

    Science.gov (United States)

    Triambak, S; Finlay, P; Sumithrarachchi, C S; Hackman, G; Ball, G C; Garrett, P E; Svensson, C E; Cross, D S; Garnsworthy, A B; Kshetri, R; Orce, J N; Pearson, M R; Tardiff, E R; Al-Falou, H; Austin, R A E; Churchman, R; Djongolov, M K; D'Entremont, R; Kierans, C; Milovanovic, L; O'Hagan, S; Reeve, S; Sjue, S K L; Williams, S J

    2012-07-27

    We report a precise determination of the (19)Ne half-life to be T(1/2)=17.262±0.007 s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.

  3. Principles of Precision Prevention Science for Improving Recruitment and Retention of Participants.

    Science.gov (United States)

    Supplee, Lauren H; Parekh, Jenita; Johnson, Makedah

    2018-03-12

    Precision medicine and precision public health focus on identifying and providing the right intervention to the right population at the right time. Expanding on the concept, precision prevention science could allow the field to examine prevention programs to identify ways to make them more efficient and effective at scale, including addressing issues related to engagement and retention of participants. Research to date on engagement and retention has often focused on demographics and risk factors. The current paper proposes using McCurdy and Daro (Family Relations, 50, 113-121, 2001) model that posits a complex mixture of individual, provider, program, and community-level factors synergistically affect enrollment, engagement, and retention. The paper concludes recommending the use of research-practice partnerships and innovative, rapid cycle methods to design and improve prevention programs related to participant engagement and retention at scale.

  4. Glass ceramic ZERODUR enabling nanometer precision

    Science.gov (United States)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees the manufacturing of devices with critical dimensions in the single-digit nanometer range, demanding nanometer positioning accuracy and therefore sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® in order to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® lithography roadmap on CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented that predict the long-term CTE behavior of ZERODUR® components in order to optimize the dimensional stability of precision positioning devices. CTE data of several low-thermal-expansion materials are compared with regard to their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of +/- 10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.
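
    The link between a +/- 10 ppb/K CTE tolerance and nanometer-level stability follows from dL = alpha * L * dT. The component length and temperature excursion below are assumed figures for a worked example, not values from the paper.

```python
# Thermal expansion of a low-CTE component: dL = alpha * L * dT
alpha = 10e-9        # CTE tolerance band: 10 ppb/K = 1e-8 per K
L = 0.300            # assumed component length: 300 mm
dT = 0.1             # assumed temperature excursion: 0.1 K
dL = alpha * L * dT
print(f"length change: {dL * 1e9:.2f} nm")   # 0.30 nm -> sub-nanometer stability
```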

  5. High precision timing in a FLASH

    Energy Technology Data Exchange (ETDEWEB)

    Hoek, Matthias; Cardinali, Matteo; Dickescheid, Michael; Schlimme, Soeren; Sfienti, Concettina; Spruck, Bjoern; Thiel, Michaela [Institut fuer Kernphysik, Johannes Gutenberg-Universitaet Mainz (Germany)

    2016-07-01

    A segmented highly precise start counter (FLASH) was designed and constructed at the Institute for Nuclear Physics in Mainz. Besides determining a precise reference time, a Time-of-Flight measurement can be performed with two identical FLASH units. Thus, particle identification can be provided for mixed hadron beam environments. The detector design is based on the detection of Cherenkov light produced in fused silica radiator bars with fast multi-anode MCP-PMTs. The segmentation of the radiator improves the timing resolution while allowing a coarse position resolution along one direction. Both the arrival time and the Time-over-Threshold are determined by the readout electronics, which enables walk correction of the arrival time. The performance of two FLASH units was investigated in test experiments at the Mainz Microtron (MAMI) using an electron beam with an energy of 855 MeV and at CERN's PS T9 beam line with a mixed hadron beam with momenta between 3 and 8 GeV/c. Effective time-walk correction methods based on Time-over-Threshold were developed for the data analysis. The achieved Time-of-Flight resolution after applying all corrections was found to be 70 ps. Furthermore, the PID and position resolution capabilities are discussed in this contribution.
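
    Time-of-Flight particle identification rests on t = L/(beta*c) with beta = p/sqrt(p^2 + m^2). The sketch below computes the pion-proton flight-time differences at the quoted beam momenta over an assumed 10 m flight path (the path length is an assumption, not taken from this contribution), which indicates why a 70 ps resolution is sufficient in this momentum range.

```python
# Time-of-flight difference between particle species at fixed momentum.
import math

C = 0.299792458                 # speed of light in m/ns
M_PI, M_P = 0.1396, 0.9383      # pion and proton masses in GeV/c^2

def tof_ns(p_gev, mass, path_m):
    beta = p_gev / math.hypot(p_gev, mass)
    return path_m / (beta * C)

path = 10.0                     # assumed flight path of 10 m
for p in (3.0, 5.0, 8.0):
    dt_ps = (tof_ns(p, M_P, path) - tof_ns(p, M_PI, path)) * 1e3
    print(f"p = {p} GeV/c: pion-proton TOF difference = {dt_ps:.0f} ps")
```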

  6. Ultra-precision turning of complex spiral optical delay line

    Science.gov (United States)

    Zhang, Xiaodong; Li, Po; Fang, Fengzhou; Wang, Qichang

    2011-11-01

    The optical delay line (ODL) implements the vertical, or depth, scanning of optical coherence tomography and is the most important factor affecting scanning resolution and speed. The spinning spiral mirror is an excellent optical delay device because of its high speed and high repetition rate. However, machining the mirror is a difficult task due to its special shape and precision requirements. In this paper, a spiral mirror with a tilted parabolic generatrix is proposed, and an ultra-precision turning method for its machining is studied using the spiral mathematical model. Another type of ODL with a segmental shape is also introduced and machined to balance the rotating mass during scanning. Efficiency improvements are considered in detail, including rough cutting with a 5-axis milling machine, unification of the machining coordinates, and the selection of the layer direction in turning. An on-machine measuring method based on a stylus gauge is designed to analyze the shape deviation. An air bearing is used to guide the measuring stylus and a laser interferometer sensor serves as the position sensor, whose repeatability is shown to be 10 nm with good stability. With the developed method, a complex mirror with a nanometric surface finish of 10.7 nm Ra and a form error within 1 um is achieved.

  7. Precision lifetime measurements using the recoil distance method

    International Nuclear Information System (INIS)

    Kruecken, R.

    2000-01-01

    The recoil distance method (RDM) for the measurements of lifetimes of excited nuclear levels in the range from about 1 ps to 1,000 ps is reviewed. The New Yale Plunger Device for RDM experiments is introduced and the Differential Decay Curve Method for their analysis is reviewed. Results from recent RDM experiments on SD bands in the mass-190 region, shears bands in the neutron deficient lead isotopes, and ground state bands in the mass-130 region are presented. Perspectives for the use of RDM measurements in the study of neutron-rich nuclei are discussed
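
    For a single level without feeding, the unshifted (stopped) fraction of the decay intensity falls off with target-stopper distance as R(x) = exp(-x/(v*tau)); feeding effects are exactly what the Differential Decay Curve Method is designed to handle, so the sketch below is only a toy version of an RDM analysis, with invented data and an assumed recoil velocity.

```python
# Toy recoil-distance analysis: fit R(x) = exp(-x / (v * tau)) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

V = 0.02 * 299.79                          # assumed recoil velocity: 2% of c, in um/ps

def unshifted_fraction(x_um, tau_ps):
    return np.exp(-x_um / (V * tau_ps))

x = np.array([20, 50, 100, 200, 400, 800, 1600], dtype=float)     # plunger distances (um)
rng = np.random.default_rng(1)
data = unshifted_fraction(x, 50.0) + rng.normal(0, 0.01, x.size)  # "measured" ratios

(tau_fit,), cov = curve_fit(unshifted_fraction, x, data, p0=[30.0])
print(f"fitted lifetime: {tau_fit:.1f} +/- {np.sqrt(cov[0, 0]):.1f} ps")
```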

  8. Precision Lifetime Measurements Using the Recoil Distance Method

    Science.gov (United States)

    Krücken, R.

    2000-01-01

    The recoil distance method (RDM) for the measurements of lifetimes of excited nuclear levels in the range from about 1 ps to 1000 ps is reviewed. The New Yale Plunger Device for RDM experiments is introduced and the Differential Decay Curve Method for their analysis is reviewed. Results from recent RDM experiments on SD bands in the mass-190 region, shears bands in the neutron deficient lead isotopes, and ground state bands in the mass-130 region are presented. Perspectives for the use of RDM measurements in the study of neutron-rich nuclei are discussed. PMID:27551587

  9. High precision localization of intracerebral hemorrhage based on 3D MPR on head CT images

    Science.gov (United States)

    Sun, Jianyong; Hou, Xiaoshuai; Sun, Shujie; Zhang, Jianguo

    2017-03-01

    The key step in minimally invasive intracerebral hemorrhage surgery is precisely locating the hematoma in the brain before and during surgery, which significantly improves the success rate of hematoma puncture. We designed a 3D computerized surgical planning (CSP) workstation to precisely locate brain hematomas based on the Multi-Planar Reconstruction (MPR) visualization technique. We used ten patients' CT/MR studies to verify the designed CSP intracerebral hemorrhage localization method. Based on the physicians' assessment and a comparison with the results of manual measurements, the output of the CSP workstation for hematoma surgery is more precise and reliable than the manual procedure.

  10. A passion for precision

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit

    2006-01-01

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  11. Improving Precision of Types

    DEFF Research Database (Denmark)

    Winther, Johnni

    Types in programming languages provide a powerful tool for the programmer to document the code so that a large aspect of the intent can not only be presented to fellow programmers but also be checked automatically by compilers. The precision with which types model the behavior of programs...... is crucial to the quality of these automated checks, and in this thesis we present three different improvements to the precision of types in three different aspects of the Java programming language. First we show how to extend the type system in Java with a new type which enables the detection of unintended...

  12. NCI Precision Medicine

    Science.gov (United States)

    This illustration represents the National Cancer Institute’s support of research to improve precision medicine in cancer treatment, in which unique therapies treat an individual’s cancer based on specific genetic abnormalities of that person’s tumor.

  13. Analyzing logistic map pseudorandom number generators for periodicity induced by finite precision floating-point representation

    International Nuclear Information System (INIS)

    Persohn, K.J.; Povinelli, R.J.

    2012-01-01

    Highlights: ► A chaotic pseudorandom number generator (C-PRNG) poorly explores the key space. ► A C-PRNG is finite and periodic when implemented on a finite precision computer. ► We present a method to determine the period lengths of a C-PRNG. - Abstract: Because of the mixing and aperiodic properties of chaotic maps, such maps have been used as the basis for pseudorandom number generators (PRNGs). However, when implemented on a finite precision computer, chaotic maps have finite and periodic orbits. This manuscript explores the consequences finite precision has on the periodicity of a PRNG based on the logistic map. A comparison is made with conventional methods of generating pseudorandom numbers. The approach used to determine the number, delay, and period of the orbits of the logistic map at varying degrees of precision (3 to 23 bits) is described in detail, including the use of the Condor high-throughput computing environment to parallelize independent tasks of analyzing a large initial seed space. Results demonstrate that in terms of pathological seeds and effective bit length, a PRNG based on the logistic map performs exponentially worse than conventional PRNGs.
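
    The finite-precision periodicity analysed here can be demonstrated by iterating the logistic map with each iterate rounded to a fixed number of fractional bits and recording when a state repeats; the sketch below reports the transient (delay) and period for a few precisions. It is a small illustration of the idea, not the authors' Condor-based seed-space study.

```python
# Delay (transient length) and period of a logistic-map orbit when each
# iterate is rounded to a fixed number of fractional bits.
def logistic_orbit_period(seed, r=4.0, bits=16):
    scale = 1 << bits
    quantize = lambda v: round(v * scale) / scale      # emulate low precision
    seen = {}                                          # state -> iteration index
    x, i = quantize(seed), 0
    while x not in seen:
        seen[x] = i
        x = quantize(r * x * (1.0 - x))
        i += 1
    delay = seen[x]                # iterations before entering the cycle
    period = i - seen[x]           # cycle length
    return delay, period

for bits in (8, 12, 16, 20):
    print(bits, "bits -> (delay, period) =", logistic_orbit_period(0.123456789, bits=bits))
```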

  14. High Precision Seawater Sr/Ca Measurements in the Florida Keys by Inductively Coupled Plasma Atomic Emission Spectrometry: Analytical Method and Implications for Coral Paleothermometry

    Science.gov (United States)

    Khare, A.; Kilbourne, K. H.; Schijf, J.

    2017-12-01

    Standard methods of reconstructing past sea surface temperatures (SSTs) with coral skeletal Sr/Ca ratios assume the seawater Sr/Ca ratio is constant. However, there is little data to support this assumption, in part because analytical techniques capable of determining seawater Sr/Ca with sufficient accuracy and precision are expensive and time consuming. We demonstrate a method to measure seawater Sr/Ca using inductively coupled plasma atomic emission spectrometry where we employ an intensity ratio calibration routine that reduces the self- matrix effects of calcium and cancels out the matrix effects that are common to both calcium and strontium. A seawater standard solution cross-calibrated with multiple instruments is used to correct for long-term instrument drift and any remnant matrix effects. The resulting method produces accurate seawater Sr/Ca determinations rapidly, inexpensively, and with a precision better than 0.2%. This method will make it easier for coral paleoclimatologists to quantify potentially problematic fluctuations in seawater Sr/Ca at their study locations. We apply our method to test for variability in surface seawater Sr/Ca along the Florida Keys Reef Tract. We are collecting winter and summer samples for two years in a grid with eleven nearshore to offshore transects across the reef, as well as continuous samples collected by osmotic pumps at four locations adjacent to our grid. Our initial analysis of the grid samples indicates a trend of decreasing Sr/Ca values offshore potentially due to a decreasing groundwater influence. The values differ by as much as 0.05 mmol/mol which could lead to an error of 1°C in mean SST reconstructions. Future work involves continued sampling in the Florida Keys to test for seasonal and interannual variability in seawater Sr/Ca, as well as collecting data from small reefs in the Virgin Islands to test the stability of seawater Sr/Ca under different geologic, hydrologic and hydrographic environments.
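
    The intensity-ratio calibration idea can be sketched as a linear calibration of the measured Sr/Ca intensity ratio against standards of known molar ratio, followed by a correction factor derived from a cross-calibrated seawater standard. All numbers below are synthetic, and the actual routine described here treats matrix effects and long-term drift in more detail.

```python
# Sr/Ca from emission intensity ratios with a standard-based correction.
import numpy as np

# Calibration standards: known Sr/Ca (mmol/mol) vs measured intensity ratios.
known_ratio     = np.array([7.0, 8.0, 9.0, 10.0])
intensity_ratio = np.array([0.352, 0.401, 0.452, 0.500])    # synthetic values
slope, intercept = np.polyfit(intensity_ratio, known_ratio, 1)

def raw_sr_ca(ir):
    return slope * ir + intercept

# Correction from a cross-calibrated seawater standard (assumed values).
certified, measured_ir = 8.54, 0.430
correction = certified / raw_sr_ca(measured_ir)

sample_ir = 0.445
print(f"sample Sr/Ca = {raw_sr_ca(sample_ir) * correction:.3f} mmol/mol")
```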

  15. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds; Metodo de titulacao potenciometrica de alta precisao semi-automatizado para a caracterizacao de compostos de uranio

    Energy Technology Data Exchange (ETDEWEB)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da, E-mail: barbara@ird.gov.b, E-mail: fabio@ird.gov.b, E-mail: pedrodio@ird.gov.b, E-mail: radier@ird.gov.b, E-mail: delgado@ird.gov.b, E-mail: wanderley@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Lopes, Ricardo T., E-mail: ricardo@lin.ufrj.b [Universidade Federal do Rio de Janeiro (LIN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    2011-10-26

    The method of high-precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, which is better than the roughly 0.1% achieved by the methods traditionally used at nuclear installations.

  16. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations

    Energy Technology Data Exchange (ETDEWEB)

    Markin, Craig J.; Spyracopoulos, Leo, E-mail: leo.spyracopoulos@ualberta.ca [University of Alberta, Department of Biochemistry (Canada)

    2012-12-15

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K{sub D}) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K{sub D} value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53: 125-138, 2012). In this study, we demonstrate that classical line shape analysis applied to a single set of {sup 1}H-{sup 15}N 2D HSQC NMR spectra acquired using precise protein-ligand chemical shift titration methods we developed, produces accurate and precise kinetic parameters such as the off-rate (k{sub off}). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k{sub off} {approx} 3,000 s{sup -1} in this work, the accuracy of classical line shape analysis was determined to be better than 5 % by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k{sub off} from line shape analysis of NMR spectra was determined to be 13 %, in agreement with the theoretical precision of 12 % from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k{sub off} values over a wide range, from 100 to 15,000 s{sup -1}. The validity of line shape analysis for k{sub off} values approaching intermediate exchange ({approx}100 s{sup -1}), may be facilitated by
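
    For a 1:1 interaction in the fast-exchange regime, the observed shift is the population-weighted average of the free and bound shifts, with the bound fraction given by the standard quadratic solution of the binding equilibrium. The sketch below fits K{sub D} from a synthetic titration; the concentrations, shifts, and noise are invented, and it is a simplification of the co-variation titration and line shape analysis used in the paper.

```python
# Fit K_D from a fast-exchange chemical shift titration (1:1 binding).
import numpy as np
from scipy.optimize import curve_fit

P_TOT = 50e-6                                # assumed total protein concentration (M)

def delta_obs(L_tot, K_D, d_free, d_bound, P_tot=P_TOT):
    b = P_tot + L_tot + K_D
    PL = (b - np.sqrt(b**2 - 4 * P_tot * L_tot)) / 2     # bound-complex concentration
    return d_free + (d_bound - d_free) * PL / P_tot

L = np.array([0, 10, 25, 50, 100, 200, 400, 800]) * 1e-6  # ligand concentrations (M)
rng = np.random.default_rng(0)
shifts = delta_obs(L, 150e-6, 8.10, 8.45) + rng.normal(0, 0.002, L.size)

popt, pcov = curve_fit(delta_obs, L, shifts, p0=[1e-4, 8.0, 8.5])
print(f"K_D = {popt[0] * 1e6:.0f} uM, d_free = {popt[1]:.2f} ppm, d_bound = {popt[2]:.2f} ppm")
```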

  17. Research regarding the influence of driving-wires length change on positioning precision of a robotic arm

    Science.gov (United States)

    Ciofu, C.; Stan, G.

    2016-08-01

    The paper emphasises the positioning precision of an elephant's trunk robotic arm whose joints are driven by wires of variable length during operation. The considered 5-degrees-of-freedom robotic arm has a particular joint structure that makes inner actuation with a wire-driven mechanism possible. We analyse solely the change in wire length caused by inner winding and unwinding on the joints for certain values of the rotational angles. Variations in wire length entail joint angular displacements. We analyse positioning precision by taking into consideration the equations of the inverse kinematics of the elephant's trunk robotic arm. The joint angular displacements are introduced into the computational method after partial derivation of the positioning equations. We obtain variations in wire length of about tenths of micrometers. These variations produce angular displacements of about minutes of a sexagesimal degree and thus define the positioning precision of elephant's trunk robotic arms. The analytical method is used to determine the effect of the design structure of an elephant's trunk robotic arm with inner wire actuation on its positioning precision. Thus, designers can make suitable decisions on the accuracy specification limits of the robotic arm.
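
    The order of magnitude reported — wire-length variations of tenths of micrometers producing angular displacements of arc-minutes — follows from the small-angle relation dtheta = dL/r for a wire wound on a joint of effective radius r. The radius below is an assumed figure, not a parameter of the actual arm.

```python
# Joint angular displacement caused by a small change in driving-wire length:
# for a wire wound on a joint of effective radius r, dL = r * dtheta.
import math

r = 0.002                  # assumed effective winding radius: 2 mm
dL = 0.5e-6                # wire-length variation: 0.5 micrometers
dtheta_rad = dL / r
dtheta_arcmin = math.degrees(dtheta_rad) * 60
print(f"angular displacement: {dtheta_arcmin:.2f} arc-minutes")
```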

  18. High precision spectrophotometric analysis of thorium

    International Nuclear Information System (INIS)

    Palmieri, H.E.L.

    1984-01-01

    An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of thorium during processing. After an extensive literature search on this subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution and alizarin-S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added and the titration was completed with less than 5 ml of 0.0025 M EDTA solution. It is usual to locate the end-point graphically, by plotting added titrant versus absorbance; here the end-point was instead obtained by a non-linear least-squares fit, using the Fletcher and Powell minimization process and a computer program. Besides the equivalence point, other parameters of the titration were determined: the indicator concentration, the absorbance of the metal-indicator complex, and the stability constants of the metal-indicator and metal-EDTA complexes. (Author) [pt

  19. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Science.gov (United States)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
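
    The misregistration part of the study — shifting the data by sub-voxel amounts and recording the change in the VOI activity estimate — can be mimicked on a synthetic phantom with scipy.ndimage.shift. The phantom, VOI, noise level, and shift steps below are illustrative and far simpler than the NCAT-based simulations described here.

```python
# Effect of sub-voxel misregistration on a VOI activity estimate (toy version).
import numpy as np
from scipy.ndimage import shift

rng = np.random.default_rng(0)
image = np.zeros((64, 64, 64))
image[20:40, 20:40, 20:40] = 1.0             # "organ" with unit activity concentration
image += rng.normal(0, 0.01, image.shape)    # background noise

voi = np.zeros(image.shape, dtype=bool)
voi[20:40, 20:40, 20:40] = True              # VOI drawn exactly on the organ

reference = image[voi].sum()
for dx in np.arange(-1.0, 1.01, 0.5):
    shifted = shift(image, (dx, 0.0, 0.0), order=1)      # linear interpolation
    err = (shifted[voi].sum() - reference) / reference * 100
    print(f"shift {dx:+.1f} voxels -> activity estimate error {err:+.2f}%")
```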

  20. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Energy Technology Data Exchange (ETDEWEB)

    He Bin [Division of Nuclear Medicine, Department of Radiology, New York Presbyterian Hospital-Weill Medical College of Cornell University, New York, NY 10021 (United States); Frey, Eric C, E-mail: bih2006@med.cornell.ed, E-mail: efrey1@jhmi.ed [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, MD 21287-0859 (United States)

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed {sup 111}In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of with 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations

  1. A new, direct analytical method using LC-MS/MS for fatty acid esters of 3-chloro-1,2-propanediol (3-MCPD esters) in edible oils.

    Science.gov (United States)

    Yamazaki, K; Ogiso, M; Isagawa, S; Urushiyama, T; Ukena, T; Kibune, N

    2013-01-01

    A new, direct analytical method for the determination of 3-chloro-1,2-propanediol fatty acid esters (3-MCPD esters) was developed. The targeted 3-MCPD esters included five types of monoester and 25 [corrected] types of diester. Samples (oils and fats) were dissolved in a mixture of tert-butyl methyl ether and ethyl acetate (4:1), purified using two solid-phase extraction (SPE) cartridges (C(18) and silica), then analysed by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Five monoesters and five diesters with the same fatty acid group could be separated and quantified. Pairs of 3-MCPD diesters carrying the same two different fatty acid groups, but at reversed positions (sn-1 and sn-2), could not be separated and so were expressed as a sum of both compounds. The limits of quantification (LOQs) were estimated to be between 0.02 to 0.08 mg kg(-1), depending on the types of 3-MCPD ester. Repeatability expressed as relative standard deviation (RSD(r)%) varied from 5.5% to 25.5%. The new method was shown to be applicable to various commercial edible oils and showed levels of 3-MCPD esters varying from 0.58 to 25.35 mg kg(-1). The levels of mono- and diesters ranged from 0.10 to 0.69 mg kg(-1) and from 0.06 to 16 mg kg(-1), respectively.
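
    Repeatability expressed as RSD(r)% is simply the relative standard deviation of replicate determinations; a minimal sketch with invented replicate results is:

```python
# Repeatability relative standard deviation (RSDr) of replicate determinations.
import statistics

replicates = [0.58, 0.61, 0.55, 0.63, 0.57, 0.60]   # invented results, mg/kg
mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)                   # sample standard deviation (n-1)
rsd_r = sd / mean * 100
print(f"mean = {mean:.3f} mg/kg, RSDr = {rsd_r:.1f}%")
```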

  2. Comparison of the precision of three commonly used GPS models

    Directory of Open Access Journals (Sweden)

    E Chavoshi

    2016-04-01

    Full Text Available Introduction: The development of science in various fields has changed the methods used to determine geographical location. Precision farming involves new technologies that give farmers the opportunity to monitor and evaluate factors such as nutrients, the soil moisture available to plants, and soil physical and chemical characteristics at spatial resolutions ranging from less than a centimeter to several meters. GPS receivers are used in precision farming operations with specified accuracies in the following areas: (1) crop monitoring and soil sampling (accuracy better than one meter); (2) fertilizer, pesticide and seed application (accuracy better than half a meter); (3) transplanting and row cultivation (precision better than 4 cm) (Perez et al., 2011). In one application of GPS in agriculture, a route guidance system for precision farming tractors was designed to inform the driver of deviations from the specified path in the range of 50 to 300 mm and to improve the display (Perez et al., 2011). In another study, an automatic guidance system based on RTK-GPS technology was used for precision tillage operations between and within rows, as close as 50 mm to the drip irrigation pipe, without damage to the crops (Abidine et al., 2004). In another study comparing the accuracy and precision of receivers, five different models of Trimble GPS devices were used to map 15 stations; the results indicated that the minimum error, 91 cm, was obtained with the Geo XT model and the maximum error, 5.62 m, with the Pharos model (Kindra et al., 2006). Due to the increasing use of GPS receivers in agriculture, as well as the lack of trust in the real accuracy and precision of receivers, this study aimed to compare the positioning accuracy and precision of three commonly used GPS receiver models in order to identify the receiver with the lowest error for precision

  3. Assessment of Sr-90 in water samples: precision and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Nisti, Marcelo B.; Saueia, Cátia H.R.; Castilho, Bruna; Mazzilli, Barbara P., E-mail: mbnisti@ipen.br, E-mail: chsaueia@ipen.br, E-mail: bcastilho@ipen.br, E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-11-01

    The study of the dispersion of artificial radionuclides into the environment is very important for the control of nuclear waste discharges, nuclear accidents and nuclear weapons testing. The accidents at the Fukushima Daiichi and Chernobyl Nuclear Power Plants released several radionuclides into the environment by aerial deposition and liquid discharge, with various levels of radioactivity. {sup 90}Sr was one of the elements released into the environment. {sup 90}Sr is produced by nuclear fission and has a physical half-life of 28.79 years and a decay energy of 0.546 MeV. The aims of this study are to evaluate the precision and accuracy of three methodologies for the determination of {sup 90}Sr in water samples: Cerenkov counting, direct LSC, and LSC with radiochemical separation. The performance of the methodologies was evaluated using two scintillation counters (Quantulus and Hidex). The Minimum Detectable Activity (MDA) and Figure Of Merit (FOM) were determined for each method, and the precision and accuracy were checked using {sup 90}Sr standard solutions. (author)
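
    The Minimum Detectable Activity mentioned here is commonly computed from the Currie detection limit, L{sub D} = 2.71 + 4.65*sqrt(B) counts for paired background counting. The sketch below applies that textbook formula with assumed counting parameters (background counts, efficiency, counting time, sample volume); the values are not from this work.

```python
# Currie-style Minimum Detectable Activity (MDA) for a counting measurement.
import math

def mda_bq_per_l(background_counts, count_time_s, efficiency, volume_l):
    """MDA = (2.71 + 4.65*sqrt(B)) / (efficiency * t * V), in Bq/L."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * count_time_s * volume_l)

# Assumed example: 300 background counts in 3600 s, 60% efficiency, 0.5 L sample.
print(f"MDA = {mda_bq_per_l(300, 3600, 0.60, 0.5):.3f} Bq/L")
```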

  4. High precision straw tube chamber with cathode readout

    International Nuclear Information System (INIS)

    Bychkov, V.N.; Golutvin, I.A.; Ershov, Yu.V.

    1992-01-01

    A high-precision straw chamber with cathode readout was constructed and investigated. The 10 mm straws were made of aluminized mylar strip with a transparent longitudinal window. The X coordinate information was taken from the induced charges on the cathode strips and evaluated via the centroid method. A spatial resolution of σ=120 μm was obtained with a signal-to-noise ratio of about 60. Possible ways of improving the signal-to-noise ratio are described. 7 refs.; 8 figs.
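
    The centroid method reconstructs the avalanche position from the charges induced on neighbouring cathode strips as a charge-weighted mean of the strip coordinates; a minimal sketch with invented strip charges and pitch is:

```python
# Charge-centroid position reconstruction from cathode strip signals.
import numpy as np

strip_pitch_mm = 2.0
charges = np.array([12.0, 85.0, 140.0, 90.0, 15.0])     # invented induced charges (ADC)
positions = np.arange(charges.size) * strip_pitch_mm    # strip centre coordinates (mm)

x = np.sum(charges * positions) / np.sum(charges)       # centroid estimate
print(f"reconstructed X = {x:.3f} mm")
```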

  5. The Development of Precise Engineering Surveying Technology

    Directory of Open Access Journals (Sweden)

    LI Guangyun

    2017-10-01

    Full Text Available With the construction of big science projects in China, precise engineering surveying technology has developed rapidly in the 21st century. Firstly, the paper summarizes the current state of development of precise engineering surveying instruments and theory. Then three typical cases of precise engineering surveying practice are discussed: accelerator alignment, industrial measurement, and high-speed railway surveying technology.

  6. Sensing Characteristics of A Precision Aligner Using Moire Gratings for Precision Alignment System

    Institute of Scientific and Technical Information of China (English)

    ZHOU Lizhong; Hideo Furuhashi; Yoshiyuki Uchida

    2001-01-01

    The sensing characteristics of a precision aligner using moire gratings for a precision alignment system have been investigated. A differential moire alignment system and a modified alignment system were used. The influence of the setting accuracy of the gap length and of the grating inclination on the alignment accuracy has been studied experimentally and theoretically. A gap-length setting accuracy of better than 2.5 μm is required in the modified moire alignment system. The gap length has no influence on the alignment accuracy in the differential alignment system. The inclination affects the alignment accuracy in both the differential and the modified moire alignment systems.

  7. Precision medicine: In need of guidance and surveillance.

    Science.gov (United States)

    Lin, Jian-Zhen; Long, Jun-Yu; Wang, An-Qiang; Zheng, Ying; Zhao, Hai-Tao

    2017-07-28

    Precision medicine, currently a hotspot in mainstream medicine, has been strongly promoted in recent years. With rapid technological development, such as next-generation sequencing, and fierce competition in molecular targeted drug exploitation, precision medicine represents an advance in science and technology; it also fulfills needs in public health care. The clinical translation and application of precision medicine - especially in the prevention and treatment of tumors - is far from satisfactory; however, the aims of precision medicine deserve approval. Thus, this medical approach is currently in its infancy; it has promising prospects, but it needs to overcome a number of problems and deficiencies. It is expected that in addition to conventional symptoms and signs, precision medicine will define disease in terms of the underlying molecular characteristics and other environmental susceptibility factors. Those expectations should be realized by constructing a novel data network, integrating clinical data from individual patients and personal genomic background with existing research on the molecular makeup of diseases. In addition, multi-omics analysis and multi-discipline collaboration will become crucial elements in precision medicine. Precision medicine deserves strong support, and its development demands directed momentum. We propose three kinds of impetus (research, application and collaboration impetus) for such directed momentum toward promoting precision medicine and accelerating its clinical translation and application.

  8. Methods of preparation of fatty acid methyl esters (FAME). Statistical assessment of the precision characteristics from a collaborative trial

    Directory of Open Access Journals (Sweden)

    Pérez-Camino, M. C.

    2000-12-01

    Full Text Available The official regulations for the control of olive and olive-pomace oils of the European Union (EU) and the International Olive Oil Council (IOOC) include the determination of fatty acids for application in several purity criteria. The determination of fatty acids requires the preparation of the fatty acid methyl esters (FAME) for subsequent analysis by gas chromatography with good precision and reproducibility. Among the methods used in the laboratories of both industry and the official institutions responsible for olive oil control, the following were selected: (1) cold methylation with methanolic potash, and (2) hot methylation with sodium methylate followed by acidification with sulphuric acid in methanol and heating. A statistical assessment of the precision characteristics of the determination of fatty acids using both methods was performed by means of a collaborative trial following the directions included in the AOAC regulation (AOAC 1995). In oils with low acidity, the results obtained for the two methylation methods were equivalent. However, the olive-pomace oil sample (acidity 15.5%) showed significant differences between the fatty acid compositions obtained with the two methylation methods. Finally, methylation with the acidic+basic method did not increase the trans-isomers of the fatty acids.

  9. Five critical elements to ensure the precision medicine.

    Science.gov (United States)

    Chen, Chengshui; He, Mingyan; Zhu, Yichun; Shi, Lin; Wang, Xiangdong

    2015-06-01

    Precision medicine, as a newly emerging area and therapeutic strategy, has been practiced in individual patients, has brought unexpected successes, and has gained high attention from both professional and social perspectives as a new path to improve the treatment and prognosis of patients. A number of new components will appear or be discovered; among them, clinical bioinformatics integrates clinical phenotypes and informatics with bioinformatics, computational science, mathematics, and systems biology. In addition to those tools, precision medicine calls for more accurate and repeatable methodologies for the identification and validation of gene discoveries. Precision medicine will bring new therapeutic strategies, drug discovery and development, and gene-oriented treatment. There is an urgent need to identify and validate disease-specific, mechanism-based, or epigenetics-dependent biomarkers to monitor precision medicine, and to develop "precision" regulations to guide its application.

  10. High precision determination of 16O in high Tc superconductors by DIGME

    International Nuclear Information System (INIS)

    Vickridge, I.; Tallon, J.; Presland, M.

    1994-01-01

    A method is described for measuring the 16 O content of high T c superconductors with better than 1% precision by exploiting the detection of gamma rays emitted when they are irradiated by an MeV deuterium beam. The method is presently less accurate than the widely used titration and thermogravimetric methods, however it is rapid, and may be applied to materials such as Tl-containing high T c superconductors which pose serious problems for the usual analytical methods. (orig.)

  11. Acute influence of the application of strength treatment based on the combinated contrast training method on precision and velocity in overarm handball throwing

    Directory of Open Access Journals (Sweden)

    Juan S. Gómez Navarrete

    2011-01-01

    Full Text Available Abstract The combination of strength training methods has been shown to be an effective way to develop strength. It is especially indicated for improving explosive strength and power. Our study examines the influence of the combined contrast training method on overarm throwing in handball. The treatment consisted of one session of the combined contrast method. Ten handball players and 13 non-players participated in the study. The instruments were a radar gun to measure throwing velocity, a camera to digitize accuracy, and an isometric dynamometer for strength data collection. The results show a significant decrease in peak force values in the players group. Another significant decrease was observed in the integral to peak force for both groups. There are significant positive relationships between throwing velocity parameters normalized to weight and size and the isometric peak force. We concluded that the isometric force-time curve is a useful instrument for observing changes in the subject's capacity to produce force during training. Keywords: Precision, velocity, overarm handball throwing, isometric test, combined contrast method

  12. Precision measurements at a muon collider

    International Nuclear Information System (INIS)

    Dawson, S.

    1995-01-01

    We discuss the potential for making precision measurements of M_W and M_T at a muon collider and the motivations for each measurement. A comparison is made with the precision measurements expected at other facilities. The measurement of the top quark decay width is also discussed.

  13. The forthcoming era of precision medicine.

    Science.gov (United States)

    Gamulin, Stjepan

    2016-11-01

    The aim of this essay is to present the definition and principles of personalized or precision medicine, the perspective and barriers to its development and clinical application. The implementation of precision medicine in health care requires the coordinated efforts of all health care stakeholders (the biomedical community, government, regulatory bodies, patients' groups). Particularly, translational research with the integration of genomic and comprehensive data from all levels of the organism ("big data"), development of bioinformatics platforms enabling network analysis of disease etiopathogenesis, development of a legislative framework for handling personal data, and new paradigms of medical education are necessary for successful application of the concept of precision medicine in health care. In the present and future era of precision medicine, the collaboration of all participants in health care is necessary for its realization, resulting in improvement of diagnosis, prevention and therapy, based on a holistic, individually tailored approach. Copyright © 2016 by Academy of Sciences and Arts of Bosnia and Herzegovina.

  14. Precise subtyping for synchronous multiparty sessions

    Directory of Open Access Journals (Sweden)

    Mariangiola Dezani-Ciancaglini

    2016-02-01

    Full Text Available The notion of subtyping has gained an important role both in theoretical and applicative domains: in lambda and concurrent calculi as well as in programming languages. The soundness and the completeness, together referred to as the preciseness of subtyping, can be considered from two different points of view: operational and denotational. The former preciseness has been recently developed with respect to type safety, i.e. the safe replacement of a term of a smaller type when a term of a bigger type is expected. The latter preciseness is based on the denotation of a type which is a mathematical object that describes the meaning of the type in accordance with the denotations of other expressions from the language. The result of this paper is the operational and denotational preciseness of the subtyping for a synchronous multiparty session calculus. The novelty of this paper is the introduction of characteristic global types to prove the operational completeness.

  15. The forthcoming era of precision medicine

    Directory of Open Access Journals (Sweden)

    Stjepan Gamulin

    2016-11-01

    Full Text Available Abstract. The aim of this essay is to present the definition and principles of personalized or precision medicine, the perspective and barriers to its development and clinical application. The implementation of precision medicine in health care requires the coordinated efforts of all health care stakeholders (the biomedical community, government, regulatory bodies, patients’ groups). Particularly, translational research with the integration of genomic and comprehensive data from all levels of the organism (“big data”), development of bioinformatics platforms enabling network analysis of disease etiopathogenesis, development of a legislative framework for handling personal data, and new paradigms of medical education are necessary for successful application of the concept of precision medicine in health care. Conclusion. In the present and future era of precision medicine, the collaboration of all participants in health care is necessary for its realization, resulting in improvement of diagnosis, prevention and therapy, based on a holistic, individually tailored approach.

  16. Accurate and emergent applications for high precision light small aerial remote sensing system

    Science.gov (United States)

    Pei, Liu; Yingcheng, Li; Yanli, Xue; Qingwu, Hu; Xiaofeng, Sun

    2014-03-01

    In this paper, we focus on successful applications of accurate and emergent surveying and mapping with a high-precision, light and small aerial remote sensing system. First, the structure of the remote sensing system and its three integrated operation modes are introduced; the system can be configured in three operation modes depending on the application requirements. Second, we describe the preliminary results of a precision validation method for POS direct orientation in 1:500 mapping. Third, two fast-response mapping products are presented, a regional continuous three-dimensional model and a digital surface model, with the efficiency and accuracy evaluation of the two products as an important point. The precision of both products meets the 1:2 000 topographic map accuracy specifications in the Pingdingshan area. Finally, conclusions and future work are summarized.

  17. Accurate and emergent applications for high precision light small aerial remote sensing system

    International Nuclear Information System (INIS)

    Pei, Liu; Yingcheng, Li; Yanli, Xue; Xiaofeng, Sun; Qingwu, Hu

    2014-01-01

    In this paper, we focus on successful applications of accurate and emergent surveying and mapping with a high-precision, light and small aerial remote sensing system. First, the structure of the remote sensing system and its three integrated operation modes are introduced; the system can be configured in three operation modes depending on the application requirements. Second, we describe the preliminary results of a precision validation method for POS direct orientation in 1:500 mapping. Third, two fast-response mapping products are presented, a regional continuous three-dimensional model and a digital surface model, with the efficiency and accuracy evaluation of the two products as an important point. The precision of both products meets the 1:2 000 topographic map accuracy specifications in the Pingdingshan area. Finally, conclusions and future work are summarized.

  18. Precise pointing knowledge for SCIAMACHY solar occultation measurements

    Directory of Open Access Journals (Sweden)

    K. Bramstedt

    2012-11-01

    Full Text Available We present a method to precisely determine the viewing direction for solar occultation instruments from scans over the solar disk. The basic idea is to fit the maximum intensity during the scan, which corresponds to the center of the solar disk in the scanning direction. We apply this method to the solar occultation measurements of the satellite instrument SCIAMACHY, which scans the Sun in the elevation direction. The achieved mean precision is 0.46 mdeg, which corresponds to a tangent height error of about 26 m for individual occultation sequences. The deviation of the derived elevation angle from the geolocation information given along with the product has a seasonal cycle with an amplitude of 2.26 mdeg, which corresponds in tangent height to an amplitude of about 127 m. The mean elevation angle offset is −4.41 mdeg (249 m). SCIAMACHY's sun follower device controls the azimuth viewing direction during the occultation measurements. The derived mean azimuth direction has a standard error of 0.65 mdeg, which is about 36 m in the horizontal direction at the tangent point. We also observe a seasonal cycle of the azimuth mispointing with an amplitude of 2.3 mdeg, which is slightly increasing with time. The almost constant mean offset is 88 mdeg, which is about 5.0 km horizontal offset at the tangent point.
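
    The core of the method described above is locating the elevation at which the scanned solar intensity peaks. A simple way to do this, sketched below under the assumption of a smooth, roughly parabolic intensity maximum, is a least-squares parabola fit around the brightest samples; the scan data here are simulated, not SCIAMACHY measurements.

        import numpy as np

        def scan_center(elevation_deg, intensity, half_window=5):
            """Estimate the elevation of maximum intensity by a local parabola fit."""
            i_max = int(np.argmax(intensity))
            lo = max(0, i_max - half_window)
            hi = min(len(intensity), i_max + half_window + 1)
            x, y = elevation_deg[lo:hi], intensity[lo:hi]
            a, b, c = np.polyfit(x, y, 2)          # y = a*x^2 + b*x + c
            return -b / (2.0 * a)                  # vertex of the parabola

        # Simulated scan over the solar disk (smooth brightness profile plus noise)
        elev = np.linspace(-0.5, 0.5, 201)         # degrees
        true_center = 0.0123
        signal = np.exp(-((elev - true_center) / 0.25) ** 2)
        signal += np.random.default_rng(0).normal(0, 0.005, elev.size)

        print(f"fitted center: {scan_center(elev, signal):+.4f} deg (true {true_center:+.4f})")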

  19. Precision siting of a particle accelerator; Locacao de precisao de um acelerador de particulas

    Energy Technology Data Exchange (ETDEWEB)

    Cintra, Jorge Pimentel

    1996-07-01

    Precise location is a specialized survey task that requires highly skilled work to avoid unrecoverable errors during project installation. Depending on the process stage, different specifications apply, calling for different instruments: theodolite, measuring tape, distance meter, invar wire. This paper, based on experience gained during the installation of particle accelerator equipment, deals with the general principles of precise location: tolerance definitions, techniques for increasing accuracy, scheduling of locations, sensitivity analysis, and quality control methods. (author)

  20. Epistemology, Ethics, and Progress in Precision Medicine.

    Science.gov (United States)

    Hey, Spencer Phillips; Barsanti-Innes, Brianna

    2016-01-01

    The emerging paradigm of precision medicine strives to leverage the tools of molecular biology to prospectively tailor treatments to the individual patient. Fundamental to the success of this movement is the discovery and validation of "predictive biomarkers," which are properties of a patient's biological specimens that can be assayed in advance of therapy to inform the treatment decision. Unfortunately, research into biomarkers and diagnostics for precision medicine has fallen well short of expectations. In this essay, we examine the portfolio of research activities into the excision repair cross complement group 1 (ERCC1) gene as a predictive biomarker for precision lung cancer therapy as a case study in elucidating the epistemological and ethical obstacles to developing new precision medicines.

  1. Precision Clock Evaluation Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Tests and evaluates high-precision atomic clocks for spacecraft, ground, and mobile applications. Supports performance evaluation, environmental testing,...

  2. High precision ray tracing in cylindrically symmetric electrostatics

    Energy Technology Data Exchange (ETDEWEB)

    Edwards Jr, David, E-mail: dej122842@gmail.com

    2015-11-15

    Highlights: • High precision ray tracing is formulated using power series techniques. • Ray tracing is possible for fields generated by solution to Laplace's equation. • Spatial and temporal orders of 4–10 are included. • Precisions in test geometries of the hemispherical deflector analyzer of ∼10⁻²⁰ have been obtained. • This solution offers a considerable extension to the ray tracing accuracy over the current state of the art. - Abstract: With the recent availability of a high order FDM solution to the curved boundary value problem, it is now possible to determine potentials in such geometries with considerably greater accuracy than had been available with the FDM method. In order for the algorithms used in the accurate potential calculations to be useful in ray tracing, an integration of those algorithms needs to be placed into the ray trace process itself. The object of this paper is to incorporate these algorithms into a solution of the equations of motion of the ray and, having done this, to demonstrate its efficacy. The algorithm incorporation has been accomplished by using power series techniques and the solution constructed has been tested by tracing the medial ray through concentric sphere geometries. The testing has indicated that precisions of ray calculations of 10⁻²⁰ are now possible. This solution offers a considerable extension to the ray tracing accuracy over the current state of the art.
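
    As a hedged illustration of the power-series integration mentioned above, the toy stepper below advances a trajectory with a truncated Taylor series whose coefficients are generated recursively. A simple harmonic restoring force stands in for the actual field evaluation (which in the paper comes from the high-order FDM potential), so this is only a sketch of the numerical idea, not the author's ray tracer.

        import math

        def taylor_step(x, v, h, order=10):
            """One Taylor-series step for x'' = -x (toy stand-in for the ray equations)."""
            c = [x, v]
            for k in range(order - 1):
                c.append(-c[k] / ((k + 1) * (k + 2)))        # recurrence from x'' = -x
            x_new = sum(ck * h**k for k, ck in enumerate(c))
            v_new = sum(k * ck * h**(k - 1) for k, ck in enumerate(c) if k >= 1)
            return x_new, v_new

        x, v, t, h = 1.0, 0.0, 0.0, 0.1
        for _ in range(100):
            x, v = taylor_step(x, v, h)
            t += h
        # error vs the exact solution cos(t); near machine precision at order 10
        print(f"error after t={t:.1f}: {abs(x - math.cos(t)):.2e}")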

  3. Outlook for the Next Generation’s Precision Forestry in Finland

    Directory of Open Access Journals (Sweden)

    Markus Holopainen

    2014-07-01

    Full Text Available During the past decade in forest mapping and monitoring applications, the ability to acquire spatially accurate, 3D remote-sensing information by means of laser scanning, digital stereo imagery and radar imagery has been a major turning point. These 3D data sets that use single- or multi-temporal point clouds enable a wide range of applications when combined with other geoinformation and logging machine-measured data. New technologies enable precision forestry, which can be defined as a method to accurately determine characteristics of forests and treatments at stand, sub-stand or individual tree level. In precision forestry, even individual tree-level assessments can be used for simulation and optimization models of the forest management decision support system. At the moment, the forest industry in Finland is looking forward to next generation’s forest inventory techniques to improve the current wood procurement practices. Our vision is that in the future, the data solution for detailed forest management and wood procurement will be to use multi-source and -sensor information. In this communication, we review our recent findings and describe our future vision in precision forestry research in Finland.

  4. A Fast GPU-accelerated Mixed-precision Strategy for Fully NonlinearWater Wave Computations

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter; Madsen, Morten G.

    2011-01-01

    We present performance results of a mixed-precision strategy developed to improve a recently developed massively parallel GPU-accelerated tool for fast and scalable simulation of unsteady fully nonlinear free surface water waves over uneven depths (Engsig-Karup et.al. 2011). The underlying wave......-preconditioned defect correction method. The improved strategy improves the performance by exploiting architectural features of modern GPUs for mixed precision computations and is tested in a recently developed generic library for fast prototyping of PDE solvers. The new wave tool is applicable to solve and analyze...
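
    The mixed-precision defect-correction strategy referred to above can be illustrated on a small dense linear system: the preconditioner (here an LU factorization) is applied in single precision while the defect (residual) is computed in double precision. This is a schematic CPU sketch of the idea, not the GPU wave solver described in the abstract; the test matrix and tolerances are arbitrary.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(1)
        n = 200
        A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned test matrix
        b = rng.standard_normal(n)

        lu32 = lu_factor(A.astype(np.float32))             # low-precision preconditioner

        x = np.zeros(n)
        for it in range(10):
            r = b - A @ x                                  # defect (residual) in float64
            if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
                break
            dx = lu_solve(lu32, r.astype(np.float32))      # correction in float32
            x += dx.astype(np.float64)

        print(it, np.linalg.norm(b - A @ x) / np.linalg.norm(b))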

  5. High Precision Optical Observations of Space Debris in the Geo Ring from Venezuela

    Science.gov (United States)

    Lacruz, E.; Abad, C.; Downes, J. J.; Casanova, D.; Tresaco, E.

    2018-01-01

    We present preliminary results to demonstrate that our method for detection and location of Space Debris (SD) in the geostationary Earth orbit (GEO) ring, based on observations at the OAN of Venezuela is of high astrometric precision. A detailed explanation of the method, its validation and first results is available in (Lacruz et al. 2017).

  6. Determination of the antiproton-to-electron mass ratio by precision laser spectroscopy of $\\overline{p}He^{+}$

    CERN Document Server

    Hori, M; Eades, John; Gomikawa, K; Hayano, R S; Ono, N; Pirkl, Werner; Widmann, E; Torii, H A; Juhász, B; Barna, D; Horváth, D

    2006-01-01

    A femtosecond optical frequency comb and continuous-wave pulse-amplified laser were used to measure 12 transition frequencies of antiprotonic helium to fractional precisions of (9-16) × 10⁻⁹, including transitions involving states with lifetimes hitherto unaccessible to our precision laser spectroscopy method. Comparisons with three-body QED calculations yielded an antiproton-to-electron mass ratio of M_p̄/m_e = 1836.152 674(5).

  7. High precision measurements of the luminosity at LEP

    International Nuclear Information System (INIS)

    Pietrzyk, B.

    1994-01-01

    The art of luminosity measurements at LEP is presented. First generation LEP detectors have measured the absolute luminosity with a precision of 0.3-0.5%. The most precise present detectors have reached a precision of 0.07%, and 0.05% is not excluded in the future. The center-of-mass energy dependent relative precision of the luminosity detectors and the use of the theoretical cross-section in the LEP experiments are also discussed. (author). 18 refs., 6 figs., 6 tabs

  8. Introduction to precision machine design and error assessment

    CERN Document Server

    Mekid, Samir

    2008-01-01

    While ultra-precision machines are now achieving sub-nanometer accuracy, unique challenges continue to arise due to their tight specifications. Written to meet the growing needs of mechanical engineers and other professionals to understand these specialized design process issues, Introduction to Precision Machine Design and Error Assessment places a particular focus on the errors associated with precision design, machine diagnostics, error modeling, and error compensation. Error Assessment and Control: The book begins with a brief overview of precision engineering and applications before introdu

  9. Precision measurement of electric organ discharge timing from freely moving weakly electric fish.

    Science.gov (United States)

    Jun, James J; Longtin, André; Maler, Leonard

    2012-04-01

    Physiological measurements from an unrestrained, untethered, and freely moving animal permit analyses of neural states correlated to naturalistic behaviors of interest. Precise and reliable remote measurements remain technically challenging due to animal movement, which perturbs the relative geometries between the animal and sensors. Pulse-type electric fish generate a train of discrete and stereotyped electric organ discharges (EOD) to sense their surroundings actively, and rapid modulation of the discharge rate occurs while free swimming in Gymnotus sp. The modulation of EOD rates is a useful indicator of the fish's central state such as resting, alertness, and learning associated with exploration. However, the EOD pulse waveforms remotely observed at a pair of dipole electrodes continuously vary as the fish swims relative to the electrodes, which biases the judgment of the actual pulse timing. To measure the EOD pulse timing more accurately, reliably, and noninvasively from a free-swimming fish, we propose a novel method based on the principles of waveform reshaping and spatial averaging. Our method is implemented using envelope extraction and multichannel summation, which is more precise and reliable compared with other widely used threshold- or peak-based methods according to the tests performed under various source-detector geometries. Using the same method, we constructed a real-time electronic pulse detector performing an additional online pulse discrimination routine to enhance further the detection reliability. Our stand-alone pulse detector performed with high temporal precision (<10 μs) and reliability (error <1 per 10⁶ pulses) and permits longer recording duration by storing only event time stamps (4 bytes/pulse).
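
    A minimal sketch of the waveform-reshaping and spatial-averaging principle described above: each channel is reduced to an amplitude envelope (here via a Hilbert transform), the envelopes are summed across channels, and pulse times are taken from peaks of the summed envelope. The signals, sampling rate and thresholds below are synthetic assumptions for illustration, not the authors' implementation.

        import numpy as np
        from scipy.signal import hilbert, find_peaks

        fs = 50_000.0                                   # sampling rate (Hz), assumed
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(2)

        # Synthetic biphasic pulses observed on 4 channels with varying polarity/amplitude
        pulse_times = np.arange(0.05, 1.0, 0.025)
        channels = []
        for gain in (1.0, -0.7, 0.4, -1.2):
            x = rng.normal(0, 0.02, t.size)
            for tp in pulse_times:
                idx = int(tp * fs)
                x[idx:idx + 50] += gain * np.sin(2 * np.pi * 1000 * t[:50]) * np.hanning(50)
            channels.append(x)

        # Envelope extraction per channel, then multichannel summation
        envelope_sum = sum(np.abs(hilbert(x)) for x in channels)

        peaks, _ = find_peaks(envelope_sum, height=0.5, distance=int(0.01 * fs))
        detected = peaks / fs
        print(f"detected {detected.size} pulses, first at {detected[0] * 1000:.2f} ms")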

  10. Precision of DVC approaches for strain analysis in bone imaged with μCT at different dimensional levels.

    Science.gov (United States)

    Dall'Ara, Enrico; Peña-Fernández, Marta; Palanca, Marco; Giorgi, Mario; Cristofolini, Luca; Tozzi, Gianluca

    2017-11-01

    Accurate measurement of local strain in heterogeneous and anisotropic bone tissue is fundamental to understand the pathophysiology of musculoskeletal diseases, to evaluate the effect of interventions from preclinical studies, and to optimize the design and delivery of biomaterials. Digital volume correlation (DVC) can be used to measure the three-dimensional displacement and strain fields from micro-Computed Tomography (µCT) images of loaded specimens. However, this approach is affected by the quality of the input images, by the morphology and density of the tissue under investigation, by the correlation scheme, and by the operational parameters used in the computation. Therefore, for each application the precision of the method should be evaluated. In this paper we present the results collected from datasets analyzed in previous studies as well as new data from a recent experimental campaign for characterizing the relationship between the precision of two different DVC approaches and the spatial resolution of the outputs. Different bone structures scanned with laboratory source µCT or Synchrotron light µCT (SRµCT) were processed in zero-strain tests to evaluate the precision of the DVC methods as a function of the subvolume size that ranged from 8 to 2500 micrometers. The results confirmed that for every microstructure the precision of DVC improves for larger subvolume size, following power laws. However, for the first time large differences in the precision of both local and global DVC approaches have been highlighted when SRµCT or in vivo µCT images were used instead of conventional ex vivo µCT. These findings suggest that in situ mechanical testing protocols applied in SRµCT facilities should be optimized in order to allow DVC analyses of localized strain measurements. Moreover, for in vivo µCT applications DVC analyses should be performed only with relatively coarse spatial resolution for achieving a reasonable precision of the method. In conclusion
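
    The power-law relation reported above between DVC precision and subvolume size can be recovered from zero-strain data by a straight-line fit in log-log space. The numbers below are invented purely to show the fitting step and do not reproduce the study's values.

        import numpy as np

        # Hypothetical zero-strain precision (microstrain) vs subvolume size (micrometers)
        subvolume = np.array([16, 24, 32, 48, 64, 96], dtype=float)
        precision = np.array([850., 430., 260., 130., 80., 42.])

        # Fit precision = a * subvolume**b via linear regression on the logarithms
        b, log_a = np.polyfit(np.log(subvolume), np.log(precision), 1)
        a = np.exp(log_a)
        print(f"precision ≈ {a:.1f} * size^{b:.2f}")   # b < 0: larger subvolumes give better precision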

  11. Precise determination of lattice phase shifts and mixing angles

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Bing-Nan, E-mail: b.lu@fz-juelich.de [Institute for Advanced Simulation, Institut für Kernphysik, and Jülich Center for Hadron Physics, Forschungszentrum Jülich, D-52425 Jülich (Germany); Lähde, Timo A. [Institute for Advanced Simulation, Institut für Kernphysik, and Jülich Center for Hadron Physics, Forschungszentrum Jülich, D-52425 Jülich (Germany); Lee, Dean [Department of Physics, North Carolina State University, Raleigh, NC 27695 (United States); Meißner, Ulf-G. [Helmholtz-Institut für Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Universität Bonn, D-53115 Bonn (Germany); Institute for Advanced Simulation, Institut für Kernphysik, and Jülich Center for Hadron Physics, Forschungszentrum Jülich, D-52425 Jülich (Germany); JARA – High Performance Computing, Forschungszentrum Jülich, D-52425 Jülich (Germany)

    2016-09-10

    We introduce a general and accurate method for determining lattice phase shifts and mixing angles, which is applicable to arbitrary, non-cubic lattices. Our method combines angular momentum projection, spherical wall boundaries and an adjustable auxiliary potential. This allows us to construct radial lattice wave functions and to determine phase shifts at arbitrary energies. For coupled partial waves, we use a complex-valued auxiliary potential that breaks time-reversal invariance. We benchmark our method using a system of two spin-1/2 particles interacting through a finite-range potential with a strong tensor component. We are able to extract phase shifts and mixing angles for all angular momenta and energies, with precision greater than that of extant methods. We discuss a wide range of applications from nuclear lattice simulations to optical lattice experiments.

  12. Defining precision: The precision medicine initiative trials NCI-MPACT and NCI-MATCH.

    Science.gov (United States)

    Coyne, Geraldine O'Sullivan; Takebe, Naoko; Chen, Alice P

    "Precision" trials, using rationally incorporated biomarker targets and molecularly selective anticancer agents, have become of great interest to both patients and their physicians. In the endeavor to test the cornerstone premise of precision oncotherapy, that is, determining if modulating a specific molecular aberration in a patient's tumor with a correspondingly specific therapeutic agent improves clinical outcomes, the design of clinical trials with embedded genomic characterization platforms which guide therapy are an increasing challenge. The National Cancer Institute Precision Medicine Initiative is an unprecedented large interdisciplinary collaborative effort to conceptualize and test the feasibility of trials incorporating sequencing platforms and large-scale bioinformatics processing that are not currently uniformly available to patients. National Cancer Institute-Molecular Profiling-based Assignment of Cancer Therapy and National Cancer Institute-Molecular Analysis for Therapy Choice are 2 genomic to phenotypic trials under this National Cancer Institute initiative, where treatment is selected according to predetermined genetic alterations detected using next-generation sequencing technology across a broad range of tumor types. In this article, we discuss the objectives and trial designs that have enabled the public-private partnerships required to complete the scale of both trials, as well as interim trial updates and strategic considerations that have driven data analysis and targeted therapy assignment, with the intent of elucidating further the benefits of this treatment approach for patients. Copyright © 2017. Published by Elsevier Inc.

  13. More precise determination of work function based on Fermi–Dirac distribution and Fowler formula

    International Nuclear Information System (INIS)

    Changshi, Liu

    2014-01-01

    A more precise numerical method to simulate the current–voltage characteristics of a metal at fixed temperature is presented in this paper. The new algorithm for the simulation has been developed from the Fermi–Dirac distribution step by step. The calculated characteristics are shown to remain in excellent agreement with the experimental ones, taken for a range of different metals, which strongly supports the validity of the model. It is also shown that, based on the Fowler formula, the work function can be determined with higher precision.

  14. Calibration of gyro G-sensitivity coefficients with FOG monitoring on precision centrifuge

    Science.gov (United States)

    Lu, Jiazhen; Yang, Yanqiang; Li, Baoguo; Liu, Ming

    2017-07-01

    The advantages of mechanical gyros, such as high precision, endurance and reliability, make them widely used as the core parts of inertial navigation systems (INS) utilized in the fields of aeronautics, astronautics and underground exploration. In a high-g environment, the accuracy of gyros is degraded. Therefore, the calibration and compensation of the gyro G-sensitivity coefficients is essential when the INS operates in a high-g environment. A precision centrifuge with a counter-rotating platform is the typical equipment for calibrating the gyro, as it can generate large centripetal acceleration and keep the angular rate close to zero; however, its performance is seriously restricted by the angular perturbation in the high-speed rotating process. To reduce the dependence on the precision of the centrifuge and counter-rotating platform, an effective calibration method for the gyro g-sensitivity coefficients under fiber-optic gyroscope (FOG) monitoring is proposed herein. The FOG can efficiently compensate spindle error and improve the anti-interference ability. Harmonic analysis is performed for data processing. Simulations show that the gyro G-sensitivity coefficients can be efficiently estimated to up to 99% of the true value and compensated using a lookup table or fitting method. Repeated tests indicate that the G-sensitivity coefficients can be correctly calibrated when the angular rate accuracy of the precision centrifuge is as low as 0.01%. Verification tests are performed to demonstrate that the attitude errors can be decreased from 0.36° to 0.08° in 200 s. The proposed measuring technology is generally applicable in engineering, as it can reduce the accuracy requirements for the centrifuge and the environment.
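
    The harmonic analysis mentioned above can be sketched as a least-squares fit of a constant plus first-harmonic terms to the gyro output recorded against the centrifuge angle, with the G-sensitivity coefficients appearing in the harmonic amplitudes. The model, coefficient names and numbers below are illustrative assumptions, not the authors' calibration equations.

        import numpy as np

        rng = np.random.default_rng(3)
        theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)   # centrifuge angle (rad)
        a_c = 50.0                                               # centripetal acceleration (g), assumed

        # Simulated gyro output: bias + G-sensitivity along two body axes + noise (deg/h)
        bias_true, kx_true, ky_true = 0.5, 0.020, -0.012
        omega = bias_true + a_c * (kx_true * np.cos(theta) + ky_true * np.sin(theta))
        omega += rng.normal(0, 0.05, theta.size)

        # Harmonic (first-order Fourier) least-squares fit
        H = np.column_stack([np.ones_like(theta), a_c * np.cos(theta), a_c * np.sin(theta)])
        bias, kx, ky = np.linalg.lstsq(H, omega, rcond=None)[0]
        print(f"kx = {kx:.4f}  ky = {ky:.4f}  (true {kx_true:.4f}, {ky_true:.4f})")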

  15. Precision Medicine-Nobody Is Average.

    Science.gov (United States)

    Vinks, A A

    2017-03-01

    Medicine gets personal and tailor-made treatments are underway. Hospitals have started to advertise their advanced genomic testing capabilities and even their disruptive technologies to help foster a culture of innovation. The prediction in the lay press is that in decades from now we may look back and see 2017 as the year precision medicine blossomed. It is all part of the Precision Medicine Initiative that takes into account individual differences in people's genes, environments, and lifestyles. © 2017 ASCPT.

  16. Precise delay measurement through combinatorial logic

    Science.gov (United States)

    Burke, Gary R. (Inventor); Chen, Yuan (Inventor); Sheldon, Douglas J. (Inventor)

    2010-01-01

    A high resolution circuit and method for facilitating precise measurement of on-chip delays for FPGAs for reliability studies. The circuit embeds a pulse generator on an FPGA chip having one or more groups of LUTS (the "LUT delay chain"), also on-chip. The circuit also embeds a pulse width measurement circuit on-chip, and measures the duration of the generated pulse through the delay chain. The pulse width of the output pulse represents the delay through the delay chain without any I/O delay. The pulse width measurement circuit uses an additional asynchronous clock autonomous from the main clock and the FPGA propagation delay can be displayed on a hex display continuously for testing purposes.

  17. Atmospheric Attenuation Correction Based on a Constant Reference for High-Precision Infrared Radiometry

    Directory of Open Access Journals (Sweden)

    Zhiguo Huang

    2017-11-01

    Full Text Available Infrared (IR radiometry technology is an important method for characterizing the IR signature of targets, such as aircrafts or rockets. However, the received signal of targets could be reduced by a combination of atmospheric molecule absorption and aerosol scattering. Therefore, atmospheric correction is a requisite step for obtaining the real radiance of targets. Conventionally, the atmospheric transmittance and the air path radiance are calculated by an atmospheric radiative transfer calculation software. In this paper, an improved IR radiometric method based on constant reference correction of atmospheric attenuation is proposed. The basic principle and procedure of this method are introduced, and then the linear model of high-speed calibration in consideration of the integration time is employed and confirmed, which is then applicable in various complex conditions. To eliminate stochastic errors, radiometric experiments were conducted for multiple integration times. Finally, several experiments were performed on a mid-wave IR system with Φ600 mm aperture. The radiometry results indicate that the radiation inversion precision of the novel method is 4.78–4.89%, while the precision of the conventional method is 10.86–13.81%.
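
    The two processing steps named above, a linear radiometric calibration that accounts for integration time and an atmospheric correction of the measured radiance, can be sketched generically as follows. The calibration model DN = G·L·t + b and the correction L_target = (L_at_sensor − L_path)/τ are standard textbook forms used here with invented numbers; they are not the paper's constant-reference formulation.

        import numpy as np

        # Calibration: blackbody radiances L (W·m^-2·sr^-1) at several integration times t (ms)
        L_cal = np.array([10., 20., 30., 10., 20., 30.])
        t_cal = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
        DN    = np.array([1100., 2100., 3100., 2150., 4150., 6150.])   # synthetic detector counts

        # Fit DN = G * L * t + b (gain G, offset b) by linear least squares
        A = np.column_stack([L_cal * t_cal, np.ones_like(DN)])
        (G, b), *_ = np.linalg.lstsq(A, DN, rcond=None)

        # Inversion for a target measured at integration time t_m, then atmospheric correction
        DN_target, t_m = 3300.0, 1.5
        L_at_sensor = (DN_target - b) / (G * t_m)
        tau, L_path = 0.82, 1.4          # atmospheric transmittance and path radiance (assumed)
        L_target = (L_at_sensor - L_path) / tau
        print(f"G={G:.1f}, b={b:.1f}, target radiance ≈ {L_target:.2f} W·m^-2·sr^-1")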

  18. A passion for precision

    CERN Multimedia

    CERN. Geneva

    2006-01-01

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  19. Development of precision elliptic neutron-focusing supermirror.

    Science.gov (United States)

    Hosobata, Takuya; Yamada, Norifumi L; Hino, Masahiro; Yamagata, Yutaka; Kawai, Toshihide; Yoshinaga, Hisao; Hori, Koichiro; Takeda, Masahiro; Takeda, Shin; Morita, Shin-Ya

    2017-08-21

    This paper details methods for the precision design and fabrication of neutron-focusing supermirrors, based on electroless nickel plating. We fabricated an elliptic mirror for neutron reflectometry, which is our second mirror improved from the first. The mirror is a 550-millimeter-long segmented mirror assembled using kinematic couplings, with each segment figured by diamond cutting, polished using colloidal silica, and supermirror coated through ion-beam sputtering. The mirror was evaluated with neutron beams, and the reflectivity was found to be 68-90% at a critical angle. The focusing width was 0.17 mm at the full width at half maximum.

  20. Precise measurement of muon momenta at LEP using the L3 detector

    International Nuclear Information System (INIS)

    Gonzalez Romero, E.M.

    1990-01-01

    In this PhD report the author presents the studies and methods developed to optimize the resolution of the momentum measurement in the L3 muon detector. Chapters 1 and 2 show the motivations for building a precision muon detector for the LEP e+e− collider. Special emphasis is placed on the study of the search for and identification of the Higgs scalar boson, and the guiding principles used to design the L3 muon detector are outlined. Chapter 3 is devoted to the description of the drift chambers. They are located in three concentric octagonal cylinders inside a solenoidal magnet, around the interaction point and coaxial with the beams. These chambers are the measuring elements of the detector. The chapter includes the description of the different tests applied to the chambers to obtain their resolution and calibration. In chapter 4 the alignment system of these chambers is described. This system is a key element of the precision of the detector, which, being 12 meters long and 12 meters in diameter, has to measure particle trajectories with a precision of just a few micrometers. Chapter 5 describes the third key element of the detector precision, the monitoring and control system, which makes it possible to continuously track the precise values of the critical parameters of the detector. Finally, in chapter 6 the author presents the results of the many tests applied to the detector using cosmic rays, UV lasers and the actual muons produced in e+e− interactions. These tests prove that the L3 muon detector is the most precise measuring system for muon momenta installed at present in an e+e− collider ring. (Author)

  1. A precise time synchronization method for 5G based on radio-over-fiber network with SDN controller

    Science.gov (United States)

    He, Linkuan; Wei, Baoguo; Yang, Hui; Yu, Ao; Wang, Zhengyong; Zhang, Jie

    2018-02-01

    There is an increasing demand for accurate time synchronization with the growing bandwidth of network services for 5G. In a 5G network, it is necessary for base stations to achieve accurate time synchronization to guarantee the quality of communication. In order to keep accurate time in a 5G network, we propose a time synchronization system for satellite ground stations based on a radio-over-fiber network (RoFN) with a software-defined optical network (SDON) controller. The advantage of this method is to improve the accuracy of time synchronization of the ground station. The IEEE 1588 time synchronization protocol can address the problems of high cost and lack of precision; however, in the process of time synchronization, distortion exists during the transmission of the digital time signal. RoF uses analog optical transmission links, and therefore analog transmission can be implemented among ground stations instead of digital transmission, which means that the distortion and bandwidth waste of digital synchronization can be avoided. Additionally, the concept of SDN (software-defined networking) can optimize RoFN through centralized control and simplified base stations. Related simulations have been carried out to demonstrate its advantages.

  2. Problems, challenges and promises: perspectives on precision medicine.

    Science.gov (United States)

    Duffy, David J

    2016-05-01

    The 'precision medicine (systems medicine)' concept promises to achieve a shift to future healthcare systems with a more proactive and predictive approach to medicine, where the emphasis is on disease prevention rather than the treatment of symptoms. The individualization of treatment for each patient will be at the centre of this approach, with all of a patient's medical data being computationally integrated and accessible. Precision medicine is being rapidly embraced by biomedical researchers, pioneering clinicians and scientific funding programmes in both the European Union (EU) and USA. Precision medicine is a key component of both Horizon 2020 (the EU Framework Programme for Research and Innovation) and the White House's Precision Medicine Initiative. Precision medicine promises to revolutionize patient care and treatment decisions. However, the participants in precision medicine are faced with a considerable central challenge. Greater volumes of data from a wider variety of sources are being generated and analysed than ever before; yet, this heterogeneous information must be integrated and incorporated into personalized predictive models, the output of which must be intelligible to non-computationally trained clinicians. Drawing primarily from the field of 'oncology', this article will introduce key concepts and challenges of precision medicine and some of the approaches currently being implemented to overcome these challenges. Finally, this article also covers the criticisms of precision medicine overpromising on its potential to transform patient care. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  3. Pb and Sr isotope measurements by inductively coupled plasma mass spectrometer: efficient time management for precision improvement

    Science.gov (United States)

    Monna, F.; Loizeau, J.-L.; Thomas, B. A.; Guéguen, C.; Favarger, P.-Y.

    1998-08-01

    One of the factors limiting the precision of inductively coupled plasma mass spectrometry is the counting statistics, which depend upon acquisition time and ion fluxes. In the present study, the precision of the isotopic measurements of Pb and Sr is examined. The time of measurement is optimally shared for each isotope, using a mathematical simulation, to provide the lowest theoretical analytical error. Different algorithms of mass bias correction are also taken into account and evaluated in term of improvement of overall precision. Several experiments allow a comparison of real conditions with theory. The present method significantly improves the precision, regardless of the instrument used. However, this benefit is more important for equipment which originally yields a precision close to that predicted by counting statistics. Additionally, the procedure is flexible enough to be easily adapted to other problems, such as isotopic dilution.
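
    Under pure counting statistics the relative variance of an isotope ratio is 1/N_a + 1/N_b, and minimizing it for a fixed total acquisition time gives dwell times in inverse proportion to the square roots of the count rates, i.e. more time on the weaker beam. The sketch below illustrates this optimization with arbitrary example rates; it is a generic counting-statistics calculation, not the authors' simulation.

        import numpy as np

        r_a, r_b = 5.0e5, 2.0e4        # ion count rates (counts/s), example values
        T = 60.0                        # total acquisition time (s)

        # Closed-form optimum: t_a / t_b = sqrt(r_b / r_a)
        t_a = T / (1.0 + np.sqrt(r_a / r_b))
        t_b = T - t_a

        def rsd(ta, tb):
            """Relative standard deviation of the ratio from counting statistics."""
            return np.sqrt(1.0 / (r_a * ta) + 1.0 / (r_b * tb))

        print(f"optimal split: t_a={t_a:.1f}s t_b={t_b:.1f}s  RSD={100 * rsd(t_a, t_b):.4f}%")
        print(f"equal split  : RSD={100 * rsd(T / 2, T / 2):.4f}%")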

  4. Knowledge of Precision Farming Beneficiaries

    Directory of Open Access Journals (Sweden)

    A.V. Greena

    2016-05-01

    Full Text Available Precision farming is one of the many advanced farming practices that make production more efficient through better resource management and reduced wastage. TN-IAMWARM is a World Bank funded project that aims to improve farm productivity and income through better water management. The present study was carried out in the Kambainallur sub-basin of Dharmapuri district with 120 TN-IAMWARM beneficiaries as respondents. The results indicated that more than three fourths (76.67%) of the respondents had a high level of knowledge of precision farming technologies, which was made possible by the implementation of the TN-IAMWARM project. The study further revealed that educational status, occupational status and exposure to agricultural messages made a positive and significant contribution to the knowledge level of the respondents at the 0.01 level of probability, whereas experience in precision farming and social participation made a positive and significant contribution at the 0.05 level of probability.

  5. Ageing influence for the evaluation of DXA precision in female subjects

    International Nuclear Information System (INIS)

    Lin Qiang; Yu Wei; Qin Mingwei; Shang Wei; Tian Junping; Han Shaomei

    2006-01-01

    Objective: To investigate whether aging influences the precision of DXA measurement at the lumbar spine in females. Methods: A total of 90 female subjects were recruited and divided into three age groups, i.e. 45-55 years, 56-65 years and 66-75 years, with 30 subjects in each group. Each subject was scanned twice on the same day. Mean BMD values from L2 to L4 were collected, and precision errors were expressed as the root mean square (RMS). Results: Mean BMD values from L2 to L4 decreased with increasing age, with values of (0.992±0.010) g/cm² and (0.910±0.010) g/cm² in the 56-65 and 66-75 year groups, respectively. The RMS precision error was lower in the 45-55 year group and was the same in the 56-65 and 66-75 year groups. There was a significant difference in the BMD standard deviation among the groups (F=5.213, P<0.05; q test: I vs II 0.035, II vs III 0.500, I vs III 0.035, P<0.05). Conclusion: Age can influence the precision of DXA measurement at the lumbar spine in females. Therefore, attention should be paid to the age of the female subjects recruited for the evaluation of DXA measurement precision in clinical trials. (authors)
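
    The short-term precision error quoted in studies of this kind is conventionally computed as the root mean square of the within-subject standard deviations over the m subjects of a group; for duplicate scans the per-subject SD reduces to the paired difference divided by the square root of two. These are the standard formulas, stated here for reference rather than taken from this paper.

        \mathrm{SD}_i = \frac{\lvert x_{i,1} - x_{i,2} \rvert}{\sqrt{2}} \quad\text{(duplicate scans)},
        \qquad
        \mathrm{RMS\,SD} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\mathrm{SD}_i^{2}},
        \qquad
        \mathrm{RMS\,CV} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\frac{\mathrm{SD}_i}{\bar{x}_i}\right)^{2}} \times 100\%.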

  6. Towards Precision Medicine in the Clinic: From Biomarker Discovery to Novel Therapeutics.

    Science.gov (United States)

    Collins, Dearbhaile C; Sundar, Raghav; Lim, Joline S J; Yap, Timothy A

    2017-01-01

    Precision medicine continues to be the benchmark to which we strive in cancer research. Seeking out actionable aberrations that can be selectively targeted by drug compounds promises to optimize treatment efficacy and minimize toxicity. Utilizing these different targeted agents in combination or in sequence may further delay resistance to treatments and prolong antitumor responses. Remarkable progress in the field of immunotherapy adds another layer of complexity to the management of cancer patients. Corresponding advances in companion biomarker development, novel methods of serial tumor assessments, and innovative trial designs act synergistically to further precision medicine. Ongoing hurdles such as clonal evolution, intra- and intertumor heterogeneity, and varied mechanisms of drug resistance continue to be challenges to overcome. Large-scale data-sharing and collaborative networks using next-generation sequencing (NGS) platforms promise to take us further into the cancer 'ome' than ever before, with the goal of achieving successful precision medicine. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Conducting Precision Medicine Research with African Americans.

    Science.gov (United States)

    Halbert, Chanita Hughes; McDonald, Jasmine; Vadaparampil, Susan; Rice, LaShanta; Jefferson, Melanie

    2016-01-01

    Precision medicine is an approach to detecting, treating, and managing disease that is based on individual variation in genetic, environmental, and lifestyle factors. Precision medicine is expected to reduce health disparities, but this will be possible only if studies have adequate representation of racial minorities. It is critical to anticipate the rates at which individuals from diverse populations are likely to participate in precision medicine studies as research initiatives are being developed. We evaluated the likelihood of participating in a clinical study for precision medicine. Observational study conducted between October 2010 and February 2011 in a national sample of African Americans. Intentions to participate in a government sponsored study that involves providing a biospecimen and generates data that could be shared with other researchers to conduct future studies. One third of respondents would participate in a clinical study for precision medicine. Only gender had a significant independent association with participation intentions. Men had a 1.86 (95% CI = 1.11, 3.12, p = 0.02) increased likelihood of participating in a precision medicine study compared to women in the model that included overall barriers and facilitators. In the model with specific participation barriers, distrust was associated with a reduced likelihood of participating in the research described in the vignette (OR = 0.57, 95% CI = 0.34, 0.96, p = 0.04). African Americans may have low enrollment in PMI research. As PMI research is implemented, extensive efforts will be needed to ensure adequate representation. Additional research is needed to identify optimal ways of ethically describing precision medicine studies to ensure sufficient recruitment of racial minorities.

  8. Precision casting into disposable ceramic mold – a high efficiency method of production of castings of irregular shape

    OpenAIRE

    Уваров, Б. И.; Лущик, П. Е.; Андриц, А. А.; Долгий, Л. П.; Заблоцкий, А. В.

    2016-01-01

    The article shows the advantages and disadvantages of precision casting into disposable ceramic molds. The high-quality shaped castings produced by the modernized ceramic molding process prove the reliability and prospects of this advanced technology.

  9. Direct and precise measurement of displacement and velocity of flexible web in roll-to-roll manufacturing systems

    International Nuclear Information System (INIS)

    Kang, Dongwoo; Lee, Eonseok; Choi, Young-Man; Lee, Taik-Min; Kim, Duk Young; Kim, Dongmin

    2013-01-01

    Interest in the production of printed electronics using a roll-to-roll system has gradually increased due to its low mass-production costs and compatibility with flexible substrates. To improve the accuracy of roll-to-roll manufacturing systems, the movement of the web needs to be measured precisely in advance. In this paper, a novel measurement method is developed to measure the displacement and velocity of the web precisely and directly. The proposed algorithm is based on the traditional single field encoder principle, and the scale grating has been replaced with a printed grating on the web. Because a printed grating cannot be as accurate as a scale grating in a traditional encoder, there will inevitably be variations in pitch and line-width, and the motion of the web should be measured even though there are variations in pitch and line-width in the printed grating patterns. For this reason, the developed algorithm includes a precise method of estimating the variations in pitch. In addition, a method of correcting the Lissajous curve is presented for precision phase interpolation to improve measurement accuracy by correcting the Lissajous circle to a unit circle. The performance of the developed method is evaluated by simulation and experiment. In the experiment, the displacement error was less than 2.5 μm and the velocity error of 1σ was about 0.25%, while the grating scale moved 30 mm.
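
    The phase-interpolation step described above can be sketched as follows: the two quadrature signals from the printed grating are first normalized (offsets and amplitudes corrected so the Lissajous figure becomes a unit circle), the phase is obtained with atan2 and unwrapped, and displacement follows from the grating pitch. This is a generic single-field-encoder sketch with simulated signals, not the authors' algorithm; the pitch value is an assumption.

        import numpy as np

        pitch_um = 100.0                       # assumed printed-grating pitch (micrometers)
        rng = np.random.default_rng(4)

        # Simulated quadrature signals with offsets and amplitude mismatch (imperfect Lissajous)
        true_disp = np.linspace(0, 850.0, 5000)            # micrometers
        phi = 2 * np.pi * true_disp / pitch_um
        sig_a = 0.20 + 1.10 * np.cos(phi) + rng.normal(0, 0.01, phi.size)
        sig_b = -0.15 + 0.90 * np.sin(phi) + rng.normal(0, 0.01, phi.size)

        # Lissajous correction: remove offsets and equalize amplitudes -> unit circle
        a = (sig_a - sig_a.mean()) / ((sig_a.max() - sig_a.min()) / 2)
        b = (sig_b - sig_b.mean()) / ((sig_b.max() - sig_b.min()) / 2)

        # Phase interpolation and displacement
        phase = np.unwrap(np.arctan2(b, a))
        disp = (phase - phase[0]) / (2 * np.pi) * pitch_um
        print(f"max error: {np.max(np.abs(disp - true_disp)):.3f} um")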

  10. Direct and precise measurement of displacement and velocity of flexible web in roll-to-roll manufacturing systems

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dongwoo; Lee, Eonseok; Choi, Young-Man; Lee, Taik-Min [Advanced Manufacturing Systems Research Division, Korea Institute of Machinery and Materials, 156 Gajeongbuk-Ro, Yuseong-Gu, Daejeon 305-343 (Korea, Republic of); Kim, Duk Young [Nano-Opto-Mechatronics Lab., Dept. of Mechanical Eng., KAIST, 335 Gwahangno, Yuseong-Gu, Daejeon 305-701 (Korea, Republic of); Kim, Dongmin [Korea Research Institute of Standards and Science, 267 Gajeong-Ro, Yuseong-Gu, Daejeon 305-340 (Korea, Republic of)

    2013-12-15

    Interest in the production of printed electronics using a roll-to-roll system has gradually increased due to its low mass-production costs and compatibility with flexible substrates. To improve the accuracy of roll-to-roll manufacturing systems, the movement of the web needs to be measured precisely in advance. In this paper, a novel measurement method is developed to measure the displacement and velocity of the web precisely and directly. The proposed algorithm is based on the traditional single field encoder principle, and the scale grating has been replaced with a printed grating on the web. Because a printed grating cannot be as accurate as a scale grating in a traditional encoder, there will inevitably be variations in pitch and line-width, and the motion of the web should be measured even though there are variations in pitch and line-width in the printed grating patterns. For this reason, the developed algorithm includes a precise method of estimating the variations in pitch. In addition, a method of correcting the Lissajous curve is presented for precision phase interpolation to improve measurement accuracy by correcting the Lissajous circle to a unit circle. The performance of the developed method is evaluated by simulation and experiment. In the experiment, the displacement error was less than 2.5 μm and the velocity error of 1σ was about 0.25%, while the grating scale moved 30 mm.

  11. Precision translator

    Science.gov (United States)

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber which includes two tuning fork-like members rigidly connected to each other. These members have two prongs each, with the separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts with the capability to keep adjustment even under rough handling.

  12. Precision improvement of frequency-modulated continuous-wave laser ranging system with two auxiliary interferometers

    Science.gov (United States)

    Shi, Guang; Wang, Wen; Zhang, Fumin

    2018-03-01

    The measurement precision of frequency-modulated continuous-wave (FMCW) laser distance measurement should be proportional to the scanning range of the tunable laser. However, the commercial external cavity diode laser (ECDL) is not an ideal tunable laser source in practical applications. Due to the unavoidable mode hopping and scanning nonlinearity of the ECDL, the measurement precision of FMCW laser distance measurements can be substantially affected. Therefore, an FMCW laser ranging system with two auxiliary interferometers is proposed in this paper. Moreover, to eliminate the effects of ECDL, the frequency-sampling method and mode hopping influence suppression method are employed. Compared with a fringe counting interferometer, this FMCW laser ranging system has a measuring error of ± 20 μm at the distance of 5.8 m.
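
    The frequency-sampling idea used above can be illustrated with synthetic data: the auxiliary interferometer's beat signal marks instants of equal optical-frequency increment, the measurement signal is resampled at those instants (which removes the sweep nonlinearity), and the distance follows from the peak of the resampled spectrum scaled by the auxiliary delay. The delays, sweep and bandwidth below are arbitrary; this is a schematic of the principle, not the paper's processing chain.

        import numpy as np

        c = 299_792_458.0
        d_true = 1.5                                # target distance (m), one way
        tau_m = 2 * d_true / c                      # measurement interferometer delay
        L_aux = 20.0                                # auxiliary interferometer OPD (m), assumed
        tau_aux = L_aux / c

        # Nonlinear optical-frequency sweep over bandwidth B
        t = np.linspace(0, 1e-3, 200_000)
        B = 100e9
        nu = B * (t / t[-1] + 0.05 * np.sin(2 * np.pi * 3e3 * t))   # sweep nonlinearity

        s_meas = np.cos(2 * np.pi * nu * tau_m)
        phi_aux = 2 * np.pi * nu * tau_aux          # auxiliary interferometer phase

        # Resample the measurement signal at equal increments of the auxiliary phase (equal Δν)
        phi_grid = np.arange(phi_aux[0], phi_aux[-1], 2 * np.pi)
        s_resampled = np.interp(phi_grid, phi_aux, s_meas)

        # The spectral peak of the resampled signal sits at tau_m / tau_aux cycles per sample
        spec = np.abs(np.fft.rfft(s_resampled * np.hanning(s_resampled.size)))
        k = np.argmax(spec[1:]) + 1
        f_norm = k / s_resampled.size
        d_est = f_norm * tau_aux * c / 2
        print(f"estimated distance: {d_est:.4f} m (true {d_true:.4f} m)")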

  13. High-speed precision motion control

    CERN Document Server

    Yamaguchi, Takashi; Pang, Chee Khiang

    2011-01-01

    Written for researchers and postgraduate students in Control Engineering, as well as professionals in the Hard Disk Drive industry, this book discusses high-precision and fast servo controls in Hard Disk Drives (HDDs). The editors present a number of control algorithms that enable fast seeking and high precision positioning, and propose problems from commercial products, making the book valuable to researchers in HDDs. Each chapter is self-contained and progresses from concept to technique, presenting application examples that can be used within automotive, aerospace, aeronautical, and manufactu

  14. Application of the FW-CADIS variance reduction method to calculate a precise N-flux distribution for the FRJ-2 research reactor

    International Nuclear Information System (INIS)

    Abbasi, F.; Nabbi, R.; Thomauske, B.; Ulrich, J.

    2014-01-01

    For the decommissioning of nuclear facilities, activity and dose rate atlases (ADAs) are required to create and manage a decommissioning plan and optimize the radiation protection measures. By the example of the research reactor FRJ-2, a detailed MCNP model for Monte-Carlo neutron and radiation transport calculations based on a full scale outer core CAD-model was generated. To cope with the inadequacies of the MCNP code for the simulation of a large and complex system like FRJ-2, the FW-CADIS method was embedded in the MCNP simulation runs to optimise particle sampling and weighting. The MAVRIC sequence of the SCALE6 program package, capable of generating importance maps, was applied for this purpose. The application resulted in a significant increase in efficiency and performance of the whole simulation method and in optimised utilization of the computer resources. As a result, the distribution of the neutron flux in the entire reactor structures - as a basis for the generation of the detailed activity atlas - was produced with a low level of variance and a high level of spatial, numerical and statistical precision.

  15. Application of the FW-CADIS variance reduction method to calculate a precise N-flux distribution for the FRJ-2 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Abbasi, F.; Nabbi, R.; Thomauske, B.; Ulrich, J. [RWTH Aachen Univ. (Germany). Inst. of Nuclear Engineering and Technology

    2014-11-15

    For the decommissioning of nuclear facilities, activity and dose rate atlases (ADAs) are required to create and manage a decommissioning plan and optimize the radiation protection measures. By the example of the research reactor FRJ-2, a detailed MCNP model for Monte-Carlo neutron and radiation transport calculations based on a full scale outer core CAD-model was generated. To cope with the inadequacies of the MCNP code for the simulation of a large and complex system like FRJ-2, the FW-CADIS method was embedded in the MCNP simulation runs to optimise particle sampling and weighting. The MAVRIC sequence of the SCALE6 program package, capable of generating importance maps, was applied for this purpose. The application resulted in a significant increase in efficiency and performance of the whole simulation method and in optimised utilization of the computer resources. As a result, the distribution of the neutron flux in the entire reactor structures - as a basis for the generation of the detailed activity atlas - was produced with a low level of variance and a high level of spatial, numerical and statistical precision.

  16. The Validation of NAA Method Used as Test Method in Serpong NAA Laboratory

    International Nuclear Information System (INIS)

    Rina-Mulyaningsih, Th.

    2004-01-01

    The Validation of the NAA Method Used as a Test Method in the Serpong NAA Laboratory. The NAA method is a non-standard testing method. The testing laboratory shall validate the methods it uses to ensure and confirm that they are suitable for the intended application. The validation of the NAA method has been done with the parameters of accuracy, precision, repeatability and selectivity. The NIST 1573a Tomato Leaves and NIES 10C Rice Flour Unpolished reference materials and standard elements were used in this testing program. The results of testing with NIST 1573a showed that the elements Na, Zn, Al and Mn meet the acceptance criteria for accuracy and precision, whereas Co is rejected. The results of testing with NIES 10C showed that the Na and Zn elements meet the acceptance criteria for accuracy and precision, but the Mn element is rejected. The result of the selectivity test showed that the quantity value is between 0.1-2.5 μg, depending on the element. (author)
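
    Acceptance decisions of the kind summarized above usually come down to computing, for each element, the relative bias against the certified value and the relative standard deviation of replicate results, and comparing both with preset criteria. The sketch below shows that bookkeeping for hypothetical replicate data; the numbers and acceptance limits are placeholders, not the laboratory's actual results.

        import numpy as np

        certified = {"Na": 136.1, "Zn": 30.9, "Mn": 246.0}       # certified values (mg/kg), placeholders
        replicates = {
            "Na": [131.8, 138.2, 135.0, 137.5, 134.1],
            "Zn": [30.1, 31.6, 29.8, 31.2, 30.5],
            "Mn": [262.0, 270.4, 266.3, 258.8, 268.9],
        }
        max_bias, max_rsd = 10.0, 7.5                             # acceptance limits (%), assumed

        for element, values in replicates.items():
            x = np.asarray(values, dtype=float)
            bias = 100 * (x.mean() - certified[element]) / certified[element]
            rsd = 100 * x.std(ddof=1) / x.mean()
            verdict = "accepted" if abs(bias) <= max_bias and rsd <= max_rsd else "rejected"
            print(f"{element}: bias={bias:+.1f}%  RSD={rsd:.1f}%  -> {verdict}")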

  17. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.

  18. New precise astrometric observations of Nereid in 2012-2017

    Science.gov (United States)

    Yu, Y.; Qiao, R. C.; Yan, D.; Cheng, X.; Xi, X. J.; Tang, K.; Luo, H.

    2018-03-01

    Nereid is one of the most distinctive natural satellites that we know in the Solar system. The orbit of Nereid is highly eccentric and inclined with respect to the equator of its primary, Neptune. Studying Nereid is one of the inspiring ways to acquire better knowledge of the Solar system. Due to its faintness, the ground-based observations of Nereid have been limited and the observation precisions in the past were generally not high. A total of 150 new observed positions of Nereid in the period 2012-2017 were collected by the 0.8 m reflecting telescope at Xinglong station of National Astronomical Observatory and the 2.4 m reflecting telescope at Lijiang station of Yunnan Astronomical Observatory. Thanks to the high-quality reference catalogue Gaia DR1 and suitable processing methods for images, the precision of our new observations of Nereid is 2-3 times higher than those of the previous observations, and the dispersions of our observations are better than 70 mas.

  19. Precision radiocarbon dating of a Late Holocene vegetation history

    International Nuclear Information System (INIS)

    Prior, C.A.; Chester, P.I.

    2001-01-01

    The purpose of this research is to precisely date vegetation changes associated with early human presence in the Hawkes Bay region. A sequence of AMS radiocarbon ages was obtained using a new technique developed at Rafter Radiocarbon Laboratory. A density separation method was used to concentrate pollen and spores extracted from unconsolidated lake sediments from a small-enclosed lake in coastal foothills of southern Hawkes Bay. Radiocarbon measurements were made on fractions of concentrated pollen, separated from associated organic debris. These ages directly date vegetation communities used to reconstruct the vegetation history of the region. This technique results in more accurate dating of Late Holocene vegetation changes interpreted from palynological analyses than techniques formerly used. Precision dating of palynological studies of New Zealand prehistory and history is necessary for correlation of vegetation changes to cultural changes because of the short time span of human occupation of New Zealand. (author). 35 refs., 3 figs., 1 tab

  20. Precision study of the $\\beta$-decay of $^{74}$Rb

    CERN Multimedia

    Van Duppen, P L E; Lunney, D

    2002-01-01

    We are proposing a high-resolution study of the $\\beta$-decay of $^{74}$Rb in order to extrapolate our precision knowledge of the superallowed $\\beta$-decays from the sd and fp shells towards the medium-heavy Z=N nuclei. The primary goal is to provide new data for testing the CVC hypothesis and the unitarity condition of the CKM matrix of the Standard Model. The presented programme would involve the careful measurements of the decay properties of $^{74}$Rb including the branching ratios to the excited states as well as the precise determination of the decay energy of $^{74}$Rb. The experimental methods readily available at ISOLDE include high-transmission conversion electron spectroscopy, $\\gamma$-ray spectroscopy as well as the measurements of the masses of $^{74}$Rb and $^{74}$Kr using two complementary techniques, ISOLTRAP and MISTRAL. The experiment would rely on a high-quality $^{74}$Rb beam available at ISOLDE with adequate intensity.