WorldWideScience

Sample records for method precisions rsdr

  1. Introduction to precise numerical methods

    CERN Document Server

    Aberth, Oliver

    2007-01-01

    Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed. The book also provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. · Clearer, simpler descriptions and explanations of the various numerical methods. · Two new types of numerical problems: accurately solving partial differential equations with the included software and computing line integrals in the complex plane.

  2. Analysis of Precision of Activation Analysis Method

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Nørgaard, K.

    1973-01-01

    The precision of an activation-analysis method prescribes the estimation of the precision of a single analytical result. The adequacy of these estimates to account for the observed variation between duplicate results from the analysis of different samples and materials is tested by the statistic T...

  3. Subdomain Precise Integration Method for Periodic Structures

    Directory of Open Access Journals (Sweden)

    F. Wu

    2014-01-01

    A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, considering the repeatability of the structural cells. The computational efforts and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable to solve the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.
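    To make the core idea behind such approaches concrete, the following is a minimal sketch of the central step of the precise integration method: computing the state-transition matrix exp(H*dt) by the 2^N scaling-and-squaring algorithm. The subdomain and super-element bookkeeping of the paper is not reproduced, and the toy system is an assumption of this illustration.

```python
# Hedged sketch of the precise integration method's matrix-exponential step
# (Zhong's 2^N algorithm). Only the core computation is shown; the paper's
# subdomain/super-element structure is not reproduced here.
import numpy as np

def precise_integration_exp(H, dt, N=20, taylor_terms=4):
    """exp(H*dt) via scaling (dt/2^N), a short Taylor series, and N squarings."""
    n = H.shape[0]
    tau = dt / (2 ** N)
    # Ta = exp(H*tau) - I, built from a truncated Taylor series
    Ta = np.zeros_like(H)
    term = np.eye(n)
    for i in range(1, taylor_terms + 1):
        term = term @ (H * tau) / i
        Ta = Ta + term
    # squaring: exp(2x) - I = 2*(exp(x) - I) + (exp(x) - I)^2, repeated N times
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta
    return np.eye(n) + Ta

# toy single-degree-of-freedom system written in first-order (state-space) form
H = np.array([[0.0, 1.0], [-4.0, -0.1]])
T = precise_integration_exp(H, dt=0.01)
print(T)
```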

  4. New methods for precision Moeller polarimetry*

    International Nuclear Information System (INIS)

    Gaskell, D.; Meekins, D.G.; Yan, C.

    2007-01-01

    Precision electron beam polarimetry is becoming increasingly important as parity violation experiments attempt to probe the frontiers of the standard model. In the few GeV regime, Moeller polarimetry is well suited to high-precision measurements; however, it is generally limited to use at relatively low beam currents (<10 μA). We present a novel technique that will enable precision Moeller polarimetry at very large currents, up to 100 μA. (orig.)

  5. Development and Evaluation of Event-Specific Quantitative PCR Method for Genetically Modified Soybean MON87701.

    Science.gov (United States)

    Tsukahara, Keita; Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Nishimaki-Mogami, Tomoko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, MON87701. First, a standard plasmid for MON87701 quantification was constructed. The conversion factor (Cf) required to calculate the amount of genetically modified organism (GMO) was experimentally determined for a real-time PCR instrument. The determined Cf for the real-time PCR instrument was 1.24. For the evaluation of the developed method, a blind test was carried out in an inter-laboratory trial. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDr), respectively. The determined biases and the RSDr values were less than 30% and 13%, respectively, at all evaluated concentrations. The limit of quantitation of the method was 0.5%, and the developed method would thus be applicable for practical analyses for the detection and quantification of MON87701.
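    As an illustration of the quantities reported above, the following is a minimal sketch of how a GMO amount could be computed from PCR copy numbers and a conversion factor, and how bias and RSD could be derived from replicate results. The formulas and numbers are generic assumptions for illustration, not values taken from the paper.

```python
# Hedged sketch: illustrative calculation of GMO amount (%) from real-time PCR
# copy numbers and a conversion factor (Cf), plus bias and RSD from a blind
# test. Variable names and the exact formulas are assumptions of this sketch.
import statistics

def gmo_percent(event_copies, reference_copies, cf):
    """GMO amount (%) assumed as (event/reference copy ratio) / Cf * 100."""
    return (event_copies / reference_copies) / cf * 100.0

def bias_and_rsd(measured, true_value):
    """Bias (%) and RSD (%) of replicate measurements at one concentration."""
    mean = statistics.mean(measured)
    bias = (mean - true_value) / true_value * 100.0
    rsd = statistics.stdev(measured) / mean * 100.0
    return bias, rsd

# example: replicate results (%) reported by several laboratories at a 0.5% level
labs = [0.46, 0.55, 0.52, 0.49, 0.57]
print(gmo_percent(event_copies=620, reference_copies=100_000, cf=1.24))
print(bias_and_rsd(labs, true_value=0.5))
```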

  6. A method for consistent precision radiation therapy

    International Nuclear Information System (INIS)

    Leong, J.

    1985-01-01

    Using a meticulous setup procedure in which repeated portal films were taken before each treatment until satisfactory portal verifications were obtained, a high degree of precision in patient positioning was achieved. A fluctuation from treatment to treatment, over 11 treatments, of less than ±0.10 cm (S.D.) was obtained for anatomical points inside the treatment field. This, however, only applies to specific anatomical points selected for this positioning procedure and does not apply to all points within the portal. We have generalized this procedure and have suggested a means by which any target volume can be consistently positioned with a precision that may approach this level. (orig.)

  7. Development and interlaboratory validation of quantitative polymerase chain reaction method for screening analysis of genetically modified soybeans.

    Science.gov (United States)

    Takabatake, Reona; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2013-01-01

    A novel real-time polymerase chain reaction (PCR)-based quantitative screening method was developed for three genetically modified soybeans: RRS, A2704-12, and MON89788. The 35S promoter (P35S) of cauliflower mosaic virus is introduced into RRS and A2704-12 but not MON89788. We then designed a screening method consisting of the combination of the quantification of P35S and the event-specific quantification of MON89788. The conversion factor (Cf) required to convert the amount of a genetically modified organism (GMO) from a copy number ratio to a weight ratio was determined experimentally. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDR), respectively. The determined RSDR values for the method were less than 25% for both targets. We consider that the developed method would be suitable for the simple detection and approximate quantification of GMOs.

  8. Method for obtaining more precise measures of excreted organic carbon

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    A new method for concentrating and measuring excreted organic carbon by lyophilization and scintillation counting is efficient, improves measurable radioactivity, and increases precision for estimates of organic carbon excreted by phytoplankton and macrophytes.

  9. Integrative methods for analyzing big data in precision medicine.

    Science.gov (United States)

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Precision die design by the die expansion method

    CERN Document Server

    Ibhadode, A O Akii

    2009-01-01

    This book presents a new method for the design of the precision dies used in cold-forging, extrusion and drawing processes. The method is based upon die expansion, and attempts to provide a clear-cut theoretical basis for the selection of critical die dimensions for this group of precision dies when the tolerance on product diameter (or thickness) is specified. It also presents a procedure for selecting the minimum-production-cost die from among a set of design alternatives. The mathematical content of the book is relatively simple and will present no difficulty to those who have taken basic c

  11. Precise charge density studies by maximum entropy method

    CERN Document Server

    Takata, M

    2003-01-01

    For the production research and development of nanomaterials, their structural information is indispensable. Recently, a sophisticated analytical method, which is based on information theory, the Maximum Entropy Method (MEM) using synchrotron radiation powder data, has been successfully applied to determine precise charge densities of metallofullerenes and nanochannel microporous compounds. The results revealed various endohedral natures of metallofullerenes and one-dimensional array formation of adsorbed gas molecules in nanochannel microporous compounds. The concept of MEM analysis was also described briefly. (author)

  12. The various correction methods to the high precision aeromagnetic data

    International Nuclear Information System (INIS)

    Xu Guocang; Zhu Lin; Ning Yuanli; Meng Xiangbao; Zhang Hongjian

    2014-01-01

    In airborne geophysical surveys, an outstanding result depends first on the measurement precision of the instrument, the choice of measurement conditions and the reliability of data collection, followed by the correct method of measurement data processing and the rationality of the data interpretation. Obviously, geophysical data processing is an important task for the comprehensive interpretation of the measurement results, and whether the processing method is correct is directly related to the quality of the final results. We have developed a set of personal computer software for aeromagnetic and radiometric survey data processing in the course of actual production and scientific research in recent years, and have successfully applied it to production. The processing methods and flowcharts for high precision aeromagnetic data are briefly introduced in this paper. The mathematical techniques of the various correction programs for IGRF, flying height and magnetic diurnal variation are discussed in particular, and their effectiveness is illustrated with an example. (authors)

  13. Advanced methods and algorithm for high precision astronomical imaging

    International Nuclear Information System (INIS)

    Ngole-Mboula, Fred-Maurice

    2016-01-01

    One of the biggest challenges of modern cosmology is to gain a more precise knowledge of the nature of dark energy and dark matter. Fortunately, dark matter can be traced directly through its gravitational effect on galaxy shapes. The European Space Agency Euclid mission will provide data precisely for such a purpose. A critical step in analyzing these data will be to accurately model the instrument's Point Spread Function (PSF), which is the focus of this thesis. We developed non-parametric methods to reliably estimate the PSFs across an instrument's field-of-view, based on unresolved star images and accounting for noise, undersampling and the PSFs' spatial variability. At the core of these contributions are modern mathematical tools and concepts such as sparsity. An important extension of this work will be to account for the PSFs' wavelength dependency. (author) [fr

  14. Precision profiles and analytic reliability of radioimmunologic methods

    International Nuclear Information System (INIS)

    Yaneva, Z.; Popova, Yu.

    1991-01-01

    The aim of the present study is to investigate and compare some methods for the creation of 'precision profiles' (PP) and to clarify their possibilities for determining the analytical reliability of RIA. Only methods without complicated mathematical calculations have been used. The reproducibility in serums with concentrations of the measured hormone spanning the whole range of the calibration curve has been studied. The radioimmunoassay has been performed with a TSH-RIA set (ex East Germany), and comparative evaluations with commercial sets of HOECHST (Germany) and AMERSHAM (GB). Three methods for obtaining the relationship between concentration (IU/l) and reproducibility (C.V., %) are used, and their corresponding profiles are compared: a preliminary rough profile, Rodbard-PP and Ekins-PP. It is concluded that the creation of a precision profile is obligatory and that the method of its construction does not influence the course of the relationship. PP allows the determination of the concentration range giving stable results, which improves the efficiency of the analytical work. 16 refs., 4 figs

  15. Biotechnological Methods for Precise Diagnosis of Methicillin Resistance in Staphylococci

    Directory of Open Access Journals (Sweden)

    Aija Zilevica

    2005-04-01

    Antimicrobial resistance is one of the most urgent problems in medicine nowadays. The purpose of the study was to investigate the microorganisms resistant to first-line antimicrobials, including gram-positive cocci, particularly methicillin-resistant Staphylococcus aureus and coagulase-negative Staphylococci, the major agents of nosocomial infections. Owing to the multi-resistance of these agents, precise diagnosis of the methicillin resistance of Staphylococci is of greatest clinical importance. It is not enough to use only conventional microbiological diagnostic methods; biotechnological methods should also be involved. In our studies, the following methicillin resistance identification methods were used: the disk diffusion method, detection of the mecA gene by PCR, the E-test and the Slidex MRSA test. For molecular typing, PFGL, RAPD tests and detection of the coa gene were used. All the MRS strains were multiresistant to antibacterials. No vancomycin resistance was registered.

  16. A high precision method for normalization of cross sections

    International Nuclear Information System (INIS)

    Aguilera R, E.F.; Vega C, J.J.; Martinez Q, E.; Kolata, J.J.

    1988-08-01

    A system of 4 monitors and a program were developed to eliminate, in the process of normalization of cross sections, the dependence on the alignment of the equipment and on the centering of the beam. A series of experiments was carried out with the systems 27Al + 70,72,74,76Ge, 35Cl + 58Ni, 37Cl + 58,60,62,64Ni and (81Br, 109Rh) + 60Ni. For these experiments a typical precision of 1% was obtained in the normalization. The advantage of this method over those that use 1 or 2 monitors is demonstrated theoretically and experimentally. (Author)

  17. a High Precision dem Extraction Method Based on Insar Data

    Science.gov (United States)

    Wang, Xinshuang; Liu, Lingling; Shi, Xiaoliang; Huang, Xitao; Geng, Wei

    2018-04-01

    In the 13th Five-Year Plan for Geoinformatics Business, it is proposed that the new InSAR technology should be applied to surveying and mapping production, which will become the innovation driving force of the geoinformatics industry. This paper works closely within the new outline of surveying and mapping and uses X-band TerraSAR/TanDEM data of Bin County in Shaanxi Province. The processing steps are as follows: first, the baseline is estimated from the orbital data; second, the interferometric pairs of SAR images are accurately registered; third, the interferogram is generated; fourth, the interferometric correlation information is estimated and the flat-earth phase is removed. To address the phase noise and phase discontinuities in the interferogram, a GAMMA adaptive filtering method is adopted. To deal with the "hole" problem of missing data in low-coherence areas, interpolation over a low-coherence mask is used to assist the phase unwrapping. Then, the accuracy of the interferometric baseline is estimated from the ground control points. Finally, a 1:50,000 DEM is generated, and existing DEM data are used to verify the accuracy through statistical analysis. The research results show that the improved InSAR data processing method in this paper can obtain a high-precision DEM of the study area, closely matching the topography of the reference DEM. The R² reaches 0.9648, showing a strong positive correlation.

  18. System and method for high precision isotope ratio destructive analysis

    Science.gov (United States)

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  19. Solution Method and Precision Analysis of Double-difference Dynamic Precise Orbit Determination of BeiDou Navigation Satellite System

    Directory of Open Access Journals (Sweden)

    LIU Weiping

    2016-02-01

    To resolve the high correlation between the transverse element of the GEO orbit and the double-difference ambiguity, the classical double-difference dynamic method is improved, and a method to determine precise BeiDou satellite orbits using carrier phase and phase-smoothed pseudo-range is proposed. The feasibility of the method is discussed and its influence on ambiguity fixing is analyzed. Considering the characteristics of BeiDou, a method to fix the double-difference ambiguities of BeiDou satellites by QIF is derived. The real data analysis shows that the new method, which can reduce the correlation and assure the precision, is better than the classical double-difference dynamic method. Ambiguity fixing by QIF performs well, but the overall ambiguity fixing success rate is not high, so the precision of the BeiDou orbit cannot be clearly improved after ambiguity fixing.

  20. Cliché fabrication method using precise roll printing process with 5 um pattern width

    Science.gov (United States)

    Shin, Yejin; Kim, Inyoung; Oh, Dong-Ho; Lee, Taik-Min

    2016-09-01

    Among the printing processes for printed electronic devices, the gravure offset and reverse offset methods have drawn attention for their fine pattern printing possibility. These printing methods use a cliché, which has a critical effect on the precision and quality of the final product. In this research, a novel precise cliché replication method is proposed. It consists of copper sputtering, precise mask pattern printing with 5 μm width using reverse offset printing, Ni electroplating, lift-off, etching, and DLC coating. We finally compare the fabricated replica cliché with the original one and print out precise patterns using the replica cliché.

  1. Evaluating the precision of passive sampling methods using ...

    Science.gov (United States)

    To assess these models, four different thicknesses of low-density polyethylene (LDPE) passive samplers were co-deployed for 28 days in the water column at three sites in New Bedford Harbor, MA, USA. Each sampler was pre-loaded with six PCB performance reference compounds (PRCs) to assess equilibrium status, such that the percent of PRC lost would vary depending on the PRC and LDPE thickness. These data allow subsequent Cfree comparisons to be made in two ways: (1) comparing Cfree derived from one thickness using different models and (2) comparing Cfree derived from the same model using different thicknesses of LDPE. Following the deployments, the percent of PRC lost ranged from 0-100%. As expected, fractional equilibrium decreased with increasing PRC molecular weight as well as sampler thickness. Overall, a total of 27 PCBs (log KOW ranging from 5.07 – 8.09) were measured at Cfree concentrations varying from 0.05 pg/L (PCB 206) to about 200 ng/L (PCB 28) on a single LDPE sampler. Relative standard deviations (RSDs) for total PCB measurements using the same thickness and varying model types ranged from 0.04-12% and increased with sampler thickness. Total PCB RSD for measurements using the same model and varying thickness ranged from 6 – 30%. No RSD trends between models were observed, but RSD did increase as Cfree decreased. These findings indicate that existing models yield precise and reproducible results when using LDPE and PRCs to measure Cfree. This work in
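    For orientation, the following is a minimal sketch of one common way Cfree is derived from an LDPE sampler with PRCs, assuming a simple first-order exchange model; the specific models compared in the study are not reproduced here, and all numbers are invented for illustration.

```python
# Hedged sketch of a generic Cfree estimate from an LDPE passive sampler with
# performance reference compounds (PRCs). The first-order exchange model and
# the example values are assumptions of this illustration, not the study's
# actual models or data.
import math

def exchange_rate_from_prc(prc_fraction_remaining, deploy_days):
    """First-order exchange rate ke (1/d) back-calculated from PRC loss."""
    return -math.log(prc_fraction_remaining) / deploy_days

def cfree(n_absorbed_ng, mass_pe_g, log_kpew, ke_per_day, deploy_days):
    """Freely dissolved concentration (ng/L), assuming
    Cfree = N / (m_PE * K_PEw * f_eq) with f_eq = 1 - exp(-ke * t)."""
    kpew = 10.0 ** log_kpew                      # polymer-water partition coefficient, L/kg
    f_eq = 1.0 - math.exp(-ke_per_day * deploy_days)
    return n_absorbed_ng / ((mass_pe_g / 1000.0) * kpew * f_eq)

ke = exchange_rate_from_prc(prc_fraction_remaining=0.4, deploy_days=28)
print(cfree(n_absorbed_ng=50.0, mass_pe_g=2.0, log_kpew=5.5,
            ke_per_day=ke, deploy_days=28))
```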

  2. Development and validation of an event-specific quantitative PCR method for genetically modified maize MIR162.

    Science.gov (United States)

    Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2014-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize event, MIR162. We first prepared a standard plasmid for MIR162 quantification. The conversion factor (Cf) required to calculate the genetically modified organism (GMO) amount was empirically determined for two real-time PCR instruments, the Applied Biosystems 7900HT (ABI7900) and the Applied Biosystems 7500 (ABI7500), for which the determined Cf values were 0.697 and 0.635, respectively. To validate the developed method, a blind test was carried out in an interlaboratory study. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDr). The determined biases were less than 25% and the RSDr values were less than 20% at all evaluated concentrations. These results suggested that the limit of quantitation of the method was 0.5%, and that the developed method would thus be suitable for practical analyses for the detection and quantification of MIR162.

  3. Auto correct method of AD converters precision based on ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Ideal AD conversion would be a straight zero-crossing line in the Cartesian coordinate system. In practical engineering, however, the signal processing circuit, chip performance and other factors have an impact on the accuracy of conversion. Therefore, a linear fitting method is adopted to improve the conversion accuracy. An automatic correction of AD conversion based on Ethernet, using both software and hardware, is presented. With a single mouse click, linearity correction of all AD converter channels can be completed automatically, and the error, SNR and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the EEPROM of the onboard AD converter card. Compared with traditional methods, this method is more convenient, accurate and efficient, and has broad application prospects.

  4. Precision lifetime measurements using the recoil distance method

    International Nuclear Information System (INIS)

    Kruecken, R.

    2000-01-01

    The recoil distance method (RDM) for the measurements of lifetimes of excited nuclear levels in the range from about 1 ps to 1,000 ps is reviewed. The New Yale Plunger Device for RDM experiments is introduced and the Differential Decay Curve Method for their analysis is reviewed. Results from recent RDM experiments on SD bands in the mass-190 region, shears bands in the neutron deficient lead isotopes, and ground state bands in the mass-130 region are presented. Perspectives for the use of RDM measurements in the study of neutron-rich nuclei are discussed

  5. Precision Lifetime Measurements Using the Recoil Distance Method

    Science.gov (United States)

    Krücken, R.

    2000-01-01

    The recoil distance method (RDM) for the measurements of lifetimes of excited nuclear levels in the range from about 1 ps to 1000 ps is reviewed. The New Yale Plunger Device for RDM experiments is introduced and the Differential Decay Curve Method for their analysis is reviewed. Results from recent RDM experiments on SD bands in the mass-190 region, shears bands in the neutron deficient lead isotopes, and ground state bands in the mass-130 region are presented. Perspectives for the use of RDM measurements in the study of neutron-rich nuclei are discussed. PMID:27551587

  6. Interlaboratory validation of quantitative duplex real-time PCR method for screening analysis of genetically modified maize.

    Science.gov (United States)

    Takabatake, Reona; Koiwa, Tomohiro; Kasahara, Masaki; Takashima, Kaori; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Oguchi, Taichi; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2011-01-01

    To reduce the cost and time required to routinely perform the genetically modified organism (GMO) test, we developed a duplex quantitative real-time PCR method for a screening analysis simultaneously targeting an event-specific segment for GA21 and the Cauliflower Mosaic Virus 35S promoter (P35S) segment [Oguchi et al., J. Food Hyg. Soc. Japan, 50, 117-125 (2009)]. To confirm the validity of the method, an interlaboratory collaborative study was conducted. In the collaborative study, conversion factors (Cfs), which are required to calculate the GMO amount (%), were first determined for two real-time PCR instruments, the ABI PRISM 7900HT and the ABI PRISM 7500. A blind test was then conducted. The limit of quantitation for both GA21 and P35S was estimated to be 0.5% or less. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSD(R)). The determined bias and RSD(R) were each less than 25%. We believe the developed method would be useful for the practical screening analysis of GM maize.

  7. Reference satellite selection method for GNSS high-precision relative positioning

    Directory of Open Access Journals (Sweden)

    Xiao Gao

    2017-03-01

    Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and on the positional dilution of precision (PDOP) value were compared. Results show that neither of these methods reliably selects the optimal reference satellite. We introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, because the condition number indicates its ill-conditioning. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
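    A minimal sketch of the idea follows, assuming a simplified double-difference design matrix built from receiver-to-satellite unit vectors; the paper's exact formulation is not reproduced here.

```python
# Hedged illustration: choosing a reference satellite by the condition number
# of the double-difference design matrix. The way the design matrix is built
# here (unit line-of-sight differences) is a simplifying assumption of this
# sketch, not the paper's formulation.
import numpy as np

def design_matrix(unit_vectors, ref_index):
    """Double-difference design matrix: rows are (e_i - e_ref) for i != ref."""
    e_ref = unit_vectors[ref_index]
    rows = [e - e_ref for i, e in enumerate(unit_vectors) if i != ref_index]
    return np.vstack(rows)

def best_reference(unit_vectors):
    """Pick the satellite whose choice as reference minimizes cond(A)."""
    conds = [np.linalg.cond(design_matrix(unit_vectors, i))
             for i in range(len(unit_vectors))]
    return int(np.argmin(conds)), conds

# toy geometry: receiver-to-satellite unit vectors
sats = np.array([[0.3, 0.4, 0.866], [-0.5, 0.1, 0.860],
                 [0.1, -0.7, 0.707], [0.6, 0.6, 0.529]])
print(best_reference(sats))
```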

  8. Precision of a FDTD method to simulate cold magnetized plasmas

    International Nuclear Information System (INIS)

    Pavlenko, I.V.; Melnyk, D.A.; Prokaieva, A.O.; Girka, I.O.

    2014-01-01

    The finite difference time domain (FDTD) method is applied to describe the propagation of transverse electromagnetic waves through magnetized plasmas. The numerical dispersion relation is obtained in a cold plasma approximation. The accuracy of the numerical dispersion is calculated as a function of the frequency of the launched wave and the time step of the numerical grid. It is shown that the numerical method does not reproduce the analytical results near the plasma resonances for any chosen value of the time step if there is no dissipation mechanism in the system. This means that the FDTD method cannot be applied straightforwardly to problems where the plasma resonances play a key role (for example, mode conversion problems). However, the accuracy of the numerical scheme can be improved by introducing some artificial damping of the plasma currents. Although part of the wave power is then lost in the system, the numerical scheme describes the wave processes in agreement with analytical predictions.

  9. Method of forming capsules containing a precise amount of material

    Science.gov (United States)

    Grossman, M.W.; George, W.A.; Maya, J.

    1986-06-24

    A method of forming a sealed capsule containing a submilligram quantity of mercury or the like, the capsule being constructed from a hollow glass tube, by placing a globule or droplet of the mercury in the tube. The tube is then evacuated and sealed and is subsequently heated so as to vaporize the mercury and fill the tube therewith. The tube is then separated into separate sealed capsules by heating spaced locations along the tube with a coiled heating wire means, causing collapse at those spaced locations and thus enabling separation of the tube into said capsules. 7 figs.

  10. Precise magnetostatic field using the finite element method

    International Nuclear Information System (INIS)

    Nascimento, Francisco Rogerio Teixeira do

    2013-01-01

    The main objective of this work is to simulate electromagnetic fields using the Finite Element Method. Even in the simplest cases of electrostatic and magnetostatic numerical simulation, some problems appear when the nodal finite element is used. It is difficult to model vector fields with scalar functions, mainly in non-homogeneous materials. With the aim of solving these problems, two types of techniques are tried: adaptive remeshing using nodal elements, and edge finite elements, which ensure the continuity of the tangential components. Numerical analyses of simple electromagnetic problems with homogeneous and non-homogeneous materials are performed using, first, adaptive remeshing based on various error indicators and, second, the numerical solution of waveguides using edge finite elements. (author)

  11. Digital Integration Method (DIM): A new method for the precise correlation of OCT and fluorescein angiography

    International Nuclear Information System (INIS)

    Hassenstein, A.; Richard, G.; Inhoffen, W.; Scholz, F.

    2007-01-01

    The new Digital Integration Method (DIM) provides for the first time the anatomically precise integration of the OCT scan position into the angiogram (fluorescein angiography, FLA), using reference markers at corresponding vessel crossings. An exact correlation of angiographic and morphological pathological findings is therefore possible and leads to a better understanding of OCT and FLA. Patients with occult findings in FLA were the group that profited most. For occult leakages, DIM could provide additional information, such as serous detachment of the retinal pigment epithelium (RPE) in a topography. So far it was unclear whether the same localization in the lesion was examined by FLA and OCT, especially when different staff were performing and interpreting the examinations. Using DIM, this problem could be solved with objective markers. This technique is a requirement for follow-up examinations by OCT. Using DIM for an objective, reliable and precise correlation of OCT and FLA findings, it is now possible to provide the identical scan position in follow-up. Therefore, for follow-up in clinical studies it is mandatory to use DIM to improve the evidence-based statement of OCT and the quality of the study. (author) [de

  12. Method of high precision interval measurement in pulse laser ranging system

    Science.gov (United States)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging has the advantages of high measuring precision, fast measuring speed, no need for cooperative targets and strong resistance to electromagnetic interference; the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is decided by the precision of the time interval measurement. The principal structure of the laser ranging system is introduced, and a method of high-precision time interval measurement in a pulse laser ranging system is established in this paper. Based on the analysis of the factors which affect the precision of range measurement, a pulse rising-edge discriminator is adopted to produce the timing mark for the start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper can obtain higher range accuracy. Compared with the traditional time interval measurement system, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
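    For context, the following is a minimal sketch of the time-of-flight relation underlying pulsed laser ranging and of the range resolution implied by a given timing resolution; the 65 ps figure is an assumption for illustration, not a value from the paper.

```python
# Hedged illustration of the pulse time-of-flight relation used in pulsed
# laser ranging: range R = c * dt / 2. The example timing resolution is an
# assumption for illustration only.
C = 299_792_458.0  # speed of light, m/s

def range_from_interval(dt_seconds):
    """Target range (m) from the measured start-stop interval."""
    return C * dt_seconds / 2.0

def range_resolution(timing_resolution_s):
    """Range resolution implied by the timing resolution of the TDC."""
    return C * timing_resolution_s / 2.0

print(range_from_interval(6.67e-6))   # a ~1 km target gives a ~6.67 us interval
print(range_resolution(65e-12))       # a ~65 ps timing bin corresponds to ~1 cm
```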

  13. A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.

    Science.gov (United States)

    Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang

    2017-06-28

    Integrating the advantages of the INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical in guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor with the aid of an inertial navigation device to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. With the goal of diminishing the impacts of factors such as the drift of the sensors and devices, the innovative aspect of this method is to employ the static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.

  14. Improving the precision of the keyword-matching pornographic text filtering method using a hybrid model.

    Science.gov (United States)

    Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong

    2004-09-01

    With the flooding of pornographic information on the Internet, how to keep people away from that offensive information is becoming one of the most important research areas in network information security. Some applications which can block or filter such information are used. Approaches in those systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content based filtering technologies will play a more and more important role in filtering systems. Keyword matching is a content based method used widely in harmful text filtering. Experiments to evaluate the recall and precision of the method showed that the precision of the method is not satisfactory, though the recall of the method is rather high. According to the results, a new pornographic text filtering model based on reconfirming is put forward. Experiments showed that the model is practical, has less loss of recall than the single keyword matching method, and has higher precision.

  15. Accuracy, precision, and economic efficiency for three methods of thrips (Thysanoptera: Thripidae) population density assessment.

    Science.gov (United States)

    Sutherland, Andrew M; Parrella, Michael P

    2011-08-01

    Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
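    As an aside, the comparative statistics named above (precision as relative variation, economic efficiency as relative net precision) are commonly defined as RV = 100 * SEM / mean and RNP = 100 / (cost * RV) in the sampling-plan literature; the sketch below uses those textbook formulas, which are assumptions of this illustration rather than formulas quoted from the paper.

```python
# Hedged sketch of two statistics commonly used to compare sampling methods:
# relative variation (RV) and relative net precision (RNP). The formulas and
# the toy counts are assumptions of this illustration.
import math
import statistics

def relative_variation(counts):
    """Precision of a sampling method: RV (%) = 100 * SEM / mean."""
    mean = statistics.mean(counts)
    sem = statistics.stdev(counts) / math.sqrt(len(counts))
    return 100.0 * sem / mean

def relative_net_precision(counts, cost_person_hours):
    """Economic efficiency: RNP = 100 / (cost per sample * RV)."""
    return 100.0 / (cost_person_hours * relative_variation(counts))

flower_tap = [12, 9, 15, 11, 8, 14]   # thrips per sample (invented data)
print(relative_variation(flower_tap))
print(relative_net_precision(flower_tap, cost_person_hours=0.05))
```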

  16. Precision and Accuracy of k0-NAA Method for Analysis of Multi Elements in Reference Samples

    International Nuclear Information System (INIS)

    Sri-Wardani

    2004-01-01

    The accuracy and precision of the k0-NAA method were determined in the analysis of multi-elements contained in reference samples. The results of the multi-element analysis of the SRM 1633b sample showed a bias of up to 20%, but with good accuracy and precision. The results for As, Cd and Zn in the CCQM-P29 rice flour sample were very good, with a bias of 0.5 - 5.6%. (author)

  17. A High-precision Motion Compensation Method for SAR Based on Image Intensity Optimization

    Directory of Open Access Journals (Sweden)

    Hu Ke-bin

    2015-02-01

    Owing to the platform instability and precision limitations of motion sensors, motion errors negatively affect the quality of synthetic aperture radar (SAR) images. The autofocus Back Projection (BP) algorithm based on the optimization of image sharpness compensates for motion errors through phase error estimation. This method can attain relatively good performance, while assuming the same phase error for all pixels, i.e., it ignores the spatial variance of motion errors. To overcome this drawback, a high-precision motion error compensation method is presented in this study. In the proposed method, the Antenna Phase Centers (APC) are estimated via optimization using the criterion of maximum image intensity. Then, the estimated APCs are applied for BP imaging. Because the APC estimation equals the range history estimation for each pixel, high-precision phase compensation for every pixel can be achieved. Point-target simulations and processing of experimental data validate the effectiveness of the proposed method.

  18. Evaluation of Different Estimation Methods for Accuracy and Precision in Biological Assay Validation.

    Science.gov (United States)

    Yu, Binbing; Yang, Harry

    2017-01-01

    Biological assays (bioassays) are procedures to estimate the potency of a substance by studying its effects on living organisms, tissues, and cells. Bioassays are essential tools for gaining insight into biologic systems and processes including, for example, the development of new drugs and monitoring environmental pollutants. Two of the most important parameters of bioassay performance are relative accuracy (bias) and precision. Although general strategies and formulas are provided in USP, a comprehensive understanding of the definitions of bias and precision remains elusive. Additionally, whether there is a beneficial use of data transformation in estimating intermediate precision remains unclear. Finally, there are various statistical estimation methods available that often pose a dilemma for the analyst who must choose the most appropriate method. To address these issues, we provide both a rigorous definition of bias and precision as well as three alternative methods for calculating relative standard deviation (RSD). All methods perform similarly when the RSD ≤10%. However, the USP estimates result in larger bias and root-mean-square error (RMSE) compared to the three proposed methods when the actual variation is large. Therefore, the USP method should not be used for routine analysis. For data with moderate skewness and deviation from normality, the estimates based on the original scale perform well. The original scale method is preferred, and the method based on log-transformation may be used for noticeably skewed data. LAY ABSTRACT: Biological assays, or bioassays, are essential in the development and manufacture of biopharmaceutical products for potency testing and quality monitoring. Two important parameters of assay performance are relative accuracy (bias) and precision. The definitions of bias and precision in USP 〈1033〉 are elusive and confusing. Another complicating issue is whether log-transformation should be used for calculating the
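    As a simple illustration of the difference between original-scale and log-scale estimation, the following sketch computes an RSD both ways; these are generic textbook estimators, not necessarily the three methods compared in the paper.

```python
# Hedged illustration of two generic RSD estimators: directly on the original
# scale, and via the log scale using the log-normal relation
# RSD = sqrt(exp(sigma_ln^2) - 1). The potency values are invented.
import math
import statistics

def rsd_original_scale(x):
    """RSD (%) = 100 * sd / mean on the original scale."""
    return 100.0 * statistics.stdev(x) / statistics.mean(x)

def rsd_log_scale(x):
    """RSD (%) from the SD of the natural-log data, assuming log-normality."""
    sigma_ln = statistics.stdev([math.log(v) for v in x])
    return 100.0 * math.sqrt(math.exp(sigma_ln ** 2) - 1.0)

potencies = [98.2, 103.5, 95.1, 110.4, 101.7, 99.3]   # toy relative potencies
print(rsd_original_scale(potencies), rsd_log_scale(potencies))
```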

  19. Precision evaluation of pressed pastille preparation different methods for X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Lima, Raquel Franco de Souza; Melo Junior, Germano; Sa, Jaziel Martins

    1997-01-01

    This work compares the results obtained with two different methods of preparing pressed pastilles from the crushed sample. In this study, the reproducibility is evaluated, aiming to define the method that furnishes better analytical precision. These analyses were carried out with an X-ray fluorescence spectrometer at the Geology Department of the Federal University of Rio Grande do Norte.

  20. Improvement of precision method of spectrophotometry with inner standardization and its use in plutonium solutions analysis

    International Nuclear Information System (INIS)

    Stepanov, A.V.; Stepanov, D.A.; Nikitina, S.A.; Gogoleva, T.D.; Grigor'eva, M.G.; Bulyanitsa, L.S.; Panteleev, Yu.A.; Pevtsova, E.V.; Domkin, V.D.; Pen'kin, M.V.

    2006-01-01

    The precision method of spectrophotometry with inner standardization is used for the analysis of pure Pu solutions. The spectrophotometer and the spectrophotometric method of analysis are improved to decrease the random component of the relative error of the method. The influence of U and Np impurities and of corrosion products on the systematic component of the error of the method, and the effect of fluoride ion on the completeness of Pu oxidation during sample preparation, are studied [ru

  1. Accuracy, precision, usability, and cost of portable silver test methods for ceramic filter factories.

    Science.gov (United States)

    Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S

    2017-02-01

    Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found that no method accurately detected silver nanoparticles, and measurement error ranged from 4 to 91% for silver nitrate samples. Most methods were precise, but only one method could test both the application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found that no currently available method measured both silver types accurately and precisely at reasonable cost and ease of use; thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.

  2. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    OpenAIRE

    Gunawan, Hendra; Micheldiament, Micheldiament; Mikhailov, Valentin

    2008-01-01

    http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of Bouguer gravity anomaly and Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models and under an assumption free-air anomaly consisting ...

  3. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    Science.gov (United States)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    According to the working principle of the digital calibration instrument for the optical axis parallelism of binocular photoelectric instruments, and in view of all components of the instrument, the various factors affecting the system precision are analyzed, and a precision analysis model is established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circle target image. The method can further guide the error allocation, optimize and control the factors which have greater influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
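    For intuition, the following is a minimal sketch of a Monte Carlo precision analysis in which assumed component errors are propagated to a comprehensive error on the target-centre coordinate; the error sources and magnitudes are invented for illustration and are not the instrument's actual error budget.

```python
# Hedged sketch of a Monte Carlo precision analysis: individual component
# errors are drawn from assumed distributions and combined into a
# "comprehensive" error on the circle-target centre coordinate. All error
# sources and magnitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# assumed contributions to the target-centre coordinate (pixels)
centroiding = rng.normal(0.0, 0.05, N)    # image centroid extraction noise
distortion  = rng.normal(0.0, 0.08, N)    # residual lens distortion
mounting    = rng.uniform(-0.1, 0.1, N)   # mechanical mounting tolerance

comprehensive = centroiding + distortion + mounting
print("std of comprehensive error:", comprehensive.std())
print("95% interval:", np.percentile(comprehensive, [2.5, 97.5]))
```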

  4. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    Directory of Open Access Journals (Sweden)

    Tabitha A Graves

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single-method analyses (i.e., fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed

  5. Fast and precise method of contingency ranking in modern power system

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2011-01-01

    Contingency Analysis is one of the most important aspects of Power System Security Analysis. This paper presents a fast and precise method of contingency ranking for effective power system security analysis. The method proposed in this research work takes due consideration of both apparent power o... and is based on a realistic approach taking practical situations into account. Besides taking real situations into consideration, the proposed method is fast enough to be considered for on-line security analysis.

  6. A High Precision Laser-Based Autofocus Method Using Biased Image Plane for Microscopy

    Directory of Open Access Journals (Sweden)

    Chao-Chen Gu

    2018-01-01

    This study designs and implements a high-precision and robust laser-based autofocusing system, in which a biased image plane is applied. In accordance with the designed optics, a cluster-based circle fitting algorithm is proposed to calculate the radius of the detected spot from the reflected laser beam as an essential factor in obtaining the defocus value. Experiments conducted on the prototype device achieved high precision and robustness. Furthermore, the low demand on assembly accuracy makes the proposed method a low-cost and practical autofocusing solution.
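    As background, the following is a minimal sketch of a generic algebraic (Kasa) least-squares circle fit, one common way to estimate a spot radius from edge pixels; the paper's cluster-based algorithm is not reproduced here, and the toy data are invented.

```python
# Hedged sketch of an algebraic (Kasa) least-squares circle fit, a generic way
# to estimate the radius of a detected laser spot from its edge pixels. This
# is not the paper's cluster-based algorithm.
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit: returns centre (cx, cy) and radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    # solve 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    return cx, cy, r

# toy spot-edge pixels on a noisy circle of radius 40 centred at (100, 120)
t = np.linspace(0, 2 * np.pi, 200)
x = 100 + 40 * np.cos(t) + np.random.normal(0, 0.5, t.size)
y = 120 + 40 * np.sin(t) + np.random.normal(0, 0.5, t.size)
print(fit_circle(x, y))
```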

  7. Improvement in precision and trueness of quantitative XRF analysis with glass-bead method. 1

    International Nuclear Information System (INIS)

    Yamamoto, Yasuyuki; Ogasawara, Noriko; Yuhara, Yoshitaroh; Yokoyama, Yuichi

    1995-01-01

    The factors which lower the precision of simultaneous X-ray fluorescence (XRF) spectrometers were investigated. Especially in quantitative analyses of oxide powders with the glass-bead method, the X-ray optical characteristics of the equipment affect the precision of the X-ray intensities. In focused (curved) crystal spectrometers, the precision depends on the deviation of the actual size and position of the crystals from the theoretical designs; thus the precision differs for each crystal and each element. When the deviation is large, the dispersion of the measured X-ray intensities is larger than the statistical dispersion, even though the intensity itself remains unchanged. Moreover, waviness of the glass-bead surface displaces the analyzed surface from its designed height. This displacement changes the amount of X-rays incident on the analyzing crystal and increases the dispersion of the X-ray intensity. Considering these factors, the level of waviness must be regulated to improve the precision of existing XRF equipment. In this study, the measurement precisions of 4 simultaneous XRF spectrometers were evaluated, and the element lead (Pb-Lβ1) was found to have the lowest precision. The relative standard deviation (RSD) of measurements of 10 glass-beads of the same powder sample was 0.3% without regulation of the waviness of the analytical surface. With mechanical flattening of the glass-bead surface, the level of waviness, i.e. the maximum difference of heights within a glass-bead, was regulated to under 30 μm, and the RSD was 0.038%, which is almost comparable to the statistical RSD of 0.033%. (author)

  8. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients.

    Science.gov (United States)

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and also therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients' body temperatures were measured by four peripheral methods, oral, axillary, tympanic, and forehead, along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. There was a significant correlation between all the peripheral methods and the central measurement (P<0.05), with the highest agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than for the other methods. The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient's body temperature in intensive care units because of its high accuracy and acceptable precision.

  9. Setup for precise measurement of neutron lifetime by UCN storage method with inelastically scattered neutron detection

    International Nuclear Information System (INIS)

    Arzumanov, S.S; Bondarenko, L.N.; Gel'tenbort, P.; Morozov, V.I.; Nesvizhevskij, V.V.; Panin, Yu.N.; Strepetov, A.N.

    2007-01-01

    The experimental setup and the method of measuring the neutron lifetime with a precision of less than 1 s are described. The measurements will be carried out by storage of ultracold neutrons (UCN) in vessels whose inner walls are coated with fluorine polymer oil, with simultaneous registration of inelastically scattered UCN leaving the storage vessels. The analysis of statistical and methodical errors is carried out. The calculated estimate of the measurement accuracy is presented [ru

  10. Precision of a new bedside method for estimation of the circulating blood volume

    DEFF Research Database (Denmark)

    Christensen, P; Eriksen, B; Henneberg, S W

    1993-01-01

    The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO-saturation of hemoglobin before and a..., determination of CBV can be performed with an amount of CO that gives rise to a harmless increase in the carboxyhemoglobin concentration. (ABSTRACT TRUNCATED AT 250 WORDS)

  11. In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions.

    Science.gov (United States)

    Ender, Andreas; Attin, Thomas; Mehl, Albert

    2016-03-01

    Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; digitized scannable vinylsiloxanether, VSES-D; and irreversible hydrocolloid, ALG) and 7 digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; Lava COS, LAV; Lava True Definition Scanner, T-Def; 3Shape Trios, TRI; and 3Shape Trios Color, TRC) techniques. Impressions were made 3 times each in 5 participants (N=15). The impressions were then compared within and between the test groups. The cast surfaces were measured point-to-point using the signed nearest neighbor method. Precision was calculated from the (90%-10%)/2 percentile value. The precision ranged from 12.3 μm (VSE) to 167.2 μm (ALG), with the highest precision in the VSE and VSES groups. The deviation pattern varied distinctly according to the impression method. Conventional impressions showed the highest accuracy across the complete dental arch in all groups, except for the ALG group. Conventional and digital impression methods differ significantly in complete-arch accuracy. Digital impression systems had higher local deviations within the complete-arch cast; however, they achieved equal or higher precision than some conventional impression materials. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
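    The precision measure described above is straightforward to reproduce; the following sketch applies the (90th percentile - 10th percentile)/2 rule to signed surface deviations, using invented data.

```python
# Hedged sketch of the (90th - 10th percentile)/2 precision measure applied to
# signed point-to-point surface deviations, as described in the abstract.
# The deviation data are invented for illustration.
import numpy as np

def arch_precision(signed_deviations_um):
    """Precision (um) = (90th percentile - 10th percentile) / 2."""
    p90, p10 = np.percentile(signed_deviations_um, [90, 10])
    return (p90 - p10) / 2.0

rng = np.random.default_rng(1)
deviations = rng.normal(loc=0.0, scale=15.0, size=10_000)  # um
print(arch_precision(deviations))
```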

  12. Validation and Recommendation of Methods to Measure Biogas Production Potential of Animal Manure

    Directory of Open Access Journals (Sweden)

    C. H. Pham

    2013-06-01

    In developing countries, biogas energy production is seen as a technology that can provide clean energy in poor regions and reduce pollution caused by animal manure. Laboratories in these countries have little access to advanced gas measuring equipment, which may limit research aimed at improving locally adapted biogas production. They may also be unable to produce valid estimates of an international standard that can be used for articles published in international peer-reviewed science journals. This study tested and validated methods for measuring total biogas and methane (CH4) production using batch fermentation and for characterizing the biomass. The biochemical methane potential (BMP, CH4 NL kg−1 VS) of pig manure, cow manure and cellulose determined with the Moller and VDI methods was not significantly different in this test (p>0.05). The biodegradability, expressed as the ratio of BMP to theoretical BMP (TBMP), was slightly higher using the Hansen method, but differences were not significant. The degradation rate assessed by methane formation rate showed wide variation within the batch methods tested. The first-order kinetics constant k for the cumulative methane production curve was highest when the two animal manures were fermented using the VDI 4630 method, indicating that this method was able to reach steady conditions in a shorter time, reducing fermentation duration. In precision tests, the relative standard deviation of repeatability (RSDr) for all batch methods was very low (4.8 to 8.1%), while the relative standard deviation of reproducibility (RSDR) varied widely, from 7.3 to 19.8%. In determination of biomethane concentration, the values obtained using the liquid replacement method (LRM) were comparable to those obtained using gas chromatography (GC). This indicates that the LRM could be used to determine biomethane concentration in biogas in laboratories with limited access to GC.
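
    The RSDr and RSDR figures quoted above follow the usual collaborative-study decomposition into within-laboratory and between-laboratory variance. A rough sketch of that calculation, assuming a balanced design in which every laboratory reports the same number of replicate BMP values for one substrate (the actual study design may differ), could look like this:

```python
import numpy as np

def rsd_r_and_R(results_by_lab):
    """Repeatability and reproducibility RSDs (%) from a balanced collaborative trial.

    results_by_lab: 2-D array, one row per laboratory, one column per replicate.
    Uses a simplified ISO 5725-style decomposition for the balanced case.
    """
    data = np.asarray(results_by_lab, dtype=float)
    p, n = data.shape                       # p labs, n replicates per lab
    grand_mean = data.mean()
    s_r2 = data.var(axis=1, ddof=1).mean()  # pooled within-lab (repeatability) variance
    s_x2 = data.mean(axis=1).var(ddof=1)    # variance of the lab means
    s_L2 = max(s_x2 - s_r2 / n, 0.0)        # between-lab variance component
    s_R2 = s_r2 + s_L2                      # reproducibility variance
    rsd_r = 100 * np.sqrt(s_r2) / grand_mean
    rsd_R = 100 * np.sqrt(s_R2) / grand_mean
    return rsd_r, rsd_R

# Hypothetical BMP results (NL CH4 per kg VS), 4 labs x 3 replicates
bmp = [[356, 362, 349],
       [340, 335, 338],
       [371, 366, 378],
       [352, 347, 355]]
print("RSDr = %.1f %%, RSDR = %.1f %%" % rsd_r_and_R(bmp))
```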

  13. A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.

    Science.gov (United States)

    Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin

    2016-05-25

    Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access and results in inter-frequency code biases (IFCBs), which are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% through the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and that of the GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively.

  14. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da; Lopes, Ricardo T.

    2011-01-01

    The method of high precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of total uranium concentration was of the order of 0.01%, better than the roughly 0.1% achieved with the methods traditionally used by nuclear installations.

  15. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.

    Science.gov (United States)

    Ender, Andreas; Mehl, Albert

    2013-02-01

    A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.05). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.05). Deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  16. In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.

    Science.gov (United States)

    Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert

    2016-09-01

    Quadrant impressions are commonly used as alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface whereas high-frame rate impression systems differed more in gingival areas. Triple tray impressions displayed higher local deviation at the occlusal contact areas of upper and lower jaw. Digital quadrant impression methods achieve a level of precision, comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time efficient capturing of quadrant impressions is possible. The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of

  17. Precise positioning method for multi-process connecting based on binocular vision

    Science.gov (United States)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

    With the rapid development of aviation and aerospace, the demand for metal coating parts such as antenna reflector, eddy-current sensor and signal transmitter, etc. is more and more urgent. Such parts with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy are generally fabricated by the combination of different manufacturing technology. However, it is difficult to ensure the machining precision because of the connection error between different processing methods. Therefore, a precise positioning method is proposed based on binocular micro stereo vision in this paper. Firstly, a novel and efficient camera calibration method for stereoscopic microscope is presented to solve the problems of narrow view field, small depth of focus and too many nonlinear distortions. Secondly, the extraction algorithms for law curve and free curve are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Thirdly, a precise positioning system based on micro stereovision is set up and then embedded in a CNC machining experiment platform. Finally, the verification experiment of the positioning accuracy is conducted and the experimental results indicated that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.

  18. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    The main features of the temperature correction methods, suggested and used in modeling of plane-parallel stellar atmospheres, are discussed. The main features of the new method are described. Derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres, is presented. The method is based on a correction of the model temperature distribution based on minimizing differences of flux from its accepted constant value and on the requirement of the lack of its gradient, meaning that local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5 %. Some of the rapidly converging iteration steps can be useful before starting the high-precision model correction. The corrections of both the flux value and of its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by the Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10^-3) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10^-2 %. The dual numbers and their generalization – the dual complex numbers (the duplex numbers) – make it possible to obtain the derivatives automatically in the nilpotent part of the dual numbers. A version of the SMART software is in the stage of refactorization to dual and duplex numbers, which makes it possible to get rid of the finite differences, as an additional source of lowering precision of the

  19. Sternal instability measured with radiostereometric analysis. A study of method feasibility, accuracy and precision.

    Science.gov (United States)

    Vestergaard, Rikke Falsig; Søballe, Kjeld; Hasenkam, John Michael; Stilling, Maiken

    2018-05-18

    A small, but unstable, saw-gap may hinder bone-bridging and induce development of painful sternal dehiscence. We propose the use of Radiostereometric Analysis (RSA) for evaluation of sternal instability and present a method validation. Four bone analogs (phantoms) were sternotomized and tantalum beads were inserted in each half. The models were reunited with wire cerclage and placed in a radiolucent separation device. Stereoradiographs (n = 48) of the phantoms in 3 positions were recorded at 4 imposed separation points. The accuracy and precision were compared statistically and presented as translations along the 3 orthogonal axes. Seven sternotomized patients were evaluated for clinical RSA precision by double-examination stereoradiographs (n = 28). In the phantom study, we found no systematic error (p > 0.3) between the three phantom positions, and precision for evaluation of sternal separation was 0.02 mm. Phantom accuracy was mean 0.13 mm (SD 0.25). In the clinical study, we found a detection limit of 0.42 mm for sternal separation and of 2 mm for anterior-posterior dislocation of the sternal halves for the individual patient. RSA is a precise and low-dose image modality feasible for clinical evaluation of sternal stability in research. ClinicalTrials.gov Identifier: NCT02738437, retrospectively registered.

  20. In vivo precision of conventional and digital methods for obtaining quadrant dental impressions

    OpenAIRE

    Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert

    2016-01-01

    OBJECTIVES Quadrant impressions are commonly used as alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. MATERIALS AND METHODS Impressions were obtained via two conventional (metal full-arch tray, CI, ...

  1. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision

    OpenAIRE

    Ender, Andreas; Mehl, Albert

    2013-01-01

    STATEMENT OF PROBLEM: A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. PURPOSE: The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. MATERIAL AND METHODS: A steel reference dentate model was fabricated and measured with a...

  2. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    Directory of Open Access Journals (Sweden)

    Asadian S

    2016-09-01

    Simin Asadian,1 Alireza Khatony,1 Gholamreza Moradi,2 Alireza Abdi,1 Mansour Rezaei,3 1Nursing and Midwifery School, Kermanshah University of Medical Sciences, 2Department of Anesthesiology, 3Biostatistics & Epidemiology Department, Kermanshah University of Medical Sciences, Kermanshah, Iran Introduction: An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods: In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods: oral, axillary, tympanic, and forehead, along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. Results: There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion: The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient's body temperature in the intensive care units because of its high accuracy and acceptable precision.

  3. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    Science.gov (United States)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
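
    The extrapolation idea described here, approximating an expensive thermodynamic quantity by a Taylor expansion around a reference composition instead of querying the thermodynamic database at every grid point, can be illustrated with a generic one-dimensional sketch. The function below is only a stand-in for the driving-force evaluation, not the actual CALPHAD coupling used in the paper:

```python
import numpy as np

def expensive_quantity(c):
    """Stand-in for a thermodynamic quantity that is costly to evaluate exactly."""
    return c * np.log(c) + (1 - c) * np.log(1 - c) + 4.0 * c * (1 - c)

def taylor_extrapolate(f, c0, c, h=1e-4, order=2):
    """First- or second-order Taylor extrapolation of f around a reference composition c0."""
    f0 = f(c0)
    d1 = (f(c0 + h) - f(c0 - h)) / (2 * h)            # central first derivative
    approx = f0 + d1 * (c - c0)
    if order == 2:
        d2 = (f(c0 + h) - 2 * f0 + f(c0 - h)) / h**2  # central second derivative
        approx += 0.5 * d2 * (c - c0) ** 2
    return approx

c0, c = 0.30, 0.33
exact = expensive_quantity(c)
for order in (1, 2):
    approx = taylor_extrapolate(expensive_quantity, c0, c, order=order)
    print(f"order {order}: absolute error = {abs(approx - exact):.2e}")
```

    Running this shows the expected behavior: the second-order expansion is markedly closer to the exact value than the first-order one for the same reference point, which is the motivation for the higher-order scheme.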

  4. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    Science.gov (United States)

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.

  5. Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.

    Science.gov (United States)

    Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro

    2010-01-01

    The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. The quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. The reproducibility standard deviation (SR) and repeatability standard deviation (Sr) of the genetically modified organism (GMO) amount (%) were more or less independent of the GM crop (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and of the evaluation procedure steps. Some studies evaluated whole steps consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, SR and Sr for GMO amount (%) are functions only of concentration, similar to the Horwitz curve. We proposed SR = 0.1971C^0.8685 and Sr = 0.1478C^0.8424, where C is the GMO amount (%). We also proposed a method performance index in GMO quantitative methods that is analogous to the Horwitz Ratio.
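
    The proposed power-law functions can be applied directly to compute the expected SR and Sr at a given GMO amount, and a HorRat-like performance index then follows as the ratio of an observed SR to the predicted one. A small sketch follows; the coefficients are those quoted in the abstract, while the exact definition of the index is assumed to mirror the Horwitz Ratio.

```python
def predicted_sd(c_gmo_percent):
    """Predicted reproducibility (SR) and repeatability (Sr) SDs of GMO amount (%) from the power laws."""
    s_R = 0.1971 * c_gmo_percent ** 0.8685
    s_r = 0.1478 * c_gmo_percent ** 0.8424
    return s_R, s_r

def performance_index(observed_s_R, c_gmo_percent):
    """HorRat-like index: observed SR divided by the SR predicted at the same concentration."""
    s_R_pred, _ = predicted_sd(c_gmo_percent)
    return observed_s_R / s_R_pred

for c in (0.5, 1.0, 5.0):
    s_R, s_r = predicted_sd(c)
    print(f"C = {c:>4} %  ->  predicted SR = {s_R:.3f} %, Sr = {s_r:.3f} %")

print("performance index:", round(performance_index(observed_s_R=0.25, c_gmo_percent=1.0), 2))
```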

  6. Precision Radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework.

    Science.gov (United States)

    Oakden-Rayner, Luke; Carneiro, Gustavo; Bessen, Taryn; Nascimento, Jacinto C; Bradley, Andrew P; Palmer, Lyle J

    2017-05-10

    Precision medicine approaches rely on obtaining precise knowledge of the true state of health of an individual patient, which results from a combination of their genetic risks and environmental exposures. This approach is currently limited by the lack of effective and efficient non-invasive medical tests to define the full range of phenotypic variation associated with individual health. Such knowledge is critical for improved early intervention, for better treatment decisions, and for ameliorating the steadily worsening epidemic of chronic disease. We present proof-of-concept experiments to demonstrate how routinely acquired cross-sectional CT imaging may be used to predict patient longevity as a proxy for overall individual health and disease status using computer image analysis techniques. Despite the limitations of a modest dataset and the use of off-the-shelf machine learning methods, our results are comparable to previous 'manual' clinical methods for longevity prediction. This work demonstrates that radiomics techniques can be used to extract biomarkers relevant to one of the most widely used outcomes in epidemiological and clinical research - mortality, and that deep learning with convolutional neural networks can be usefully applied to radiomics research. Computer image analysis applied to routinely collected medical images offers substantial potential to enhance precision medicine initiatives.

  7. Precision Distances with the Tip of the Red Giant Branch Method

    Science.gov (United States)

    Beaton, Rachael Lynn; Carnegie-Chicago Hubble Program Team

    2018-01-01

    The Carnegie-Chicago Hubble Program aims to construct a distance ladder that utilizes old stellar populations in the outskirts of galaxies to produce a high precision measurement of the Hubble Constant that is independent of Cepheids. The CCHP uses the tip of the red giant branch (TRGB) method, which is a statistical measurement technique that utilizes the termination of the red giant branch. Two innovations combine to make the TRGB a competitive route to the Hubble Constant: (i) the large-scale measurement of trigonometric parallax by the Gaia mission and (ii) the development of both precise and accurate means of determining the TRGB in both nearby (~1 Mpc) and distant (~20 Mpc) galaxies. Here I will summarize our progress in developing these standardized techniques, focusing on both our edge-detection algorithm and our field selection strategy. Using these methods, the CCHP has determined equally precise (~2%) distances to galaxies in the Local Group (< 1 Mpc) and across the Local Volume (< 20 Mpc). The TRGB is thus an incredibly powerful and straightforward means to determine distances to galaxies of any Hubble type, and has enormous potential for putting any number of astrophysical phenomena on absolute units.
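
    The edge-detection step mentioned here is, in its simplest textbook form, a first-derivative (Sobel-type) filter applied to the binned luminosity function of red giant branch stars; the magnitude bin where the filter response peaks marks the tip. The following is a deliberately simplified sketch with synthetic magnitudes and a flat luminosity function below the tip, not the CCHP pipeline, which uses smoothing and additional quality cuts:

```python
import numpy as np

rng = np.random.default_rng(1)
tip = 21.0  # "true" tip magnitude of the synthetic population
rgb = tip + rng.uniform(0.0, 3.0, 20_000)   # RGB stars fainter than the tip (flat LF for simplicity)
agb = tip - rng.uniform(0.0, 1.5, 600)      # sparse contaminants brighter than the tip
mags = np.concatenate([rgb, agb])

bins = np.arange(19.5, 25.0, 0.05)
lf, edges = np.histogram(mags, bins=bins)   # binned luminosity function
centers = 0.5 * (edges[:-1] + edges[1:])

response = np.gradient(lf.astype(float))    # discrete first-derivative (Sobel-like) filter
print(f"recovered tip magnitude: {centers[np.argmax(response)]:.2f} (input {tip:.2f})")
```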

  8. Accuracy and Precision of a Plane Wave Vector Flow Imaging Method in the Healthy Carotid Artery

    DEFF Research Database (Denmark)

    Jensen, Jonas; Villagómez Hoyos, Carlos Armando; Traberg, Marie Sand

    2018-01-01

    The objective of the study described here was to investigate the accuracy and precision of a plane wave 2-D vector flow imaging (VFI) method in laminar and complex blood flow conditions in the healthy carotid artery. The approach was to study (i) the accuracy for complex flow by comparing...... of laminar flow in vivo. The precision in vivo was calculated as the mean standard deviation (SD) of estimates aligned to the heart cycle and was highest in the center of the common carotid artery (SD = 3.6% for velocity magnitudes and 4.5° for angles) and lowest in the external branch and for vortices (SD...... the velocity field from a computational fluid dynamics (CFD) simulation to VFI estimates obtained from the scan of an anthropomorphic flow phantom and from an in vivo scan; (ii) the accuracy for laminar unidirectional flow in vivo by comparing peak systolic velocities from VFI with magnetic resonance...

  9. A high precision method for quantitative measurements of reactive oxygen species in frozen biopsies.

    Directory of Open Access Journals (Sweden)

    Kirsti Berg

    OBJECTIVE: An electron paramagnetic resonance (EPR) technique using the spin probe cyclic hydroxylamine 1-hydroxy-3-methoxycarbonyl-2,2,5,5-tetramethylpyrrolidine (CMH) was introduced as a versatile method for high precision quantification of reactive oxygen species, including the superoxide radical, in frozen biological samples such as cell suspensions, blood or biopsies. MATERIALS AND METHODS: Loss of measurement precision and accuracy due to variations in sample size and shape was minimized by assembling the sample in a well-defined volume. Measurement was carried out at low temperature (150 K) using a nitrogen flow Dewar. The signal intensity was measured from the EPR 1st derivative amplitude, and related to a sample of 3-carboxy-proxyl (CP•) with known spin concentration. RESULTS: The absolute spin concentration could be quantified with a precision and accuracy better than ±10 µM (k = 1). The spin concentration of samples stored at -80°C could be reproduced after 6 months of storage well within the same error estimate. CONCLUSION: The absolute spin concentration in wet biological samples such as biopsies, water solutions and cell cultures could be quantified with higher precision and accuracy than normally achievable using common techniques such as flat cells, tissue cells and various capillary tubes. In addition, biological samples could be collected and stored for future incubation with the spin probe, and further stored for up to at least six months before EPR analysis, without loss of signal intensity. This opens up the possibility of storing and transporting incubated biological samples with known accuracy of the spin concentration over time.
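
    Quantitation in such a scheme reduces to comparing the first-derivative EPR amplitude of the sample with that of a reference sample of known spin concentration measured under identical conditions. The sketch below uses a plain amplitude ratio with made-up numbers; in practice double integration or amplitude scaling by linewidth may be preferred, and the exact procedure of the paper may differ.

```python
def spin_concentration(sample_amplitude, ref_amplitude, ref_concentration_uM):
    """Estimate spin concentration from the ratio of 1st-derivative EPR amplitudes.

    Assumes the sample and the reference (e.g. CP*) are measured with identical geometry,
    instrument settings and comparable linewidths.
    """
    return ref_concentration_uM * sample_amplitude / ref_amplitude

# Hypothetical readings
c = spin_concentration(sample_amplitude=4.2e4, ref_amplitude=6.0e4, ref_concentration_uM=100.0)
print(f"estimated spin concentration: {c:.1f} uM")
```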

  10. [Accuracy, precision and speed of parenteral nutrition admixture bags manufacturing: comparison between automated and manual methods].

    Science.gov (United States)

    Zegbeh, H; Pirot, F; Quessada, T; Durand, T; Vételé, F; Rose, A; Bréant, V; Aulagner, G

    2011-01-01

    The parenteral nutrition admixture (PNA) manufacturing in hospital pharmacy is realized by aseptic transfer (AT) or sterilizing filtration (SF). The development of filling systems for PNA manufacturing requires, in the absence of a standard, an evaluation against the traditional SF method. The filling accuracy of automated AT and SF was evaluated by mass and physical-chemistry tests under repeatability conditions (identical composition of PNA; n=five bags) and reproducibility conditions (different compositions of PNA; n=57 bags). For each manufacturing method, the filling precision and the average time for PNA bag manufacturing were evaluated starting from a PNA of identical composition and volume (n=five trials). The two manufacturing methods did not show a significant difference in accuracy. The precision of both methods was lower than the limits generally admitted for acceptability of mass and physical-chemistry tests. However, the manufacturing time for SF was longer (five different binary admixtures in five bags) or shorter (one identical binary admixture in five bags) than the time recorded for automated AT. We show that serial manufacturing of PNA bags of identical composition by SF is faster than automated AT. Nevertheless, automated AT is faster than SF for PNA of variable composition. The choice of manufacturing method will be motivated by the nature (i.e., variable composition or not) of the manufactured bags. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  11. Using cold deformation methods in flow-production of steel high precision shaped sections

    International Nuclear Information System (INIS)

    Zajtsev, M.L.; Makhnev, I.F.; Shkurko, I.I.

    1975-01-01

    A final size with a preset tolerance and the required surface finish of steel high-precision sections can be achieved by cold deformation of hot-rolled ingots: by drawing through dismountable, monolithic or roller-type drawing tools, or by cold rolling in roller dies. The particularities of both techniques are compared for a number of complicated shaped sections, and the advantages of cold rolling are shown: a more uniform distribution of deformation (strain hardening) across the section, that is, a greater margin of plasticity at the same reductions, and a smaller number of operations required. Rolling is recommended in all cases where the section shape and the production volume make it possible. The rolling mill for the calibration of high-precision sections should have no less than two shafts (so that the size can be controlled in both directions) and arrangements to withstand high axial stresses on the rollers (the stresses appearing during rolling in skew dies). When manufacturing precise shaped sections by the cold rolling method, fewer operations are required than in cold drawing manufacturing.

  12. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database.

    Science.gov (United States)

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens CO. (EBSCO), which has presented different search methods that can be used based on the needs of the users. Visual search and MeSH-controlled search methods are among the most common methods. The goal of this research was to compare the precision of the retrieved sources in the EBSCO Medline base using MeSH-controlled and visual search methods. This research was a semi-empirical study. By holding training workshops, 70 students of higher education in different educational departments of Kashan University of Medical Sciences were taught MeSH-controlled and visual search methods in 2012. Then, the precision of 300 searches made by these students was calculated based on Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent sample t-test, and the three precisions obtained with the three precision formulas were studied for the two search methods. The mean precision of the visual method was greater than that of the MeSH-controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and their mean precisions were significantly different (P<0.05). Fifty-three percent of the participants in the research also mentioned that the use of the combination of the two methods produced better results. For users, it is more appropriate to use a natural language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual method allowed them more freedom of action.

  13. A method of precise profile analysis of diffuse scattering for the KENS pulsed neutrons

    International Nuclear Information System (INIS)

    Todate, Y.; Fukumura, T.; Fukazawa, H.

    2001-01-01

    An outline of our profile analysis method, which is now of practical use for the asymmetric KENS pulsed thermal neutrons, is presented. The analysis of the diffuse scattering from a single crystal of D2O is shown as an example. The pulse shape function is based on the Ikeda-Carpenter function adjusted for the KENS neutron pulses. The convoluted intensity is calculated by a Monte-Carlo method and the precision of the calculation is controlled. Fitting parameters in the model cross section can be determined by the built-in nonlinear least-squares fitting procedure. Because this method is the natural extension of the procedure conventionally used for triple-axis data, it is easy to apply with generality and versatility. Most importantly, furthermore, this method has the capability of precisely correcting the time shift of the observed peak position, which is inevitably caused in the case of highly asymmetric pulses and a broad scattering function. It will be pointed out that the accurate determination of the true time-of-flight is important especially in single crystal inelastic experiments. (author)

  14. A rapid and specific titrimetric method for the precise determination of plutonium using redox indicator

    International Nuclear Information System (INIS)

    Chitnis, R.T.; Dubey, S.C.

    1976-01-01

    A simple and rapid method for the determination of plutonium in plutonium nitrate solution and its application to the purex process solutions is discussed. The method involves the oxidation of plutonium to Pu(VI) with the help of argentic oxide, followed by the destruction of the excess argentic oxide by means of sulphamic acid. The determination of plutonium is completed by adding ferrous ammonium sulphate solution, which reduces Pu(VI) to Pu(IV), and titrating the excess ferrous with standard potassium dichromate solution using sodium diphenylamine sulphonate as the internal indicator. The effect of the various reagents added during the oxidation and reduction of plutonium on the final titration has been investigated. The method works satisfactorily for the analysis of plutonium in the range of 0.5 to 5 mg. The precision of the method is found to be within 0.1%. (author)

  15. Precision of a new bedside method for estimation of the circulating blood volume

    DEFF Research Database (Denmark)

    Christensen, P; Eriksen, B; Henneberg, S W

    1993-01-01

    The present study is a theoretical and experimental evaluation of a modification of the carbon monoxide method for estimation of the circulating blood volume (CBV) with respect to the precision of the method. The CBV was determined from measurements of the CO-saturation of hemoglobin before...... ventilation with the CO gas mixture. The amount of CO administered during each determination of CBV resulted in an increase in the CO saturation of hemoglobin of 2.1%-3.9%. A theoretical noise propagation analysis was performed by means of the Monte Carlo method. The analysis showed that a CO dose...... patients. The coefficients of variation were 6.2% and 4.7% in healthy and diseased subjects, respectively. Furthermore, the day-to-day variation of the method with respect to the total amount of circulating hemoglobin (nHb) and CBV was determined from duplicate estimates separated by 24-48 h. In conclusion...

  16. High-precision terahertz frequency modulated continuous wave imaging method using continuous wavelet transform

    Science.gov (United States)

    Zhou, Yu; Wang, Tianyi; Dai, Bing; Li, Wenjun; Wang, Wei; You, Chengwu; Wang, Kejia; Liu, Jinsong; Wang, Shenglie; Yang, Zhengang

    2018-02-01

    Inspired by the extensive application of terahertz (THz) imaging technologies in the field of aerospace, we exploit a THz frequency modulated continuous-wave imaging method with a continuous wavelet transform (CWT) algorithm to detect a multilayer heat shield made of special materials. This method uses the frequency-modulated continuous-wave system to capture the reflected THz signal and then processes the image data by the CWT with different basis functions. By calculating the sizes of the defect areas in the final images and then comparing the results with the real samples, a practical high-precision THz imaging method is demonstrated. Our method can be an effective tool for the THz nondestructive testing of composites, drugs, and some cultural heritage objects.
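
    A continuous wavelet transform of a reflected trace can be sketched in plain NumPy by convolving the signal with scaled copies of a mother wavelet. The Ricker ("Mexican hat") wavelet below is used purely as an example basis function and the trace is synthetic; the paper compares several basis functions on measured FMCW data.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) mother wavelet sampled on `points` samples at scale `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2) * 2 / (np.sqrt(3 * a) * np.pi ** 0.25)

def cwt(signal, scales, wavelet=ricker):
    """Continuous wavelet transform: one convolution of the signal per scale."""
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        w = wavelet(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, w, mode="same")
    return out

# Toy reflected trace: two echoes (layer interfaces) plus noise
t = np.linspace(0, 1, 2000)
trace = np.exp(-((t - 0.3) / 0.005) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.01) ** 2)
trace += 0.05 * np.random.default_rng(0).normal(size=t.size)

coeffs = cwt(trace, scales=np.arange(2, 40))
peak_scale, peak_pos = np.unravel_index(np.abs(coeffs).argmax(), coeffs.shape)
print(f"strongest response at scale index {peak_scale}, sample {peak_pos}")
```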

  17. Methods for estimating disease transmission rates: Evaluating the precision of Poisson regression and two novel methods

    DEFF Research Database (Denmark)

    Kirkeby, Carsten Thure; Hisham Beshara Halasa, Tariq; Gussmann, Maya Katrin

    2017-01-01

    the transmission rate. We use data from the two simulation models and vary the sampling intervals and the size of the population sampled. We devise two new methods to determine transmission rate, and compare these to the frequently used Poisson regression method in both epidemic and endemic situations. For most...... tested scenarios these new methods perform similar or better than Poisson regression, especially in the case of long sampling intervals. We conclude that transmission rate estimates are easily biased, which is important to take into account when using these rates in simulation models....

  18. On Error Estimation in the Conjugate Gradient Method and why it Works in Finite Precision Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Tichý, Petr

    2002-01-01

    Vol. 13, - (2002), pp. 56-80 ISSN 1068-9613 R&D Projects: GA ČR GA201/02/0595 Institutional research plan: AV0Z1030915 Keywords: conjugate gradient method * Gauss quadrature * evaluation of convergence * error bounds * finite precision arithmetic * rounding errors * loss of orthogonality Subject RIV: BA - General Mathematics Impact factor: 0.565, year: 2002 http://etna.mcs.kent.edu/volumes/2001-2010/vol13/abstract.php?vol=13&pages=56-80

  19. Research on a high-precision calibration method for tunable lasers

    Science.gov (United States)

    Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai

    2018-03-01

    Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it, while the beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points, with these points being used as wavelength counters to resample the comb signal to correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental result shows that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence or absence of the comb filter, with the result showing that the introduction of the comb filter results in a 15-fold wavelength resolution enhancement.

  20. Development and evaluation of event-specific quantitative PCR method for genetically modified soybean A2704-12.

    Science.gov (United States)

    Takabatake, Reona; Akiyama, Hiroshi; Sakata, Kozue; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Teshima, Reiko; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi

    2011-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event; A2704-12. During the plant transformation, DNA fragments derived from pUC19 plasmid were integrated in A2704-12, and the region was found to be A2704-12 specific. The pUC19-derived DNA sequences were used as primers for the specific detection of A2704-12. We first tried to construct a standard plasmid for A2704-12 quantification using pUC19. However, non-specific signals appeared with both qualitative and quantitative PCR analyses using the specific primers with pUC19 as a template, and we then constructed a plasmid using pBR322. The conversion factor (C(f)), which is required to calculate the amount of the genetically modified organism (GMO), was experimentally determined with two real-time PCR instruments, the Applied Biosystems 7900HT and the Applied Biosystems 7500. The determined C(f) values were both 0.98. The quantitative method was evaluated by means of blind tests in multi-laboratory trials using the two real-time PCR instruments. The limit of quantitation for the method was estimated to be 0.1%. The trueness and precision were evaluated as the bias and reproducibility of relative standard deviation (RSD(R)), and the determined bias and RSD(R) values for the method were each less than 20%. These results suggest that the developed method would be suitable for practical analyses for the detection and quantification of A2704-12.
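
    In this family of event-specific methods the GMO amount is computed from the ratio of event-specific to taxon-specific reference-gene copy numbers, normalized by the experimentally determined conversion factor C(f). A rough sketch of that final calculation is given below; the copy numbers are assumed to come from standard-curve quantitation and are illustrative only, and the exact quantitation workflow of the published method may differ in detail.

```python
def gmo_amount_percent(event_copies, reference_gene_copies, cf):
    """GMO amount (%) from event-specific and reference-gene copy numbers and the conversion factor Cf."""
    return (event_copies / reference_gene_copies) / cf * 100.0

# Hypothetical quantitation of one test sample (copies per reaction from plasmid standard curves)
amount = gmo_amount_percent(event_copies=920.0, reference_gene_copies=95_000.0, cf=0.98)
print(f"estimated GMO amount: {amount:.2f} %")
```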

  1. Estimation of Bouguer Density Precision: Development of Method for Analysis of La Soufriere Volcano Gravity Data

    Directory of Open Access Journals (Sweden)

    Hendra Gunawan

    2014-06-01

    http://dx.doi.org/10.17014/ijog.vol3no3.20084 The precision of topographic density (Bouguer density) estimation by the Nettleton approach is based on a minimum correlation of the Bouguer gravity anomaly and topography. The other method, the Parasnis approach, is based on a minimum correlation of the Bouguer gravity anomaly and the Bouguer correction. The precision of Bouguer density estimates was investigated by both methods on simple 2D synthetic models, under the assumption of a free-air anomaly consisting of an effect of topography, an intracrustal effect, and an isostatic compensation. Based on the simulation results, Bouguer density estimates were then investigated for a gravity survey of 2005 on the La Soufriere Volcano-Guadeloupe area (Antilles Islands). The Bouguer density based on the Parasnis approach is 2.71 g/cm3 for the whole area, except the edifice area, where the average topography density estimate is 2.21 g/cm3; Bouguer density estimates from the previous gravity survey of 1975 are 2.67 g/cm3. The uncertainty of the Bouguer density in La Soufriere Volcano was estimated to be 0.1 g/cm3. For the studied area, the density deduced from refraction seismic data is coherent with the recent Bouguer density estimates. A new Bouguer anomaly map based on these Bouguer density values allows a better geological interpretation.
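
    The Nettleton criterion described above can be reproduced by scanning candidate densities, recomputing a simple-slab Bouguer anomaly for each, and selecting the density whose anomaly is least correlated with topography. A simplified sketch with a synthetic profile follows; it uses only the infinite-slab correction (0.04193 mGal per metre per g/cm3) and no terrain correction, so it is an illustration of the principle rather than the survey processing itself.

```python
import numpy as np

def nettleton_density(free_air_mgal, elevation_m, densities=np.arange(1.8, 3.2, 0.01)):
    """Return the density (g/cm3) minimizing |correlation| between Bouguer anomaly and topography."""
    correlations = []
    for rho in densities:
        bouguer = free_air_mgal - 0.04193 * rho * elevation_m   # infinite-slab Bouguer correction, mGal
        correlations.append(abs(np.corrcoef(bouguer, elevation_m)[0, 1]))
    return densities[int(np.argmin(correlations))]

# Synthetic test: station elevations and a free-air anomaly built with a "true" density of 2.67 g/cm3
rng = np.random.default_rng(0)
h = 200.0 + 150.0 * np.sin(np.linspace(0, 4 * np.pi, 120)) + rng.normal(0, 5, 120)   # elevations, m
fa = 0.04193 * 2.67 * h + rng.normal(0, 0.3, 120)                                    # free-air anomaly, mGal
print(f"estimated Bouguer density: {nettleton_density(fa, h):.2f} g/cm3")
```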

  2. How precisely can the difference method determine the $\\pi$NN coupling constant?

    CERN Document Server

    Loiseau, B

    2000-01-01

    The Coulomb-like backward peak of the neutron-proton scattering differential cross section is due to one-pion exchange. Extrapolation to the pion pole of precise data should allow to obtain the value of the charged pion-nucleon coupling constant. This was classically attempted by the use of a smooth physical function, the Chew function, built from the cross section. To improve the accuracy of such an extrapolation, a difference method has been introduced. It consists of extrapolating the difference between the Chew function based on experimental data and that built from a model where the pion-nucleon coupling is exactly known. Here we cross-check to which precision this novel extrapolation method can work by applying it to differences between models and between data and models. With good reference models and for the 162 MeV neutron-proton Uppsala single-energy precise data with a normalisation error of 2.3%, the value of the charged pion-nucleon coupling constant is obtained with an accuracy close to 1.8%.

  3. Novel Methods to Enhance Precision and Reliability in Muscle Synergy Identification during Walking

    Science.gov (United States)

    Kim, Yushin; Bulea, Thomas C.; Damiano, Diane L.

    2016-01-01

    Muscle synergies are hypothesized to reflect modular control of muscle groups via descending commands sent through multiple neural pathways. Recently, the number of synergies has been reported as a functionally relevant indicator of motor control complexity in individuals with neurological movement disorders. Yet the number of synergies extracted during a given activity, e.g., gait, varies within and across studies, even for unimpaired individuals. With no standardized methods for precise determination, this variability remains unexplained making comparisons across studies and cohorts difficult. Here, we utilize k-means clustering and intra-class and between-level correlation coefficients to precisely discriminate reliable from unreliable synergies. Electromyography (EMG) was recorded bilaterally from eight leg muscles during treadmill walking at self-selected speed. Muscle synergies were extracted from 20 consecutive gait cycles using non-negative matrix factorization. We demonstrate that the number of synergies is highly dependent on the threshold when using the variance accounted for by reconstructed EMG. Beyond use of threshold, our method utilized a quantitative metric to reliably identify four or five synergies underpinning walking in unimpaired adults and revealed synergies having poor reproducibility that should not be considered as true synergies. We show that robust and unreliable synergies emerge similarly, emphasizing the need for careful analysis in those with pathology. PMID:27695403
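
    The underlying extraction step, non-negative matrix factorization of the EMG envelope matrix followed by a variance-accounted-for (VAF) check for each candidate number of synergies, can be sketched as follows. Random data stand in for processed EMG envelopes, and the clustering-based reliability analysis that is the paper's main contribution is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
emg = np.abs(rng.normal(size=(8, 2000)))   # stand-in for 8 muscles x time samples of rectified, smoothed EMG

def vaf(original, reconstructed):
    """Overall variance accounted for by the reconstruction."""
    return 1.0 - np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)

for n_syn in range(1, 7):
    model = NMF(n_components=n_syn, init="nndsvda", max_iter=500, random_state=0)
    weights = model.fit_transform(emg)             # muscle weightings (8 x n_syn)
    activations = model.components_                # temporal activation coefficients (n_syn x time)
    print(f"{n_syn} synergies: VAF = {vaf(emg, weights @ activations):.3f}")
```

    As the abstract notes, choosing the synergy count from a single VAF threshold is fragile; the point of the reported method is to add a clustering and correlation step on top of repeated extractions to separate reliable synergies from spurious ones.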

  4. High-precision solution to the moving load problem using an improved spectral element method

    Science.gov (United States)

    Wen, Shu-Rui; Wu, Zhi-Jing; Lu, Nian-Li

    2018-02-01

    In this paper, the spectral element method (SEM) is improved to solve the moving load problem. In this method, a structure with uniform geometry and material properties is considered as a spectral element, which means that the element number and the degree of freedom can be reduced significantly. Based on the variational method and the Laplace transform theory, the spectral stiffness matrix and the equivalent nodal force of the beam-column element are established. The static Green function is employed to deduce the improved function. The proposed method is applied to two typical engineering practices—the one-span bridge and the horizontal jib of the tower crane. The results have revealed the following. First, the new method can yield extremely high-precision results of the dynamic deflection, the bending moment and the shear force in the moving load problem. In most cases, the relative errors are smaller than 1%. Second, by comparing with the finite element method, one can obtain the highly accurate results using the improved SEM with smaller element numbers. Moreover, the method can be widely used for statically determinate as well as statically indeterminate structures. Third, the dynamic deflection of the twin-lift jib decreases with the increase in the moving load speed, whereas the curvature of the deflection increases. Finally, the dynamic deflection, the bending moment and the shear force of the jib will all increase as the magnitude of the moving load increases.

  5. Sensitive method for precise measurement of endogenous angiotensins I, II and III in human plasma

    International Nuclear Information System (INIS)

    Kawamura, M.; Yoshida, K.; Akabane, S.

    1987-01-01

    We measured endogenous angiotensins (ANGs) I, II and III using a system of extraction by Sep-Pak column followed by high performance liquid chromatography (HPLC) combined with radioimmunoassay (RIA). An excellent separation of ANGs was obtained by HPLC. The recovery of ANGs I, II and III was 80-84%, when these authentic peptides were added to 6 ml of plasma. The coefficient of variation of the ANGs was 0.04-0.09 for intra-assay and 0.08-0.13 for inter-assay, thereby indicating a good reproducibility. Plasma ANGs I, II and III measured by this method in 5 normal volunteers were 51, 4.5 and 1.2 pg/ml. In the presence of captopril, ANGs II and III decreased by 84% and 77%, respectively, while ANG I increased 5.1 times. This method is therefore useful to assess the precise levels of plasma ANGs.

  6. Methods used by Elsam for monitoring precision and accuracy of analytical results

    Energy Technology Data Exchange (ETDEWEB)

    Hinnerskov Jensen, J [Soenderjyllands Hoejspaendingsvaerk, Faelleskemikerne, Aabenraa (Denmark)

    1996-12-01

    Performing round robins at regular intervals is the primary method used by Elsam for monitoring precision and accuracy of analytical results. The first round robin was started in 1974, and today 5 round robins are running. These are focused on: boiler water and steam, lubricating oils, coal, ion chromatography and dissolved gases in transformer oils. Besides the power plant laboratories in Elsam, the participants are power plant laboratories from the rest of Denmark, industrial and commercial laboratories in Denmark, and finally foreign laboratories. The calculated standard deviations or reproducibilities are compared with acceptable values. These values originate from ISO, ASTM and the like, or from our own experience. Besides providing the laboratories with a tool to check their momentary performance, the round robins are very suitable for evaluating systematic developments on a long-term basis. By splitting up the uncertainty according to methods, sample preparation/analysis, etc., knowledge can be extracted from the round robins for use in many other situations. (au)

  7. An Image Analysis Method for the Precise Selection and Quantitation of Fluorescently Labeled Cellular Constituents

    Science.gov (United States)

    Agley, Chibeza C.; Velloso, Cristiana P.; Lazarus, Norman R.

    2012-01-01

    The accurate measurement of the morphological characteristics of cells with nonuniform conformations presents difficulties. We report here a straightforward method using immunofluorescent staining and the commercially available imaging program Adobe Photoshop, which allows objective and precise information to be gathered on irregularly shaped cells. We have applied this measurement technique to the analysis of human muscle cells and their immunologically marked intracellular constituents, as these cells are prone to adopting a highly branched phenotype in culture. Use of this method can be used to overcome many of the long-standing limitations of conventional approaches for quantifying muscle cell size in vitro. In addition, wider applications of Photoshop as a quantitative and semiquantitative tool in immunocytochemistry are explored. PMID:22511600

  8. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    Science.gov (United States)

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are decided by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back projection model, the coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique.

  9. The Ramsey method in high-precision mass spectrometry with Penning traps Experimental results

    CERN Document Server

    George, S; Herfurth, F; Herlert, A; Kretzschmar, M; Nagy, S; Schwarz, S; Schweikhard, L; Yazidjian, C

    2007-01-01

    The highest precision in direct mass measurements is obtained with Penning trap mass spectrometry. Most experiments use the interconversion of the magnetron and cyclotron motional modes of the stored ion due to excitation by external radiofrequency-quadrupole fields. In this work a new excitation scheme, Ramsey's method of time-separated oscillatory fields, has been successfully tested. It has been shown to reduce significantly the uncertainty in the determination of the cyclotron frequency and thus of the ion mass of interest. The theoretical description of the ion motion excited with Ramsey's method in a Penning trap and subsequently the calculation of the resonance line shapes for different excitation times, pulse structures, and detunings of the quadrupole field has been carried out in a quantum mechanical framework and is discussed in detail in the preceding article in this journal by M. Kretzschmar. Here, the new excitation technique has been applied with the ISOLTRAP mass spectrometer at ISOLDE/CERN fo...

  10. A High Precision Comprehensive Evaluation Method for Flood Disaster Loss Based on Improved Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yuliang; LU Guihua; JIN Juliang; TONG Fang; ZHOU Ping

    2006-01-01

    Precise comprehensive evaluation of flood disaster loss is significant for the prevention and mitigation of flood disasters. Here, one of the difficulties involved is how to establish a model capable of describing the complex relation between the input and output data of the system of flood disaster loss. Genetic programming (GP) solves problems by using ideas from genetic algorithms and generates computer programs automatically. In this study a new method named the evaluation of the grade of flood disaster loss (EGFD) on the basis of improved genetic programming (IGP) is presented (IGP-EGFD). The flood disaster area and the direct economic loss are taken as the evaluation indexes of flood disaster loss. Obviously, the larger the evaluation index value, the larger the corresponding grade of flood disaster loss. Consequently, the IGP code is designed to make the grade of flood disaster loss an increasing function of the index value. The result of the application of the IGP-EGFD model to Henan Province shows that a good function expression can be obtained within a larger searched function space; the model is of high precision and considerable practical significance. Thus, IGP-EGFD can be widely used in automatic modeling and other evaluation systems.

  11. Soil chemical sensor and precision agricultural chemical delivery system and method

    Science.gov (United States)

    Colburn, Jr., John W.

    1991-01-01

    A real-time soil chemical sensor and precision agricultural chemical delivery system includes a plurality of ground-engaging tools in association with individual soil sensors which measure soil chemical levels. The system includes the addition of a solvent which rapidly saturates the soil/tool interface to form a conductive solution of chemicals leached from the soil. A multivalent electrode, positioned within a multivalent frame of the ground-engaging tool, applies a voltage or impresses a current between the electrode and the tool frame. A real-time soil chemical sensor and controller senses the electrochemical reaction resulting from the application of the voltage or current to the leachate, measures it by resistivity methods, and compares it against pre-set resistivity levels for substances leached by the solvent. Still greater precision is obtained by calibrating for the secondary current impressed through solvent-less soil. The appropriate concentration is then found and the servo-controlled delivery system applies the appropriate amount of fertilizer or agricultural chemicals substantially in the location from which the soil measurement was taken.
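
    A minimal sketch of the sensing-to-dosing logic described above: a measured leachate resistivity is compared against a pre-set calibration and converted into an application rate. The calibration table, target level and units are invented placeholders, not values from the patent.

```python
import numpy as np

# Hypothetical calibration: leachate resistivity (ohm*m) versus residual
# nitrate level (kg/ha).  Values are illustrative only.
calib_resistivity = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # ohm*m
calib_nitrate     = np.array([120.0, 80.0, 45.0, 20.0, 5.0])  # kg/ha

TARGET_NITRATE = 100.0  # assumed agronomic target, kg/ha

def fertilizer_rate(measured_resistivity):
    """Estimate residual nitrate from resistivity and return the make-up dose."""
    residual = np.interp(measured_resistivity, calib_resistivity, calib_nitrate)
    return max(TARGET_NITRATE - residual, 0.0)

print(fertilizer_rate(25.0))  # dose to apply at this sensing point, kg/ha
```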

  12. A Method for Precision Closed-Loop Irrigation Using a Modified PID Control Algorithm

    Science.gov (United States)

    Goodchild, Martin; Kühn, Karl; Jenkins, Malcolm; Burek, Kazimierz; Dutton, Andrew

    2016-04-01

    The benefits of closed-loop irrigation control have been demonstrated in grower trials which show the potential for improved crop yields and resource usage. Managing water use by controlling irrigation in response to soil moisture changes to meet crop water demands is a popular approach but requires knowledge of closed-loop control practice. In theory, to obtain precise closed-loop control of a system it is necessary to characterise every component in the control loop to derive the appropriate controller parameters, i.e. proportional, integral & derivative (PID) parameters in a classic PID controller. In practice this is often difficult to achieve. Empirical methods are employed to estimate the PID parameters by observing how the system performs under open-loop conditions. In this paper we present a modified PID controller, with a constrained integral function, that delivers excellent regulation of soil moisture by supplying the appropriate amount of water to meet the needs of the plant during the diurnal cycle. Furthermore, the modified PID controller responds quickly to changes in environmental conditions, including rainfall events which can otherwise result in controller windup, under-watering and plant stress. The experimental work successfully demonstrates the functionality of a constrained integral PID controller that delivers robust and precise irrigation control. Coir substrate strawberry growing trial data are also presented, illustrating soil moisture control and the ability to match water delivery to solar radiation.
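
    A minimal sketch of a PID loop with a clamped (constrained) integral term, one common way to limit windup; the gains, limits and units are assumptions, and the paper's exact constraint scheme may differ.

```python
class ConstrainedPID:
    """PID controller whose integral term is clamped to a fixed range."""
    def __init__(self, kp, ki, kd, integral_limits=(0.0, 1.0)):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = integral_limits
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += self.ki * error * dt
        # Constrain the integral so rainfall or sensor steps cannot wind it up.
        self.integral = min(max(self.integral, self.i_min), self.i_max)
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.integral + self.kd * derivative

# Hypothetical use: drive irrigation valve duty cycle from soil moisture (% vol).
pid = ConstrainedPID(kp=0.8, ki=0.05, kd=0.0, integral_limits=(0.0, 0.5))
duty = pid.update(setpoint=30.0, measurement=27.5, dt=60.0)
```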

  13. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased.
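
    One way to realize the curve fit described above, assuming the paralyzable model m = n·exp(−nτ); the rate arrays are placeholders and the fixed-point inversion of the model is a simplification of my own, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def true_total_rate(m_tot, tau, iterations=100):
    """Invert the paralyzable model m = n*exp(-n*tau) for the true rate n by
    fixed-point iteration (valid below the paralyzable peak, n*tau < 1)."""
    m = np.asarray(m_tot, dtype=float)
    n = m.copy()
    for _ in range(iterations):
        n = m * np.exp(n * tau)
    return n

def observed_reference(m_tot, r_true, tau):
    """Observed reference-source rate predicted from the observed total rate."""
    return r_true * np.exp(-true_total_rate(m_tot, tau) * tau)

# Placeholder measurements: observed total rate and observed reference rate.
m_tot = np.array([20e3, 50e3, 100e3, 150e3, 200e3])      # counts/s
r_obs = np.array([980.0, 940.0, 880.0, 825.0, 775.0])    # counts/s

(r_true_fit, tau_fit), _ = curve_fit(observed_reference, m_tot, r_obs,
                                     p0=(1000.0, 1e-6))
# Correction factor at each observed total rate: true reference rate / fit.
correction = r_true_fit / observed_reference(m_tot, r_true_fit, tau_fit)
```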

  14. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    Science.gov (United States)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. A GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming, making the simultaneous and real-time analysis of GPS data from hundreds or thousands of ground stations particularly difficult. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied to GPS seismology. One available method is real-time precise point positioning (PPP) relying on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs, and then displacements are obtained by a single integration of the delta positions. This approach does not suffer from convergence process, but the single integration from delta positions to
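
    The integration step of the variometric idea reduces to a cumulative sum of the per-epoch position changes, optionally detrended to suppress drift; this minimal sketch assumes the delta positions have already been estimated from time-differenced carrier phases, which is the substantive part of the approach.

```python
import numpy as np

def displacements_from_deltas(delta_positions, dt, detrend=True):
    """Single integration of epoch-to-epoch position changes: the cumulative
    sum of the deltas gives the displacement history.  A best-fit linear trend
    can be removed from each coordinate to suppress the drift accumulated
    from small systematic errors in the deltas."""
    disp = np.cumsum(np.asarray(delta_positions, dtype=float), axis=0)
    if detrend:
        t = np.arange(disp.shape[0]) * dt
        for k in range(disp.shape[1]):
            slope, offset = np.polyfit(t, disp[:, k], 1)
            disp[:, k] -= slope * t + offset
    return disp
```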

  15. Study of the nanoporous CHAP photoluminiscence for developing the precise methods of early caries detection

    Science.gov (United States)

    Goloshchapov, D.; Seredin, P.; Minakov, D.; Domashevskaya, E.

    2018-02-01

    This paper deals with the luminescence characteristics of an analogue of the mineral component of dental enamel of the nanocrystalline B-type carbonate-substituted hydroxyapatite (CHAP) with 3D defects (i.e. nanopores of ∼2-5 nm) on the nanocrystalline surface. The laser-induced luminescence of the synthesized CHAP samples was in the range of ∼515 nm (∼2.4 eV) and is due to CO3 groups replacing the PO4 group. It was found that the intensity of the luminescence of the CHAP is caused by structurally incorporated CO3 groups in the HAP structure. Furthermore, the intensity of the luminescence also decreases as the number of the above intracentre defects (CO3) in the apatite structure declines. These results are potentially promising for developing the foundations for precise methods for the early detection of caries in human solid dental tissue.

  16. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    International Nuclear Information System (INIS)

    Gleisberg, Tanju

    2008-01-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key for the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from those new developments, improving the precision and the efficiency. Part II was addressed to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of a NLO calculation. To calculate the correction for a m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  17. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    Energy Technology Data Exchange (ETDEWEB)

    Gleisberg, Tanju

    2008-07-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key for the systematic improvement of precision and confidence for theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels for the Higgs boson search at LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Further, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from those new developments, improving the precision and the efficiency. Part II was addressed to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of a NLO calculation. To calculate the correction for a m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  18. Research for developing precise tsunami evaluation methods. Probabilistic tsunami hazard analysis/numerical simulation method with dispersion and wave breaking

    International Nuclear Information System (INIS)

    2007-01-01

    The present report introduces the main results of investigations on precise tsunami evaluation methods, which were carried out from the viewpoint of safety evaluation for nuclear power facilities and deliberated by the Tsunami Evaluation Subcommittee. A framework for probabilistic tsunami hazard analysis (PTHA) based on a logic tree is proposed, and a calculation for the Pacific side of northeastern Japan is performed as a case study. Tsunami motions with dispersion and wave breaking were investigated both experimentally and numerically. The numerical simulation method is verified for its practicability by application to a historical tsunami. Tsunami force is also investigated, and formulae for the tsunami pressure acting on breakwaters and on buildings due to an inundating tsunami are proposed. (author)
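
    The logic-tree aggregation at the heart of a PTHA can be illustrated with a small weighted combination of branch hazard curves; the branch curves and weights below are invented placeholders, not results from the report.

```python
import numpy as np

# Hypothetical logic-tree branches: each row is an annual exceedance
# probability curve evaluated on the same grid of tsunami heights.
heights = np.array([1.0, 2.0, 4.0, 8.0])                  # m
branch_curves = np.array([[1e-2, 3e-3, 5e-4, 2e-5],
                          [2e-2, 6e-3, 1e-3, 8e-5],
                          [5e-3, 1e-3, 1e-4, 5e-6]])
branch_weights = np.array([0.5, 0.3, 0.2])                 # must sum to 1

mean_curve = branch_weights @ branch_curves                # weighted-mean hazard
# Fractile curves (e.g. the 84th percentile) follow from the weighted
# distribution of the branch values at each height.
```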

  19. Methods for semi-automated indexing for high precision information retrieval

    Science.gov (United States)

    Berrios, Daniel C.; Cucina, Russell J.; Fagan, Lawrence M.

    2002-01-01

    OBJECTIVE: To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. DESIGN: Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. PARTICIPANTS: Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. MEASUREMENTS: Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. RESULTS: Compared with manual methods, ISAID decreased indexing times greatly. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65% with a mean of 41%, 31%, and 40% for each of three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). SUMMARY: Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to increased indexing speed and accuracy.

  20. Non-coding RNA detection methods combined to improve usability, reproducibility and precision

    Directory of Open Access Journals (Sweden)

    Kreikemeyer Bernd

    2010-09-01

    Background: Non-coding RNAs gain more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated in detection workflows using custom scripts, which decreases transparency and reproducibility. Results: We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods--integrated by our framework--we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. Conclusions: We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL, version 3) at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.

  1. Approximate Methods for the Generation of Dark Matter Halo Catalogs in the Age of Precision Cosmology

    Directory of Open Access Journals (Sweden)

    Pierluigi Monaco

    2016-10-01

    Precision cosmology has recently triggered new attention on the topic of approximate methods for the clustering of matter on large scales, whose foundations date back to the period from the late 1960s to early 1990s. Indeed, although the prospect of reaching sub-percent accuracy in the measurement of clustering poses a challenge even to full N-body simulations, an accurate estimation of the covariance matrix of clustering statistics, not to mention the sampling of parameter space, requires usage of a large number (hundreds in the most favourable cases) of simulated (mock) galaxy catalogs. Combining a few N-body simulations with a large number of realizations performed with approximate methods gives the most promising approach to solve these problems with a reasonable amount of resources. In this paper I review this topic, starting from the foundations of the methods, then going through the pioneering efforts of the 1990s, and finally presenting the latest extensions and a few codes that are now being used in present-generation surveys and thoroughly tested to assess their performance in the context of future surveys.

  2. Development of new methods in modern selective organic synthesis: preparation of functionalized molecules with atomic precision

    International Nuclear Information System (INIS)

    Ananikov, V P; Khemchyan, L L; Ivanova, Yu V; Dilman, A D; Levin, V V; Bukhtiyarov, V I; Sorokin, A M; Prosvirin, I P; Romanenko, A V; Simonov, P A; Vatsadze, S Z; Medved'ko, A V; Nuriev, V N; Nenajdenko, V G; Shmatova, O I; Muzalevskiy, V M; Koptyug, I V; Kovtunov, K V; Zhivonitko, V V; Likholobov, V A

    2014-01-01

    The challenges of modern society and the growing demands of high-technology sectors of industrial production bring about a new phase in the development of organic synthesis. A cutting edge of modern synthetic methods is the introduction of functional groups and more complex structural units into organic molecules with unprecedented control over the course of chemical transformation. Analysis of the state-of-the-art achievements in selective organic synthesis indicates the appearance of a new trend — the synthesis of organic molecules, biologically active compounds, pharmaceutical substances and smart materials with absolute selectivity. The most advanced approaches to organic synthesis anticipated in the near future can be defined as 'atomic precision' in chemical reactions. The present review considers selective methods of organic synthesis suitable for transformation of complex functionalized molecules under mild conditions. Selected key trends in modern organic synthesis are considered, including the preparation of organofluorine compounds, catalytic cross-coupling and oxidative cross-coupling reactions, atom-economic addition reactions, metathesis processes, oxidation and reduction reactions, synthesis of heterocyclic compounds, design of new homogeneous and heterogeneous catalytic systems, application of photocatalysis, scaling up synthetic procedures to the industrial level and development of new approaches to the investigation of mechanisms of catalytic reactions. The bibliography includes 840 references.

  3. The Analysis of Height System Definition and the High Precision GNSS Replacing Leveling Method

    Directory of Open Access Journals (Sweden)

    ZHANG Chuanyin

    2017-08-01

    Based on the definition of the height system, the gravitational equipotential property of the height datum surface is discussed in this paper, and differences between the heights of ground points defined in different height systems are tested and analyzed as well. A new method for replacing leveling using GNSS is proposed to ensure consistency between GNSS replacing leveling and spirit leveling at the mm accuracy level. The main conclusions include: ①For determining normal height at the centimeter accuracy level, the datum surface of normal height should be the geoid. The 1985 national height datum of China adopts the normal height system; its datum surface is the geoid passing through the Qingdao zero point. ②The surface of equi-orthometric height in near-earth space is parallel to the geoid. The combination of GNSS precise positioning and a geoid model can be directly used for orthometric height determination; however, the normal height system is more advantageous for describing terrain and relief. ③With the proposed method of GNSS replacing leveling, errors in the geodetic height affect the normal height result more than errors in the geoid model; the effect of the former is about 1.5 times that of the latter.
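
    A minimal sketch of the height conversion underlying GNSS "replacing leveling", with assumed numbers; the paper's mm-level method involves considerably more modelling, and the simple quadrature error budget shown here is a generic illustration, not the authors' formula.

```python
# Subtract the geoid/quasi-geoid model value from the GNSS geodetic height.
h_geodetic = 56.432      # m, ellipsoidal height from GNSS precise positioning (assumed)
zeta_model = 12.108      # m, geoid undulation / height anomaly from the model (assumed)

H_physical = h_geodetic - zeta_model   # orthometric/normal height, ~44.324 m

# Generic error budget: both height sources contribute, here added in quadrature.
sigma_h, sigma_zeta = 0.015, 0.010     # m, assumed standard uncertainties
sigma_H = (sigma_h**2 + sigma_zeta**2) ** 0.5
```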

  4. Evaluation of precision and accuracy of neutron activation analysis method of environmental samples analysis

    International Nuclear Information System (INIS)

    Wardani, Sri; Rina M, Th.; L, Dyah

    2000-01-01

    Evaluation of the precision and accuracy of the neutron activation analysis (NAA) method used by P2TRR was performed by analyzing standard reference samples from the National Institute for Environmental Studies of Japan (NIES CRM No.10, rice flour) and the National Bureau of Standards of the USA (NBS SRM 1573a, tomato leaves) by the NAA method. Qualitative analysis of the environmental reference materials by NAA identified multi-element contents, namely: Br, Ca, Co, Cl, Cs, Gd, I, K, La, Mg, Mn, Na, Pa, Sb, Sm, Sr, Ta, Th, and Zn (19 elements) for SRM 1573a; As, Br, Cr, Cl, Ce, Co, Cs, Fe, Ga, Hg, K, Mn, Mg, Mo, Na, Ni, Pb, Rb, Sr, Se, Sc, Sb, Ti, and Zn (25 elements) for CRM No.10a; Ag, As, Br, Cr, Cl, Ce, Cd, Co, Cs, Eu, Fe, Ga, Hg, K, Mg, Mn, Mo, Na, Nb, Pb, Rb, Sb, Sc, Th, Tl, and Zn (26 elements) for CRM No.10b; and As, Br, Co, Cl, Ce, Cd, Ga, Hg, K, Mn, Mg, Mo, Na, Nb, Pb, Rb, Sb, Se, Tl, and Zn (20 elements) for CRM No.10c. In the quantitative analysis only some elements of the sample contents could be determined, namely: As, Co, Cd, Mo, Mn, and Zn. The results, compared with the NIES or NBS values, agreed within a deviation of 3% ∼ 15%. Overall, the results show that the method and facilities have good capability, but the irradiation facility and the gamma-ray spectrometry software require further development and serious research.

  5. A comprehensive method for evaluating precision of transfer alignment on a moving base

    Science.gov (United States)

    Yin, Hongliang; Xu, Bo; Liu, Dezheng

    2017-09-01

    In this study, we propose the use of the Degree of Alignment (DOA) in engineering applications for evaluating the precision of and identifying the transfer alignment on a moving base. First, we derive the statistical formula on the basis of estimations. Next, we design a scheme for evaluating the transfer alignment on a moving base, for which the attitude error cannot be directly measured. Then, we build a mathematical estimation model and discuss Fixed Point Smoothing (FPS), Rauch-Tung-Striebel (RTS) smoothing, Inverted Sequence Recursive Estimation (ISRE), and Kalman filter estimation methods, which can be used when evaluating alignment accuracy. Our theoretical calculations and simulation analyses show that the DOA reflects not only the alignment time and accuracy but also differences in the maneuver schemes, and is suitable for use as an integrated evaluation index. Furthermore, all four of these algorithms can be used to identify the transfer alignment and evaluate its accuracy. We recommend RTS in particular for engineering applications. Generalized DOAs should be calculated according to the tactical requirements.

  6. Proceedings of the International Conference on Applications of High Precision Atomic and Nuclear Methods

    International Nuclear Information System (INIS)

    Olariu, Agata; Stenstroem, Kristina; Hellborg, Ragnar

    2005-01-01

    This volume presents the Proceedings of the International Conference on Applications of High Precision Atomic and Nuclear Methods, held in Neptun, Romania from 2nd to 6th of September 2002. The conference was organized by The Center of Excellence of the European Commission: Inter-Disciplinary Research and Applications based on Nuclear and Atomic Physics (IDRANAP) from Horia Hulubei National Institute for Physics and Nuclear Engineering, IFIN-HH, Bucharest-Magurele, Romania. The meeting gathered 66 participants from 25 different laboratories in 11 countries, namely: Belgium, Bulgaria, France, Germany, Hungary, Poland, Portugal, Romania, Slovakia and Sweden. A non-European delegate came from Japan. The topics covered by the conference were as follows: - Environment: air, water and soil pollution, pollution with heavy elements and with radioisotopes, bio-monitoring (10 papers); - Radionuclide metrology (10 papers); - Ion beam based techniques for characterization of materials surface, ERDA, PIXE, PIGE, computer simulations, materials modifications, wear, corrosion (10 papers); - Accelerator Mass Spectrometry and applications in environment, archaeology, and medicine (7 papers); - Application of neutron spectrometry in condensed matter (1 paper); - Advanced techniques, facilities and applications (11 papers). Seventeen invited speakers covered the main parts of these topics through overview talks. The book contains the overview talks, oral contributions and poster contributions.

  7. Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis

    Science.gov (United States)

    Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro

    2017-04-01

    The 2011 Great East Japan Earthquake (GEJE) has shown that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating standing land structures and damage to them requires highly precise evaluation of three-dimensional fluid motion - an expensive process. Our research goals were thus to develop a STOC-CADMAS coupling (Arikawa and Tomita, 2016) with structure analysis (Arikawa et al., 2009) to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates pressure by hydrostatic pressure and calculates the wave profiles based on an equation of continuity, thereby lowering calculation cost; it primarily covers the region from the epicenter to shallow water. STOC-IC solves pressure based on a Poisson equation to account for shallower, more complex topography, while still reducing computation cost slightly by setting the water surface from an equation of continuity, and is used to calculate the area near a port. CS3D solves a Navier-Stokes equation and sets the water surface by VOF to deal with the runup area, with its complex surfaces of overflows and bores. STR performs the structure analysis, including the ground (geo) analysis, based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments of Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay. Almost all the breakwaters were washed away, which was similar to the observed damage at Kamaishi Bay. REFERENCES T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup

  8. A TIMS-based method for the high precision measurements of the three-isotope potassium composition of small samples

    DEFF Research Database (Denmark)

    Wielandt, Daniel Kim Peel; Bizzarro, Martin

    2011-01-01

    A novel thermal ionization mass spectrometry (TIMS) method for the three-isotope analysis of K has been developed, and ion chromatographic methods for the separation of K have been adapted for the processing of small samples. The precise measurement of K-isotopes is challenged by the presence of ...

  9. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Science.gov (United States)

    2010-07-01

    ... criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM....584 What are the precision and accuracy criteria for approval of test methods for determining the... available gravimetric sulfur standard in the range of 1-10 ppm sulfur shall not differ from the accepted...

  10. A modified precise integration method based on Magnus expansion for transient response analysis of time varying dynamical structure

    International Nuclear Information System (INIS)

    Yue, Cong; Ren, Xingmin; Yang, Yongfeng; Deng, Wangqun

    2016-01-01

    This paper provides a precise and efficacious methodology for computing the forced vibration response of a time-varying linear rotational structure subjected to unbalance excitation. A modified algorithm based on the time-step precise integration method and the Magnus expansion is developed for instantaneous dynamic problems. The iterative solution is achieved through transition and dimensional-increment matrices. Numerical examples of a typical accelerating rotation system considering gyroscopic moment and mass unbalance force demonstrate the validity, effectiveness and accuracy of the approach in comparison with the Newmark-β method. It is shown that the proposed algorithm has high accuracy without loss of efficiency.
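
    A sketch of a single time step in the spirit of a precise-integration scheme combined with a first-order Magnus approximation, freezing the time-varying system matrix at the mid-point of the step; the function names and the assumption that A is invertible are mine, not the paper's.

```python
import numpy as np
from scipy.linalg import expm

def magnus_precise_step(A_func, f_func, x, t, dt):
    """One step for x' = A(t) x + f(t): the system matrix and load are frozen
    at the mid-point of the step (first-order Magnus), then the standard
    precise-integration update with a constant load over the step is applied."""
    A_mid = A_func(t + 0.5 * dt)
    f_mid = f_func(t + 0.5 * dt)
    T = expm(A_mid * dt)                                    # state transition matrix
    x_next = T @ x + np.linalg.solve(A_mid, (T - np.eye(len(x))) @ f_mid)
    return x_next
```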

  11. Endoscopic clipping for gastrointestinal tumors. A method to define the target volume more precisely

    International Nuclear Information System (INIS)

    Riepl, M.; Klautke, G.; Fehr, R.; Fietkau, R.; Pietsch, A.

    2000-01-01

    Background: In many cases it is not possible to exactly define the extension of carcinoma of the gastrointestinal tract with the help of computed tomography scans made for 3-D radiation treatment planning. Consequently, the planning of external beam radiotherapy is made more difficult for the gross tumor volume as well as, in some cases, for the clinical target volume. Patients and Methods: Eleven patients with macroscopic tumors (rectal cancer n = 5, cardiac cancer n = 6) were included. Just before 3-D planning, the oral and aboral borders of the tumor were marked endoscopically with hemoclips. Subsequently, CT scans for radiotherapy planning were made and the clinical target volume was defined. Five to 6 weeks thereafter, new CT scans were done to define the gross tumor volume for boost planning. Two investigators independently assessed the influence of the hemoclips on the different planning volumes, and whether the number of clips was sufficient to define the gross tumor volume. Results: In all patients, the implantation of the clips was done without complications. The start of radiotherapy was not delayed. With the help of the clips it was possible to exactly define the position and the extension of the primary tumor. The clinical target volume was modified according to the position of the clips in 5/11 patients; the gross tumor volume was modified in 7/11 patients. The use of the clips made the documentation and verification of the treatment portals by the simulator easier. Moreover, the clips helped the surgeon to define the primary tumor region following marked regression after neoadjuvant therapy in 3 patients. Conclusions: Endoscopic clipping of gastrointestinal tumors helps to define the tumor volumes more precisely in radiation therapy. The clips are easily recognized on the portal films and, thus, contribute to quality control. (orig.)

  12. Development of precise analytical methods for strontium and lanthanide isotopic ratios using multiple collector inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Ohno, Takeshi; Takaku, Yuichi; Hisamatsu, Shun'ichi

    2007-01-01

    We have developed precise analytical methods for strontium and lanthanide isotopic ratios using multiple collector ICP mass spectrometry (MC-ICP-MS) for experimental and environmental studies of their behavior. In order to obtain precise isotopic data using MC-ICP-MS, the mass discrimination effect was corrected by an exponential law correction method. The resulting isotopic data demonstrated that highly precise isotopic analyses (better than 0.1 per mille as 2SD) could be achieved. We also adopted a de-solvating nebulizer system to improve the sensitivity. This system could minimize the water load into the plasma and provided about five times greater analyte intensity than a conventional nebulizer system. (author)
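
    The exponential-law correction mentioned above can be written in a few lines; the Sr isotope pair and all numerical values below are illustrative assumptions, not data from this work.

```python
import math

def mass_bias_factor(r_measured, r_certified, m_i, m_j):
    """Per-mass fractionation exponent beta from a normalizing isotope pair."""
    return math.log(r_certified / r_measured) / math.log(m_i / m_j)

def correct_ratio(r_measured, m_i, m_j, beta):
    """Apply the exponential law: R_true = R_meas * (m_i / m_j)**beta."""
    return r_measured * (m_i / m_j) ** beta

# Example for Sr: normalize to the canonical 86Sr/88Sr = 0.1194, then correct
# the measured 87Sr/86Sr with the same beta (measured values assumed).
beta = mass_bias_factor(r_measured=0.1178, r_certified=0.1194,
                        m_i=85.9093, m_j=87.9056)
sr87_86 = correct_ratio(0.71012, m_i=86.9089, m_j=85.9093, beta=beta)
```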

  13. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    Science.gov (United States)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  14. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    Science.gov (United States)

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods, and sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of
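
    For readers who want to reproduce this kind of comparison, a minimal sketch of relative bias, relative precision (coefficient of variation) and a simple cost-style efficiency measure follows; the exact definitions used by the authors may differ, and the counts and reference density are invented.

```python
import numpy as np

def summarize(counts, reference_density, minutes_per_sample):
    """Relative bias against a reference density, relative precision (CV),
    and detections per minute as a rough efficiency measure."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    rel_bias = (mean - reference_density) / reference_density
    rel_precision = counts.std(ddof=1) / mean          # CV; lower is better
    efficiency = mean / minutes_per_sample             # detections per minute
    return rel_bias, rel_precision, efficiency

# Hypothetical H. axyridis counts from two methods on the same plots.
print(summarize([3, 5, 4, 6, 2], reference_density=4.5, minutes_per_sample=2.0))
print(summarize([1, 0, 2, 1, 1], reference_density=4.5, minutes_per_sample=0.5))
```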

  15. Precise determination of sodium in serum by simulated isotope dilution method of inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Yan Ying; Zhang Chuanbao; Zhao Haijian; Chen Wenxiang; Shen Ziyu; Wang Xiaoru; Chen Dengyun

    2007-01-01

    A new precise and accurate method for the determination of sodium in serum by inductively coupled plasma mass spectrometry (ICP-MS) was developed. Since Na is a mono-isotopic element (23Na), 27Al is selected as a simulated isotope of Na. Al is spiked into the serum samples and the Na standard solution. The 23Na/27Al ratio in the Na standard solution is determined and taken to represent the natural Na isotope ratio. The serum samples are digested with purified HNO3/H2O2 and diluted to obtain solutions containing about 0.6 μg·g-1 Al, and the 23Na/27Al ratios of the serum samples are measured to calculate accurate Na concentrations based on the isotope dilution method. When the simulated isotope dilution ICP-MS method is applied with Al as the simulated isotope of Na, precise and accurate Na concentrations in the serum samples are obtained. An inter-day precision of CV<0.13% for the same serum sample was obtained over 4 measurements in 3 days. Spike recoveries were between 99.69% and 100.60% for 4 different serum samples and multiple measurements over 3 days. The results for standard reference materials of serum sodium agree with the certified values. The relative difference between the 3 days is 0.22%-0.65%, and the relative difference within one bottle is 0.15%-0.44%. The ICP-MS and Al simulated isotope dilution method is shown to be not only precise and accurate, but also quick and convenient for measuring Na in serum, and it is a promising candidate reference method for the precise determination of Na in serum. Since Al is a low-cost isotope dilution reagent, the method could be widely applied for serum Na determination. (authors)
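
    A worked sketch of the "simulated isotope dilution" arithmetic as described above, with invented numbers; the published procedure includes gravimetric dilution factors and blank corrections not shown here.

```python
# Al is spiked at a known level into both the Na standard and the digested
# serum sample, and the measured 23Na/27Al intensity ratios are compared.
c_na_std  = 100.0    # Na in the standard solution, ug/g (assumed)
c_al_std  = 0.6      # Al spiked into the standard, ug/g (assumed)
c_al_samp = 0.6      # Al spiked into the diluted serum digest, ug/g (assumed)

r_std  = 152.3       # measured 23Na/27Al intensity ratio, standard (assumed)
r_samp = 148.9       # measured 23Na/27Al intensity ratio, serum digest (assumed)

# Relative sensitivity factor from the standard, then Na in the digest:
k = r_std / (c_na_std / c_al_std)
c_na_digest = r_samp * c_al_samp / k     # ug/g in the diluted digest
# Multiply by the gravimetric dilution factor to recover the serum concentration.
```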

  16. A New Method of High-Precision Positioning for an Indoor Pseudolite without Using the Known Point Initialization.

    Science.gov (United States)

    Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe

    2018-06-20

    Due to the great influence of multipath effects, noise and clock errors on the pseudorange, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is mostly determined by the known point initialization (KPI) method, and then the ambiguities can be fixed with the LAMBDA method. In this paper, a new method to achieve high-precision indoor pseudolite positioning without using KPI is proposed. The initial coordinates can be quickly obtained to meet the accuracy requirement of the indoor LAMBDA method. The method proceeds as follows: Aiming at the low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by the AFM meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, are conducted to verify the feasibility of the new method. According to the results of the experiments, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, both a one-meter search scope and two-centimeter or four-centimeter search steps are used to ensure precision at the centimeter level and high search efficiency. After dealing with the problem of multiple peaks caused by the ambiguity cosine function, the coordinate information of the maximum ambiguity function value (AFV) is taken as the initial value for the LAMBDA method, and the ambiguities can be fixed quickly. The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter
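
    A simplified, geometry-only sketch of the AFM grid search that seeds LAMBDA, assuming known pseudolite and base-station coordinates; the layout, wavelength and noise-free observations are placeholders, and the real implementation has to handle noise and the multiple-peak issue mentioned above.

```python
import numpy as np

def dd_phase_cycles(rover, base, sats, wavelength, ref=0):
    """Double-difference carrier phase (cycles) predicted from geometry,
    using pseudolite `ref` as the reference."""
    rho_r = np.linalg.norm(sats - rover, axis=1)
    rho_b = np.linalg.norm(sats - base, axis=1)
    sd = (rho_r - rho_b) / wavelength
    return np.delete(sd - sd[ref], ref)

def afv(candidate, base, sats, dd_obs, wavelength):
    """Ambiguity function value: close to 1 only near the true position,
    independent of the unknown integer ambiguities."""
    resid = dd_obs - dd_phase_cycles(candidate, base, sats, wavelength)
    return np.mean(np.cos(2.0 * np.pi * resid))

# Hypothetical indoor geometry (metres) and a one-metre scope, 2 cm step search.
sats = np.array([[0, 0, 3.0], [10, 0, 3.2], [0, 10, 3.1], [10, 10, 2.9], [5, 5, 3.5]])
base = np.array([1.0, 1.0, 1.0])
truth = np.array([6.2, 3.7, 1.0])
lam = 0.1903
dd_obs = dd_phase_cycles(truth, base, sats, lam)   # noise-free, for illustration

xs = np.arange(5.7, 6.7, 0.02)
ys = np.arange(3.2, 4.2, 0.02)
grid = [(afv(np.array([x, y, 1.0]), base, sats, dd_obs, lam), x, y)
        for x in xs for y in ys]
best = max(grid)   # coordinates with the maximum AFV seed the LAMBDA step
```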

  17. An automatic high precision registration method between large area aerial images and aerial light detection and ranging data

    Science.gov (United States)

    Du, Q.; Xie, D.; Sun, Y.

    2015-06-01

    The integration of digital aerial photogrammetry and Light Detection And Ranging (LiDAR) is an inevitable trend in the Surveying and Mapping field. We calculate the external orientation elements of images in the LiDAR coordinate frame to realize automatic high-precision registration between aerial images and LiDAR data. There are two ways to calculate the orientation elements. One is single-image spatial resection using image-matched 3D points that are registered to the LiDAR data. The other is Position and Orientation System (POS) data supported aerotriangulation, in which the high-precision registration points are selected as Ground Control Points (GCPs) instead of measuring GCPs manually. The registration experiments indicate that the method of registering aerial images to LiDAR points offers higher automation and precision compared with manual registration.
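
    The single-image spatial resection step can be sketched with OpenCV's PnP solver, assuming the LiDAR-registered 3D points and their image measurements are already matched; all coordinates, the camera matrix and the simulated pose below are placeholders, not values from the paper.

```python
import numpy as np
import cv2  # OpenCV

# Hypothetical LiDAR-registered ground points (local metric frame, metres).
object_points = np.array([[ 10.0,  20.0, 312.6],
                          [ 98.7,  41.9, 309.8],
                          [ 43.5, 135.2, 315.1],
                          [-12.1,  96.4, 318.4],
                          [ 60.3,  80.0, 311.2],
                          [ 25.0, -15.0, 307.5]], dtype=np.float64)
K = np.array([[8000.0, 0.0, 2000.0],
              [0.0, 8000.0, 1500.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Simulate image measurements from an assumed true exterior orientation.
rvec_true = np.array([0.01, -0.02, 0.005])
tvec_true = np.array([-40.0, -60.0, 800.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Single-image spatial resection: recover the exterior orientation.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_center = (-R.T @ tvec).ravel()   # projection centre in the LiDAR frame
```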

  18. Standard guide for preparing and interpreting precision and bias statements in test method standards used in the nuclear industry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1992-01-01

    1.1 This guide covers terminology useful for the preparation and interpretation of precision and bias statements. 1.2 In formulating precision and bias statements, it is important to understand the statistical concepts involved and to identify the major sources of variation that affect results. Appendix X1 provides a brief summary of these concepts. 1.3 To illustrate the statistical concepts and to demonstrate some sources of variation, a hypothetical data set has been analyzed in Appendix X2. Reference to this example is made throughout this guide. 1.4 It is difficult and at times impossible to ship nuclear materials for interlaboratory testing. Thus, precision statements for test methods relating to nuclear materials will ordinarily reflect only within-laboratory variation.

  19. In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions

    OpenAIRE

    Ender, Andreas; Attin, Thomas; Mehl, Albert

    2016-01-01

    STATEMENT OF PROBLEM: Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. PURPOSE: The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. MATERIAL AND METHODS: Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsilox...

  20. A highly precise frequency-based method for estimating the tension of an inclined cable with unknown boundary conditions

    Science.gov (United States)

    Ma, Lin

    2017-11-01

    This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
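
    As a baseline for the refined model described above, the flat taut-string relation is often used to get a first tension estimate before sag, flexural stiffness, inclination and end rotational stiffness are accounted for; the numbers here are illustrative.

```python
# Taut-string relation: f_n = (n / 2L) * sqrt(T / m)  =>  T = 4 m L^2 (f_n / n)^2
m  = 60.0     # cable mass per unit length, kg/m (assumed)
L  = 85.0     # chord length, m (assumed)
f1 = 1.12     # measured first natural frequency, Hz (assumed)

T_taut_string = 4.0 * m * L**2 * f1**2   # ~2.18e6 N for these numbers
```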

  1. Development of Precise Point Positioning Method Using Global Positioning System Measurements

    Directory of Open Access Journals (Sweden)

    Byung-Kyu Choi

    2011-09-01

    Precise point positioning (PPP) is increasingly used in several areas, such as monitoring of crustal movement and maintaining an international terrestrial reference frame, using global positioning system (GPS) measurements. The accuracy of PPP data processing has increased owing to the use of more precise satellite orbit/clock products. In this study we developed a PPP algorithm that utilizes data collected by a GPS receiver. Measurement error modelling, including the tropospheric error and the tidal model, was considered in the data processing to improve the positioning accuracy. An extended Kalman filter was also employed to estimate the state parameters such as the positioning information and float ambiguities. For verification, we compared our results to those of an International GNSS Service analysis center. As a result, the mean errors of the estimated position in the East-West, North-South and Up-Down directions for the five days were 0.9 cm, 0.32 cm, and 1.14 cm at the 95% confidence level.
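
    A generic extended Kalman filter step of the kind used in the PPP processing described above; the state vector contents are only indicated in a comment, and the process model f, measurement model h and their Jacobians are assumed to be supplied by the caller, so this is a sketch rather than the authors' filter.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended Kalman filter step: predict with the process model f
    (Jacobian F), then update with the measurement model h (Jacobian H).
    In a PPP filter, x would hold position, receiver clock, tropospheric
    delay and float ambiguities."""
    # prediction
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```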

  2. Method and system for detecting, in real time, the imbalance of the head in a high-precision rotary mechanism

    OpenAIRE

    Toro Matamoros, Raúl Mario del; Schmittdiel, Michael Charles; Haber Guerra, Rodolfo E.

    2008-01-01

    [EN] The invention relates to a method for detecting, in real time, an imbalance of the head in a high-precision rotary mechanism, and to the system for carrying out said method. The method comprises the following steps: a) the signal X(t) corresponding to the acceleration of the vibrations of the head is acquired by an acquisition device at a sampling rate FS; and b) it is determined, from the signal X(t) obtained, whether the head is imbalanced.

  3. Best, Useful and Objective Precisions for Information Retrieval of Three Search Methods in PubMed and iPubMed

    Directory of Open Access Journals (Sweden)

    Somayyeh Nadi Ravandi

    2016-10-01

    MEDLINE is one of the valuable sources of medical information on the Internet. Among the different open access sites of MEDLINE, PubMed is the best-known site. In 2010, iPubMed was established with an interaction-fuzzy search method for MEDLINE access. In the present work, we aimed to compare the precision of the retrieved sources (Best, Useful and Objective precision) in PubMed and iPubMed using two search methods in PubMed (simple and MeSH search) and the interaction-fuzzy method in iPubMed. During our semi-empirical study period, we held training workshops for 61 students of higher education to teach them the Simple Search, MeSH Search, and Fuzzy-Interaction Search methods. Then, the precision of 305 searches for each method prepared by the students was calculated on the basis of the Best precision, Useful precision, and Objective precision formulas. Analyses were done in SPSS version 11.5 using the Friedman and Wilcoxon tests, and the three precisions obtained with the three precision formulas were studied for the three search methods. The mean precision of the interaction-fuzzy search method was higher than that of the simple search and MeSH search for all three types of precision, i.e., Best precision, Useful precision, and Objective precision; the simple search method was in the next rank, and their mean precisions were significantly different (P < 0.001). The precision of the interaction-fuzzy search method in iPubMed was investigated for the first time. Also for the first time, three types of precision were evaluated in PubMed and iPubMed. The results showed that the interaction-fuzzy search method is more precise than the natural language search (simple search and MeSH search), and users of this method found papers that were more related to their queries; even though searching in PubMed is useful, it is important that users apply new search methods to obtain the best results.

  4. THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES

    International Nuclear Information System (INIS)

    Dodson, R.; Rioja, M.; Imai, H.; Asaki, Y.; Hong, X.-Y.; Shen, Z.

    2013-01-01

    High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI by the MultiView approach where multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching requiring a Control Moment Gyro, and the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions which implement the phase-referencing technique.

  5. THE APPLICATION OF MULTIVIEW METHODS FOR HIGH-PRECISION ASTROMETRIC SPACE VLBI AT LOW FREQUENCIES

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, R.; Rioja, M.; Imai, H. [International Centre for Radio Astronomy Research, M468, University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia 6009 (Australia); Asaki, Y. [Institute of Space and Astronautical Science, 3-1-1 Yoshinodai, Chuou, Sagamihara, Kanagawa 252-5210 (Japan); Hong, X.-Y.; Shen, Z., E-mail: richard.dodson@icrar.org [Shanghai Astronomical Observatory, CAS, 200030 Shanghai (China)

    2013-06-15

    High-precision astrometric space very long baseline interferometry (S-VLBI) at the low end of the conventional frequency range, i.e., 20 cm, is a requirement for a number of high-priority science goals. These are headlined by obtaining trigonometric parallax distances to pulsars in pulsar-black hole pairs and OH masers anywhere in the Milky Way and the Magellanic Clouds. We propose a solution for the most difficult technical problems in S-VLBI by the MultiView approach where multiple sources, separated by several degrees on the sky, are observed simultaneously. We simulated a number of challenging S-VLBI configurations, with orbit errors up to 8 m in size and with ionospheric atmospheres consistent with poor conditions. In these simulations we performed MultiView analysis to achieve the required science goals. This approach removes the need for beam switching requiring a Control Moment Gyro, and the space and ground infrastructure required for high-quality orbit reconstruction of a space-based radio telescope. This will dramatically reduce the complexity of S-VLBI missions which implement the phase-referencing technique.

  6. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and that implied-weights parsimony performs worst. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. © 2016 The Authors.

  7. Precision manufacturing

    CERN Document Server

    Dornfeld, David

    2008-01-01

    Today there is a high demand for high-precision products. The manufacturing processes are now highly sophisticated and derive from a specialized genre called precision engineering. Precision Manufacturing provides an introduction to precision engineering and manufacturing with an emphasis on the design and performance of precision machines and machine tools, metrology, tooling elements, machine structures, sources of error, precision machining processes and precision process planning. It also discusses the critical role that precision machine design for manufacturing has played in technological developments over the last few hundred years. In addition, the influence of sustainable manufacturing requirements on precision processes is introduced. Drawing upon years of practical experience and using numerous examples and illustrative applications, David Dornfeld and Dae-Eun Lee cover precision manufacturing as it applies to: The importance of measurement and metrology in the context of Precision Manufacturing. Th...

  8. A task specific uncertainty analysis method for least-squares-based form characterization of ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Ren, M J; Cheung, C F; Kong, L B

    2012-01-01

    In the measurement of ultra-precision freeform surfaces, least-squares-based form characterization methods are widely used to evaluate the form error of the measured surfaces. Although many methodologies have been proposed in recent years to improve the efficiency of the characterization process, relatively little research has been conducted on the analysis of associated uncertainty in the characterization results which may result from those characterization methods being used. As a result, this paper presents a task specific uncertainty analysis method with application in the least-squares-based form characterization of ultra-precision freeform surfaces. That is, the associated uncertainty in the form characterization results is estimated when the measured data are extracted from a specific surface with specific sampling strategy. Three factors are considered in this study which include measurement error, surface form error and sample size. The task specific uncertainty analysis method has been evaluated through a series of experiments. The results show that the task specific uncertainty analysis method can effectively estimate the uncertainty of the form characterization results for a specific freeform surface measurement
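A minimal Monte Carlo sketch of the kind of task-specific uncertainty estimate described above: a surface with a known low-order form deviation is repeatedly "measured" with Gaussian noise, a least-squares reference plane is removed, and the spread of the resulting peak-to-valley values is examined as a function of sample size. All functions and numbers are illustrative, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def form_error_pv(n_points, meas_sigma=0.05, form_amp=0.2):
    """One simulated measurement: sample a surface with a known low-order
    form deviation plus Gaussian measurement noise, remove a least-squares
    plane and return the residual peak-to-valley value (arbitrary units)."""
    x, y = rng.uniform(-1, 1, (2, n_points))
    z = form_amp * (x**2 - y**2) + rng.normal(0, meas_sigma, n_points)
    A = np.column_stack([np.ones(n_points), x, y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coeffs
    return resid.max() - resid.min()

# Task-specific uncertainty: spread of the characterization result over many
# simulated measurements, for two sample sizes.
for n in (50, 500):
    pv = [form_error_pv(n) for _ in range(2000)]
    print(n, np.mean(pv), np.std(pv, ddof=1))
```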

  9. Validation of the Cristallini Sampling Method for UF6 by High Precision Double-Spike Measurements

    OpenAIRE

    RICHTER STEPHAN; JAKOBSSON ULF; HIESS JOE; AMARAGGI D.

    2017-01-01

    The so-called "Cristallini Method" for sampling of UF6 by adsorption and hydrolysis in alumina pellets inside a fluorothene P-10 tube was developed by the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC) several years ago. This method has several advantages compared to the currently used sampling method, for which UF6 is distilled into a stainless steel tube for transportation, with hydrolysis and isotopic analysis being performed after shipping to the analyt...

  10. The ALFA (Activity Log Files Aggregation) Toolkit: A Method for Precise Observation of the Consultation

    OpenAIRE

    de Lusignan, Simon; Kumarapeli, Pushpa; Chan, Tom; Pflug, Bernhard; van Vlymen, Jeremy; Jones, Beryl; Freeman, George K

    2008-01-01

    Background There is a lack of tools to evaluate and compare Electronic patient record (EPR) systems to inform a rational choice or development agenda. Objective To develop a tool kit to measure the impact of different EPR system features on the consultation. Methods We first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, in...

  11. Sternal instability measured with radiostereometric analysis. A study of method feasibility, accuracy and precision

    DEFF Research Database (Denmark)

    Vestergaard, Rikke Falsig; Søballe, Kjeld; Hasenkam, John Michael

    2018-01-01

BACKGROUND: A small, but unstable, saw-gap may hinder bone-bridging and induce development of painful sternal dehiscence. We propose the use of Radiostereometric Analysis (RSA) for evaluation of sternal instability and present a method validation. METHODS: Four bone analogs (phantoms) were sterno...... modality feasible for clinical evaluation of sternal stability in research. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT02738437, retrospectively registered.

  12. Selection of Suitable DNA Extraction Methods for Genetically Modified Maize 3272, and Development and Evaluation of an Event-Specific Quantitative PCR Method for 3272.

    Science.gov (United States)

    Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize, 3272. We first attempted to obtain genome DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. However, lowering of DNA extraction yields was not observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (the ABI 7900) and the Applied Biosystems 7500 (the ABI7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. The trueness and precision were evaluated as the bias and reproducibility of the relative standard deviation (RSDr). The determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method would be suitable and practical for detection and quantification of 3272.
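As a rough illustration of how such event-specific quantification is usually reported, the sketch below converts the copy numbers read off the plasmid calibration curves into a GM content using the conversion factor Cf; the formula and values are assumptions for illustration, not the validated protocol itself.

```python
def gmo_percent(event_copies, endogenous_copies, cf):
    """GM content (%) from the event-specific and endogenous-reference copy
    numbers interpolated off the plasmid calibration curves, scaled by the
    experimentally determined conversion factor Cf (illustrative sketch)."""
    return (event_copies / endogenous_copies) / cf * 100.0

# Hypothetical copy numbers for one test sample, with the Cf reported above.
print(gmo_percent(event_copies=1.2e3, endogenous_copies=4.0e5, cf=0.60))
```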

  13. A computer-based method for precise detection and calculation of affected skin areas

    DEFF Research Database (Denmark)

    Henriksen, Sille Mølvig; Nybing, Janus Damm; Bouert, Rasmus

    2016-01-01

    BACKGROUND: The aim of this study was to describe and validate a method to obtain reproducible and comparable results concerning extension of a specific skin area, unaffected by individual differences in body surface area. METHODS: A phantom simulating the human torso was equipped with three irre...

  14. Method for Cleanly and Precisely Breaking Off a Rock Core Using a Radial Compressive Force

    Science.gov (United States)

    Richardson, Megan; Lin, Justin

    2011-01-01

    The Mars Sample Return mission has the goal to drill, break off, and retain rock core samples. After some results gained from rock core mechanics testing, the realization that scoring teeth would cleanly break off the core after only a few millimeters of penetration, and noting that rocks are weak in tension, the idea was developed to use symmetric wedging teeth in compression to weaken and then break the core at the contact plane. This concept was developed as a response to the break-off and retention requirements. The wedges wrap around the estimated average diameter of the core to get as many contact locations as possible, and are then pushed inward, radially, through the core towards one another. This starts a crack and begins to apply opposing forces inside the core to propagate the crack across the plane of contact. The advantage is in the simplicity. Only two teeth are needed to break five varieties of Mars-like rock cores with limited penetration and reasonable forces. Its major advantage is that it does not require any length of rock to be attached to the parent in order to break the core at the desired location. Test data shows that some rocks break off on their own into segments or break off into discs. This idea would grab and retain a disc, push some discs upward and others out, or grab a segment, break it at the contact plane, and retain the portion inside of the device. It also does this with few moving parts in a simple, space-efficient design. This discovery could be implemented into a coring drill bit to precisely break off and retain any size rock core.

  15. Influence of The Difference of Perception and Kinesthetic Exercise Methods Against Precision Hit The Ball Softball

    Directory of Open Access Journals (Sweden)

    Fajar Rokhayah

    2017-06-01

Full Text Available The purpose of this study was to determine: (1) the difference in effect between the gradually increased striking-distance training method and the fixed striking-distance training method on the accuracy of hitting a softball; (2) the difference in hitting accuracy among athletes with good, moderate and poor kinesthetic perception; and (3) the interaction between training method and kinesthetic perception with respect to the accuracy of hitting a softball. This study used an experimental method with a 2x3 factorial design. The results of this study were: (1) there is a significant difference between the gradually increased and the fixed striking-distance training methods in the ability to hit a softball (p-value = 0.027, smaller than 0.05); (2) there is a significant difference among athletes with good, moderate and poor kinesthetic perception in the ability to hit a softball (p-value = 0.000, smaller than 0.05); (3) there is an interaction between striking-distance training method and kinesthetic perception for the ability to hit a softball (p-value = 0.000, smaller than 0.05). The conclusions of this study were: (1) the gradually increased striking-distance training method has a better effect than the fixed striking-distance training method; (2) athletes with poor kinesthetic perception achieved better results than athletes with good or moderate kinesthetic perception; (3) there is an interaction between striking-distance training method and kinesthetic perception for the ability to hit a softball.

  16. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    Directory of Open Access Journals (Sweden)

    Emma Wells

Full Text Available To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: (1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; (2) conducting volunteer testing to assess ease-of-use; and (3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to the reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration
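A small sketch of the accuracy and precision metrics used above (percent error against a reference value and relative standard deviation of quintuplicate readings); the readings are invented for illustration.

```python
import numpy as np

# Quintuplicate measurements of one chlorine solution (illustrative values).
reference = 0.50          # % chlorine, reference method
readings = np.array([0.48, 0.52, 0.47, 0.55, 0.50])   # one test method

percent_error = abs(readings.mean() - reference) / reference * 100
rsd = readings.std(ddof=1) / readings.mean() * 100
print(f"accuracy: {percent_error:.1f}% error, precision: {rsd:.1f}% RSD")
```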

  17. Search for transient ultralight dark matter signatures with networks of precision measurement devices using a Bayesian statistics method

    Science.gov (United States)

    Roberts, B. M.; Blewitt, G.; Dailey, C.; Derevianko, A.

    2018-04-01

We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the global positioning system (GPS), consisting of a constellation of 32 medium-Earth orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly-stable H-maser atomic clocks. High-accuracy timing data is available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50 000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach using our method, and demonstrate that it can surpass the existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks, and can also be applied to other networks of precision measurement devices.

  18. Correction method for the error of diamond tool's radius in ultra-precision cutting

    Science.gov (United States)

    Wang, Yi; Yu, Jing-chi

    2010-10-01

Compensation of the error of the diamond tool's cutting edge is a bottleneck technology that hinders the direct formation of high-accuracy aspheric surfaces by single-point diamond turning. Traditionally, compensation was carried out according to measurement results from a profilometer, which required a long measurement time and resulted in low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation related to the compensation method is deduced. Then, the effect after compensation is simulated by computer. Finally, a φ50 mm workpiece was diamond turned and then correction turned on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirmed that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.

  19. A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding

    Directory of Open Access Journals (Sweden)

    Jinle Zeng

    2016-09-01

    Full Text Available During the complex path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, the existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to get high signal-to-noise ratio images against the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. Two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; then the position and pose between the torch and the groove can be obtained nearly at the same time. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates our method can be applied to real-time narrow butt detection during high-speed welding process.

  20. [Development and validation of event-specific quantitative PCR method for genetically modified maize LY038].

    Science.gov (United States)

    Mano, Junichi; Masubuchi, Tomoko; Hatano, Shuko; Futo, Satoshi; Koiwa, Tomohiro; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Akiyama, Hiroshi; Teshima, Reiko; Kurashima, Takeyo; Takabatake, Reona; Kitta, Kazumi

    2013-01-01

    In this article, we report a novel real-time PCR-based analytical method for quantitation of the GM maize event LY038. We designed LY038-specific and maize endogenous reference DNA-specific PCR amplifications. After confirming the specificity and linearity of the LY038-specific PCR amplification, we determined the conversion factor required to calculate the weight-based content of GM organism (GMO) in a multilaboratory evaluation. Finally, in order to validate the developed method, an interlaboratory collaborative trial according to the internationally harmonized guidelines was performed with blind DNA samples containing LY038 at the mixing levels of 0, 0.5, 1.0, 5.0 and 10.0%. The precision of the method was evaluated as the RSD of reproducibility (RSDR), and the values obtained were all less than 25%. The limit of quantitation of the method was judged to be 0.5% based on the definition of ISO 24276 guideline. The results from the collaborative trial suggested that the developed quantitative method would be suitable for practical testing of LY038 maize.
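The reproducibility figure quoted above (RSDR) is conventionally obtained from a one-way ANOVA decomposition of the collaborative-trial results; the sketch below shows that calculation for hypothetical duplicate results from five laboratories and omits the outlier tests and corrections of the harmonized protocols.

```python
import numpy as np

def rsd_r(results_per_lab):
    """Reproducibility RSD (RSD_R, %) from a balanced collaborative trial,
    using the usual one-way ANOVA decomposition into within-laboratory and
    between-laboratory variance components (illustrative sketch only)."""
    data = np.asarray(results_per_lab, dtype=float)   # shape (labs, replicates)
    p, n = data.shape
    grand = data.mean()
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (n - 1))
    ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (p - 1)
    s_r2 = ms_within                                   # repeatability variance
    s_L2 = max((ms_between - ms_within) / n, 0.0)      # between-lab variance
    s_R = np.sqrt(s_r2 + s_L2)                         # reproducibility SD
    return 100.0 * s_R / grand

# Hypothetical duplicate results (%) from five laboratories at the 1.0% level.
print(rsd_r([[0.95, 1.03], [1.10, 1.02], [0.88, 0.93], [1.05, 1.12], [0.99, 0.96]]))
```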

  1. A continuous flow isotope ratio mass spectrometry method for high precision determination of dissolved gas ratios and isotopic composition

    DEFF Research Database (Denmark)

    Charoenpong, C. N.; Bristow, L. A.; Altabet, M. A.

    2014-01-01

    ratio mass spectrometer (IRMS). A continuous flow of He carrier gas completely degasses the sample, and passes through the preparation and purification system before entering the IRMS for analysis. The use of this continuous He carrier permits short analysis times (less than 8 min per sample......) as compared with current high-precision methods. In addition to reference gases, calibration is achieved using air-equilibrated water standards of known temperature and salinity. Assessment of reference gas injections, air equilibrated standards, as well as samples collected in the field shows the accuracy...

  2. A new digital method for high precision neutron-gamma discrimination with liquid scintillation detectors

    International Nuclear Information System (INIS)

    Nakhostin, M

    2013-01-01

A new pulse-shape discrimination algorithm for neutron and gamma (n/γ) discrimination with liquid scintillation detectors has been developed, leading to a considerable improvement of n/γ separation quality. The method is based on triangular pulse shaping, which offers a high sensitivity to the shape of input pulses as well as excellent noise filtering characteristics. A clear separation of neutrons and γ-rays down to a scintillation light yield of about 65 keVee (electron equivalent energy) with a dynamic range of 45:1 was achieved. The method can potentially operate at high counting rates and is well suited for real-time measurements.
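A toy sketch of the general idea (not the paper's algorithm): synthesize a triangular weighting function from two cascaded moving averages and use a shape-sensitive ratio to separate fast, gamma-like pulses from slower, neutron-like ones. The pulse shapes, window length and discrimination parameter are illustrative assumptions.

```python
import numpy as np

def triangular_shape(pulse, k):
    """Digital triangular shaping realized as two cascaded k-sample moving
    averages (a common way to synthesize a triangular weighting function)."""
    box = np.ones(k) / k
    return np.convolve(np.convolve(pulse, box, mode="same"), box, mode="same")

def psd_parameter(pulse, k=16):
    """Shape-sensitive parameter: peak of the shaped pulse normalized to the
    total pulse integral.  Slow (neutron-like) pulses give a smaller value
    than fast (gamma-like) pulses of equal area; thresholds are empirical."""
    shaped = triangular_shape(pulse, k)
    return shaped.max() / pulse.sum()

# Toy pulses with different decay times (illustrative only).
t = np.arange(200.0)
gamma_like = np.exp(-t / 10.0)
neutron_like = 0.7 * np.exp(-t / 10.0) + 0.3 * np.exp(-t / 60.0)
print(psd_parameter(gamma_like), psd_parameter(neutron_like))
```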

  3. Simple and Reliable Method to Estimate the Fingertip Static Coefficient of Friction in Precision Grip.

    Science.gov (United States)

    Barrea, Allan; Bulens, David Cordova; Lefevre, Philippe; Thonnard, Jean-Louis

    2016-01-01

The static coefficient of friction (µstatic) plays an important role in dexterous object manipulation. Minimal normal force (i.e., grip force) needed to avoid dropping an object is determined by the tangential force at the fingertip-object contact and the frictional properties of the skin-object contact. Although frequently assumed to be constant for all levels of normal force (NF, the force normal to the contact), µstatic actually varies nonlinearly with NF and increases at low NF levels. No method is currently available to measure the relationship between µstatic and NF easily. Therefore, we propose a new method allowing the simple and reliable measurement of the fingertip µstatic at different NF levels, as well as an algorithm for determining µstatic from measured forces and torques. Our method is based on active, back-and-forth movements of a subject's finger on the surface of a fixed six-axis force and torque sensor. µstatic is computed as the ratio of the tangential to the normal force at slip onset. A negative power law captures the relationship between µstatic and NF. Our method allows the continuous estimation of µstatic as a function of NF during dexterous manipulation, based on the relationship between µstatic and NF measured before manipulation.
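The estimation step lends itself to a short sketch: µstatic is the tangential-to-normal force ratio at slip onset, and its dependence on normal force is captured by fitting a power law in log-log space. The force values below are invented for illustration.

```python
import numpy as np

# Tangential and normal forces at slip onset (illustrative values, in N).
nf = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
tf = np.array([0.9, 1.5, 2.6, 4.4, 7.6])

mu_static = tf / nf
a, log_k = np.polyfit(np.log(nf), np.log(mu_static), 1)   # mu = k * NF**a
k = np.exp(log_k)
print(f"mu_static ~ {k:.2f} * NF^{a:.2f}")                # exponent expected to be negative
```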

  4. Parallel Störmer-Cowell methods for high-precision orbit computations

    NARCIS (Netherlands)

    P.J. van der Houwen; E. Messina; J.J.B. de Swart (Jacques)

    1998-01-01

Many orbit problems in celestial mechanics are described by (nonstiff) initial-value problems (IVPs) for second-order ordinary differential equations of the form $y'' = \mathbf{f}(y)$. The most successful integration methods are based on high-order Runge-Kutta-Nyström formulas. However, these
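For orientation, the sketch below implements the classic two-step Störmer (Störmer-Verlet) recursion for y'' = f(y), the low-order ancestor of the high-order parallel Störmer-Cowell formulas discussed in this record; it is not the parallel method itself, and the test problem and step size are arbitrary.

```python
import numpy as np

def stormer(f, y0, v0, h, n_steps):
    """Classic two-step Stoermer recursion for y'' = f(y):
    y_{k+1} = 2*y_k - y_{k-1} + h**2 * f(y_k), started with a Taylor step."""
    ys = [np.asarray(y0, dtype=float)]
    ys.append(ys[0] + h * np.asarray(v0, dtype=float) + 0.5 * h**2 * f(ys[0]))
    for _ in range(n_steps - 1):
        ys.append(2 * ys[-1] - ys[-2] + h**2 * f(ys[-1]))
    return np.array(ys)

# Kepler-like test problem: planar two-body motion, f(y) = -y / |y|**3.
f = lambda y: -y / np.linalg.norm(y) ** 3
orbit = stormer(f, y0=[1.0, 0.0], v0=[0.0, 1.0], h=1e-3, n_steps=10000)
print(orbit[-1])
```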

  5. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    Science.gov (United States)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these
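The Picard fixed-point idea underlying the Chebyshev-Picard solvers mentioned above can be illustrated on a simple first-order initial value problem; the sketch below uses a plain trapezoid-rule Picard iteration and makes no attempt at the Chebyshev machinery, boundary-value handling or regularization described in the dissertation.

```python
import numpy as np

def picard(f, y0, t, n_iter=20):
    """Plain Picard iteration for the IVP y' = f(t, y), y(t[0]) = y0:
    repeatedly evaluate y <- y0 + cumulative integral of f(t, y) on a fixed
    grid (trapezoid rule).  Illustrates only the fixed-point idea."""
    y = np.full_like(t, y0, dtype=float)
    for _ in range(n_iter):
        fy = f(t, y)
        integral = np.concatenate(
            [[0.0], np.cumsum(0.5 * (fy[1:] + fy[:-1]) * np.diff(t))])
        y = y0 + integral
    return y

t = np.linspace(0.0, 1.0, 101)
y = picard(lambda t, y: y, y0=1.0, t=t)      # y' = y  ->  y(1) should approach e
print(y[-1], np.exp(1.0))
```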

  6. A Precise Method for Cloth Configuration Parsing Applied to Single-Arm Flattening

    Directory of Open Access Journals (Sweden)

    Li Sun

    2016-04-01

Full Text Available In this paper, we investigate the contribution that visual perception affords to a robotic manipulation task in which a crumpled garment is flattened by eliminating visually detected wrinkles. In order to explore and validate visually guided clothing manipulation in a repeatable and controlled environment, we have developed a hand-eye interactive virtual robot manipulation system that incorporates a clothing simulator to close the effector-garment-visual sensing interaction loop. We present the technical details and compare the performance of two different methods for detecting, representing and interpreting wrinkles within clothing surfaces captured in high-resolution depth maps. The first method we present relies upon a clustering-based method for localizing and parametrizing wrinkles, while the second method adopts a more advanced geometry-based approach in which shape-topology analysis underpins the identification of the cloth configuration (i.e., maps wrinkles). Having interpreted the state of the cloth configuration by means of either of these methods, a heuristic-based flattening strategy is then executed to infer the appropriate forces, their directions and gripper contact locations that must be applied to the cloth in order to flatten the perceived wrinkles. A greedy approach, which attempts to flatten the largest detected wrinkle for each perception-iteration cycle, has been successfully adopted in this work. We present the results of our heuristic-based flattening methodology which relies upon clustering-based and geometry-based features respectively. Our experiments indicate that geometry-based features have the potential to provide a greater degree of clothing configuration understanding and, as a consequence, improve flattening performance. The results of experiments using a real robot (as opposed to a simulated robot) also confirm our proposition that a more effective visual perception system can advance the performance of cloth

  7. A routine high-precision method for Lu-Hf isotope geochemistry and chronology

    Science.gov (United States)

    Patchett, P.J.; Tatsumoto, M.

    1981-01-01

A method for chemical separation of Lu and Hf from rock, meteorite and mineral samples is described, together with a much improved mass spectrometric running technique for Hf. This allows (i) geo- and cosmochronology using the ¹⁷⁶Lu→¹⁷⁶Hf + β⁻ decay scheme, and (ii) geochemical studies of planetary processes in the earth and moon. Chemical yields for the three-stage ion-exchange column procedure average 90% for Hf. Chemical blanks are small; an international mass spectrometric standard, prepared from a single batch of JMC 475, is available in suitable aliquots from Denver. Lu-Hf analyses of the standard rocks BCR-1 and JB-1 are given. The potential of the Lu-Hf method in isotope geochemistry is assessed. © 1980 Springer-Verlag.

  8. Assessing the optimized precision of the aircraft mass balance method for measurement of urban greenhouse gas emission rates through averaging

    Directory of Open Access Journals (Sweden)

    Alexie M. F. Heimburger

    2017-06-01

Full Text Available To effectively address climate change, aggressive mitigation policies need to be implemented to reduce greenhouse gas emissions. Anthropogenic carbon emissions are mostly generated from urban environments, where human activities are spatially concentrated. Improvements in uncertainty determinations and precision of measurement techniques are critical to permit accurate and precise tracking of emissions changes relative to the reduction targets. As part of the INFLUX project, we quantified carbon dioxide (CO2), carbon monoxide (CO) and methane (CH4) emission rates for the city of Indianapolis by averaging results from nine aircraft-based mass balance experiments performed in November-December 2014. Our goal was to assess the achievable precision of the aircraft-based mass balance method through averaging, assuming constant CO2, CH4 and CO emissions during a three-week field campaign in late fall. The averaging method leads to an emission rate of 14,600 mol/s for CO2, assumed to be largely fossil-derived for this period of the year, and 108 mol/s for CO. The relative standard error of the mean is 17% and 16%, for CO2 and CO, respectively, at the 95% confidence level (CL), i.e. a more than 2-fold improvement from the previous estimate of ~40% for single-flight measurements for Indianapolis. For CH4, the averaged emission rate is 67 mol/s, while the standard error of the mean at 95% CL is large, i.e. ±60%. Given the results for CO2 and CO for the same flight data, we conclude that this much larger scatter in the observed CH4 emission rate is most likely due to variability of CH4 emissions, suggesting that the assumption of constant daily emissions is not correct for CH4 sources. This work shows that repeated measurements using aircraft-based mass balance methods can yield sufficient precision of the mean to inform emissions reduction efforts by detecting changes over time in urban emissions.
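The averaging argument reduces to computing the standard error of the mean over repeated flights; a sketch with invented flight values (not the INFLUX numbers) is given below.

```python
import numpy as np
from scipy import stats

# Per-flight emission-rate estimates from repeated mass-balance experiments
# (illustrative values only, in kmol/s).
co2_flights = np.array([12.1, 16.8, 13.5, 15.2, 17.9, 14.0, 13.1, 16.2, 12.9])

mean = co2_flights.mean()
sem = co2_flights.std(ddof=1) / np.sqrt(co2_flights.size)
t95 = stats.t.ppf(0.975, df=co2_flights.size - 1)
print(f"mean = {mean:.1f} kmol/s, +/- {100 * t95 * sem / mean:.0f}% (95% CL)")
```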

  9. Research on a new type of precision cropping method with variable frequency vibration

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

Aiming at the cropping operations widely used in industrial production, a new bar cropping method is presented. The rotational speed of the motor driving the eccentric blocks is controlled by a frequency changer, and the shearing die applies a controllable force, frequency and amplitude of vibration to the bar. By utilizing the stress concentration at the bottom of a V-shaped groove on the bar, low-stress bar cropping is realized. Bar cropping experiments on duralumin alloy and steel ...

  10. Mobility shift affinity capillary electrophoresis - A fast and precise method for testing ligand influences on proteins

    OpenAIRE

    Redweik, Sabine

    2013-01-01

Interactions between proteins and various ligands can be investigated with mobility shift affinity capillary electrophoresis. In a series of measurements, the changes in protein mobility caused by a ligand are examined as a function of the ligand concentration. The separation principle of mobility shift affinity capillary electrophoresis is based on capillary zone electrophoresis, from which several advantages and disadvantages of the method follow. The most important advantage comp...

  11. Precision of GNSS instruments by static method comparing in real time

    Directory of Open Access Journals (Sweden)

    Slavomír Labant

    2009-09-01

Full Text Available This paper describes a comparison of the measuring accuracy of two instruments from the firm Leica. One of them receives signals only from GPS satellites, while the other works with both GPS and GLONASS satellites. Measurements were carried out by the RTK static method with 2-minute observations. The measurement processing is separated into X, Y (position) and h (height). Adjustment of direct observations is used as the adjustment method.

  12. Development of Simple and Precise Method of Arginine Determination in Rumen Fluid by Spectrophotometer

    International Nuclear Information System (INIS)

    Chacher, B.; Marghazani, I. B.; Liu, J. X.; Liu, H. Y.

    2015-01-01

The objective of the current study was to establish a convenient, economical and accurate procedure to determine the arginine (ARG) concentration in rumen fluid. Rumen fluid was collected from 3 rumen-fistulated Chinese Holstein dairy cows and incubated with 1 mmol/L unprotected ARG, without ARG (control), or with medium only (blank) in a syringe system, in triplicate as replicates. All syringes were incubated in a water bath at 39°C for 0, 2, 4, 6, 12 and 24 h and then terminated to measure the ARG concentration. The Sakaguchi reaction method was used to analyze the ARG concentration in rumen fluid by determining the rumen degradation rate of protected and unprotected ARG. Temperature, time and absorbance were optimized in the procedure based on the Sakaguchi reaction. Color consistency remained for 4-6 min. The optimum temperature of 0-5°C was observed for a maximum optical density of 0.663 at a wavelength of 500 nm. The minimum ARG that could be determined in rumen fluid by spectrophotometer was 4-5 μg/mL. No significant (P>0.05) differences were observed between results derived from the spectrophotometer and amino acid analyzer methods. In conclusion, the spectrophotometric method of ARG determination in rumen fluid based on the Sakaguchi reaction is easy, accurate and economical, and could be useful in studying ARG metabolism in the rumen. (author)

  13. High-precision pose measurement method in wind tunnels based on laser-aided vision technology

    Directory of Open Access Journals (Sweden)

    Liu Wei

    2015-08-01

    Full Text Available The measurement of position and attitude parameters for the isolated target from a high-speed aircraft is a great challenge in the field of wind tunnel simulation technology. In this paper, firstly, an image acquisition method for small high-speed targets with multi-dimensional movement in wind tunnel environment is proposed based on laser-aided vision technology. Combining with the trajectory simulation of the isolated model, the reasonably distributed laser stripes and self-luminous markers are utilized to capture clear images of the object. Then, after image processing, feature extraction, stereo correspondence and reconstruction, three-dimensional information of laser stripes and self-luminous markers are calculated. Besides, a pose solution method based on projected laser stripes and self-luminous markers is proposed. Finally, simulation experiments on measuring the position and attitude of high-speed rolling targets are conducted, as well as accuracy verification experiments. Experimental results indicate that the proposed method is feasible and efficient for measuring the pose parameters of rolling targets in wind tunnels.

  14. Assessing total nitrogen in surface-water samples--precision and bias of analytical and computational methods

    Science.gov (United States)

    Rus, David L.; Patton, Charles J.; Mueller, David K.; Crawford, Charles G.

    2013-01-01

    The characterization of total-nitrogen (TN) concentrations is an important component of many surface-water-quality programs. However, three widely used methods for the determination of total nitrogen—(1) derived from the alkaline-persulfate digestion of whole-water samples (TN-A); (2) calculated as the sum of total Kjeldahl nitrogen and dissolved nitrate plus nitrite (TN-K); and (3) calculated as the sum of dissolved nitrogen and particulate nitrogen (TN-C)—all include inherent limitations. A digestion process is intended to convert multiple species of nitrogen that are present in the sample into one measureable species, but this process may introduce bias. TN-A results can be negatively biased in the presence of suspended sediment, and TN-K data can be positively biased in the presence of elevated nitrate because some nitrate is reduced to ammonia and is therefore counted twice in the computation of total nitrogen. Furthermore, TN-C may not be subject to bias but is comparatively imprecise. In this study, the effects of suspended-sediment and nitrate concentrations on the performance of these TN methods were assessed using synthetic samples developed in a laboratory as well as a series of stream samples. A 2007 laboratory experiment measured TN-A and TN-K in nutrient-fortified solutions that had been mixed with varying amounts of sediment-reference materials. This experiment identified a connection between suspended sediment and negative bias in TN-A and detected positive bias in TN-K in the presence of elevated nitrate. A 2009–10 synoptic-field study used samples from 77 stream-sampling sites to confirm that these biases were present in the field samples and evaluated the precision and bias of TN methods. The precision of TN-C and TN-K depended on the precision and relative amounts of the TN-component species used in their respective TN computations. Particulate nitrogen had an average variability (as determined by the relative standard deviation) of 13
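The two computed totals (TN-K and TN-C) are sums of independently measured species, so their precision follows from simple error propagation; the sketch below uses illustrative concentrations and RSD values rather than the study's data.

```python
import numpy as np

def tn_sum(components, rsds):
    """Computed total nitrogen as a sum of component species, with the
    combined relative standard deviation from simple error propagation
    (independent components assumed).  Concentrations in mg/L, RSDs in %."""
    components = np.asarray(components, dtype=float)
    sds = np.asarray(rsds, dtype=float) / 100.0 * components
    total = components.sum()
    return total, 100.0 * np.sqrt((sds ** 2).sum()) / total

# Illustrative values: TN-K = TKN + (nitrate plus nitrite),
#                      TN-C = dissolved N + particulate N.
print(tn_sum([0.80, 1.20], [10, 5]))    # TN-K: total and combined RSD (%)
print(tn_sum([1.60, 0.45], [5, 13]))    # TN-C: total and combined RSD (%)
```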

  15. Transform methods for precision continuum and control models of flexible space structures

    Science.gov (United States)

    Lupi, Victor D.; Turner, James D.; Chun, Hon M.

    1991-01-01

    An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.

  16. High-precision surface formation method and the 3-D shaded display of the brain obtained from CT images

    International Nuclear Information System (INIS)

    Niki, Noboru; Fukuda, Hiroshi

    1987-01-01

    Our aim is to display the precise 3-D appearance of the brain based on data provided by CT images. For this purpose, we have developed a method of precisely forming surfaces from brain contours. The method expresses the brain surface as the sum of several partial surfaces. Each partial surface is individually constructed from respective parts of brain contours. The brain surface is finally made up of a superposition of partial surfaces. Two surface formation algorithms based on this principle are presented. One expresses the brain surface as the sum of a brain outline surface and sulcus surfaces. The other expresses the brain surface as the sum of surfaces in the same part of the brain. The effectiveness of these algorithms is shown by evaluation of contours obtained from dog and human brain samples and CT images. The latter algorithm is shown to be superior for high-resolution CT images. Optional cut-away views of the brain constructed by these algorithms are also shown. (author)

  17. Precision measurement of the e+e- → π+π-(γ) cross-section with ISR method

    International Nuclear Information System (INIS)

    Wang, L.L.

    2009-05-01

The vacuum polarization integral involves the vector spectral functions, which can be experimentally determined. As the dominant uncertainty source to the integral, a precision measurement of the e+e- → π+π-(γ) cross section as a function of energy, from the 2π threshold to 3 GeV, is performed by taking the ratio of the e+e- → π+π-(γ) cross section to the e+e- → μ+μ-(γ) cross section, both measured with BABAR data using the ISR method in one analysis. Besides the fact that taking the ratio of the cross sections of the two processes cancels several systematic uncertainties, the acceptance differences between data and Monte Carlo results are measured using the same data, and the corresponding corrections are applied to the efficiencies predicted by the Monte Carlo method, which controls the uncertainties. The achieved final uncertainty of the Born cross section of e+e- → π+π-(γ) in the ρ mass region (0.6-0.9 GeV) is 0.54%. As a consequence of the new vacuum polarization calculation using the new precision result for the e+e- → π+π-(γ) cross section, the impact on the standard model prediction of the muon anomalous magnetic moment g-2 is presented and compared with other data-based predictions and with the direct measurement. (author)

  18. A METHOD FOR SELF-CALIBRATION IN SATELLITE WITH HIGH PRECISION OF SPACE LINEAR ARRAY CAMERA

    Directory of Open Access Journals (Sweden)

    W. Liu

    2016-06-01

Full Text Available At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually performed using data from a ground calibration field after capturing the images. The entire process is very complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field. Thus, regular calibration using field data cannot be ensured. This article proposes a real time self-calibration method for a space linear array camera on a satellite using the optical auto collimation principle. A collimating light source and small matrix array CCD devices are installed inside the load system of the satellite; these use the same light path as the linear array camera. We can extract the location changes of the cross marks in the matrix array CCD to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained using this method. On one hand, the camera's change regulation can be mastered accurately and the camera's attitude can be adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be realized quickly, which improves the efficiency and reliability of photogrammetric processing.

  19. Energy exchangers with LCT as a precision method for diet control in LCHADD.

    Science.gov (United States)

    Mozrzymas, Renata; Konikowska, Klaudia; Regulska-Ilow, Bożena

    2017-01-01

    Long-chain 3-hydroxyacyl-CoA dehydrogenase deficiency (LCHADD) is a rare genetic disease. The LCHADD treatment is mainly based on special diet. In this diet, energy from long-chain triglycerides (LCT) cannot exceed 10%, however energy intake from the consumption of medium-chain triglycerides (MCTs) should increase. The daily intake of energy should be compatible with energy requirements and treatment should involve frequent meals including during the night to avoid periods of fasting. In fact, there are no recommendations for total content of LCT in all of the allowed food in the LCHADD diet. The aim of the study was to present a new method of diet composition in LCHADD with the use of blocks based on energy exchangers with calculated LCT content. In the study, the diet schema was shown for calculating the energy requirements and LCT content in the LCHADD diet. How to create the diet was also shown, based on a food pyramid developed for patients with LCHADD. The blocks will make it possible, in a quick and simple way, to create a balanced diet which provides adequate energy value, essential nutrients and LCT content. This method can be used by doctors and dietitians who specialize in treating rare metabolic diseases. It can also be used by patients and their families for accurate menu planning with limited LCT content.

  20. The development of precisely analytical method for the concentrated boric acid solution in the NPP systems

    Energy Technology Data Exchange (ETDEWEB)

    Sung, G. B.; Jung, K. H.; Kang, D. W. [KEPRI, Taejon (Korea, Republic of); Park, C. S. [KEPCO, Taejon (Korea, Republic of)

    1999-05-01

Boric acid is used for reactivity control in nuclear reactors, which frequently results in leftover boric acid. This extra boric acid is stored in the boric acid storage tank after concentration by the boric acid evaporator. Apart from this excess, highly concentrated boric acid is stored in the safety-related boric acid storage tank. Accordingly, proper maintenance of this boric acid is one of the greatest safety concerns. The solubility of boric acid decreases with decreasing temperature, resulting in its precipitation. Consequently, the boric acid storage tanks are maintained at high temperature. The analysis should also be performed at a similar temperature to prevent the formation of boric acid precipitate, which is difficult to achieve and affects the accuracy of analytical results. This paper presents a new sampling and measuring technique that makes up for the difficulties mentioned above and shows several advantages, including improved reliability and short analysis time. The method is based on gravimetric and dilution methods and is expected to be widely used in field applications.

  1. A flexible fluorescence correlation spectroscopy based method for quantification of the DNA double labeling efficiency with precision control

    International Nuclear Information System (INIS)

    Hou, Sen; Tabaka, Marcin; Sun, Lili; Trochimczyk, Piotr; Kaminski, Tomasz S; Kalwarczyk, Tomasz; Zhang, Xuzhu; Holyst, Robert

    2014-01-01

    We developed a laser-based method to quantify the double labeling efficiency of double-stranded DNA (dsDNA) in a fluorescent dsDNA pool with fluorescence correlation spectroscopy (FCS). Though, for quantitative biochemistry, accurate measurement of this parameter is of critical importance, before our work it was almost impossible to quantify what percentage of DNA is doubly labeled with the same dye. The dsDNA is produced by annealing complementary single-stranded DNA (ssDNA) labeled with the same dye at 5′ end. Due to imperfect ssDNA labeling, the resulting dsDNA is a mixture of doubly labeled dsDNA, singly labeled dsDNA and unlabeled dsDNA. Our method allows the percentage of doubly labeled dsDNA in the total fluorescent dsDNA pool to be measured. In this method, we excite the imperfectly labeled dsDNA sample in a focal volume of <1 fL with a laser beam and correlate the fluctuations of the fluorescence signal to get the FCS autocorrelation curves; we express the amplitudes of the autocorrelation function as a function of the DNA labeling efficiency; we perform a comparative analysis of a dsDNA sample and a reference dsDNA sample, which is prepared by increasing the total dsDNA concentration c (c > 1) times by adding unlabeled ssDNA during the annealing process. The method is flexible in that it allows for the selection of the reference sample and the c value can be adjusted as needed for a specific study. We express the precision of the method as a function of the ssDNA labeling efficiency or the dsDNA double labeling efficiency. The measurement precision can be controlled by changing the c value. (letter)

  2. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy

    CERN Document Server

    Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F

    2010-01-01

Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH) Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...

  3. AFEAP cloning: a precise and efficient method for large DNA sequence assembly.

    Science.gov (United States)

    Zeng, Fanli; Zang, Jinping; Zhang, Suhua; Hao, Zhimin; Dong, Jingao; Lin, Yibin

    2017-11-14

    Recent development of DNA assembly technologies has spurred myriad advances in synthetic biology, but new tools are always required for complicated scenarios. Here, we have developed an alternative DNA assembly method named AFEAP cloning (Assembly of Fragment Ends After PCR), which allows scarless, modular, and reliable construction of biological pathways and circuits from basic genetic parts. The AFEAP method requires two-round of PCRs followed by ligation of the sticky ends of DNA fragments. The first PCR yields linear DNA fragments and is followed by a second asymmetric (one primer) PCR and subsequent annealing that inserts overlapping overhangs at both sides of each DNA fragment. The overlapping overhangs of the neighboring DNA fragments annealed and the nick was sealed by T4 DNA ligase, followed by bacterial transformation to yield the desired plasmids. We characterized the capability and limitations of new developed AFEAP cloning and demonstrated its application to assemble DNA with varying scenarios. Under the optimized conditions, AFEAP cloning allows assembly of an 8 kb plasmid from 1-13 fragments with high accuracy (between 80 and 100%), and 8.0, 11.6, 19.6, 28, and 35.6 kb plasmids from five fragments at 91.67, 91.67, 88.33, 86.33, and 81.67% fidelity, respectively. AFEAP cloning also is capable to construct bacterial artificial chromosome (BAC, 200 kb) with a fidelity of 46.7%. AFEAP cloning provides a powerful, efficient, seamless, and sequence-independent DNA assembly tool for multiple fragments up to 13 and large DNA up to 200 kb that expands synthetic biologist's toolbox.

  4. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    Science.gov (United States)

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
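A minimal sketch of the approach (not the authors' implementation): SciPy's differential evolution searches for a wavelength subset that minimizes the leave-one-out error of a least-squares calibration on synthetic mixture spectra. The data dimensions, noise model and 0.5 selection threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

# Toy stand-in for THz absorption spectra of binary mixtures: a two-component
# Beer-Lambert model plus a noisy band mimicking scattering and other
# random disturbances.  All data here are synthetic and illustrative.
n_wl, n_samples = 20, 25
conc = rng.uniform(0, 1, n_samples)                       # analyte fraction
comp_a, comp_b = rng.uniform(0.2, 1.0, (2, n_wl))
spectra = np.outer(conc, comp_a) + np.outer(1 - conc, comp_b)
spectra[:, 14:] += rng.normal(0, 0.5, (n_samples, n_wl - 14))   # disturbed band
spectra += rng.normal(0, 0.02, spectra.shape)

def loo_error(weights):
    """DE objective: leave-one-out prediction error of a least-squares
    calibration restricted to wavelengths whose weight exceeds 0.5."""
    idx = np.where(np.asarray(weights) > 0.5)[0]
    if idx.size < 2:
        return 1e6
    X = np.column_stack([spectra[:, idx], np.ones(n_samples)])
    err = 0.0
    for i in range(n_samples):
        mask = np.arange(n_samples) != i
        beta, *_ = np.linalg.lstsq(X[mask], conc[mask], rcond=None)
        err += (X[i] @ beta - conc[i]) ** 2
    return err / n_samples

result = differential_evolution(loo_error, bounds=[(0, 1)] * n_wl,
                                maxiter=20, popsize=8, seed=1, polish=False)
print("selected wavelengths:", np.where(result.x > 0.5)[0])
```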

  5. Precise measurement of the top-quark mass at the CMS experiment using the ideogram method

    International Nuclear Information System (INIS)

    Seidel, Markus

    2015-08-01

    The mass of the top quark is measured using a sample of t anti t candidate events with one electron or muon and at least four jets in the final state, collected by CMS in proton-proton collisions at √(s)=8 TeV at the LHC. The candidate events are selected from data corresponding to an integrated luminosity of 19.7 fb -1 . For each event the top-quark mass is reconstructed from a kinematic fit of the decay products to a t anti t hypothesis. In order to minimize the uncertainties from jet energy corrections, the top-quark mass is determined simultaneously with a jet energy scale factor (JSF), constrained by the known mass of the W boson decaying to quark-antiquark pairs. A joint likelihood fit taking into account multiple interpretations per event - the ideogram method - is used. From the simultaneous fit, a top-quark mass of 172.15±0.19(stat.+JSF)±0.61(syst.) GeV is obtained. Using an additional constraint from the determination of the jet energy scale in γ/Z+jet events yields m t =172.38±0.16(stat.+JSF)±0.49(syst.) GeV. The results are discussed in the context of different event generator implementations. Possible kinematic biases are studied by performing the measurement in different regions of the phase space.

  6. Precise measurement of the top-quark mass at the CMS experiment using the ideogram method

    CERN Document Server

    Seidel, Markus; Stadie, Hartmut

    2015-01-01

The mass of the top quark is measured using a sample of tt candidate events with one electron or muon and at least four jets in the final state, collected by CMS in proton-proton collisions at √s = 8 TeV at the LHC. The candidate events are selected from data corresponding to an integrated luminosity of 19.7 fb−1. For each event the top-quark mass is reconstructed from a kinematic fit of the decay products to a tt hypothesis. In order to minimize the uncertainties from jet energy corrections, the top-quark mass is determined simultaneously with a jet energy scale factor (JSF), constrained by the known mass of the W boson decaying to quark-antiquark pairs. A joint likelihood fit taking into account multiple interpretations per event – the ideogram method – is used. From the simultaneous fit, a top-quark mass of 172.15 ± 0.19 (stat.+JSF) ± 0.61 (syst.) GeV is obtained. Using an additional constraint from the determination of the jet energy scale in γ/Z+jet events yields mt = 172.38 ± 0.16 (stat.+JSF) ± 0.49 (syst.) GeV.

  7. Precision Agriculture Design Method Using a Distributed Computing Architecture on Internet of Things Context

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ferrández-Pastor

    2018-05-01

Full Text Available The Internet of Things (IoT) has opened productive ways to cultivate soil with the use of low-cost hardware (sensors/actuators) and communication (Internet) technologies. Remote equipment and crop monitoring, predictive analytic, weather forecasting for crops or smart logistics and warehousing are some examples of these new opportunities. Nevertheless, farmers are agriculture experts but, usually, do not have experience in IoT applications. Users who use IoT applications must participate in its design, improving the integration and use. In this work, different industrial agricultural facilities are analysed with farmers and growers to design new functionalities based on IoT paradigms deployment. User-centred design model is used to obtain knowledge and experience in the process of introducing technology in agricultural applications. Internet of things paradigms are used as resources to facilitate the decision making. IoT architecture, operating rules and smart processes are implemented using a distributed model based on edge and fog computing paradigms. A communication architecture is proposed using these technologies. The aim is to help farmers to develop smart systems both, in current and new facilities. Different decision trees to automate the installation, designed by the farmer, can be easily deployed using the method proposed in this document.

  8. A TPC-like readout method for high precision muon-tracking using GEM-detectors

    Energy Technology Data Exchange (ETDEWEB)

    Flierl, Bernhard; Biebel, Otmar; Bortfeldt, Jonathan; Hertenberger, Ralf; Klitzner, Felix; Loesel, Philipp; Mueller, Ralph [Ludwig-Maximilians-Universitaet Muenchen (Germany); Zibell, Andre [Julius-Maximilians-Universitaet Wuerzburg (Germany)

    2016-07-01

Gaseous electron multiplier (GEM) detectors are well suited for tracking of charged particles. Three dimensional tracking in a single layer can be achieved by application of a time-projection-chamber like readout mode (μTPC), if the drift time of the electrons is measured and the position dependence of the arrival time is used to calculate the inclination angle of the track. To optimize the tracking capabilities for ion tracks, drift gas mixtures with low drift velocity have been investigated by measuring tracks of cosmic muons in a compact setup of four GEM detectors of 100 x 100 x 6 mm³ active volume each and an angular acceptance of -25° to 25°. The setup consists of three detectors with two-dimensional strip readout layers of 0.4 mm pitch and one detector with a single strip readout layer of 0.25 mm pitch. All strips are read out by APV25 frontend boards and the amplification stage in the detectors consists of three GEM foils. Tracks are reconstructed by the μTPC method in one of the detectors and are then compared to the prediction from the other three detectors defined by the center of charge in every detector. We report our study of Argon and Helium based noble gas mixtures with carbon dioxide as quencher.
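The µTPC principle described above reduces, per detector, to fitting drift distance against strip position; a toy sketch with assumed pitch, drift velocity and hit values (not the detector's actual parameters) is shown below.

```python
import numpy as np

# Each fired strip gives a transverse position (strip index times pitch) and a
# drift time; converting time to drift distance and fitting a straight line
# yields the local track inclination.  All numbers are illustrative.
pitch_mm = 0.4
v_drift_mm_per_ns = 0.005            # slow drift gas, assumed value

strip_index = np.array([0, 1, 2, 3, 4, 5])
drift_time_ns = np.array([510, 430, 355, 270, 195, 110])

x = strip_index * pitch_mm
z = drift_time_ns * v_drift_mm_per_ns
slope, intercept = np.polyfit(x, z, 1)
theta = np.degrees(np.arctan(slope))
print(f"track inclination ~ {theta:.1f} deg")   # sign/reference convention arbitrary here
```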

  9. Development of precise measurement method of neutron energy for plasma temperature diagnostics in thermonuclear fusion

    International Nuclear Information System (INIS)

    Mori, Chizuo; Gotoh, Junichi; Uritani, Akira; Miyahara, Hiroshi; Ikeda, Yuichiro; Kasugai, Yoshimi; Kaneko, Junichi

    1998-01-01

    There are many types of fast neutron spectrometers for plasma temperature diagnostics: the 28Si(n,α)25Mg reaction gives an energy resolution of 2.2% for 14 MeV neutrons, and the 12C(n,α)9Be reaction gives a resolution of 2.15%. These detectors, however, suffer from radiation damage, which requires exchanging the detector for a new one every few months depending on the usage. The recoil proton method has also been developed, using a liquid or plastic scintillator as a neutron-to-proton converter in front of a Si detector (the so-called counter-telescope type), giving a resolution of 4.0%. This type of spectrometer can reduce radiation damage by placing the Si detector outside the neutron beam. The scintillator can measure the energy lost by the protons in the converter (i.e. the scintillator), and the measured energy loss can be used to improve the energy resolution. However, the energy resolution of an organic scintillator itself is generally not so good. We proposed to use a proportional counter with CH4 as the counting gas and also as a neutron-proton converter, which has a far better energy resolution than plastic scintillators, although the time resolution of counting in proportional counters is generally inferior to that of organic scintillation counters. The characteristics of the new spectrometer were studied experimentally and were also simulated with analytical calculations. (author)

  10. Precision Agriculture Design Method Using a Distributed Computing Architecture on Internet of Things Context.

    Science.gov (United States)

    Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Martínez, José

    2018-05-28

    The Internet of Things (IoT) has opened productive ways to cultivate soil with the use of low-cost hardware (sensors/actuators) and communication (Internet) technologies. Remote equipment and crop monitoring, predictive analytic, weather forecasting for crops or smart logistics and warehousing are some examples of these new opportunities. Nevertheless, farmers are agriculture experts but, usually, do not have experience in IoT applications. Users who use IoT applications must participate in its design, improving the integration and use. In this work, different industrial agricultural facilities are analysed with farmers and growers to design new functionalities based on IoT paradigms deployment. User-centred design model is used to obtain knowledge and experience in the process of introducing technology in agricultural applications. Internet of things paradigms are used as resources to facilitate the decision making. IoT architecture, operating rules and smart processes are implemented using a distributed model based on edge and fog computing paradigms. A communication architecture is proposed using these technologies. The aim is to help farmers to develop smart systems both, in current and new facilities. Different decision trees to automate the installation, designed by the farmer, can be easily deployed using the method proposed in this document.

  11. Method for visualization and presentation of priceless old prints based on precise 3D scan

    Science.gov (United States)

    Bunsch, Eryk; Sitnik, Robert

    2014-02-01

    Graphic prints and manuscripts constitute a main part of the cultural heritage objects created by most of the known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes of external conditions (temperature, humidity). Today it is possible to use advanced digitalization techniques for the documentation and visualization of such objects. In situations where presentation of the original heritage object is impossible, there is a need to develop a method allowing documentation and then presentation to the audience of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy", consists of three series of woodcuts by Albrecht Dürer. The measurement system used consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. This device was custom built to meet conservators' requirements, especially the lack of ultraviolet or infrared radiation emission in the direction of the measured object. Documentation of one page from the book requires about 380 directional measurements, which constitute about 3 billion sample points. The distance between the points in the cloud is 20 μm. A measurement with an MSD (measurement sampling density) of 2500 points makes it possible to show to the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of the software, which works directly on point clouds, is the possibility of freely manipulating the virtual light source.

  12. A modified time-of-flight method for precise determination of high speed ratios in molecular beams

    Energy Technology Data Exchange (ETDEWEB)

    Salvador Palau, A.; Eder, S. D., E-mail: sabrina.eder@uib.no; Kaltenbacher, T.; Samelin, B.; Holst, B. [Department of Physics and Technology, University of Bergen, Allégaten 55, 5007 Bergen (Norway); Bracco, G. [Department of Physics and Technology, University of Bergen, Allégaten 55, 5007 Bergen (Norway); CNR-IMEM, Department of Physics, University of Genova, V. Dodecaneso 33, 16146 Genova (Italy)

    2016-02-15

    Time-of-flight (TOF) is a standard experimental technique for determining, among others, the speed ratio S (velocity spread) of a molecular beam. The speed ratio is a measure for the monochromaticity of the beam and an accurate determination of S is crucial for various applications, for example, for characterising chromatic aberrations in focussing experiments related to helium microscopy or for precise measurements of surface phonons and surface structures in molecular beam scattering experiments. For both of these applications, it is desirable to have as high a speed ratio as possible. Molecular beam TOF measurements are typically performed by chopping the beam using a rotating chopper with one or more slit openings. The TOF spectra are evaluated using a standard deconvolution method. However, for higher speed ratios, this method is very sensitive to errors related to the determination of the slit width and the beam diameter. The exact sensitivity depends on the beam diameter, the number of slits, the chopper radius, and the chopper rotation frequency. We present a modified method suitable for the evaluation of TOF measurements of high speed ratio beams. The modified method is based on a systematic variation of the chopper convolution parameters so that a set of independent measurements that can be fitted with an appropriate function are obtained. We show that with this modified method, it is possible to reduce the error by typically one order of magnitude compared to the standard method.
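
    For orientation, the speed ratio quoted in such measurements is usually defined through the width of the parallel velocity distribution of the supersonic beam; the convention below is a common one assumed here for illustration, not a formula quoted from the paper:

```latex
f(v)\;\propto\;v^{3}\exp\!\left[-\left(\frac{v-u}{\alpha}\right)^{2}\right],
\qquad \alpha=\sqrt{\frac{2k_{B}T_{\parallel}}{m}},
\qquad S=\frac{u}{\alpha},
\qquad \frac{\Delta v_{\mathrm{FWHM}}}{u}\approx\frac{2\sqrt{\ln 2}}{S}.
```

    A higher speed ratio therefore corresponds to a narrower (more monochromatic) velocity distribution, which is why the deconvolution becomes increasingly sensitive to the chopper slit width and beam diameter.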

  13. An improved gravity compensation method for high-precision free-INS based on MEC–BP–AdaBoost

    International Nuclear Information System (INIS)

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Li, Jing

    2016-01-01

    In recent years, with the rapid improvement of inertial sensors (accelerometers and gyroscopes), gravity compensation has become more important for improving navigation accuracy in inertial navigation systems (INS), especially for high-precision INS. This paper proposes a mind evolutionary computation (MEC) back propagation (BP) AdaBoost algorithm neural-network-based gravity compensation method that estimates the gravity disturbance on the track based on measured gravity data. A MEC–BP–AdaBoost network-based gravity compensation algorithm used in the training process to establish the prediction model takes the carrier position (longitude and latitude) provided by INS as the input data and the gravity disturbance as the output data, and then compensates the obtained gravity disturbance into the INS’s error equations to restrain the position error propagation. The MEC–BP–AdaBoost algorithm can not only effectively avoid BP neural networks being trapped in local extrema, but also perfectly solve the nonlinearity between the input and output data that cannot be solved by traditional interpolation methods, such as least-square collocation (LSC) interpolation. The accuracy and feasibility of the proposed interpolation method are verified through numerical tests. A comparison of several other compensation methods applied in field experiments, including LSC interpolation and traditional BP interpolation, highlights the superior performance of the proposed method. The field experiment results show that the maximum value of the position error can reduce by 28% with the proposed gravity compensation method. (paper)

  14. Accurate and precise DNA quantification in the presence of different amplification efficiencies using an improved Cy0 method.

    Science.gov (United States)

    Guescini, Michele; Sisti, Davide; Rocchi, Marco B L; Panebianco, Renato; Tibollo, Pasquale; Stocchi, Vilberto

    2013-01-01

    Quantitative real-time PCR represents a highly sensitive and powerful technology for the quantification of DNA. Although real-time PCR is well accepted as the gold standard in nucleic acid quantification, there is a largely unexplored area of experimental conditions that limit the application of the Ct method. As an alternative, our research team has recently proposed the Cy0 method, which can compensate for small amplification variations among the samples being compared. However, when there is a marked decrease in amplification efficiency, the Cy0 is impaired, hence determining reaction efficiency is essential to achieve a reliable quantification. The proposed improvement in Cy0 is based on the use of the kinetic parameters calculated in the curve inflection point to compensate for efficiency variations. Three experimental models were used: inhibition of primer extension, non-optimal primer annealing and a very small biological sample. In all these models, the improved Cy0 method increased quantification accuracy up to about 500% without affecting precision. Furthermore, the stability of this procedure was enhanced integrating it with the SOD method. In short, the improved Cy0 method represents a simple yet powerful approach for reliable DNA quantification even in the presence of marked efficiency variations.
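
    The abstract does not restate how Cy0 is obtained; as a minimal sketch (assuming the published idea of taking the intercept of the tangent drawn at the inflection point of a sigmoidal fit to the fluorescence curve, using a plain logistic model instead of the Richards function of the original papers, and synthetic data), the calculation could look like this:

```python
# Minimal sketch of a Cy0-style calculation (assumptions: logistic fit instead of the
# Richards function used by the authors; synthetic data). Not the authors' implementation.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, fmax, fb, x0, k):
    """Simple sigmoid model of a qPCR amplification curve."""
    return fb + fmax / (1.0 + np.exp(-k * (x - x0)))

# Synthetic amplification curve (cycle number vs. fluorescence)
cycles = np.arange(1, 41, dtype=float)
true = logistic(cycles, fmax=100.0, fb=2.0, x0=24.0, k=0.45)
rng = np.random.default_rng(0)
fluo = true + rng.normal(0.0, 0.5, cycles.size)

popt, _ = curve_fit(logistic, cycles, fluo, p0=[90.0, 0.0, 20.0, 0.3])
fmax, fb, x0, k = popt

# For the logistic model the inflection point lies at x0, where the slope is k*fmax/4
# and the fluorescence is fb + fmax/2.  Cy0 = x-intercept of the tangent at that point.
slope = k * fmax / 4.0
f_infl = fb + fmax / 2.0
cy0 = x0 - f_infl / slope
print(f"Cy0 estimate: {cy0:.2f} cycles")
```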

  15. International comparison of methods to test the validity of dead-time and pile-up corrections for high-precision γ-ray spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Houtermans, H.; Schaerf, K.; Reichel, F. (International Atomic Energy Agency, Vienna (Austria)); Debertin, K. (Physikalisch-Technische Bundesanstalt, Braunschweig (Germany, F.R.))

    1983-02-01

    The International Atomic Energy Agency organized an international comparison of methods applied in high-precision γ-ray spectrometry for the correction of dead-time and pile-up losses. Results of this comparison are reported and discussed.

  16. Statistical methods for conducting agreement (comparison of clinical tests) and precision (repeatability or reproducibility) studies in optometry and ophthalmology.

    Science.gov (United States)

    McAlinden, Colm; Khadka, Jyoti; Pesudovs, Konrad

    2011-07-01

    The ever-expanding choice of ocular metrology and imaging equipment has driven research into the validity of their measurements. Consequently, studies of the agreement between two instruments or clinical tests have proliferated in the ophthalmic literature. It is important that researchers apply the appropriate statistical tests in agreement studies. Correlation coefficients are hazardous and should be avoided. The 'limits of agreement' method originally proposed by Altman and Bland in 1983 is the statistical procedure of choice. Its step-by-step use and practical considerations in relation to optometry and ophthalmology are detailed in addition to sample size considerations and statistical approaches to precision (repeatability or reproducibility) estimates. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.
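
    As a minimal illustration of the limits-of-agreement calculation recommended above (the paired measurements are synthetic, and the usual 1.96·SD convention for 95% limits is assumed):

```python
# Bland-Altman limits of agreement for two instruments measuring the same eyes.
# Data are synthetic; the 1.96*SD convention for 95% limits is assumed.
import numpy as np

instrument_a = np.array([43.1, 44.0, 42.5, 45.2, 43.8, 44.6])   # e.g. keratometry (D)
instrument_b = np.array([43.4, 43.7, 42.9, 45.0, 44.1, 44.3])

diff = instrument_a - instrument_b
bias = diff.mean()                    # mean difference (systematic offset)
sd = diff.std(ddof=1)                 # SD of the differences
loa_lower, loa_upper = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:+.2f}, 95% limits of agreement = [{loa_lower:+.2f}, {loa_upper:+.2f}]")
```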

  17. Application of Taguchi method to optimization of surface roughness during precise turning of NiTi shape memory alloy

    Science.gov (United States)

    Kowalczyk, M.

    2017-08-01

    This paper describes the results of surface quality research after precise turning of the NiTi shape memory alloy (Nitinol) with tools whose edges are made of polycrystalline diamond (PCD). Nitinol, a nearly equiatomic nickel-titanium shape memory alloy, has wide applications in the arms industry, the military, medicine, the aerospace industry, and industrial robots. Due to their specific properties, NiTi alloys are known to be difficult-to-machine materials, particularly by conventional techniques. The research trials were conducted for three independent parameters (vc, f, ap) affecting the surface roughness. The choice of parameter configurations was performed by factorial design methods using an orthogonal plan of type L9, with three control factors varying over three levels, as developed by G. Taguchi. S/N ratio and ANOVA analyses were performed to identify the cutting parameters that most influence surface roughness.
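
    As a hedged sketch of the S/N analysis step, the 'smaller-is-better' signal-to-noise ratio commonly applied to surface roughness is assumed below; the roughness values and level labels are invented for illustration and are not the paper's data:

```python
# Taguchi 'smaller-is-better' S/N ratio for surface roughness Ra (synthetic values).
# S/N = -10*log10(mean(y^2)); the level with the highest S/N is preferred.
import numpy as np

def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra replicates (um) for three cutting-speed levels of an L9 plan
ra_by_level = {"vc level 1": [0.42, 0.45],
               "vc level 2": [0.31, 0.33],
               "vc level 3": [0.38, 0.36]}

sn = {lvl: sn_smaller_is_better(vals) for lvl, vals in ra_by_level.items()}
for lvl, val in sn.items():
    print(f"{lvl}: S/N = {val:.2f} dB")
print("preferred level:", max(sn, key=sn.get))
```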

  18. Study on Maritime Logistics Warehousing Center Model and Precision Marketing Strategy Optimization Based on Fuzzy Method and Neural Network Model

    Directory of Open Access Journals (Sweden)

    Xiao Kefeng

    2017-08-01

    Full Text Available Bulk commodities, unlike retail goods, are unique in their location selection, the choice of transportation program and the decision objectives. How to make optimal decisions on facility location, requirement distribution, shipping methods and route selection, and how to establish an effective distribution system to reduce cost, has become a burning issue for e-commerce logistics that deserves to be solved deeply and systematically. In this paper, a logistics warehousing center model and precision marketing strategy optimization based on a fuzzy method and a neural network model is proposed to solve this problem. In addition, we have designed solution principles based on the fuzzy method and the neural network model because of the complexity of the proposed model. Finally, we have solved numerous examples to compare the results of Lingo and Matlab; Matlab and Lingo are used to check the results and to illustrate the numerical example. From the results we find that the multi-objective model increases logistics costs and improves the efficiency of distribution time.

  19. Why precision?

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes

    2012-05-15

    Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given on recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of the Elementary Particles and their application in current high energy collider data analyses.

  20. Why precision?

    International Nuclear Information System (INIS)

    Bluemlein, Johannes

    2012-05-01

    Precision measurements together with exact theoretical calculations have led to steady progress in fundamental physics. A brief survey is given on recent developments and current achievements in the field of perturbative precision calculations in the Standard Model of the Elementary Particles and their application in current high energy collider data analyses.

  1. PSYCHE CPMG-HSQMBC: An NMR Spectroscopic Method for Precise and Simple Measurement of Long-Range Heteronuclear Coupling Constants.

    Science.gov (United States)

    Timári, István; Szilágyi, László; Kövér, Katalin E

    2015-09-28

    Among the NMR spectroscopic parameters, long-range heteronuclear coupling constants convey invaluable information on torsion angles relevant to glycosidic linkages of carbohydrates. A broadband homonuclear decoupled PSYCHE CPMG-HSQMBC method for the precise and direct measurement of multiple-bond heteronuclear couplings is presented. The PSYCHE scheme built into the pulse sequence efficiently eliminates unwanted proton-proton splittings from the heteronuclear multiplets so that the desired heteronuclear couplings can be determined simply by measuring frequency differences between peak maxima of pure antiphase doublets. Moreover, PSYCHE CPMG-HSQMBC can provide significant improvement in sensitivity as compared to an earlier Zangger-Sterk-based method. Applications of the proposed pulse sequence are demonstrated for the extraction of (n)J((1)H,(77)Se) and (n)J((1)H,(13)C) values, respectively, in carbohydrates; further extensions can be envisioned in any J-based structural and conformational studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Inter-lab comparison of precision and recommended methods for age estimation of Florida manatee (Trichechus manatus latirostris) using growth layer groups in earbones

    OpenAIRE

    Brill, Katherine; Marmontel, Miriam; Bolen-Richardson, Meghan; Stewart, Robert EA

    2016-01-01

    Manatees are routinely aged by counting Growth Layer Groups (GLGs) in periotic bones (earbones). Manatee carcasses recovered in Florida between 1974 and 2010 provided age-estimation material for three readers and formed the base for a retrospective analysis of aging precision (repeatability). All readers were in good agreement (high precision) with the greatest apparent source of variation being the result of earbone remodelling with increasing manatee age. Over the same period, methods of sa...

  3. Analysis of the accuracy of certain methods used for measuring very low reactivities; Analyse de la precision de certaines methodes de mesure de tres basses reactivites

    Energy Technology Data Exchange (ETDEWEB)

    Valat, J; Stern, T E

    1964-07-01

    The rapid measurement of anti-reactivities, in particular very low ones (i.e. a few tens of β), appears to be an interesting method for the automatic start-up of a reactor and its optimisation. With this in view, the present report explores the various methods studied, essentially from the point of view of the time required to make the measurement with a given statistical accuracy, especially as far as very low reactivities are concerned. The statistical analysis is applied in turn to: the methods based on the natural background noise (auto-correlation and spectral density); the sinusoidal excitation methods for the reactivity or the source, with synchronous detection; and the periodic source excitation method using pulsed neutrons. Finally, the statistical analysis leads to the suggestion of a new method of source excitation using random neutronic square waves combined with a cross-correlation between the random excitation and the resulting output. (authors)
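
    The proposed random-excitation scheme rests on a standard system-identification relation, stated here as background rather than quoted from the report: for a zero-mean random excitation x(t) whose autocorrelation is sharply peaked, the input-output cross-correlation is approximately proportional to the reactor impulse response, from which the kinetic parameters and hence the low reactivity follow:

```latex
R_{xy}(\tau)=\int_{0}^{\infty} h(\theta)\,R_{xx}(\tau-\theta)\,d\theta
\;\approx\; c\,h(\tau)
\qquad\text{when}\qquad R_{xx}(\tau)\approx c\,\delta(\tau).
```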

  4. Methods of preparation of fatty acid methyl esters (FAME. Statistical assessment of the precision characteristics from a collaborative trial

    Directory of Open Access Journals (Sweden)

    Pérez-Camino, M. C.

    2000-12-01

    Full Text Available The official regulations for the control of olive and olive-pomace oils of the European Union (EU) and the International Olive Oil Council (IOOC) include the determination of fatty acids, which is applied to several purity criteria. The determination of fatty acids requires the preparation of the fatty acid methyl esters (FAME) for subsequent analysis by gas chromatography with good precision and reproducibility. Among the methods used in the laboratories of both the industries and the official institutions responsible for olive oil control, the ones selected were: 1) cold methylation with methanolic potash, and 2) hot methylation with sodium methylate followed by acidification with sulphuric acid in methanol and heating. A statistical assessment of the precision characteristics was performed on the determination of fatty acids using both methods in a collaborative trial following the directions included in the AOAC regulation (AOAC, 1995). In oils with low acidities, the results obtained for both methylation methods were equivalent. However, the olive-pomace oil sample (acidity 15.5%) showed significant differences between the fatty acid compositions obtained using the two methylation methods. Finally, methylation with the acidic+basic method did not yield an increase in the trans-isomers of the fatty acids.

  5. Laser Induced Damage of Potassium Dihydrogen Phosphate (KDP Optical Crystal Machined by Water Dissolution Ultra-Precision Polishing Method

    Directory of Open Access Journals (Sweden)

    Yuchuan Chen

    2018-03-01

    Full Text Available Laser induced damage threshold (LIDT) is an important optical indicator for nonlinear Potassium Dihydrogen Phosphate (KDP) crystal used in high power laser systems. In this study, KDP optical crystals are initially machined with single point diamond turning (SPDT), followed by water dissolution ultra-precision polishing (WDUP) and then tested with 355 nm nanosecond pulsed-lasers. Power spectral density (PSD) analysis shows that the WDUP process eliminates the laser-detrimental spatial frequencies band of micro-waviness on the SPDT machined surface and consequently decreases its modulation effect on the laser beams. The laser test results show that LIDT of WDUP machined crystal improves and its stability has a significant increase by 72.1% compared with that of SPDT. Moreover, a subsequent ultrasonic assisted solvent cleaning process is suggested to have a positive effect on the laser performance of machined KDP crystal. Damage crater investigation indicates that the damage morphologies exhibit highly thermal explosion features of melted cores and brittle fractures of periphery material, which can be described with the classic thermal explosion model. The comparison result demonstrates that damage mechanisms for SPDT and WDUP machined crystal are the same and WDUP process reveals the real bulk laser resistance of KDP optical crystal by removing the micro-waviness and subsurface damage on SPDT machined surface. This improvement of WDUP method makes the LIDT more accurate and will be beneficial to the laser performance of KDP crystal.

  6. Automated Method for the Rapid and Precise Estimation of Adherent Cell Culture Characteristics from Phase Contrast Microscopy Images

    Science.gov (United States)

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-01-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521
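
    The abstract describes the segmentation only at a high level; a minimal sketch of local-contrast thresholding on a synthetic image is shown below (this is not the authors' PHANTAST implementation; the halo correction is omitted, and the window size and threshold are illustrative assumptions):

```python
# Minimal local-contrast thresholding sketch for phase contrast images (synthetic image).
# NOT the PHANTAST implementation; halo correction is omitted.
import numpy as np
from scipy import ndimage

def local_contrast_mask(image, window=15, threshold=0.05):
    """Flag pixels whose local coefficient of variation exceeds a threshold."""
    img = image.astype(float)
    mean = ndimage.uniform_filter(img, size=window)
    mean_sq = ndimage.uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    contrast = std / (mean + 1e-9)
    return contrast > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image = np.full((128, 128), 100.0) + rng.normal(0, 1, (128, 128))   # flat background
    image[40:80, 40:80] += rng.normal(0, 15, (40, 40))                  # textured "cell" region
    mask = local_contrast_mask(image)
    print("confluency estimate: %.1f%%" % (100.0 * mask.mean()))
```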

  7. A precise time synchronization method for 5G based on radio-over-fiber network with SDN controller

    Science.gov (United States)

    He, Linkuan; Wei, Baoguo; Yang, Hui; Yu, Ao; Wang, Zhengyong; Zhang, Jie

    2018-02-01

    There is an increasing demand for accurate time synchronization with the growing bandwidth of network services for 5G. In a 5G network, it is necessary for base stations to achieve accurate time synchronization to guarantee the quality of communication. In order to keep accurate time in a 5G network, we propose a time synchronization system for satellite ground stations based on a radio-over-fiber network (RoFN) with a software-defined optical network (SDON) controller. The advantage of this method is that it improves the accuracy of time synchronization of the ground stations. The IEEE 1588 time synchronization protocol can address the problems of high cost and lack of precision. However, in the process of time synchronization, distortion occurs during the transmission of the digital time signal. RoF uses analog optical transmission links, and therefore analog transmission can be implemented among ground stations instead of digital transmission, which means the distortion and bandwidth waste of digital synchronization can be avoided. Additionally, the concept of SDN (software-defined networking) can optimize RoFN through centralized control and simplified base stations. Related simulations have been carried out to demonstrate its superiority.

  8. Precision casting into disposable ceramic mold – a high efficiency method of production of castings of irregular shape

    OpenAIRE

    Уваров, Б. И.; Лущик, П. Е.; Андриц, А. А.; Долгий, Л. П.; Заблоцкий, А. В.

    2016-01-01

    The article shows the advantages and disadvantages of precision casting into disposable ceramic molds. The high-quality shaped castings produced by the modernized ceramic molding process prove the reliability and prospects of this advanced technology.

  9. PRECISION CASTING INTO DISPOSABLE CERAMIC MOLD – A HIGH EFFICIENCY METHOD OF PRODUCTION OF CASTINGS OF IRREGULAR SHAPE

    Directory of Open Access Journals (Sweden)

    B. I. Uvarov

    2016-01-01

    Full Text Available The article shows the advantages and disadvantages of precision casting into disposable ceramic molds. The high-quality shaped castings produced by the modernized ceramic molding process prove the reliability and prospects of this advanced technology.

  10. Comparison of ATLAS Tilecal MODULE No 8 high-precision metrology measurement results obtained by laser (JINR) and photogrammetric (CERN) methods

    CERN Document Server

    Batusov, V; Gayde, J C; Khubua, J I; Lasseur, C; Lyablin, M V; Miralles-Verge, L; Nessi, Marzio; Rusakovitch, N A; Sissakian, A N; Topilin, N D

    2002-01-01

    The high-precision assembly of large experimental set-ups is of principal necessity for the successful execution of the forthcoming LHC research programme in the TeV-beams. The creation of an adequate survey and control metrology method is an essential part of the detector construction scenario. This work contains the dimension measurement data for ATLAS hadron calorimeter MODULE No. 8 (6 m, 22 tons) which were obtained by laser and by photogrammetry methods. The comparative data analysis demonstrates agreement of the measurements within ±70 μm. This means that these two clearly independent methods can be combined and lead to the rise of a new-generation engineering culture: high-precision metrology for the precision assembly of large-scale massive objects.

  11. Comparison of ATLAS tilecal module No. 8 high-precision metrology measurement results obtained by laser (JINR) and photogrammetric (CERN) methods

    International Nuclear Information System (INIS)

    Batusov, V.; Budagov, Yu.; Gayde, J.C.

    2002-01-01

    The high-precision assembly of large experimental set-ups is of principal necessity for the successful execution of the forthcoming LHC research programme in the TeV-beams. The creation of an adequate survey and control metrology method is an essential part of the detector construction scenario. This work contains the dimension measurement data for ATLAS hadron calorimeter MODULE No. 8 (6 m, 22 tons) which were obtained by laser and by photogrammetry methods. The comparative data analysis demonstrates agreement of the measurements within ± 70 μm. This means that these two clearly independent methods can be combined and lead to the rise of a new-generation engineering culture: high-precision metrology for the precision assembly of large-scale massive objects.

  12. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    Science.gov (United States)

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

    Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.

  13. Is digital photography an accurate and precise method for measuring range of motion of the shoulder and elbow?

    Science.gov (United States)

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2018-03-01

    Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder

  14. Accuracy and precision of 3 intraoral scanners and accuracy of conventional impressions: A novel in vivo analysis method.

    Science.gov (United States)

    Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A

    2018-02-01

    To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR were analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner-groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Development and Validation of a Precise and Stability Indicating LC Method for the Determination of Benzalkonium Chloride in Pharmaceutical Formulation Using an Experimental Design

    Directory of Open Access Journals (Sweden)

    Harshal K. Trivedi

    2010-01-01

    Full Text Available A simple, precise, short-runtime and stability-indicating reverse-phase high performance liquid chromatographic method has been developed and validated for the quantification of the benzalkonium chloride (BKC) preservative in a pharmaceutical formulation of sparfloxacin eye drops. The method was successfully applied for the determination of benzalkonium chloride in various ophthalmic formulations such as latanoprost, timolol, dexamethasone, gatifloxacin, norfloxacin, a combination of moxifloxacin and dexamethasone, a combination of naphazoline HCl, zinc sulphate and chlorpheniramine maleate, a combination of tobramycin and dexamethasone, and a combination of phenylephrine HCl, naphazoline HCl, menthol and camphor. The RP-LC separation was achieved on a Purospher Star RP-18e column (75 mm × 4.0 mm, 3.0 μm) in the isocratic mode using buffer:acetonitrile (35:65, v/v) as the mobile phase at a flow rate of 1.8 mL/min. Detection was performed at 215 nm; quantification was achieved with PDA detection over the concentration range of 50 to 150 μg/mL. The method separates the four homologs with good resolution, in the presence of excipients, sparfloxacin and the degradation compound arising from sparfloxacin and BKC, within five minutes. The method was validated and the results were compared statistically; it was found to be simple, accurate, precise and specific. The proposed method was validated in terms of specificity, precision, recovery, solution stability, linearity and range. All the validation parameters were within the acceptance range and concordant with ICH guidelines.

  16. Precision translator

    Science.gov (United States)

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber which includes two tuning fork-like members rigidly connected to each other. These members have two prongs each, with their separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts and has the capability to keep its adjustment even under rough handling.

  17. Precision Cosmology

    Science.gov (United States)

    Jones, Bernard J. T.

    2017-04-01

    Preface; Notation and conventions; Part I. 100 Years of Cosmology: 1. Emerging cosmology; 2. The cosmic expansion; 3. The cosmic microwave background; 4. Recent cosmology; Part II. Newtonian Cosmology: 5. Newtonian cosmology; 6. Dark energy cosmological models; 7. The early universe; 8. The inhomogeneous universe; 9. The inflationary universe; Part III. Relativistic Cosmology: 10. Minkowski space; 11. The energy momentum tensor; 12. General relativity; 13. Space-time geometry and calculus; 14. The Einstein field equations; 15. Solutions of the Einstein equations; 16. The Robertson-Walker solution; 17. Congruences, curvature and Raychaudhuri; 18. Observing and measuring the universe; Part IV. The Physics of Matter and Radiation: 19. Physics of the CMB radiation; 20. Recombination of the primeval plasma; 21. CMB polarisation; 22. CMB anisotropy; Part V. Precision Tools for Precision Cosmology: 23. Likelihood; 24. Frequentist hypothesis testing; 25. Statistical inference: Bayesian; 26. CMB data processing; 27. Parametrising the universe; 28. Precision cosmology; 29. Epilogue; Appendix A. SI, CGS and Planck units; Appendix B. Magnitudes and distances; Appendix C. Representing vectors and tensors; Appendix D. The electromagnetic field; Appendix E. Statistical distributions; Appendix F. Functions on a sphere; Appendix G. Acknowledgements; References; Index.

  18. Role of endocortical contouring methods on precision of HR-pQCT-derived cortical micro-architecture in postmenopausal women and young adults.

    Science.gov (United States)

    Kawalilak, C E; Johnston, J D; Cooper, D M L; Olszynski, W P; Kontulainen, S A

    2016-02-01

    Precision errors of cortical bone micro-architecture from high-resolution peripheral quantitative computed tomography (pQCT) ranged from 1 to 16% and did not differ between automatic or manually modified endocortical contour methods in postmenopausal women or young adults. In postmenopausal women, manually modified contours led to generally higher cortical bone properties when compared to the automated method. First, the objective of the study was to define in vivo precision errors (coefficient of variation root mean square (CV%RMS)) and least significant change (LSC) for cortical bone micro-architecture using two endocortical contouring methods: automatic (AUTO) and manually modified (MOD), in two groups (postmenopausal women and young adults) from high-resolution pQCT (HR-pQCT) scans. Second, it was to compare precision errors and bone outcomes obtained with both methods within and between groups. Using HR-pQCT, we scanned the distal radius and tibia of 34 postmenopausal women (mean age ± SD 74 ± 7 years) and 30 young adults (27 ± 9 years) twice. Cortical micro-architecture was determined using AUTO and MOD contour methods. CV%RMS and LSC were calculated. Repeated measures and multivariate ANOVA were used to compare mean CV% and bone outcomes between the methods within and between the groups. Significance was accepted at P < 0.05. Compared with young adults, postmenopausal women had better precision for radial cortical porosity (precision difference 9.3%) and pore volume (7.5%) with MOD. Young adults had better precision for cortical thickness (0.8%, MOD) and tibial cortical density (0.2%, AUTO). In postmenopausal women, MOD resulted in 0.2-54% higher values for most cortical outcomes, as well as 6-8% lower radial and tibial cortical BMD and 2% lower tibial cortical thickness. Results suggest that AUTO and MOD endocortical contour methods provide comparable repeatability. In postmenopausal women, manual modification of endocortical contours led to
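
    A minimal sketch of how CV%RMS and LSC are typically computed from duplicate scans is given below (the data are synthetic, and the common conventions of RMS averaging over participants and LSC = 2.77 × precision error are assumed rather than taken from the paper):

```python
# RMS coefficient of variation and least significant change from duplicate HR-pQCT scans.
# Synthetic data; the 2.77 factor (95% confidence, duplicate measurements) is assumed.
import numpy as np

# Each row: [scan 1, scan 2] of e.g. cortical thickness (mm) for one participant
scans = np.array([[1.21, 1.24], [0.98, 0.95], [1.10, 1.12], [1.35, 1.30]])

means = scans.mean(axis=1)
sds = scans.std(axis=1, ddof=1)
cv_percent = 100.0 * sds / means
cv_rms = np.sqrt(np.mean(cv_percent ** 2))      # CV%RMS across participants
lsc = 2.77 * cv_rms                             # least significant change (%)

print(f"CV%RMS = {cv_rms:.2f}%, LSC = {lsc:.2f}%")
```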

  19. Determination of plant stanols and plant sterols in phytosterol enriched foods with a gas chromatographic-flame ionization detection method: NMKL collaborative study.

    Science.gov (United States)

    Laakso, Päivi H

    2014-01-01

    This collaborative study with nine participating laboratories was conducted to determine the total plant sterol and/or plant stanol contents in phytosterol fortified foods with a gas chromatographic method. Four practice and 12 test samples representing mainly commercially available foodstuffs were analyzed as known replicates. Twelve samples were enriched with phytosterols, whereas four samples contained only natural contents of phytosterols. The analytical procedure consisted of two alternative approaches: hot saponification method, and acid hydrolysis treatment prior to hot saponification. As a result, sterol/stanol compositions and contents in the samples were measured. The amounts of total plant sterols and total plant stanols varying from 0.005 to 8.04 g/100 g product were statistically evaluated after outliers were eliminated. The repeatability RSD (RSDr) varied from 1.34 to 17.13%. The reproducibility RSD (RSDR) ranged from 3.03 to 17.70%, with HorRat values ranging from 0.8 to 2.1. When only phytosterol enriched food test samples are considered, the RSDr ranged from 1.48 to 6.13%, the RSDR ranged from 3.03 to 7.74%, and HorRat values ranged from 0.8 to 2.1. Based on the results of this collaborative study, the study coordinator concludes the method is fit for its purpose.
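
    The HorRat values quoted above relate the observed reproducibility to the Horwitz prediction; a minimal sketch of that calculation, assuming the standard Horwitz equation PRSDR = 2^(1 − 0.5·log10 C) with C the dimensionless mass fraction, is:

```python
# HorRat = observed RSDR / Horwitz-predicted RSDR (standard convention assumed).
import math

def horwitz_prsdr(mass_fraction):
    """Predicted reproducibility RSD (%) from the Horwitz equation."""
    return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

# Example: total plant sterols at 8.04 g/100 g -> mass fraction 0.0804
c = 8.04 / 100.0
observed_rsdr = 5.0                      # % (illustrative value, not from the study)
print(f"predicted RSDR = {horwitz_prsdr(c):.2f}%")
print(f"HorRat = {observed_rsdr / horwitz_prsdr(c):.2f}")
```

    A HorRat near 1 indicates reproducibility in line with the Horwitz expectation, and values up to about 2 are commonly taken as acceptable, which matches the range reported in the study.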

  20. An improved method for high precision measurement of chromium isotopes using double spike MC-ICP-MS

    Science.gov (United States)

    Zhu, J.; Wu, G. L.; Wang, X.; Zhang, L. X.; Han, G.

    2017-12-01

    Chromium (Cr) isotopes have been used to trace pollution processes and reconstruct paleo-redox conditions. However, the precise determination of Cr isotopes remains challenging due to difficulties in purifying Cr from samples with low Cr content and complex matrices. Here we report an improved four-step column chromatographic procedure to separate Cr from matrix elements. Firstly, Cr in the sample solution, mixed with a 50Cr-54Cr double spike (optimized 54Crspike/52Crsample = 0.4 and (50Cr/54Cr)spike = 1.3:1), was completely converted into Cr(III) in 8.5 mol/L HCl and loaded onto 2 ml of AG50W-X8 (200-400 mesh) resin conditioned with 11 mol/L HCl. The 2.65 ml of eluent was adjusted to 4.5 ml of 6 mol/L HCl and immediately loaded onto a Bio-Rad column filled with 2 ml of AG1-X8 anion resin (100-200 mesh). These two steps can remove at least 99% of Ca, Fe and most matrix elements. Secondly, the 7.5 ml of eluent was dried down and dissolved in 0.1 ml of 0.5 mol/L HNO3 before adding 2 ml of 4 mol/L HF, and the solution was then loaded onto 1 ml of AG1-X8 anion resin (100-200 mesh) to remove Ti and V. Finally, the sample was dissolved in 0.1 ml of 0.5 mol/L HNO3 and oxidized by 0.5 mL of 0.2 mol/L (NH4)2S2O8 and 4.4 mL H2O; the solution was then centrifuged to remove Mn oxide, and the supernatant was loaded onto AG1-X8 resin to remove SO42-, Ni, Al, Na and some Mg using 8 ml H2O and 3 ml of 2 mol/L HCl. Cr was eluted with 2 mol/L HNO3 containing 5% H2O2, and the dried Cr was dissolved in 3% HNO3 for isotopic analysis. The total yield of Cr is greater than 80% even for samples with low Cr content. Chromium isotopes were measured on a Neptune Plus MC-ICP-MS at the China University of Geosciences (Beijing). Using our improved method, the δ53/52CrSRM979 values of USGS reference materials BHVO-2, BCR-2 and SGR-1b are -0.12±0.06‰ (n=15), -0.09±0.06‰ (n=5), and 0.30±0.06‰ (n=12), respectively, which agree well with previously reported values. The δ53/52CrSRM979 of carbonaceous shale CP0-1 and CP0-12 collected from Hubei, China are 2.05

  1. The application of integrated geophysical methods composed of AMT and high-precision ground magnetic survey to the exploration of granite uranium deposits

    International Nuclear Information System (INIS)

    Qiao Yong; Shen Jingbang; Wu Yong; Wang Zexia

    2014-01-01

    Two integrated geophysical methods, AMT and high-precision ground magnetic survey, were applied to the exploration of granite uranium deposits in the Yin gongshan area in the middle part of Nei Monggol. Based on method experiments and analysis of the application results, it is concluded that AMT has good vertical resolution and can reliably determine the thickness of rock masses, the position and deep conditions of fractures, the spatial distribution features of fracture zones, etc., but its reflection of rock masses and xenoliths is not clear. High-precision ground magnetic survey can delineate the distribution range of rock masses and xenoliths and identify rock contact zones, fractures, etc., but it generally measures only their position and does not resolve their occurrence and extension clearly. Some geological structures can therefore be resolved by using the integrated methods on the basis of their complementary advantages. Effective technological measures are thus provided for the exploration of deeply buried uranium bodies in granite uranium deposits and for extension at the outskirts of the deposit. (authors)

  2. Inter-lab comparison of precision and recommended methods for age estimation of Florida manatee (Trichechus manatus latirostris using growth layer groups in earbones

    Directory of Open Access Journals (Sweden)

    Katherine Brill

    2016-06-01

    Full Text Available Manatees are routinely aged by counting Growth Layer Groups (GLGs) in periotic bones (earbones). Manatee carcasses recovered in Florida between 1974 and 2010 provided age-estimation material for three readers and formed the base for a retrospective analysis of aging precision (repeatability). All readers were in good agreement (high precision) with the greatest apparent source of variation being the result of earbone remodelling with increasing manatee age. Over the same period, methods of sample preparation and of determining a final age estimate changed. We examined the effects of altering methods on ease of reading GLGs and found no statistical differences. Accurate age estimates are an important component for effective management of the species and for better models of population trends and we summarize the currently recommended methods for estimating manatee ages using earbones.

  3. Precision Airdrop (Largage de precision)

    Science.gov (United States)

    2005-12-01

    This approach avoids including a magnetic compass for the heading reference, which has difficulties due to local changes in the magnetic field.

  4. Precision digital control systems

    Science.gov (United States)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  5. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    Science.gov (United States)

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.

  6. A Precision-Positioning Method for a High-Acceleration Low-Load Mechanism Based on Optimal Spatial and Temporal Distribution of Inertial Energy

    Directory of Open Access Journals (Sweden)

    Xin Chen

    2015-09-01

    Full Text Available High-speed and precision positioning are fundamental requirements for high-acceleration low-load mechanisms in integrated circuit (IC) packaging equipment. In this paper, we derive the transient nonlinear dynamic-response equations of high-acceleration mechanisms, which reveal that stiffness, frequency, damping, and driving frequency are the primary factors. Therefore, we propose a new structural optimization and velocity-planning method for the precision positioning of a high-acceleration mechanism based on optimal spatial and temporal distribution of inertial energy. For structural optimization, we first reviewed the commonly used flexible multibody dynamic optimization with the equivalent static loads method (ESLM), and then we selected the modified ESLM for optimal spatial distribution of inertial energy; hence, not only the stiffness but also the inertia and frequency of the real modal shapes are considered. For velocity planning, we developed a new velocity-planning method based on nonlinear dynamic-response optimization with varying motion conditions. Our method was verified on a high-acceleration die bonder. The amplitude of residual vibration could be decreased by more than 20% via structural optimization and the positioning time could be reduced by more than 40% via asymmetric variable velocity planning. This method provides an effective theoretical support for the precision positioning of high-acceleration low-load mechanisms.

  7. Dose evaluation using multiple-aliquot quartz OSL: Test of methods and a new protocol for improved accuracy and precision

    DEFF Research Database (Denmark)

    Jain, M.; Bøtter-Jensen, L.; Singhvi, A.K.

    2003-01-01

    -dose-dependent sensitivity changes during the pre-heat, and fundamental variability in the shapes of quartz OSL (blue-green or blue-light stimulated luminescence) decay forms. A new protocol using a combination of 'elevated temperature IR cleaning' (ETIR) and 'component-specific dose normalisation' (CSDN) has been developed....... CSDN accounts for variability in the OSL decay forms and absorbs such sensitivity changes. A combination of ETIR and CSDN protocol increased palaeodose precision from +/-100% to +/-4% in quartz separates from the fluvially transported sands in the Thar desert. A comparison with palaeodose estimates...

  8. Precision of Nest Method in Estimating Orangutan Population and Determination of Important Ecological Factors for Management of Conservation Forest

    Directory of Open Access Journals (Sweden)

    Yanto Santosa

    2012-04-01

    Full Text Available The orangutan, as an umbrella species, is closely interlinked with sustainable forest management, meaning that the protection of this species has implications for the protection of other species and the maintenance of ecosystem stability. The total natural habitat required to support an orangutan population can only be determined from an appropriate estimate of population size. It is associated with the carrying capacity to accommodate or fulfill the habitat requirements of a wildlife population. Selection and delineation of core and wilderness zones as habitat preference should be based on the results of preference tests shown by the spatial distribution of the orangutan population. The value of the coefficient of variation (CV) was used to observe the precision of the population estimation and to identify important ecological factors in the selection of nesting trees. The study resulted in varied spatial CV values for the various habitat types: 22.60%, 11.20%, and 13.30% for heath, lowland dipterocarp, and peat swamp forest, respectively. On the other hand, temporal CV values for the various habitat types were 5.35%, 22.60%, and 17.60% for heath, lowland dipterocarp, and peat swamp forest, respectively. This indicated that the population density in each type of forest ecosystem varied with location but did not vary with the time of survey. The use of the nest survey technique showed reliable results in estimating orangutan population density. The precision of the estimation can be improved by formulating the r value as the harmonic average of nest production rates and t as the average nest decay time per nest category. Selection of habitat preference and nest trees was influenced by food availability and thus should be an important consideration in conducting nest surveys to avoid bias in estimating orangutan populations. Keywords: conservation forest management, nest survey, orangutan, population size, ecological factors
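
    The abstract refers to the nest production rate r and nest decay time t without restating the conversion itself; the nest-to-animal density conversion usually applied in such surveys (assumed here as background, not quoted from the paper) is:

```latex
D_{\mathrm{orangutan}}=\frac{D_{\mathrm{nest}}}{p \times r \times t},
```

    where D_nest is the nest density estimated from the transects, p the proportion of nest-building individuals in the population, r the nest production rate (nests per individual per day) and t the mean nest decay time (days). This is why formulating r as a harmonic average and t per nest category directly affects the precision of the final density estimate.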

  9. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    Science.gov (United States)

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts is promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster, and ImageJ; and (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg count negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using the second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. Regarding the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ (p < …); the smartphone and ImageJ performed with equal variance. Regarding the second aim of the study, the second smartphone system prototype had significantly better precision than the McMaster (p < …). The … of the smartphone system were 64.51%, 21.67%, and 32.53%, respectively. The Mini-FLOTAC was significantly more accurate than the McMaster (p < …) and the smartphone system (p < …); the smartphone and McMaster counts did not have statistically different accuracies
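    As a rough sketch of how accuracy, sensitivity and specificity could be derived from the triplicate counts on spiked and egg-free samples, the Python fragment below uses recovery against the known spike level and a simple "at least one egg detected" rule; the helper names and numbers are hypothetical and not taken from the study.

```python
import numpy as np

def recovery_percent(measured_epg, spiked_epg):
    """Accuracy expressed as mean recovery of a known spike level (eggs per gram)."""
    return 100.0 * float(np.mean(measured_epg)) / spiked_epg

def sensitivity_specificity(counts_on_spiked, counts_on_negative):
    """A sample is called positive when at least one egg is detected."""
    tp = sum(1 for c in counts_on_spiked if c > 0)     # true positives
    tn = sum(1 for c in counts_on_negative if c == 0)  # true negatives
    return tp / len(counts_on_spiked), tn / len(counts_on_negative)

# Hypothetical triplicate on a 500 EPG spike and three negative samples
print(recovery_percent([480, 510, 465], 500))
print(sensitivity_specificity([480, 510, 465], [0, 0, 5]))
```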

  10. Validation of an analytical method for simultaneous high-precision measurements of greenhouse gas emissions from wastewater treatment plants using a gas chromatography-barrier discharge detector system.

    Science.gov (United States)

    Pascale, Raffaella; Caivano, Marianna; Buchicchio, Alessandro; Mancini, Ignazio M; Bianco, Giuliana; Caniani, Donatella

    2017-01-13

    Wastewater treatment plants (WWTPs) emit CO 2 and N 2 O, which may lead to climate change and global warming. Over the last few years, awareness of greenhouse gas (GHG) emissions from WWTPs has increased. Moreover, the development of valid, reliable, and high-throughput analytical methods for simultaneous gas analysis is an essential requirement for environmental applications. In the present study, an analytical method based on a gas chromatograph (GC) equipped with a barrier ionization discharge (BID) detector was developed for the first time. This new method simultaneously analyses CO 2 and N 2 O and has a precision, measured in terms of relative standard deviation (RSD%), equal to or less than 6.6% and 5.1%, respectively. The method's detection limits are 5.3 ppm v for CO 2 and 62.0 ppb v for N 2 O. The method's selectivity, linearity, accuracy, repeatability, intermediate precision, limit of detection and limit of quantification were good at trace concentration levels. After validation, the method was applied to a real case of N 2 O and CO 2 emissions from a WWTP, confirming its suitability as a standard procedure for simultaneous GHG analysis in environmental samples containing CO 2 levels less than 12,000 mg/L. Copyright © 2016 Elsevier B.V. All rights reserved.
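    The validation figures quoted (RSD%, limits of detection and quantification) can be computed from replicate and calibration data with the usual formulas; the sketch below assumes the common ICH-style conventions (LOD = 3.3·σ/S, LOQ = 10·σ/S), which are not necessarily the exact procedure followed in the paper.

```python
import numpy as np

def rsd_percent(replicates):
    """Relative standard deviation (%) of replicate measurements."""
    x = np.asarray(replicates, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def lod_loq(blank_sd, calibration_slope):
    """Detection and quantification limits from the standard deviation of
    blank responses and the calibration slope (ICH convention)."""
    return 3.3 * blank_sd / calibration_slope, 10.0 * blank_sd / calibration_slope
```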

  11. High-precision quadruple isotope dilution method for simultaneous determination of nitrite and nitrate in seawater by GCMS after derivatization with triethyloxonium tetrafluoroborate

    Energy Technology Data Exchange (ETDEWEB)

    Pagliano, Enea, E-mail: enea.pagliano@nrc-cnrc.gc.ca; Meija, Juris; Mester, Zoltán

    2014-05-01

    Highlights: • High-precision determination of nitrite and nitrate in seawater. • Use of quadruple isotope dilution. • Aqueous Et₃O⁺BF₄⁻ derivatization chemistry for GCMS analysis of nitrite and nitrate. Abstract: Quadruple isotope dilution mass spectrometry (ID⁴MS) has been applied for simultaneous determination of nitrite and nitrate in seawater. ID⁴MS allows high-precision measurements and entails the use of isotopic internal standards (¹⁸O-nitrite and ¹⁵N-nitrate). We include a tutorial on ID⁴MS outlining optimal experimental design which generates results with low uncertainties and obviates the need for direct (separate) evaluation of the procedural blank. Nitrite and nitrate detection was achieved using a headspace GCMS procedure based on single-step aqueous derivatization with triethyloxonium tetrafluoroborate at room temperature. In this paper the sample preparation was revised and fundamental aspects of this chemistry are presented. The proposed method has detection limits in the low parts-per-billion for both analytes, is reliable, precise, and has been validated using a seawater certified reference material (MOOS-2). Simplicity of the experimental design, low detection limits, and the use of quadruple isotope dilution makes the present method superior to the state-of-the-art for determination of nitrite and nitrate, and an ideal candidate for reference measurements of these analytes in seawater.

  12. Verification on the use of the Inoue method for precisely determining glomerular filtration rate in Philippine pediatrics

    Science.gov (United States)

    Magcase, M. J. D. J.; Duyan, A. Q.; Carpio, J.; Carbonell, C. A.; Trono, J. D.

    2015-06-01

    The objective of this study is to validate the Inoue method so that it can become the preferential choice for determining glomerular filtration rate (GFR) in Philippine pediatrics. The study consisted of 36 patients ranging in age from 2 months to 19 years. The subjects were those who had previously undergone the in-vitro method. The scintigrams from the in-vitro method were obtained and processed for the split percentage uptake and for the parameters needed to obtain the Inoue GFR. The results show a strong correlation between the Inoue GFR and the in-vitro method (r = 0.926). Thus, the Inoue method is a viable, simple, and practical technique for determining GFR in pediatric patients.

  13. Development and Validation of a Precise Method for Determination of Benzalkonium Chloride (BKC Preservative, in Pharmaceutical Formulation of Latanoprost Eye Drops

    Directory of Open Access Journals (Sweden)

    J. Mehta

    2010-01-01

    Full Text Available A simple and precise reversed phase high performance liquid chromatographic method has been developed and validated for the quantification of benzalkonium chloride (BKC) preservative in a pharmaceutical formulation of latanoprost eye drops. The analyte was chromatographed on a Waters Spherisorb CN (4.6×250 mm) column packed with 5 μm particles. The mobile phase, optimized through an experimental design, was a 40:60 (v/v) mixture of potassium dihydrogen orthophosphate buffer (pH 5.5) and acetonitrile, pumped at a flow rate of 1.0 mL/min while maintaining the column temperature at 30 °C. Maximum UV detection was achieved at 210 nm. The method was validated in terms of linearity, repeatability, intermediate precision and method accuracy. The method was shown to be robust, resisting small deliberate changes in pH, flow rate and composition (organic ratio) of the mobile phase. The method was successfully applied for the determination of BKC in a pharmaceutical formulation of latanoprost ophthalmic solution without any interference from common excipients and the drug substance. All the validation parameters were within the acceptance range, in accordance with ICH guidelines.

  14. A new measurement method of actual focal spot position of an x-ray tube using a high-precision carbon-interspaced grid

    Science.gov (United States)

    Lee, H. W.; Lim, H. W.; Jeon, D. H.; Park, C. K.; Cho, H. S.; Seo, C. W.; Lee, D. Y.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Woo, T. H.; Oh, J. E.

    2018-06-01

    This study investigated the effectiveness of a new method for measuring the actual focal spot position of a diagnostic x-ray tube using a high-precision antiscatter grid and a digital x-ray detector, in which the grid magnification, which is directly related to the focal spot position, was determined from the Fourier spectrum of the acquired x-ray grid image. A systematic experiment was performed to demonstrate the viability of the proposed measurement method. The hardware system used in the experiment consisted of an x-ray tube run at 50 kVp and 1 mA, a flat-panel detector with a pixel size of 49.5 µm, and a high-precision carbon-interspaced grid with a strip density of 200 lines/inch. The results indicated that the focal spot of the x-ray tube (Jupiter 5000, Oxford Instruments) used in the experiment was located approximately 31.10 mm inside the exit flange, in good agreement with the nominal value of 31.05 mm, which demonstrates the viability of the proposed measurement method. Thus, the proposed method can be utilized for system performance optimization in many x-ray imaging applications.
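    A minimal sketch of the underlying idea, under simplifying assumptions: the dominant peak of the Fourier spectrum of a 1-D grid profile gives the projected grid frequency and hence the magnification M, and simple similar-triangle geometry (M = (a + b)/a) then yields the source-to-grid distance. The function names and arguments are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def grid_magnification(profile, pixel_mm, grid_lines_per_mm):
    """Magnification M = f_grid / f_image, where f_image is the dominant
    spatial frequency of the imaged grid profile (cycles per mm at the detector)."""
    p = np.asarray(profile, dtype=float)
    spectrum = np.abs(np.fft.rfft(p - p.mean()))
    freqs = np.fft.rfftfreq(len(p), d=pixel_mm)
    f_image = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return grid_lines_per_mm / f_image

def source_to_grid_distance(magnification, grid_to_detector_mm):
    """Similar triangles: M = (a + b) / a  =>  a = b / (M - 1)."""
    return grid_to_detector_mm / (magnification - 1.0)
```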

  15. A new method for precise determination of iron, zinc and cadmium stable isotope ratios in seawater by double-spike mass spectrometry.

    Science.gov (United States)

    Conway, Tim M; Rosenberg, Angela D; Adkins, Jess F; John, Seth G

    2013-09-02

    The study of Fe, Zn and Cd stable isotopes (δ(56)Fe, δ(66)Zn and δ(114)Cd) in seawater is a new field, which promises to elucidate the marine cycling of these bioactive trace metals. However, the analytical challenges posed by the low concentrations of these metals in seawater have meant that previous studies have typically required large sample volumes, highly limiting data collection in the oceans. Here, we present the first simultaneous method for the determination of these three isotope systems in seawater, using Nobias PA-1 chelating resin to extract metals from seawater, purification by anion exchange chromatography, and analysis by double spike MC-ICPMS. This method is designed for use on only a single litre of seawater and has blanks of 0.3, 0.06 and <0.03 ng for Fe, Zn and Cd respectively, representing a 1-20 fold reduction in sample size and a 4-130 fold decrease in blank compared to previously reported methods. The procedure yields data with high precision for all three elements (typically 0.02-0.2‰; 1σ internal precision), allowing us to distinguish natural variability in the oceans, which spans 1-3‰ for all three isotope systems. Simultaneous extraction and purification of three metals makes this method ideal for high-resolution, large-scale endeavours such as the GEOTRACES program. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Immersion transmission ellipsometry (ITE): a new method for the precise determination of the 3D indicatrix of thin films

    Science.gov (United States)

    Jung, C. C.; Stumpe, J.

    2005-02-01

    The new method of immersion transmission ellipsometry (ITE) [1] has been developed. It allows the highly accurate determination of the absolute three-dimensional (3D) refractive indices of anisotropic thin films. The method is combined with conventional ellipsometry in transmission and reflection, and the thickness determination of anisotropic films solely by optical methods also becomes more accurate. The method is applied to the determination of the 3D refractive indices of thin spin-coated films of an azobenzene-containing liquid-crystalline copolymer. The development of the anisotropy in these films by photo-orientation and subsequent annealing is demonstrated. Depending on the annealing temperature, oblate or prolate orders are generated.

  17. The most precise computations using Euler's method in standard floating-point arithmetic applied to modelling of biological systems.

    Science.gov (United States)

    Kalinina, Elizabeth A

    2013-08-01

    The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
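    The paper's optimal-step criterion (balancing truncation against floating-point rounding error) is not reproduced here, but the sketch below shows the general shape of an explicit Euler integrator whose step size is adjusted at every step from a local error estimate (one full step versus two half steps); the tolerance and step-adjustment factors are illustrative only.

```python
import numpy as np

def euler_adaptive(f, t0, y0, t_end, tol=1e-6, h0=1e-2):
    """Explicit Euler with a simple per-step step-size choice (illustrative)."""
    t, y, h = t0, np.asarray(y0, dtype=float), h0
    ts, ys = [t], [y.copy()]
    while t < t_end:
        h = min(h, t_end - t)
        full = y + h * f(t, y)                        # one Euler step of size h
        half = y + 0.5 * h * f(t, y)                  # two Euler steps of size h/2
        two_half = half + 0.5 * h * f(t + 0.5 * h, half)
        err = float(np.max(np.abs(full - two_half)))  # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, two_half
            ts.append(t); ys.append(y.copy())
            h *= 1.5                                  # accept and try a larger step
        else:
            h *= 0.5                                  # reject and retry with a smaller step
    return np.array(ts), np.array(ys)

# Example test problem: dy/dt = -5 y, y(0) = 1
ts, ys = euler_adaptive(lambda t, y: -5.0 * y, 0.0, [1.0], 1.0)
```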

  18. New calibration method for I-scan sensors to enable the precise measurement of pressures delivered by 'pressure garments'.

    Science.gov (United States)

    Macintyre, Lisa

    2011-11-01

    Accurate measurement of the pressure delivered by medical compression products is highly desirable both in monitoring treatment and in developing new pressure inducing garments or products. There are several complications in measuring pressure at the garment/body interface and at present no ideal pressure measurement tool exists for this purpose. This paper summarises a thorough evaluation of the accuracy and reproducibility of measurements taken following both of Tekscan Inc.'s recommended calibration procedures for I-scan sensors; and presents an improved method for calibrating and using I-scan pressure sensors. The proposed calibration method enables accurate (±2.1 mmHg) measurement of pressures delivered by pressure garments to body parts with a circumference ≥30 cm. This method is too cumbersome for routine clinical use but is very useful, accurate and reproducible for product development or clinical evaluation purposes. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.

  19. Precise Positioning Method for Logistics Tracking Systems Using Personal Handy-Phone System Based on Mahalanobis Distance

    Science.gov (United States)

    Yokoi, Naoaki; Kawahara, Yasuhiro; Hosaka, Hiroshi; Sakata, Kenji

    Focusing on the Personal Handy-phone System (PHS) positioning service used in physical distribution logistics, a positioning error offset method for improving positioning accuracy is invented. A disadvantage of PHS positioning is that measurement errors caused by the fluctuation of radio waves due to buildings around the terminal are large, ranging from several tens to several hundreds of meters. In this study, an error offset method is developed, which learns patterns of positioning results (latitude and longitude) containing errors and the highest signal strength at major logistic points in advance, and matches them with new data measured in actual distribution processes according to the Mahalanobis distance. Then the matching resolution is improved to 1/40 that of the conventional error offset method.
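    A minimal sketch of the matching step as described: each major logistics point is represented by the mean vector and covariance matrix of its training measurements (latitude, longitude, strongest signal strength), and a new measurement is assigned to the point with the smallest Mahalanobis distance. All names are hypothetical.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of an observation x to a learned point pattern."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def match_logistics_point(measurement, learned_points):
    """Assign a noisy (lat, lon, signal strength) measurement to the nearest
    learned logistics point in the Mahalanobis sense; `learned_points` maps
    a point name to (mean vector, covariance matrix) from training data."""
    return min(learned_points,
               key=lambda name: mahalanobis(measurement, *learned_points[name]))
```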

  20. Elimination of chloride ions in the analytical method for the precise determination of plutonium or uranium using titanous ions as reductant

    International Nuclear Information System (INIS)

    Nicol-Rostaing, C.; Wagner, J.F.

    1991-01-01

    Corpel and Regnaud's procedure for the precise determination of uranium and plutonium, using titanium(III) chloride as reductant, has been modified in order to comply with effluent discharge standards in nuclear plants. The removal of the chloride reagents has been studied. In the original method there are two: titanous chloride and ferric chloride. We propose titanous sulphate and ferric nitrate as substitute reagents. As titanous sulphate is not commercially available, a simple preparation procedure has been established and is described together with storage conditions; the experimental conditions have been optimized and adapted for production on a laboratory scale [fr

  1. The energy calibration and precision of a gamma spectrometry unit - Method using the electron annihilation energy as the only standard

    International Nuclear Information System (INIS)

    Hoclet, Michel

    1971-06-01

    Spectrometry using Ge(Li) detectors is discussed. The excellent resolution of this type of detector, the mathematical analysis of the spectral lines of the pulses, and the reproducibility of the spectrometer enable highly accurate measurements of the abscissae (to about 10 -5 ) corresponding to the peaks. A method using the annihilation energy of the electron as the only standard was developed. The method is applied to the measurement of the gamma ray energies of the radioelements: 22 Na, 24 Na, 56 Mn, 56 Co, 59 Fe, 72 Ga, 88 Y, 122 Sb, 124 Sb and 137 Cs. (author) [fr

  2. Multi-GNSS high-rate RTK, PPP and novel direct phase observation processing method: application to precise dynamic displacement detection

    Science.gov (United States)

    Paziewski, Jacek; Sieradzki, Rafal; Baryla, Radoslaw

    2018-03-01

    This paper provides the methodology and performance assessment of multi-GNSS signal processing for the detection of small-scale high-rate dynamic displacements. For this purpose, we used methods of relative (RTK) and absolute positioning (PPP), and a novel direct signal processing approach. The first two methods are recognized as providing accurate information on position in many navigation and surveying applications. The latter is an innovative method for dynamic displacement determination with the use of GNSS phase signal processing. This method is based on the developed functional model with parametrized epoch-wise topocentric relative coordinates derived from filtered GNSS observations. Current regular kinematic PPP positioning, as well as medium/long range RTK, may not offer coordinate estimates with subcentimeter precision. Thus, extended processing strategies of absolute and relative GNSS positioning have been developed and applied for displacement detection. The study also aimed to comparatively analyze the developed methods as well as to analyze the impact of combined GPS and BDS processing and the dependence of the results of the relative methods on the baseline length. All the methods were implemented with in-house developed software allowing for high-rate precise GNSS positioning and signal processing. The phase and pseudorange observations collected with a rate of 50 Hz during the field test served as the experiment’s data set. The displacements at the rover station were triggered in the horizontal plane using a device which was designed and constructed to ensure a periodic motion of GNSS antenna with an amplitude of ~3 cm and a frequency of ~4.5 Hz. Finally, a medium range RTK, PPP, and direct phase observation processing method demonstrated the capability of providing reliable and consistent results with the precision of the determined dynamic displacements at the millimeter level. Specifically, the research shows that the standard deviation of

  3. Impact of PET/CT system, reconstruction protocol, data analysis method, and repositioning on PET/CT precision: An experimental evaluation using an oncology and brain phantom.

    Science.gov (United States)

    Mansor, Syahir; Pfaehler, Elisabeth; Heijtel, Dennis; Lodge, Martin A; Boellaard, Ronald; Yaqub, Maqsood

    2017-12-01

    In longitudinal oncological and brain PET/CT studies, it is important to understand the repeatability of quantitative PET metrics in order to assess change in tracer uptake. The present studies were performed in order to assess precision as a function of PET/CT system, reconstruction protocol, analysis method, scan duration (or image noise), and repositioning in the field of view. Multiple (repeated) scans were performed using a NEMA image quality (IQ) phantom and a 3D Hoffman brain phantom filled with 18 F solutions on two systems. Studies were performed with and without random repositioning in the field of view. … precision depended on the PET/CT system, especially in the case of smaller spheres … The precision of PET metrics depends on the combination of reconstruction protocol, data analysis methods and scan duration (scan statistics). Moreover, precision was also affected by phantom repositioning but its impact depended on the data analysis method in combination with the reconstructed voxel size (tissue fraction effect). This study suggests that for oncological PET studies the use of SUV peak may be preferred over SUV max because SUV peak is less sensitive to patient repositioning/tumor sampling. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  4. Precise material identification method based on a photon counting technique with correction of the beam hardening effect in X-ray spectra

    International Nuclear Information System (INIS)

    Kimoto, Natsumi; Hayashi, Hiroaki; Asahara, Takashi; Mihara, Yoshiki; Kanazawa, Yuki; Yamakawa, Tsutomu; Yamamoto, Shuichiro; Yamasaki, Masashi; Okada, Masahiro

    2017-01-01

    The aim of our study is to develop a novel material identification method based on a photon counting technique, in which the incident and penetrating X-ray spectra are analyzed. By dividing a 40 kV X-ray spectrum into two energy regions, the corresponding linear attenuation coefficients are derived. We can identify the materials precisely using the relationship between atomic number and linear attenuation coefficient through correction of the beam hardening effect of the X-ray spectra. - Highlights: • We propose a precise material identification method for use with a photon counting system. • Beam hardening correction is important, even when the analysis is applied to narrow energy regions of the X-ray spectrum. • Experiments using a single probe-type CdTe detector were performed, and a Monte Carlo simulation was also carried out. • We describe the applicability of our method to clinical diagnostic X-ray imaging in the near future.
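    A toy version of the two-energy-region idea, assuming Beer-Lambert attenuation in each region and a lookup table of beam-hardening-corrected reference coefficients; it is only a sketch of the principle, not the authors' implementation.

```python
import numpy as np

def linear_attenuation(counts_incident, counts_transmitted, thickness_cm):
    """Effective linear attenuation coefficient for one energy region:
    mu = -ln(I / I0) / t (Beer-Lambert)."""
    return -np.log(counts_transmitted / counts_incident) / thickness_cm

def identify_material(mu_low, mu_high, reference):
    """Pick the reference material whose (mu_low, mu_high) pair is closest;
    `reference` maps a material name to its corrected coefficient pair."""
    return min(reference, key=lambda m: (reference[m][0] - mu_low) ** 2 +
                                        (reference[m][1] - mu_high) ** 2)
```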

  5. Ontology-based coupled optimisation design method using state-space analysis for the spindle box system of large ultra-precision optical grinding machine

    Science.gov (United States)

    Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian

    2017-08-01

    With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods are facing great challenges, as various factors and different stages have inevitably become coupled during the design process. Management of massive information or big data, as well as the efficient operation of information flow, is deeply involved in the process of coupled design. Designers have to address increasingly sophisticated situations when coupled optimisation is also engaged. Aiming at overcoming the difficulties involved in the design of the spindle box system of a large ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and driving system of the motor is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by the simulation results of the natural frequency and deformation of the spindle box when applying an impact force to the grinding wheel.

  6. Innovative methods to study human intestinal drug metabolism in vitro : Precision-cut slices compared with Ussing chamber preparations

    NARCIS (Netherlands)

    van de Kerkhof, Esther G.; Ungell, Anna-Lena B.; Sjoberg, Asa K.; de Jager, Marina H.; Hilgendorf, Constanze; de Graaf, Inge A. M.; Groothuis, Geny M. M.

    2006-01-01

    Predictive in vitro methods to investigate drug metabolism in the human intestine using intact tissue are of high importance. Therefore, we studied the metabolic activity of human small intestinal and colon slices and compared it with the metabolic activity of the same human intestinal segments

  7. Effects of diurnal emission patterns and sampling frequency on precision of measurement methods for daily ammonia emissions from animal houses

    NARCIS (Netherlands)

    Estelles, F.; Calvet, S.; Ogink, N.W.M.

    2010-01-01

    Ammonia concentrations and airflow rates are the main parameters needed to determine ammonia emissions from animal houses. It is possible to classify their measurement methods into two main groups according to the sampling frequency: semi-continuous and daily average measurements. In the first

  8. Precision genome editing

    DEFF Research Database (Denmark)

    Steentoft, Catharina; Bennett, Eric P; Schjoldager, Katrine Ter-Borch Gram

    2014-01-01

    Precise and stable gene editing in mammalian cell lines has until recently been hampered by the lack of efficient targeting methods. While different gene silencing strategies have had tremendous impact on many biological fields, they have generally not been applied with wide success in the field of glycobiology, primarily due to their low efficiencies, with resultant failure to impose substantial phenotypic consequences upon the final glycosylation products. Here, we review novel nuclease-based precision genome editing techniques enabling efficient and stable gene editing, including gene disruption by introducing single or double-stranded breaks at a defined genomic sequence. We here compare and contrast the different techniques and summarize their current applications, highlighting cases from the field of glycobiology as well as pointing to future opportunities. The emerging potential of precision gene …

  9. HIGH-PRECISION ATTITUDE ESTIMATION METHOD OF STAR SENSORS AND GYRO BASED ON COMPLEMENTARY FILTER AND UNSCENTED KALMAN FILTER

    Directory of Open Access Journals (Sweden)

    C. Guo

    2017-07-01

    Full Text Available Determining the attitude of the satellite at the time of imaging and then establishing the mathematical relationship between image points and ground points is essential in high-resolution remote sensing image mapping. The star tracker is insensitive to high-frequency attitude variation due to measurement noise and satellite jitter, but the low-frequency attitude motion can be determined with high accuracy. The gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude changes, but due to the existence of gyro drift and integration error, the attitude determination error increases with time. Based on the opposite noise frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method using star sensors and a gyro based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired based on the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering and low-pass filtering to the gyro and star tracker, respectively. Second, the attitude fusion data based on the CF are introduced as the observed values of the UKF system in the measurement updating process. The accuracy and effectiveness of the method are validated using simulated sensor attitude data. The obtained results indicate that the proposed method can suppress the gyro drift and the measurement noise of the attitude sensors, improving the accuracy of the attitude determination significantly compared with the simulated on-orbit attitude and the attitude estimation results of a UKF defined by the same simulation parameters.
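    A single-axis, first-order illustration of the complementary-filter fusion idea (the actual method works on attitude quaternions and feeds the fused data into a UKF): the gyro is integrated to capture high-frequency motion, while the estimate is continually pulled toward the star-tracker angle, which is reliable at low frequency. Names and the crossover constant are illustrative.

```python
import numpy as np

def complementary_filter(star_angles, gyro_rates, dt, alpha=0.98):
    """Fuse a low-frequency-accurate angle sequence (star tracker) with a
    high-frequency-accurate rate sequence (gyro) for one axis; alpha sets
    the crossover between the high-pass (gyro) and low-pass (star) branches."""
    est = float(star_angles[0])
    fused = [est]
    for k in range(1, len(star_angles)):
        est = alpha * (est + gyro_rates[k] * dt) + (1.0 - alpha) * star_angles[k]
        fused.append(est)
    return np.array(fused)
```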

  10. A new method for precise determination of iron, zinc and cadmium stable isotope ratios in seawater by double-spike mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Conway, Tim M., E-mail: conway.tm@gmail.com [Department of Earth and Ocean Sciences, University of South Carolina, Columbia, SC 29208 (United States); Rosenberg, Angela D. [Department of Earth and Ocean Sciences, University of South Carolina, Columbia, SC 29208 (United States); Adkins, Jess F. [California Institute of Technology, Division of Geological and Planetary Sciences, Pasadena, CA 91125 (United States); John, Seth G. [Department of Earth and Ocean Sciences, University of South Carolina, Columbia, SC 29208 (United States)

    2013-09-02

    Graphical abstract: ‘Metal-free’ seawater doped with varying concentrations of ‘zero’ isotope standards, processed through our simultaneous method, and then analyzed by double spike MC-ICPMS for Fe, Zn and Cd isotope ratios. All values were determined within 2 σ error (error bars shown) of zero. -- Highlights: •The first simultaneous method for isotopic analysis of Fe, Zn and Cd in seawater. •Designed for 1 L samples, a 1–20 fold improvement over previous methods. •Low blanks and high precision allow measurement of low concentration samples. •Small volume and fast processing are ideal for high-resolution large-scale studies. •Will facilitate investigation of marine trace-metal isotope cycling. -- Abstract: The study of Fe, Zn and Cd stable isotopes (δ{sup 56}Fe, δ{sup 66}Zn and δ{sup 114}Cd) in seawater is a new field, which promises to elucidate the marine cycling of these bioactive trace metals. However, the analytical challenges posed by the low concentration of these metals in seawater has meant that previous studies have typically required large sample volumes, highly limiting data collection in the oceans. Here, we present the first simultaneous method for the determination of these three isotope systems in seawater, using Nobias PA-1 chelating resin to extract metals from seawater, purification by anion exchange chromatography, and analysis by double spike MC-ICPMS. This method is designed for use on only a single litre of seawater and has blanks of 0.3, 0.06 and <0.03 ng for Fe, Zn and Cd respectively, representing a 1–20 fold reduction in sample size and a 4–130 decrease in blank compared to previously reported methods. The procedure yields data with high precision for all three elements (typically 0.02–0.2‰; 1σ internal precision), allowing us to distinguish natural variability in the oceans, which spans 1–3‰ for all three isotope systems. Simultaneous extraction and purification of three metals makes this method ideal

  11. A new method for precise determination of iron, zinc and cadmium stable isotope ratios in seawater by double-spike mass spectrometry

    International Nuclear Information System (INIS)

    Conway, Tim M.; Rosenberg, Angela D.; Adkins, Jess F.; John, Seth G.

    2013-01-01

    Graphical abstract: ‘Metal-free’ seawater doped with varying concentrations of ‘zero’ isotope standards, processed through our simultaneous method, and then analyzed by double spike MC-ICPMS for Fe, Zn and Cd isotope ratios. All values were determined within 2 σ error (error bars shown) of zero. -- Highlights: •The first simultaneous method for isotopic analysis of Fe, Zn and Cd in seawater. •Designed for 1 L samples, a 1–20 fold improvement over previous methods. •Low blanks and high precision allow measurement of low concentration samples. •Small volume and fast processing are ideal for high-resolution large-scale studies. •Will facilitate investigation of marine trace-metal isotope cycling. -- Abstract: The study of Fe, Zn and Cd stable isotopes (δ 56 Fe, δ 66 Zn and δ 114 Cd) in seawater is a new field, which promises to elucidate the marine cycling of these bioactive trace metals. However, the analytical challenges posed by the low concentration of these metals in seawater has meant that previous studies have typically required large sample volumes, highly limiting data collection in the oceans. Here, we present the first simultaneous method for the determination of these three isotope systems in seawater, using Nobias PA-1 chelating resin to extract metals from seawater, purification by anion exchange chromatography, and analysis by double spike MC-ICPMS. This method is designed for use on only a single litre of seawater and has blanks of 0.3, 0.06 and <0.03 ng for Fe, Zn and Cd respectively, representing a 1–20 fold reduction in sample size and a 4–130 decrease in blank compared to previously reported methods. The procedure yields data with high precision for all three elements (typically 0.02–0.2‰; 1σ internal precision), allowing us to distinguish natural variability in the oceans, which spans 1–3‰ for all three isotope systems. Simultaneous extraction and purification of three metals makes this method ideal for high

  12. A Simple Time Domain Collocation Method to Precisely Search for the Periodic Orbits of Satellite Relative Motion

    Directory of Open Access Journals (Sweden)

    Xiaokui Yue

    2014-01-01

    Full Text Available A numerical approach for obtaining periodic orbits of satellite relative motion is proposed, based on using the time domain collocation (TDC method to search for the periodic solutions of an exact J2 nonlinear relative model. The initial conditions for periodic relative orbits of the Clohessy-Wiltshire (C-W equations or Tschauner-Hempel (T-H equations can be refined with this approach to generate nearly bounded orbits. With these orbits, a method based on the least-squares principle is then proposed to generate projected closed orbit (PCO, which is a reference for the relative motion control. Numerical simulations reveal that the presented TDC searching scheme is effective and simple, and the projected closed orbit is very fuel saving.

  13. High precision micro-scale Hall Effect characterization method using in-line micro four-point probes

    DEFF Research Database (Denmark)

    Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong

    2008-01-01

    Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where the Hall effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid, reliable Hall effect measurements with just two measurement points. We show, with both Monte Carlo simulations and experimental measurements, that the repeatability of a micro-scale Hall effect measurement is better than 1 %. We demonstrate the ability to spatially resolve the Hall effect on the micro-scale by characterization of a USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from …

  14. Study on Maritime Logistics Warehousing Center Model and Precision Marketing Strategy Optimization Based on Fuzzy Method and Neural Network Model

    OpenAIRE

    Xiao Kefeng; Hu Xiaolan

    2017-01-01

    The bulk commodity, different with the retail goods, has a uniqueness in the location selection, the chosen of transportation program and the decision objectives. How to make optimal decisions in the facility location, requirement distribution, shipping methods and the route selection and establish an effective distribution system to reduce the cost has become a burning issue for the e-commerce logistics, which is worthy to be deeply and systematically solved. In this paper, Logistics warehou...

  15. A simple but precise method for quantitative measurement of the quality of the laser focus in a scanning optical microscope.

    Science.gov (United States)

    Trägårdh, J; Macrae, K; Travis, C; Amor, R; Norris, G; Wilson, S H; Oppo, G-L; McConnell, G

    2015-07-01

    We report a method for characterizing the focussing laser beam exiting the objective in a laser scanning microscope. This method provides the size of the optical focus, the divergence of the beam, the ellipticity and the astigmatism. We use a microscopic-scale knife edge in the form of a simple transmission electron microscopy grid attached to a glass microscope slide, and a light-collecting optical fibre and photodiode underneath the specimen. By scanning the laser spot from a reflective to a transmitting part of the grid, a beam profile in the form of an error function can be obtained and by repeating this with the knife edge at different axial positions relative to the beam waist, the divergence and astigmatism of the postobjective laser beam can be obtained. The measured divergence can be used to quantify how much of the full numerical aperture of the lens is used in practice. We present data of the beam radius, beam divergence, ellipticity and astigmatism obtained with low (0.15, 0.7) and high (1.3) numerical aperture lenses and lasers commonly used in confocal and multiphoton laser scanning microscopy. Our knife-edge method has several advantages over alternative knife-edge methods used in microscopy including that the knife edge is easy to prepare, that the beam can be characterized also directly under a cover slip, as necessary to reduce spherical aberrations for objectives designed to be used with a cover slip, and it is suitable for use with commercial laser scanning microscopes where access to the laser beam can be limited. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
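    A sketch of the standard knife-edge analysis this approach relies on: the transmitted signal versus knife-edge position is fitted with an error function whose width parameter is the 1/e² beam radius; repeating the fit at several axial positions would then give the divergence and astigmatism. A Gaussian beam is assumed and SciPy is used for the fit; this is illustrative, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def knife_edge_model(x, amplitude, x0, w, offset):
    """Transmitted power past a knife edge for a Gaussian beam of 1/e^2 radius w."""
    return offset + 0.5 * amplitude * (1.0 + erf(np.sqrt(2.0) * (x - x0) / w))

def beam_radius(positions, signal):
    """Fit the edge response and return the 1/e^2 beam radius (units of `positions`)."""
    x = np.asarray(positions, dtype=float)
    s = np.asarray(signal, dtype=float)
    p0 = [s.max() - s.min(), float(np.median(x)), (x.max() - x.min()) / 4.0, s.min()]
    popt, _ = curve_fit(knife_edge_model, x, s, p0=p0)
    return abs(popt[2])
```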

  16. Precise Composition Tailoring of Mixed-Cation Hybrid Perovskites for Efficient Solar Cells by Mixture Design Methods.

    Science.gov (United States)

    Li, Liang; Liu, Na; Xu, Ziqi; Chen, Qi; Wang, Xindong; Zhou, Huanping

    2017-09-26

    Mixed anion/cation perovskites absorber has been recently implemented to construct highly efficient single junction solar cells and tandem devices. However, considerable efforts are still required to map the composition-property relationship of the mixed perovskites absorber, which is essential to facilitate device design. Here we report the intensive exploration of mixed-cation perovskites in their compositional space with the assistance of a rational mixture design (MD) methods. Different from the previous linear search of the cation ratios, it is found that by employing the MD methods, the ternary composition can be tuned simultaneously following simplex lattice designs or simplex-centroid designs, which enable significantly reduced experiment/sampling size to unveil the composition-property relationship for mixed perovskite materials and to boost the resultant device efficiency. We illustrated the composition-property relationship of the mixed perovskites in multidimension and achieved an optimized power conversion efficiency of 20.99% in the corresponding device. Moreover, the method is demonstrated to be feasible to help adjust the bandgap through rational materials design, which can be further extended to other materials systems, not limited in polycrystalline perovskites films for photovoltaic applications only.

  17. A Precise Method for Processing Data to Determine the Dissociation Constants of Polyhydroxy Carboxylic Acids via Potentiometric Titration.

    Science.gov (United States)

    Huang, Kaixuan; Xu, Yong; Lu, Wen; Yu, Shiyuan

    2017-12-01

    The thermodynamic dissociation constants of xylonic acid and gluconic acid were studied via potentiometric methods, and the results were verified using lactic acid, which has a known pKa value, as a model compound. Solutions of xylonic acid and gluconic acid were titrated with a standard solution of sodium hydroxide. The determined pKa data were processed via the method of derivative plots using computer software, and the accuracy was validated using the Gran method. The dissociation constants associated with the carboxylic acid group of xylonic and gluconic acids were determined to be pKa 1  = 3.56 ± 0.07 and pKa 1  = 3.74 ± 0.06, respectively. Further, the experimental data showed that the second deprotonation constants associated with a hydroxyl group of each of the two acids were pKa 2  = 8.58 ± 0.12 and pKa 2  = 7.06 ± 0.08, respectively. The deprotonation behavior of polyhydroxy carboxylic acids was altered using various ratios with Cu(II) to form complexes in solution, and this led to proposing a hypothesis for further study.
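    As a minimal sketch of the derivative-plot idea, the equivalence volume can be taken at the maximum of dpH/dV, and for a simple monoprotic case the pKa approximated by the pH at half the equivalence volume; this is a textbook simplification, not the full data-processing procedure of the paper.

```python
import numpy as np

def equivalence_volume(volumes_ml, ph_values):
    """Equivalence point located at the maximum of the first derivative dpH/dV."""
    v = np.asarray(volumes_ml, dtype=float)
    ph = np.asarray(ph_values, dtype=float)
    return float(v[np.argmax(np.gradient(ph, v))])

def pka_half_equivalence(volumes_ml, ph_values, v_eq):
    """Monoprotic approximation: pKa ~ pH at half the equivalence volume."""
    v = np.asarray(volumes_ml, dtype=float)
    ph = np.asarray(ph_values, dtype=float)
    return float(np.interp(v_eq / 2.0, v, ph))
```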

  18. Extending the precision and efficiency of the all-electron full-potential linearized augmented plane-wave density-functional theory method

    International Nuclear Information System (INIS)

    Michalicek, Gregor

    2015-01-01

    Density functional theory (DFT) is the most widely-used first-principles theory for analyzing, describing and predicting the properties of solids based on the fundamental laws of quantum mechanics. The success of the theory is a consequence of powerful approximations to the unknown exchange and correlation energy of the interacting electrons and of sophisticated electronic structure methods that enable the computation of the density functional equations on a computer. A widely used electronic structure method is the full-potential linearized augmented plane-wave (FLAPW) method, that is considered to be one of the most precise methods of its kind and often referred to as a standard. Challenged by the demand of treating chemically and structurally increasingly more complex solids, in this thesis this method is revisited and extended along two different directions: (i) precision and (ii) efficiency. In the full-potential linearized augmented plane-wave method the space of a solid is partitioned into nearly touching spheres, centered at each atom, and the remaining interstitial region between the spheres. The Kohn-Sham orbitals, which are used to construct the electron density, the essential quantity in DFT, are expanded into a linearized augmented plane-wave basis, which consists of plane waves in the interstitial region and angular momentum dependent radial functions in the spheres. In this thesis it is shown that for certain types of materials, e.g., materials with very broad electron bands or large band gaps, or materials that allow the usage of large space-filling spheres, the variational freedom of the basis in the spheres has to be extended in order to represent the Kohn-Sham orbitals with high precision over a large energy spread. Two kinds of additional radial functions confined to the spheres, so-called local orbitals, are evaluated and found to successfully eliminate this error. A new efficient basis set is developed, named linearized augmented lattice

  19. Development of a Method to Isolate Glutamic Acid from Foodstuffs for a Precise Determination of Their Stable Carbon Isotope Ratio.

    Science.gov (United States)

    Kobayashi, Kazuhiro; Tanaka, Masaharu; Yatsukawa, Yoichi; Tanabe, Soichi; Tanaka, Mitsuru; Ohkouchi, Naohiko

    2018-01-01

    Recent growing health awareness is leading to increasingly conscious decisions by consumers regarding the production and traceability of food. Stable isotopic compositions provide useful information for tracing the origin of foodstuffs and processes of food production. Plants exhibit different ratios of stable carbon isotopes (δ 13 C) because they utilized different photosynthetic (carbon fixation) pathways and grow in various environments. The origins of glutamic acid in foodstuffs can be differentiated on the basis of these photosynthetic characteristics. Here, we have developed a method to isolate glutamic acid in foodstuffs for determining the δ 13 C value by elemental analyzer-isotope-ratio mass spectrometry (EA/IRMS) without unintended isotopic fractionation. Briefly, following acid-hydrolysis, samples were defatted and passed through activated carbon and a cation-exchange column. Then, glutamic acid was isolated using preparative HPLC. This method is applicable to measuring, with a low standard deviation, the δ 13 C values of glutamic acid from foodstuffs derived from C3 and C4 plants and marine algae.

  20. High Precision Seawater Sr/Ca Measurements in the Florida Keys by Inductively Coupled Plasma Atomic Emission Spectrometry: Analytical Method and Implications for Coral Paleothermometry

    Science.gov (United States)

    Khare, A.; Kilbourne, K. H.; Schijf, J.

    2017-12-01

    Standard methods of reconstructing past sea surface temperatures (SSTs) with coral skeletal Sr/Ca ratios assume the seawater Sr/Ca ratio is constant. However, there is little data to support this assumption, in part because analytical techniques capable of determining seawater Sr/Ca with sufficient accuracy and precision are expensive and time consuming. We demonstrate a method to measure seawater Sr/Ca using inductively coupled plasma atomic emission spectrometry where we employ an intensity ratio calibration routine that reduces the self-matrix effects of calcium and cancels out the matrix effects that are common to both calcium and strontium. A seawater standard solution cross-calibrated with multiple instruments is used to correct for long-term instrument drift and any remnant matrix effects. The resulting method produces accurate seawater Sr/Ca determinations rapidly, inexpensively, and with a precision better than 0.2%. This method will make it easier for coral paleoclimatologists to quantify potentially problematic fluctuations in seawater Sr/Ca at their study locations. We apply our method to test for variability in surface seawater Sr/Ca along the Florida Keys Reef Tract. We are collecting winter and summer samples for two years in a grid with eleven nearshore to offshore transects across the reef, as well as continuous samples collected by osmotic pumps at four locations adjacent to our grid. Our initial analysis of the grid samples indicates a trend of decreasing Sr/Ca values offshore potentially due to a decreasing groundwater influence. The values differ by as much as 0.05 mmol/mol which could lead to an error of 1°C in mean SST reconstructions. Future work involves continued sampling in the Florida Keys to test for seasonal and interannual variability in seawater Sr/Ca, as well as collecting data from small reefs in the Virgin Islands to test the stability of seawater Sr/Ca under different geologic, hydrologic and hydrographic environments.

  1. Comparison of precise orbit determination methods of zero-difference kinematic, dynamic and reduced-dynamic of GRACE-A satellite using SHORDE software

    Science.gov (United States)

    Li, Kai; Zhou, Xuhua; Guo, Nannan; Zhao, Gang; Xu, Kexin; Lei, Weiwei

    2017-09-01

    Zero-difference kinematic, dynamic and reduced-dynamic precise orbit determination (POD) are three methods of obtaining the precise orbits of Low Earth Orbit satellites (LEOs) from on-board GPS observations. Comparing the differences between these methods is of great significance for establishing the mathematical model and is useful for selecting a suitable method to determine the orbit of the satellite. Based on zero-difference GPS carrier-phase measurements, Shanghai Astronomical Observatory (SHAO) has improved the early version of SHORDE and developed it into an integrated software system, which can perform the POD of LEOs using the above three methods. In order to introduce the function of the software, we take the Gravity Recovery And Climate Experiment (GRACE) on-board GPS observations in January 2008 as an example, and compute the corresponding orbits of GRACE using the SHORDE software. In order to evaluate the accuracy, we compare the orbits with the precise orbits provided by the Jet Propulsion Laboratory (JPL). The results show that: (1) If we use the dynamic POD method, and force models are used to represent the non-conservative forces, the average accuracy of the GRACE orbit is 2.40 cm, 3.91 cm, 2.34 cm and 5.17 cm in the radial (R), along-track (T), cross-track (N) and 3D directions respectively; if we use the accelerometer observations instead of the non-conservative perturbation models, the average accuracy of the orbit is 1.82 cm, 2.51 cm, 3.48 cm and 4.68 cm in the R, T, N and 3D directions respectively. This shows that if we use accelerometer observations instead of the non-conservative perturbation models, the orbit accuracy is better. (2) When we use the reduced-dynamic POD method, the average accuracy of the orbit is 0.80 cm, 1.36 cm, 2.38 cm and 2.87 cm in the R, T, N and 3D directions respectively. This method is carried out by setting up pseudo-stochastic pulses to absorb the errors of atmospheric drag and other

  2. Precise magnetostatic field using the finite element method; Calculo de campos magnetostaticos com precisao utilizando o metodo dos elementos finitos

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, Francisco Rogerio Teixeira do

    2013-07-01

    The main objective of this work is to simulate electromagnetic fields using the Finite Element Method. Even in the simplest cases of electrostatic and magnetostatic numerical simulation, some problems appear when the nodal finite element is used. It is difficult to model vector fields with scalar functions, mainly in non-homogeneous materials. To solve these problems, two types of techniques are tried: adaptive remeshing using nodal elements, and edge finite elements, which ensure the continuity of the tangential components. Numerical analyses of simple electromagnetic problems with homogeneous and non-homogeneous materials are performed using, first, adaptive remeshing based on various error indicators and, second, the numerical solution of waveguides using edge finite elements. (author)

  3. New Insights of High-precision Asteroseismology: Acoustic Radius and χ2-matching Method for Solar-like Oscillator KIC 6225718

    Directory of Open Access Journals (Sweden)

    Wu Tao

    2017-01-01

    Asteroseismology is a powerful tool for probing stellar interiors and determining stellar fundamental parameters. In the present work, we adopt the χ2-minimization method but only use the observed high-precision seismic observations (i.e., oscillation frequencies) to constrain theoretical models for analyzing the solar-like oscillator KIC 6225718. Finally, we find that the acoustic radius τ0 is the only global parameter that can be accurately measured by the χ2-matching method between observed frequencies and theoretical model calculations for a pure p-mode oscillation star. We obtain τ0 = 4601.5 (+4.4, −8.3) seconds for KIC 6225718. This implies that the mass and radius of the CMMs are degenerate with each other. In addition, we find that the distribution range of the acoustic radius is slightly enlarged by some extreme cases, which possess both a larger mass and a higher (or lower) metal abundance, at the lower acoustic radius end.

  4. New Insights of High-precision Asteroseismology: Acoustic Radius and χ2-matching Method for Solar-like Oscillator KIC 6225718

    Science.gov (United States)

    Wu, Tao; Li, Yan

    2017-10-01

    Asteroseismology is a powerful tool for probing stellar interiors and determining stellar fundamental parameters. In the present work, we adopt the χ2-minimization method but only use the observed high-precision seismic observations (i.e., oscillation frequencies) to constrain theoretical models for analyzing the solar-like oscillator KIC 6225718. Finally, we find the acoustic radius τ0 is the only global parameter that can be accurately measured by the χ2-matching method between observed frequencies and theoretical model calculations for a pure p-mode oscillation star. We obtain τ0 = 4601.5 (+4.4, −8.3) seconds for KIC 6225718. This implies that the mass and radius of the CMMs are degenerate with each other. In addition, we find that the distribution range of the acoustic radius is slightly enlarged by some extreme cases, which possess both a larger mass and a higher (or lower) metal abundance, at the lower acoustic radius end.
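    A minimal sketch of the χ²-matching step, assuming a precomputed grid of model frequencies: the best model is simply the one that minimizes the frequency misfit. The grid keys (e.g., acoustic radius values) and function names are hypothetical.

```python
import numpy as np

def chi2(obs_freq, obs_sigma, model_freq):
    """Frequency misfit between observed and model oscillation frequencies."""
    obs = np.asarray(obs_freq, dtype=float)
    sig = np.asarray(obs_sigma, dtype=float)
    mod = np.asarray(model_freq, dtype=float)
    return float(np.sum(((obs - mod) / sig) ** 2))

def best_model(obs_freq, obs_sigma, model_grid):
    """Return the label of the model with minimum chi^2; `model_grid` maps a
    model label (e.g., an acoustic radius value) to its theoretical frequencies."""
    return min(model_grid, key=lambda k: chi2(obs_freq, obs_sigma, model_grid[k]))
```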

  5. SpineAnalyzer™ is an accurate and precise method of vertebral fracture detection and classification on dual-energy lateral vertebral assessment scans

    International Nuclear Information System (INIS)

    Birch, C.; Knapp, K.; Hopkins, S.; Gallimore, S.; Rock, B.

    2015-01-01

    Osteoporotic fractures of the spine are associated with significant morbidity, are highly predictive of hip fractures, but frequently do not present clinically. When there is a low to moderate clinical suspicion of vertebral fracture, which would not justify acquisition of a radiograph, vertebral fracture assessment (VFA) using Dual-energy X-ray Absorptiometry (DXA) offers a low-dose opportunity for diagnosis. Different approaches to the classification of vertebral fractures have been documented. The aim of this study was to measure the precision and accuracy of SpineAnalyzer™, a quantitative morphometry software program. Lateral vertebral assessment images of 64 men were analysed using SpineAnalyzer™ and standard GE Lunar software. The images were also analysed by two expert readers using a semi-quantitative approach. Agreement between groups ranged from 95.99% to 98.60%. The intra-rater precision for the application of SpineAnalyzer™ to vertebrae was poor in the upper thoracic regions, but good elsewhere. SpineAnalyzer™ is a reproducible and accurate method for measuring vertebral height and quantifying vertebral fractures from VFA scans. - Highlights: • Vertebral fracture assessment (VFA) using Dual-energy X-ray Absorptiometry (DXA) offers a low-dose opportunity for diagnosis. • Agreement between VFA software (SpineAnalyzer™) and expert readers is high. • Intra-rater precision of SpineAnalyzer™ applied to upper thoracic vertebrae is poor, but good elsewhere. • SpineAnalyzer™ is reproducible and accurate for vertebral height measurement and fracture quantification from VFA scans

  6. A developed wedge fixtures assisted high precision TEM samples pre-thinning method: Towards the batch lamella preparation

    Directory of Open Access Journals (Sweden)

    Dandan Wang

    2017-04-01

    Full Text Available Ion milling, wedge cutting or polishing, and focused ion beam (FIB) milling are widely-used techniques for transmission electron microscope (TEM) sample preparation. In particular, FIB milling provides site-specific analysis, deposition, and ablation of materials on the micrometer and nanometer scale. However, the cost of FIB tools has always been a significant concern. Since the use of the FIB technique is unavoidable, improving its efficiency is a key point. Traditional TEM sample preparation with FIB was routinely implemented on a single sample each time. Aiming at cost efficiency, a new pre-thinning technique for batch sample preparation was developed in this paper. The present proposal combines sample preparation techniques with multi-sample thinning, cross-section scanning electron microscopy (SEM), wedge cutting, FIB and other sample pre-thinning techniques. The new pre-thinning technique is to prepare an edge TEM sample on a grinding and polishing fixture with a slanted surface. The thickness of the wedge sample can be measured to 1∼2 μm under an optical microscope. Therefore, this fixture is superior to the traditional optical method of estimating the membrane thickness. Moreover, by utilizing a multi-sample holding fixture, more samples can be pre-thinned simultaneously, which significantly improves the productivity of TEM sample preparation.

  7. [Precision nutrition in the era of precision medicine].

    Science.gov (United States)

    Chen, P Z; Wang, H

    2016-12-06

    Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.

  8. Adobe photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    Science.gov (United States)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point counting, and image analysis. In this article, an Adobe Photoshop-based analysis method (PSQ) is recommended for quantifying rock textural data and porosities. Adobe Photoshop provides versatile tools for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Therefore, Adobe Photoshop can be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosity and porosities of different genetic types. This method is named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method is introduced using oolitic dolomite samples from the Triassic Feixianguan Formation, Northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is close agreement between the "standard" percentages and those determined by the PSQ method, with very small counting and operator errors, small standard deviations and high confidence levels. The porosities quantified by PSQ were evaluated against those determined by the whole-rock helium gas expansion method to test the specimen errors. Results show that the porosities quantified by PSQ correlate well with the porosities determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosity, which causes a systematic underestimation of about 2%, and/or by macroporosity, causing underestimation or overestimation in different cases. Adobe Photoshop can thus be used to quantify rock textural components and porosities. The method has been shown to be precise and accurate, and it is time saving compared with the usual methods.
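
    The core of the PSQ approach described above is simply converting the pixel count of a selected phase into an area percentage. The following sketch is hypothetical and not part of the original work; the thresholded "pore" mask merely stands in for a manual Photoshop selection.

      # Hedged sketch: area-percentage quantification from pixel counts, analogous to
      # reading a selection's pixel count in Adobe Photoshop (PSQ). Illustrative only.
      import numpy as np

      def area_percentage(mask, total_pixels):
          """Areal fraction (%) of a selected component (e.g. pores) in an image region."""
          return 100.0 * np.count_nonzero(mask) / total_pixels

      # Toy example: a 1000 x 1000 image in which roughly 8% of pixels are "pores"
      rng = np.random.default_rng(0)
      image = rng.random((1000, 1000))
      pore_mask = image < 0.08              # stand-in for a manual Photoshop selection
      print(f"porosity ~ {area_percentage(pore_mask, image.size):.1f} %")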

  9. Acute influence of the application of strength treatment based on the combinated contrast training method on precision and velocity in overarm handball throwing

    Directory of Open Access Journals (Sweden)

    Juan S. Gómez Navarrete

    2011-01-01

    Full Text Available Abstract Combining strength training methods has been shown to be an effective way to develop strength, and is especially indicated for improving explosive strength and power. Our study examines the influence of a combined contrast method on overarm throwing in handball. The treatment consisted of one session of the combined contrast method. Ten handball players and 13 non-players participated in this study. The instruments were a radar gun to measure throwing velocity, a camera to digitize accuracy, and an isometric dynamometer for strength data collection. Results show a significant decrease in peak force values in the players group. Another significant decrease was observed in the integral to peak force for both groups. There are significant positive relations between throwing velocity parameters relative to weight and size and the isometric peak force. We conclude that the isometric time/strength curve is a useful instrument for observing changes produced in the subject's capacity to produce strength during training. Keywords: Precision, velocity, overarm handball throwing, isometric test, combined contrast method

  10. Systems Biology Methods for Alzheimer's Disease Research Toward Molecular Signatures, Subtypes, and Stages and Precision Medicine: Application in Cohort Studies and Trials.

    Science.gov (United States)

    Castrillo, Juan I; Lista, Simone; Hampel, Harald; Ritchie, Craig W

    2018-01-01

    Alzheimer's disease (AD) is a complex multifactorial disease, involving a combination of genomic, interactome, and environmental factors, with essential participation of (a) intrinsic genomic susceptibility and (b) a constant dynamic interplay between impaired pathways and central homeostatic networks of nerve cells. The proper investigation of the complexity of AD requires new holistic systems-level approaches, at both the experimental and computational level. Systems biology methods offer the potential to unveil new fundamental insights, basic mechanisms, and networks and their interplay. These may lead to the characterization of mechanism-based molecular signatures, and AD hallmarks at the earliest molecular and cellular levels (and beyond), for characterization of AD subtypes and stages, toward targeted interventions according to the evolving precision medicine paradigm. In this work, an update on advanced systems biology methods and strategies for holistic studies of multifactorial diseases-particularly AD-is presented. This includes next-generation genomics, neuroimaging and multi-omics methods, experimental and computational approaches, relevant disease models, and latest genome editing and single-cell technologies. Their progressive incorporation into basic research, cohort studies, and trials is beginning to provide novel insights into AD essential mechanisms, molecular signatures, and markers toward mechanism-based classification and staging, and tailored interventions. Selected methods which can be applied in cohort studies and trials, with the European Prevention of Alzheimer's Dementia (EPAD) project as a reference example, are presented and discussed.

  11. Development and Validation of a Precise, Single HPLC Method for the Determination of Tolperisone Impurities in API and Pharmaceutical Dosage Forms.

    Science.gov (United States)

    Raju, Thummala Veera Raghava; Seshadri, Raja Kumar; Arutla, Srinivas; Mohan, Tharlapu Satya Sankarsana Jagan; Rao, Ivaturi Mrutyunjaya; Nittala, Someswara Rao

    2013-01-01

    A novel, sensitive, stability-indicating HPLC method has been developed for the quantitative estimation of Tolperisone-related impurities in both bulk drugs and pharmaceutical dosage forms. Effective chromatographic separation was achieved on a C18 stationary phase with a simple mobile phase combination delivered in a simple gradient programme, and quantitation was by ultraviolet detection at 254 nm. The mobile phase consisted of a buffer and acetonitrile delivered at a flow rate of 1.0 mL/min. The buffer consisted of 0.01 M potassium dihydrogen phosphate with the pH adjusted to 8.0 using diethylamine. In the developed HPLC method, the resolution between Tolperisone and its four potential impurities was found to be greater than 2.0. Regression analysis showed a correlation coefficient (R) greater than 0.999 for the Tolperisone impurities. The method was capable of detecting all four impurities of Tolperisone at a level of 0.19 μg/mL with respect to the test concentration of 1000 μg/mL for a 10 μL injection volume. The tablets were subjected to the stress conditions of hydrolysis, oxidation, photolysis, and thermal degradation. Considerable degradation was found to occur under base hydrolysis, water hydrolysis, and oxidation. The stress samples were assayed against a qualified reference standard and the mass balance was found to be close to 100%. The established method was validated and found to be linear, accurate, precise, specific, robust, and rugged.

  12. Precision of glucose measurements in control sera by isotope dilution/mass spectrometry: proposed definitive method compared with a reference method

    International Nuclear Information System (INIS)

    Pelletier, O.; Arratoon, C.

    1987-01-01

    This improved isotope-dilution gas chromatographic/mass spectrometric (GC/MS) method, in which [¹³C]glucose is the internal standard, meets the requirements of a Definitive Method. In a first study with five reconstituted lyophilized sera, a nested analysis of variance of GC/MS values indicated considerable among-vial variation. The CV for 32 measurements per serum ranged from 0.5 to 0.9%. However, concentration and uncertainty values (mmol/L per gram of serum) assigned to one serum by the NBS Definitive Method (7.56 +/- 0.28) were practically identical to those obtained with the proposed method (7.57 +/- 0.20). In the second study, we used twice as much [¹³C]glucose diluent to assay four serum pools and two lyophilized sera. The CV ranged from 0.26 to 0.5% for the serum pools and from 0.28 to 0.59% for the lyophilized sera. In comparison, results by the hexokinase/glucose-6-phosphate dehydrogenase reference method agreed within acceptable limits with those by the Definitive Method but tended to be slightly higher (up to 3%) for lyophilized serum samples or slightly lower (up to 2.5%) for serum pools

  13. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Science.gov (United States)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed ¹¹¹In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were

  14. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Energy Technology Data Exchange (ETDEWEB)

    He Bin [Division of Nuclear Medicine, Department of Radiology, New York Presbyterian Hospital-Weill Medical College of Cornell University, New York, NY 10021 (United States); Frey, Eric C, E-mail: bih2006@med.cornell.ed, E-mail: efrey1@jhmi.ed [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, MD 21287-0859 (United States)

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed ¹¹¹In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations

  15. Methods of In-Process On-Machine Auto-Inspection of Dimensional Error and Auto-Compensation of Tool Wear for Precision Turning

    Directory of Open Access Journals (Sweden)

    Shih-Ming Wang

    2016-04-01

    Full Text Available The purpose of this study is mainly to develop an information and communication technology (ICT)-based intelligent dimension inspection and tool wear compensation method for precision turning. With the use of vibration signal processing and characteristics analysis technology combined with ICT, statistical analysis, and diagnosis algorithms, the method can be used to perform on-line dimension inspection and on-machine tool wear auto-compensation for the turning process. Meanwhile, the method can also monitor critical tool life to identify the appropriate time for cutter replacement, thereby reducing machining costs and improving the production efficiency of the turning process. Compared to traditional approaches, the method offers the advantages of requiring less manpower, better production efficiency, longer tool life, fewer scrap parts, and lower costs for inspection instruments. Algorithms and diagnosis threshold values for detection, cutter wear compensation, and cutter life monitoring were developed, as illustrated by the sketch below. In addition, a bilateral communication module utilizing the FANUC Open CNC (computer numerical control) Application Programming Interface (API) specification was developed for the on-line extraction of current NC (numerical control) codes for monitoring and for transmitting commands to CNC controllers for cutter wear compensation. Using local area networks (LAN) to deliver the detection and correction information, the proposed method was able to remotely control the on-machine monitoring process and upload the machining and inspection data to a remote central platform for further production optimization. The verification experiments were conducted on a turning production line. The results showed that the system provided 93% correction for size inspection and 100% correction for cutter wear compensation.
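
    The abstract does not give the compensation algorithm itself, so the following is only a hypothetical sketch of the kind of threshold-based logic implied: a measured dimensional error is compared against diagnosis thresholds (both threshold values are invented here) and converted into either a tool-offset command or a cutter-replacement flag.

      # Hypothetical sketch of threshold-based wear compensation; thresholds and the
      # offset convention are assumptions, not values from the cited work.
      def wear_compensation(measured_mm, nominal_mm, warn_threshold_mm=0.01, life_threshold_mm=0.05):
          error = measured_mm - nominal_mm
          if abs(error) >= life_threshold_mm:
              return {"action": "replace_cutter", "offset_mm": 0.0}   # critical tool life reached
          if abs(error) >= warn_threshold_mm:
              return {"action": "compensate", "offset_mm": -error}    # push dimension back toward nominal
          return {"action": "none", "offset_mm": 0.0}

      print(wear_compensation(25.018, 25.000))   # -> compensate with an offset of -0.018 mm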

  16. A High-Precision RF Time-of-Flight Measurement Method based on Vernier Effect for Localization of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sang-il KO

    2011-12-01

    Full Text Available This paper presents the fundamental principles of a high-precision RF time-of-flight (ToF measurement method based on the vernier effect, which enables the improvement of time measurement resolution, for accurate distance measurement between sensor nodes in wireless sensor networks. Similar to the two scales of the vernier caliper, two heterogeneous clocks are employed to induce a new virtual time resolution that is much finer than clocks’ intrinsic time resolution. Consecutive RF signal transmission and sensing using two heterogeneous clocks generates a unique sensing pattern for the RF ToF, so that the size of the RF ToF can be estimated by comparing the measured sensing pattern with the predetermined sensing patterns for the RF ToF. RF ToF measurement experiments using this heterogeneous clock system, which has low operating frequencies of several megahertz, certify the proposed RF ToF measurement method through the evaluation of the measured sensing patterns with respect to an RF round-trip time of several nanoseconds.
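
    The vernier idea can be summarised numerically: two clocks with slightly different periods give a virtual resolution equal to their period difference, and the ToF is recovered from how many periods elapse before the two edge trains coincide. The sketch below is illustrative only; the clock frequencies are assumptions, not values from the paper.

      # Hedged sketch of the vernier principle; the clock frequencies are assumed.
      T1 = 1 / 4.00e6          # period of the start clock (assumed 4.00 MHz)
      T2 = 1 / 4.04e6          # period of the stop clock (assumed 4.04 MHz)
      dT = T1 - T2             # virtual time resolution (~2.5 ns), far finer than either period

      def estimate_tof(true_tof, n_max=10000):
          """Count stop-clock periods until its edge first catches the delayed start-clock edge."""
          for n in range(1, n_max):
              if n * dT >= true_tof:      # after n periods the stop clock has gained n*dT
                  return n * dT           # ToF quantised with resolution dT
          return None

      print(f"virtual resolution: {dT * 1e9:.2f} ns")
      print(f"estimated ToF for a true 37 ns delay: {estimate_tof(37e-9) * 1e9:.2f} ns")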

  17. Application of the FW-CADIS variance reduction method to calculate a precise N-flux distribution for the FRJ-2 research reactor

    International Nuclear Information System (INIS)

    Abbasi, F.; Nabbi, R.; Thomauske, B.; Ulrich, J.

    2014-01-01

    For the decommissioning of nuclear facilities, activity and dose rate atlases (ADAs) are required to create and manage a decommissioning plan and optimize the radiation protection measures. By the example of the research reactor FRJ-2, a detailed MCNP model for Monte-Carlo neutron and radiation transport calculations based on a full scale outer core CAD-model was generated. To cope with the inadequacies of the MCNP code for the simulation of a large and complex system like FRJ-2, the FW-CADIS method was embedded in the MCNP simulation runs to optimise particle sampling and weighting. The MAVRIC sequence of the SCALE6 program package, capable of generating importance maps, was applied for this purpose. The application resulted in a significant increase in efficiency and performance of the whole simulation method and in optimised utilization of the computer resources. As a result, the distribution of the neutron flux in the entire reactor structures - as a basis for the generation of the detailed activity atlas - was produced with a low level of variance and a high level of spatial, numerical and statistical precision.

  18. Application of the FW-CADIS variance reduction method to calculate a precise N-flux distribution for the FRJ-2 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Abbasi, F.; Nabbi, R.; Thomauske, B.; Ulrich, J. [RWTH Aachen Univ. (Germany). Inst. of Nuclear Engineering and Technology

    2014-11-15

    For the decommissioning of nuclear facilities, activity and dose rate atlases (ADAs) are required to create and manage a decommissioning plan and optimize the radiation protection measures. By the example of the research reactor FRJ-2, a detailed MCNP model for Monte-Carlo neutron and radiation transport calculations based on a full scale outer core CAD-model was generated. To cope with the inadequacies of the MCNP code for the simulation of a large and complex system like FRJ-2, the FW-CADIS method was embedded in the MCNP simulation runs to optimise particle sampling and weighting. The MAVRIC sequence of the SCALE6 program package, capable of generating importance maps, was applied for this purpose. The application resulted in a significant increase in efficiency and performance of the whole simulation method and in optimised utilization of the computer resources. As a result, the distribution of the neutron flux in the entire reactor structures - as a basis for the generation of the detailed activity atlas - was produced with a low level of variance and a high level of spatial, numerical and statistical precision.

  19. A Monte Carlo Simulation Comparing the Statistical Precision of Two High-Stakes Teacher Evaluation Methods: A Value-Added Model and a Composite Measure

    Science.gov (United States)

    Spencer, Bryden

    2016-01-01

    Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…

  20. Analysis of the accuracy and precision of the McMaster method in detection of the eggs of Toxocara and Trichuris species (Nematoda) in dog faeces.

    Science.gov (United States)

    Kochanowski, Maciej; Dabrowska, Joanna; Karamon, Jacek; Cencek, Tomasz; Osiński, Zbigniew

    2013-07-01

    The aim of this study was to determine the accuracy and precision of the McMaster method with Raynaud's modification in the detection of the eggs of the nematodes Toxocara canis (Werner, 1782) and Trichuris ovis (Abildgaard, 1795) in faeces of dogs. Four variants of the McMaster method were used for counting: in one grid, two grids, the whole McMaster chamber, and flotation in the tube. One hundred sixty samples were prepared from dog faeces (20 repetitions for each egg quantity) containing 15, 25, 50, 100, 150, 200, 250 and 300 eggs of T. canis and T. ovis in 1 g of faeces. To compare the influence of the kind of faeces on the results, samples of dog faeces were enriched at the same levels with the eggs of another nematode, Ascaris suum Goeze, 1782. In addition, 160 samples of pig faeces were prepared and enriched only with A. suum eggs in the same way. The highest limit of detection (the lowest level of eggs that was detected in at least 50% of repetitions) in all McMaster chamber variants was obtained for T. canis eggs (25-250 eggs/g faeces). In the variant with flotation in the tube, the highest limit of detection was obtained for T. ovis eggs (100 eggs/g). The best results for the limit of detection and sensitivity, and the lowest coefficients of variation, were obtained with the whole McMaster chamber variant. There was no significant impact of the properties of the faeces on the obtained results. Multiplication factors for the whole chamber were calculated on the basis of the transformed equation of the regression line illustrating the relationship between the number of detected eggs and that of the eggs added to the sample. The multiplication factors calculated for T. canis and T. ovis eggs were higher than those expected using the McMaster method with Raynaud's modification.
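
    As a rough illustration of that last step, a multiplication factor can be obtained by regressing the counted eggs against the number of eggs added per gram and inverting the slope. The counts in the sketch below are invented solely to show the calculation and are not data from the study.

      # Hedged sketch: multiplication factor from the detected-vs-added regression line.
      import numpy as np

      added    = np.array([15, 25, 50, 100, 150, 200, 250, 300])   # eggs/g spiked (study design levels)
      detected = np.array([ 3,  6, 12,  24,  37,  49,  62,  74])   # illustrative mean whole-chamber counts

      slope, intercept = np.polyfit(added, detected, 1)
      multiplication_factor = 1.0 / slope          # eggs per gram represented by one counted egg
      print(f"multiplication factor ~ {multiplication_factor:.1f}")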

  1. Development of cell-based quantitative evaluation method for cell cycle-arrest type cancer drugs for apoptosis by high precision surface plasmon resonance sensor

    Science.gov (United States)

    Ona, Toshihiro; Nishijima, Hiroshi; Kosaihira, Atsushi; Shibata, Junko

    2008-04-01

    A rapid and quantitative in vitro cell-based assay is in demand to verify efficacy predictions for cancer drugs, since a cancer patient may have unconventional aspects of tumor development. Here, we show a rapid, label-free quantitative verification method and instrumentation for the apoptosis induced by cell cycle-arrest type cancer drugs (Roscovitine and D-allose), based on reaction analysis of living liver cancer cells cultured on a sensor chip with a newly developed high-precision (50 ndeg s⁻¹ average fluctuation) surface plasmon resonance (SPR) sensor. The time course of the cell reaction, expressed as the SPR angle change rate over 10 min after 30 min of cell culture with a drug, was significantly related to cell viability. By the simultaneous detection of the differential SPR angle change and of fluorescence from specific probes using the new instrument, the SPR angle was related to the nano-order decrease in the inner mitochondrial membrane potential. The results obtained are universally valid for cell cycle-arrest type cancer drugs, which mediate apoptosis through different cell-signaling pathways, in the liver cancer cell line Hep G2 (P<0.001). This system points towards application in evaluating the personal therapeutic potential of drugs using cancer cells from patients in clinical use.

  2. Development of a highly precise ID-ICP-SFMS method for analysis of low concentrations of lead in rice flour reference materials.

    Science.gov (United States)

    Zhu, Yanbei; Inagaki, Kazumi; Yarita, Takashi; Chiba, Koichi

    2008-07-01

    Microwave digestion and isotope dilution inductively coupled plasma mass spectrometry (ID-ICP-SFMS) have been applied to the determination of Pb in rice flour. In order to achieve highly precise determination of low concentrations of Pb, the digestion blank for Pb was reduced to 0.21 ng g⁻¹ after optimization of the digestion conditions, in which a 20 mL analysis solution was obtained after digestion of 0.5 g rice flour. The observed value of Pb in a non-fat milk powder certified reference material (CRM), NIST SRM 1549, was 16.8 +/- 0.8 ng g⁻¹ (mean +/- expanded uncertainty, k = 2; n = 5), which agreed with the certified value of 19 +/- 3 ng g⁻¹ and indicated the effectiveness of the method. Analytical results for Pb in three brown rice flour CRMs, NIST SRM 1568a, NIES CRM 10-a, and NIES CRM 10-b, were 7.32 +/- 0.24 ng g⁻¹ (n = 5), 1010 +/- 10 ng g⁻¹ (n = 5), and 1250 +/- 20 ng g⁻¹ (n = 5), respectively. The concentration of Pb in a candidate white rice flour reference material (RM) sample prepared by the National Metrology Institute of Japan (NMIJ) was observed to be 4.36 +/- 0.28 ng g⁻¹ (n = 10 bottles).

  3. High-precision drop shape analysis on inclining flat surfaces: introduction and comparison of this special method with commercial contact angle analysis.

    Science.gov (United States)

    Schmitt, Michael; Heib, Florian

    2013-10-07

    Drop shape analysis is one of the most important and frequently used methods to characterise surfaces in the scientific and industrial communities. An especially large number of studies that use contact angle measurements to analyse surfaces are characterised by incorrect or misdirected conclusions, such as the determination of surface energies from poorly performed contact angle determinations. In particular, the characterisation of surfaces that leads to correlations between the contact angle and other effects must be critically validated for some publications. A large number of works exist concerning the theoretical and thermodynamic aspects of two- and tri-phase boundaries. The linkage between theory and experiment is generally performed by an axisymmetric drop shape analysis, that is, simulations of the theoretical drop profiles by numerical integration onto a number of points of the drop meniscus (approximately 20). These methods work very well for axisymmetric profiles such as those obtained by pendant drop measurements, but in the case of a sessile drop on real surfaces, additional unknown and poorly understood surface-dependent effects must be considered. We present a special experimental and practical investigation as another way to transition from experiment to theory. This procedure was developed to be especially sensitive to small variations in the dependence of the dynamic contact angle on the surface; as a result, this procedure allows the properties of the surface to be monitored with higher precision and sensitivity. In this context, water drops on a (111) silicon wafer are dynamically measured by video recording and by inclining the surface, which results in a sequence of non-axisymmetric drops. The drop profiles are analysed by commercial software and by the developed and presented high-precision drop shape analysis. In addition to the enhanced sensitivity for contact angle determination, this analysis technique, in

  4. Quantitative real-time RT-PCR and chromogenic in situ hybridization: precise methods to detect HER-2 status in breast carcinoma

    Directory of Open Access Journals (Sweden)

    Soares Fernando A

    2009-03-01

    Full Text Available Abstract Background HER-2 gene testing has become an integral part of breast cancer patient diagnosis. The most commonly used assays in the clinical setting for evaluating HER-2 status are immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH). These procedures permit correlation between HER-2 expression and morphological features. However, FISH signals are labile and fade over time, making post-revision of the tumor difficult. CISH (chromogenic in situ hybridization) is an alternative procedure, with certain advantages, although still limited as a diagnostic tool in breast carcinomas. Methods To elucidate the molecular profile of HER-2 status, mRNA and protein expression in 75 invasive breast carcinomas were analyzed by real-time quantitative RT-PCR (qRT-PCR) and IHC, respectively. Amplifications were evaluated in 43 of these cases by CISH and in 11 by FISH. Results The concordance rate between IHC and qRT-PCR results was 78.9%, and 94.6% between qRT-PCR and CISH. Intratumoral heterogeneity of HER-2 status was identified in three cases by CISH. The results of the three procedures were compared and showed a concordance rate of 83.8%; higher discordances were observed in 0 or 1+ immunostaining cases, which showed high-level amplification (15.4%) and HER-2 transcript overexpression (20%). Moreover, 2+ immunostaining cases presented nonamplified status (50%) by CISH and HER-2 downexpression (38.5%) by qRT-PCR. In general, concordance occurred between qRT-PCR and CISH results. A high concordance was observed between CISH/qRT-PCR and FISH. Comparisons with clinicopathological data revealed a significant association between HER-2 downexpression and the involvement of less than four lymph nodes (P = 0.0350). Conclusion Based on these findings, qRT-PCR was more precise and reproducible than IHC. Furthermore, CISH was revealed as an alternative and useful procedure for investigating amplifications involving the HER-2 gene.

  5. Precision of a photogrammetric method to perform 3D wound measurements compared to standard 2D photographic techniques in the horse.

    Science.gov (United States)

    Labens, R; Blikslager, A

    2013-01-01

    Methods of 3D wound imaging in man play an important role in monitoring of healing and determination of the prognosis. Standard photographic assessments in equine wound management consist of 2D analyses, which provide little quantitative information on the wound bed. 3D imaging of equine wounds is feasible using principles of stereophotogrammetry. 3D measurements differ significantly and are more precise than results with standard 2D assessments. Repeated specialised photographic imaging of 4 clinical wounds left to heal by second intention was performed. The intraoperator variability in measurements due to imaging and 3D processing was compared to that of a standard 2D technique using descriptive statistics and multivariate repeated measures ANOVA. Using a custom made imaging system, 3D analyses were successfully performed. Area and circumference measurements were significantly different between imaging modalities. The intraoperator variability of 3D measurements was up to 2.8 times less than that of 2D results. On average, the maximum discrepancy between repeated measurements was 5.8% of the mean for 3D and 17.3% of the mean for 2D assessments. The intraoperator repeatability of 3D wound measurements based on principles of stereophotogrammetry is significantly increased compared to that of a standard 2D photographic technique indicating it may be a useful diagnostic and monitoring tool. The equine granulation bed plays an important role in equine wound healing. When compared to 2D analyses 3D monitoring of the equine wound bed allows superior quantitative characterisation, contributing to clinical and experimental investigations by offering potential new parameters. © 2012 EVJ Ltd.

  6. FROM PERSONALIZED TO PRECISION MEDICINE

    Directory of Open Access Journals (Sweden)

    K. V. Raskina

    2017-01-01

    Full Text Available The need to maintain a high quality of life against a backdrop of its inevitably increasing duration is one of the main problems of modern health care. The concept of "the right drug to the right patient at the right time", initially known as "personalized" medicine, is now unanimously endorsed by the international scientific community as "precision medicine". Precision medicine takes all individual characteristics into account: genetic diversity, environment, lifestyle, and even the bacterial microflora, and it also involves the use of the latest technological developments, which serves to ensure that each patient receives the assistance best suited to his or her state. In the United States, Canada and France, national precision medicine programs have already been presented and implemented. The aim of this review is to describe the dynamic integration of precision medicine methods into routine medical practice and the life of modern society. The description of the new paradigm's prospects is complemented by figures proving the success already achieved in the application of precision methods, for example in the targeted therapy of cancer. All in all, the presence of real-life examples proving the regularity of the transition to a new paradigm, together with the wide and constantly evolving range of technical and diagnostic capabilities available, makes the all-round transition to precision medicine almost inevitable.

  7. Development of a Method to Assess the Precision Of the z-axis X-ray Beam Collimation in a CT Scanner

    Science.gov (United States)

    Kim, Yon-Min

    2018-05-01

    X-ray equipment generally specifies beam collimator accuracy measurement as a quality control item, but the computed tomography (CT) scanner, despite its high dose, has no collimator accuracy measurement item. If the radiation dose is to be reduced, an important step is to check whether the beam is precisely collimated at the body part being scanned. However, few ways are available to assess how precisely the X-ray beam is collimated. In this regard, this paper provides a way to assess the precision of z-axis X-ray beam collimation in a CT scanner. After the image plate cassette had been exposed to the X-ray beam, the exposed width was automatically detected by a computer program developed by the research team, which calculated the difference between the exposed width and the imaged width (at isocenter). The assessment of the precision of z-axis X-ray beam collimation showed that the exposed width was 3.8 mm, an overexposure as high as 304%, when a narrow beam of 1.25 mm imaged width was used. In this study, the precision of the beam collimation of the CT scanner, which is frequently used for medical services, was measured in a convenient way by using the image plate (IP) cassette.
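
    The overexposure figure quoted above appears to be the ratio of the measured exposed width to the nominal imaged width at isocentre; the following minimal sketch of that arithmetic rests on that assumption about the paper's definition.

      # Hedged sketch: overexposure expressed as exposed width / nominal imaged width.
      def overexposure_percent(exposed_width_mm, imaged_width_mm):
          return 100.0 * exposed_width_mm / imaged_width_mm

      print(overexposure_percent(3.8, 1.25))   # ~304%, matching the narrow-beam result reported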

  8. Precision muon physics

    Science.gov (United States)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ /μp, lepton mass ratio mμ /me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  9. Technology: cancer treatment: breath control set radiotherapy free. Two new methods allow to aim the tumors with precision without suffering of respiratory move

    International Nuclear Information System (INIS)

    Blanc, S.

    2004-01-01

    The challenge of radiotherapy consists in improving the ratio between the destruction of tumor cells and the preservation of healthy cells. The efficiency of the treatment depends on how precisely the radiation hits the tumor, but this precision is difficult to achieve because the patient's respiration makes the target mobile. It is now possible to obtain this precision, either by having the patient hold his or her breath or by recording the respiratory movements and then setting the irradiation window as a function of the optimum exposure of the tumor. (N.C.)

  10. Study of the precision and trueness of the Brazilian method for ethanol and gasoline determination; Estudo da precisao e exatidao do metodo brasileiro para determinacao de etanol e gasolina

    Energy Technology Data Exchange (ETDEWEB)

    Zucchini, Ricardo R.; Hinata, Patricia [Instituto de Pesquisas Tecnologicas (IPT), Sao Paulo, SP (Brazil); Gioseffi, Carla S.; Franco, Joao B.S. [Instituto Brasileiro de Petroleo, Gas e Biocombustiveis (IBP), Rio de Janeiro, RJ (Brazil); Nascimento, Cristina R.; Torres, Eduardo S. [Agencia Nacional do Petroleo, Gas Natural e Biocombustiveis (ANP), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    The determination of the repeatability and reproducibility standard deviations of an analytical method, s_r and s_R, obtained through an interlaboratory program, makes it possible to calculate various precision limits of the method, which are needed in every laboratory's routine result comparisons and also in between-laboratory comparisons. This paper presents the results of the first interlaboratory trial accomplished in the Brazilian petroleum sector, performed to define the trueness and precision of the Brazilian standard method for the determination of the anhydrous ethyl alcohol content of gasoline, which was carried out by 34 experienced laboratories. The r and R values were 0.7 and 2.3, and the main factors that would improve and optimize the method are presented. (author)
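
    Assuming the usual ISO 5725 convention, the repeatability and reproducibility limits follow from the interlaboratory standard deviations as r = 2.8·s_r and R = 2.8·s_R; the standard deviations in the sketch below are illustrative values chosen only to reproduce the quoted limits, not results from the trial.

      # Minimal sketch under the assumed ISO 5725 convention; s_r and s_R are illustrative.
      def precision_limits(s_r, s_R, factor=2.8):
          return factor * s_r, factor * s_R

      s_r, s_R = 0.25, 0.82                # illustrative standard deviations (% v/v ethanol)
      r, R = precision_limits(s_r, s_R)
      print(f"r = {r:.1f}, R = {R:.1f}")   # ~0.7 and ~2.3, the values quoted above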

  11. The newest precision measurement

    International Nuclear Information System (INIS)

    Lee, Jing Gu; Lee, Jong Dae

    1974-05-01

    This book introduces the basics of precision measurement, measurement of length, limit gauge, measurement of angles, measurement of surface roughness, measurement of shapes and locations, measurement of outline, measurement of external and internal thread, gear testing, accuracy inspection of machine tools, three dimension coordinate measuring machine, digitalisation of precision measurement, automation of precision measurement, measurement of cutting tools, measurement using laser, and points to consider when choosing a length-measuring instrument.

  12. Practical precision measurement

    International Nuclear Information System (INIS)

    Kwak, Ho Chan; Lee, Hui Jun

    1999-01-01

    This book introduces basic knowledge of precision measurement, measurement of length, precision measurement of minor diameter, measurement of angles, measurement of surface roughness, three dimensional measurement, measurement of locations and shapes, measurement of screw, gear testing, cutting tools testing, rolling bearing testing, and measurement of digitalisation. It covers the height gauge, how to test surface roughness, measurement of flatness and straightness, external and internal thread testing, gear tooth measurement, milling cutters, taps, rotation precision measurement, and optical transducers.

  13. [Precision and personalized medicine].

    Science.gov (United States)

    Sipka, Sándor

    2016-10-01

    The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms "phenotype", "endotype" and "biomarker" in order to characterize the various diseases more precisely. Using "biomarkers", a homogeneous type of disease (a "phenotype") can be divided into subgroups called "endotypes" requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation to allergic and autoimmune diseases. The application of this new way of thinking is going to be necessary in Hungary, too, in the near future for participants, controllers and financing boards of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.

  14. Precision Clock Evaluation Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Tests and evaluates high-precision atomic clocks for spacecraft, ground, and mobile applications. Supports performance evaluation, environmental testing,...

  15. Recent improvement of a FIB-SEM serial-sectioning method for precise 3D image reconstruction - application of the orthogonally-arranged FIB-SEM.

    Science.gov (United States)

    Hara, Toru

    2014-11-01

    Introduction: We installed the first "orthogonally-arranged" FIB-SEM in 2011. The most characteristic point of this instrument is that the FIB and SEM columns are mounted perpendicular to each other; this is specially designed to obtain a serial-sectioning dataset more accurately and precisely, with higher contrast and higher spatial resolution compared to other current FIB-SEMs [1]. Since the installation in 2011, we have developed the hardware and methodology of serial sectioning based on this orthogonal FIB-SEM. In order to develop this technique, we have widely opened this instrument to researchers of all fields. In the presentation, I would like to introduce some of the application results obtained by users of this instrument. The characteristic points of the orthogonal system: Figure 1 shows the difference between the standard and the orthogonal FIB-SEM systems. In the standard system, shown in Fig. 1(a), the optical axes of the FIB and the SEM cross at around 60°, while in the orthogonal system (Fig. 1(b)) they are perpendicular to each other. The standard arrangement (a) is certainly suitable for TEM lamella preparation etc., because the FIB and the SEM can see the same position simultaneously. However, for serial sectioning it is not necessarily the best arrangement. One of the reasons is that the plane sliced by the FIB is not perpendicular to the electron beam, so the background contrast is not uniform and the observed plane is distorted. In the orthogonally-arranged system (b), these problems are resolved. In addition, the spatial resolution can be kept high enough even at a low accelerating voltage (e.g. 500 V) because the working distance is set very small, 2 mm. Owing to this special design, we can obtain serial-sectioning datasets from a rather wide area (∼100 μm) with high spatial resolution (max. 2×2×2 nm). As this system has many kinds of detectors: SE, ET, Backscattered Electron (energy-selective), EDS, EBSD, STEM (BF & ADF), with an Ar+ ion gun and a

  16. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds; Metodo de titulacao potenciometrica de alta precisao semi-automatizado para a caracterizacao de compostos de uranio

    Energy Technology Data Exchange (ETDEWEB)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da, E-mail: barbara@ird.gov.b, E-mail: fabio@ird.gov.b, E-mail: pedrodio@ird.gov.b, E-mail: radier@ird.gov.b, E-mail: delgado@ird.gov.b, E-mail: wanderley@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Lopes, Ricardo T., E-mail: ricardo@lin.ufrj.b [Universidade Federal do Rio de Janeiro (LIN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Lab. de Instrumentacao Nuclear

    2011-10-26

    The method of high-precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of the concentration of total uranium was of the order of 0.01%, which is better than that of the methods traditionally used by nuclear installations, which is of the order of 0.1%

  17. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    Science.gov (United States)

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SD(SE)): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SD(SE): 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice.
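
    The comparison step described above can be pictured as combining the two rigid registrations into a single stem-relative transform. The sketch below is a generic illustration of that idea with homogeneous matrices, not the authors' implementation; the anatomical coordinate-system conventions are omitted.

      # Hedged sketch: relative stem-bone motion from two 4x4 rigid transforms (illustrative only).
      import numpy as np

      def relative_motion(T_stem, T_bone):
          """Bone transform expressed relative to the stem (both 4x4 homogeneous matrices)."""
          return np.linalg.inv(T_stem) @ T_bone

      def translation_and_rotation(T):
          """Split a rigid transform into a translation vector (mm) and a rotation angle (degrees)."""
          t = T[:3, 3]
          angle = np.degrees(np.arccos(np.clip((np.trace(T[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)))
          return t, angle

      # Toy example: identical stem and bone transforms imply zero migration
      T = np.eye(4)
      print(translation_and_rotation(relative_motion(T, T)))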

  18. Precision machining commercialization

    International Nuclear Information System (INIS)

    1978-01-01

    To accelerate precision machining development so as to realize more of the potential savings within the next few years of known Department of Defense (DOD) part procurement, the Air Force Materials Laboratory (AFML) is sponsoring the Precision Machining Commercialization Project (PMC). PMC is part of the Tri-Service Precision Machine Tool Program of the DOD Manufacturing Technology Five-Year Plan. The technical resources supporting PMC are provided under sponsorship of the Department of Energy (DOE). The goal of PMC is to minimize precision machining development time and cost risk for interested vendors. PMC will do this by making available the high precision machining technology as developed in two DOE contractor facilities, the Lawrence Livermore Laboratory of the University of California and the Union Carbide Corporation, Nuclear Division, Y-12 Plant, at Oak Ridge, Tennessee

  19. submitter LEP precision results

    CERN Document Server

    Kawamoto, T

    2001-01-01

    Precision measurements at LEP are reviewed, with main focus on the electroweak measurements and tests of the Standard Model. Constraints placed by the LEP measurements on possible new physics are also discussed.

  20. Description of precision colorimeter

    OpenAIRE

    Campos Acosta, Joaquín; Pons Aglio, Alicia; Corróns, Antonio

    1987-01-01

    Describes the use of a fully automatic, computer-controlled absolute spectroradiometer as a precision colorimeter. The chromaticity coordinates of several types of light sources have been obtained with this measurement system.

  1. NCI Precision Medicine

    Science.gov (United States)

    This illustration represents the National Cancer Institute’s support of research to improve precision medicine in cancer treatment, in which unique therapies treat an individual’s cancer based on specific genetic abnormalities of that person’s tumor.

  2. Laser precision microfabrication

    CERN Document Server

    Sugioka, Koji; Pique, Alberto

    2010-01-01

    Miniaturization and high precision are rapidly becoming a requirement for many industrial processes and products. As a result, there is greater interest in the use of laser microfabrication technology to achieve these goals. This book, composed of 16 chapters, covers all the topics of laser precision processing, from fundamental aspects to industrial applications, for both inorganic and biological materials. It reviews the state of the art of research and technological development in the area of laser processing.

  3. Environment-assisted precision measurement

    DEFF Research Database (Denmark)

    Goldstein, G.; Cappellaro, P.; Maze, J. R.

    2011-01-01

    We describe a method to enhance the sensitivity of precision measurements that takes advantage of the environment of a quantum sensor to amplify the response of the sensor to weak external perturbations. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas that are coupled strongly to the sensor qubit; it does not depend on the exact values of the coupling strengths and is resilient to many forms of decoherence. The method achieves nearly Heisenberg-limited precision measurement, using a novel class of entangled states. We discuss specific applications to improve clock sensitivity using trapped ions and magnetic sensing based on electronic spins in diamond...

  4. Comments on ''The optimization of electronic precision in ultrasonic velocity measurements: A comparison of the time interval averaging and sing around methods'' [J. Acoust. Soc. Am. 73, 1833--1837 (1983)

    International Nuclear Information System (INIS)

    Karplus, H.B.

    1984-01-01

    J. D. Aindow and R. C. Chivers [J. Acoust. Soc. Am. 73, 1833 (1983)] compared the precision of the direct ''time-of-flight'' technique with the ''sing-around'' method for sound velocity measurement. Their conclusion is changed by the newer, faster commercial clocks (2 ns for the HP 5345 versus <0.1 ns for the HP 5370), giving the advantage to the time-of-flight method. The analysis is herewith augmented by calculating the time jitter in terms of the signal-to-noise ratio, which was correctly shown to be negligible with 100-ns clocks, but becomes increasingly more significant with faster clocks

  5. Accuracy of a method based on atomic absorption spectrometry to determine inorganic arsenic in food: Outcome of the collaborative trial IMEP-41.

    Science.gov (United States)

    Fiamegkos, I; Cordeiro, F; Robouch, P; Vélez, D; Devesa, V; Raber, G; Sloth, J J; Rasmussen, R R; Llorente-Mirandes, T; Lopez-Sanchez, J F; Rubio, R; Cubadda, F; D'Amato, M; Feldmann, J; Raab, A; Emteborg, H; de la Calle, M B

    2016-12-15

    A collaborative trial was conducted to determine the performance characteristics of an analytical method for the quantification of inorganic arsenic (iAs) in food. The method is based on (i) solubilisation of the protein matrix with concentrated hydrochloric acid to denature proteins and allow the release of all arsenic species into solution, and (ii) subsequent extraction of the inorganic arsenic present in the acid medium using chloroform, followed by back-extraction to acidic medium. The final detection and quantification are performed by flow injection hydride generation atomic absorption spectrometry (FI-HG-AAS). The seven test items used in this exercise were reference materials covering a broad range of matrices: mussels, cabbage, seaweed (hijiki), fish protein, rice, wheat, and mushrooms, with concentrations ranging from 0.074 to 7.55 mg kg⁻¹. The relative standard deviation for repeatability (RSDr) ranged from 4.1 to 10.3%, while the relative standard deviation for reproducibility (RSDR) ranged from 6.1 to 22.8%. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
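
    For context, RSDr and RSDR are typically derived from such a collaborative trial with a one-way analysis of variance over laboratories (an ISO 5725-style calculation, assumed here); the replicate data in the sketch below are invented purely to show the arithmetic.

      # Hedged sketch of an ISO 5725-style RSDr/RSDR calculation; the data are made up.
      import numpy as np

      def rsd_r_and_R(results):
          """results: one row per laboratory, replicate results in columns."""
          results = np.asarray(results, dtype=float)
          p, n = results.shape                               # p laboratories, n replicates each
          lab_means = results.mean(axis=1)
          grand_mean = results.mean()
          ms_within = ((results - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
          ms_between = n * ((lab_means - grand_mean) ** 2).sum() / (p - 1)
          s_r2 = ms_within                                   # repeatability variance
          s_L2 = max((ms_between - ms_within) / n, 0.0)      # between-laboratory variance
          s_R2 = s_r2 + s_L2                                 # reproducibility variance
          return 100 * np.sqrt(s_r2) / grand_mean, 100 * np.sqrt(s_R2) / grand_mean

      data = [[0.075, 0.077], [0.072, 0.074], [0.079, 0.078], [0.071, 0.073]]  # mg/kg, illustrative
      print("RSDr = %.1f %%, RSDR = %.1f %%" % rsd_r_and_R(data))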

  6. Mixed-Precision Spectral Deferred Correction: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6], which require double-precision accuracy but are performance-limited by the cost of data motion.
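
    As a rough, self-contained illustration of the principle (not the S3D or multi-level SDC implementation referred to above), the Picard-style deferred-correction sweep below evaluates the right-hand side in float32 during the early sweeps and switches to float64 only for the final ones; the node choice, sweep counts and test problem are all assumptions made for the demo.

      # Hedged sketch of mixed-precision deferred correction on y' = f(y); illustrative only.
      import numpy as np
      from numpy.polynomial import polynomial as P

      def quad_matrix(nodes):
          """Q[m, j] = integral of the j-th Lagrange basis polynomial from nodes[0] to nodes[m]."""
          M = len(nodes)
          Q = np.zeros((M, M))
          for j in range(M):
              e = np.zeros(M); e[j] = 1.0
              coef = P.polyfit(nodes, e, M - 1)        # power-series coefficients of the basis polynomial
              vals = P.polyval(nodes, P.polyint(coef)) # antiderivative evaluated at the nodes
              Q[:, j] = vals - vals[0]
          return Q

      def sdc_step(f, y0, dt, n_sweeps=10, n_low=6, M=5):
          unit = 0.5 * (1 - np.cos(np.pi * np.arange(M) / (M - 1)))   # Chebyshev-like nodes on [0, 1]
          Q = dt * quad_matrix(unit)
          y = np.full(M, y0, dtype=np.float64)
          for k in range(n_sweeps):
              prec = np.float32 if k < n_low else np.float64          # cheap sweeps in reduced precision
              fy = f(y.astype(prec)).astype(np.float64)
              y = y0 + Q @ fy                                         # Picard/deferred-correction update
          return y[-1]

      lam = -2.0
      print(sdc_step(lambda y: lam * y, 1.0, 0.1), np.exp(lam * 0.1))  # should agree to several digits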

  7. Study to improve the precision of calculation of split renal clearance by gamma camera method using 99mTc-MAG3

    International Nuclear Information System (INIS)

    Mimura, Hiroaki; Tomomitsu, Tatsushi; Yanagimoto, Shinichi

    1999-01-01

    Both fundamental and clinical studies were performed to improve the precision with which split renal clearance is calculated from the relation between renal clearance and the total renal uptake rate by using 99mTc-MAG3, which is mainly excreted into the proximal renal tubules. In the fundamental study, the most suitable kidney phantom threshold values for the extracted renal outline were investigated with regard to size, radioactivity, depth of the kidney phantom, and radioactivity in the background. In the clinical study, suitable timing to obtain additional images for making the ROI and the standard point for calculation of the renal uptake rate were investigated. The results indicated that, although suitable threshold values were distributed from 25% to 45%, differences in size, solution activity, and the position of the phantom or BG activity did not have significant effects. Comparing 1-3 min with 2-5 min as the time for additional images for the ROI, we found that renal areas using the former time showed higher values, and the correlation coefficient of the regression formula improved significantly. Comparison of the timing for the start of data acquisition with the end of the arterial phase as a standard point for calculating the renal uptake rate showed improvement in the latter. (author)

  8. Precise small-angle X-ray scattering evaluation of the pore structures in track-etched membranes: Comparison with other convenient evaluation methods

    Energy Technology Data Exchange (ETDEWEB)

    Miyazaki, Tsukasa, E-mail: t_miyazaki@cross.or.jp [Neutron Science and Technology Center, Comprehensive Research Organization for Science and Society, 162-1, Shirakata, Tokai-mura, Naka-gun, Ibaraki 319-1106 (Japan); Takenaka, Mikihito [Department of Polymer Chemistry, Gradual School of Engineering, Kyoto University, Kyotodaigaku-katsura, Kyoto 615-8510 (Japan)

    2017-03-01

    Poly(ethylene terephthalate) (PET)-based track-etched membranes (TMs) with pore sizes ranging from a few nanometers to approximately 1 μm are used in various applications in the biological field, and their pore structures were determined here by small-angle X-ray scattering (SAXS). These TMs, with nanometer-sized cylindrical pores aligned parallel to the film thickness direction, are produced by chemical etching, with aqueous sodium hydroxide solution, of the tracks in PET films irradiated by heavy ions. It is well known that SAXS allows us to precisely and statistically estimate the pore size and the pore size distribution in TMs by using the form factor of a cylinder with an extremely long pore length relative to the pore diameter. The results obtained were compared with those estimated by scanning electron microscopy and gas permeability measurements. The results showed that the gas permeability measurement is convenient for evaluating the pore size of TMs over a wide length scale, and SEM observation is also suited to estimating the pore size, although SEM observation is usually limited to pores above approximately 30 nm.

  9. Validation of Cristallini Sampling Method for UF6 by High Precision Double-Spike Measurements Collaboration between JRC-G.2, Team METRO and SGAS/IAEA

    OpenAIRE

    RICHTER Stephan; HIESS Joe; JAKOBSSON Ulf

    2016-01-01

    The so-called "Cristallini Method" for sampling of UF6 by adsorption and hydrolysis in alumina pellets inside a fluorothene P-10 tube has been developed by the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC) several years ago [1]. This method has several advantages compared to the currently used sampling method, for which UF6 is distilled into a stainless steel tube for transportation, with hydrolysis and isotopic analysis being performed after shipping to t...

  10. Precision Experiments at LEP

    CERN Document Server

    de Boer, Wim

    2015-01-01

    The Large Electron Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Other hallmarks of the LEP results were the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergencies of the SM and {\it predicts} the Hig...

  11. Precision muonium spectroscopy

    International Nuclear Information System (INIS)

    Jungmann, Klaus P.

    2016-01-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s–2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium–antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter. (author)

  12. Precision electron polarimetry

    International Nuclear Information System (INIS)

    Chudakov, E.

    2013-01-01

    A new generation of precise Parity-Violating experiments will require sub-percent accuracy in electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.
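
For context, a standard tree-level result (not quoted in the abstract itself) showing why the Møller analyzing power does not depend on the beam energy: it is a function of the centre-of-mass scattering angle only, with its maximum magnitude at 90°.

```latex
A_{zz}(\theta_{\mathrm{CM}}) \;=\; -\,\frac{\sin^{2}\theta_{\mathrm{CM}}\,\bigl(7+\cos^{2}\theta_{\mathrm{CM}}\bigr)}{\bigl(3+\cos^{2}\theta_{\mathrm{CM}}\bigr)^{2}},
\qquad A_{zz}(90^{\circ}) \;=\; -\tfrac{7}{9} \;\approx\; -0.78 .
```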

  13. A passion for precision

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit

    2006-01-01

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  14. Improving Precision of Types

    DEFF Research Database (Denmark)

    Winther, Johnni

    Types in programming languages provide a powerful tool for the programmer to document the code so that a large aspect of the intent can not only be presented to fellow programmers but also be checked automatically by compilers. The precision with which types model the behavior of programs...... is crucial to the quality of these automated checks, and in this thesis we present three different improvements to the precision of types in three different aspects of the Java programming language. First we show how to extend the type system in Java with a new type which enables the detection of unintended...

  15. Quantitative real-time RT-PCR and chromogenic in situ hybridization: precise methods to detect HER-2 status in breast carcinoma

    International Nuclear Information System (INIS)

    Rosa, Fabíola E; Silveira, Sara M; Silveira, Cássia GT; Bérgamo, Nádia A; Neto, Francisco A Moraes; Domingues, Maria AC; Soares, Fernando A; Caldeira, José RF; Rogatto, Silvia R

    2009-01-01

    HER-2 gene testing has become an integral part of breast cancer patient diagnosis. The most commonly used assays in the clinical setting for evaluating HER-2 status are immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH). These procedures permit correlation between HER-2 expression and morphological features. However, FISH signals are labile and fade over time, making post-revision of the tumor difficult. CISH (chromogenic in situ hybridization) is an alternative procedure with certain advantages, although still limited as a diagnostic tool in breast carcinomas. To elucidate the molecular profile of HER-2 status, mRNA and protein expression in 75 invasive breast carcinomas were analyzed by real-time quantitative RT-PCR (qRT-PCR) and IHC, respectively. Amplification was evaluated in 43 of these cases by CISH and in 11 by FISH. The concordance rate between IHC and qRT-PCR results was 78.9%, and 94.6% between qRT-PCR and CISH. Intratumoral heterogeneity of HER-2 status was identified in three cases by CISH. The results of the three procedures were compared and showed a concordance rate of 83.8%; the largest discordances were observed in 0 or 1+ immunostaining cases, which showed high-level amplification (15.4%) and HER-2 transcript overexpression (20%). Moreover, 2+ immunostaining cases presented nonamplified status (50%) by CISH and HER-2 downexpression (38.5%) by qRT-PCR. In general, concordance was observed between qRT-PCR and CISH results, and a high concordance was observed between CISH/qRT-PCR and FISH. Comparisons with clinicopathological data revealed a significant association between HER-2 downexpression and the involvement of fewer than four lymph nodes (P = 0.0350). Based on these findings, qRT-PCR was more precise and reproducible than IHC. Furthermore, CISH proved to be an alternative and useful procedure for investigating amplification of the HER-2 gene.

  16. Precision physics at LHC

    International Nuclear Information System (INIS)

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered

  17. Precision Muonium Spectroscopy

    NARCIS (Netherlands)

    Jungmann, Klaus P.

    2016-01-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 μs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In

  18. What is precision medicine?

    Science.gov (United States)

    König, Inke R; Fuchs, Oliver; Hansen, Gesine; von Mutius, Erika; Kopp, Matthias V

    2017-10-01

    The term "precision medicine" has become very popular over recent years, fuelled by scientific as well as political perspectives. Despite its popularity, its exact meaning, and how it is different from other popular terms such as "stratified medicine", "targeted therapy" or "deep phenotyping" remains unclear. Commonly applied definitions focus on the stratification of patients, sometimes referred to as a novel taxonomy, and this is derived using large-scale data including clinical, lifestyle, genetic and further biomarker information, thus going beyond the classical "signs-and-symptoms" approach.While these aspects are relevant, this description leaves open a number of questions. For example, when does precision medicine begin? In which way does the stratification of patients translate into better healthcare? And can precision medicine be viewed as the end-point of a novel stratification of patients, as implied, or is it rather a greater whole?To clarify this, the aim of this paper is to provide a more comprehensive definition that focuses on precision medicine as a process. It will be shown that this proposed framework incorporates the derivation of novel taxonomies and their role in healthcare as part of the cycle, but also covers related terms. Copyright ©ERS 2017.

  19. Precise Calculation of Complex Radioactive Decay Chains

    National Research Council Canada - National Science Library

    Harr, Logan J

    2007-01-01

    ...). An application of the exponential moments function is used with a transmutation matrix in the calculation of complex radioactive decay chains to achieve greater precision than can be attained through current methods...
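
The exponential-moments formulation of the report is not reproduced here, but the underlying problem it improves on can be stated compactly: a decay chain is a linear ODE system dN/dt = A N with a transmutation matrix A, whose solution is N(t) = exp(At) N(0). A minimal sketch with hypothetical half-lives:

```python
import numpy as np
from scipy.linalg import expm

def decay_chain(n0, half_lives, t, branching=None):
    """Atom numbers in a linear decay chain after time t, via N(t) = expm(A t) @ N(0),
    where A is the transmutation matrix built from decay constants and branching fractions."""
    lam = np.log(2.0) / np.asarray(half_lives, dtype=float)   # decay constants (0 for stable members)
    A = np.diag(-lam)
    for i in range(len(lam) - 1):
        frac = 1.0 if branching is None else branching[i]
        A[i + 1, i] = frac * lam[i]                           # parent i feeds daughter i+1
    return expm(A * t) @ np.asarray(n0, dtype=float)

# usage: a hypothetical three-member chain, half-lives 1 h and 2 h, stable end member
print(decay_chain(n0=[1e20, 0.0, 0.0], half_lives=[3600.0, 7200.0, np.inf], t=3600.0))
```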

  20. Bias and precision of methods for estimating the difference in restricted mean survival time from an individual patient data meta-analysis

    Directory of Open Access Journals (Sweden)

    Béranger Lueza

    2016-03-01

    Background: The difference in restricted mean survival time, $rmstD(t^*)$, the area between two survival curves up to a time horizon $t^*$, is often used in cost-effectiveness analyses to estimate the treatment effect in randomized controlled trials. A challenge in individual patient data (IPD) meta-analyses is to account for the trial effect. We aimed at comparing different methods to estimate $rmstD(t^*)$ from an IPD meta-analysis. Methods: We compared four methods: the area between Kaplan-Meier curves (experimental vs. control arm), ignoring the trial effect (Naïve Kaplan-Meier); the area between Peto curves computed at quintiles of event times (Peto-quintile); and the weighted average of the areas between either trial-specific Kaplan-Meier curves (Pooled Kaplan-Meier) or trial-specific exponential curves (Pooled Exponential). In a simulation study, we varied the between-trial heterogeneity for the baseline hazard and for the treatment effect (possibly correlated), the overall treatment effect, the time horizon $t^*$, the number of trials and of patients, the use of a fixed or DerSimonian-Laird random effects model, and the proportionality of hazards. We compared the methods in terms of bias, empirical and average standard errors. We used IPD from the Meta-Analysis of Chemotherapy in Nasopharynx Carcinoma (MAC-NPC) and its updated version (MAC-NPC2) for illustration, which included respectively 1,975 and 5,028 patients in 11 and 23 comparisons. Results: The Naïve Kaplan-Meier method was unbiased, whereas the Pooled Exponential and, to a much lesser extent, the Pooled Kaplan-Meier methods showed a bias with non-proportional hazards. The Peto-quintile method underestimated $rmstD(t^*)$, except with non-proportional hazards at $t^*$ = 5 years. In the presence of treatment effect
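
A minimal numpy sketch of the simplest of the four estimators, the Naïve Kaplan-Meier rmstD(t*), computed on hypothetical two-arm data and ignoring the trial effect; the pooled and Peto-quintile variants studied in the paper are not reproduced here.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier estimate: distinct event times and the survival probability just after each."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, s = [], 1.0
    uniq = np.unique(time[event == 1])
    for t in uniq:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def rmst(time, event, t_star):
    """Restricted mean survival time: area under the Kaplan-Meier step function up to t_star."""
    times, surv = kaplan_meier(time, event)
    grid = np.concatenate(([0.0], times[times < t_star], [t_star]))
    steps = np.concatenate(([1.0], surv[times < t_star]))
    return float(np.sum(steps * np.diff(grid)))

def rmst_difference(time, event, arm, t_star):
    """Naive Kaplan-Meier rmstD(t*): experimental minus control RMST, ignoring the trial effect."""
    e, c = arm == 1, arm == 0
    return rmst(time[e], event[e], t_star) - rmst(time[c], event[c], t_star)

# usage with hypothetical data: control (scale 4) vs experimental (scale 6), censored at 10
rng = np.random.default_rng(0)
t = rng.exponential([4.0, 6.0], size=(200, 2)).ravel()
arm = np.tile([0, 1], 200)
event = (t < 10.0).astype(int)
t = np.minimum(t, 10.0)
print(rmst_difference(t, event, arm, t_star=5.0))
```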

  1. Combining within and between instrument information to estimate precision

    International Nuclear Information System (INIS)

    Jost, J.W.; Devary, J.L.; Ward, J.E.

    1980-01-01

    When two instruments, both having replicated measurements, are used to measure the same set of items, between instrument information may be used to augment the within instrument precision estimate. A method is presented which combines the within and between instrument information to obtain an unbiased and minimum variance estimate of instrument precision. The method does not assume the instruments have equal precision
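
The paper's estimator is not spelled out in the abstract; the sketch below is only one plausible variance-components construction for the same setting (two instruments, replicated measurements of the same items, both assumed unbiased), shown to make the within/between idea concrete. It should not be read as the authors' minimum-variance estimator.

```python
import numpy as np

def combined_precision(a, b):
    """a, b: (n_items, n_replicates) measurements of the same items by two instruments,
    both assumed unbiased. Returns a variance estimate per instrument that pools the
    within-instrument (replicate) information with the between-instrument information."""
    n_items, ra = a.shape
    rb = b.shape[1]
    s2a_w = a.var(axis=1, ddof=1).mean()            # within-instrument estimates
    s2b_w = b.var(axis=1, ddof=1).mean()
    # item means differ only through measurement error: Var(mean_a - mean_b) = s2a/ra + s2b/rb
    s2_diff = (a.mean(axis=1) - b.mean(axis=1)).var(ddof=1)
    total_w = s2a_w / ra + s2b_w / rb
    s2a_b = s2_diff * (s2a_w / ra) / total_w * ra    # heuristic split of the between information
    s2b_b = s2_diff * (s2b_w / rb) / total_w * rb
    # pool within and between estimates, weighting by their degrees of freedom
    s2a = (n_items * (ra - 1) * s2a_w + (n_items - 1) * s2a_b) / (n_items * (ra - 1) + n_items - 1)
    s2b = (n_items * (rb - 1) * s2b_w + (n_items - 1) * s2b_b) / (n_items * (rb - 1) + n_items - 1)
    return s2a, s2b

# usage: 50 items, 3 replicates per instrument, true sigmas 1.0 and 2.0
rng = np.random.default_rng(1)
truth = rng.normal(100.0, 10.0, size=50)
a = truth[:, None] + rng.normal(0.0, 1.0, size=(50, 3))
b = truth[:, None] + rng.normal(0.0, 2.0, size=(50, 3))
print(combined_precision(a, b))
```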

  2. Precision medicine for psychopharmacology: a general introduction.

    Science.gov (United States)

    Shin, Cheolmin; Han, Changsu; Pae, Chi-Un; Patkar, Ashwin A

    2016-07-01

    Precision medicine is an emerging medical model that can provide accurate diagnoses and tailored therapeutic strategies for patients based on data pertaining to genes, microbiomes, environment, family history and lifestyle. Here, we provide basic information about precision medicine and newly introduced concepts, such as the precision medicine ecosystem and big data processing, and omics technologies including pharmacogenomics, pharmacometabolomics, pharmacoproteomics, pharmacoepigenomics, connectomics and exposomics. The authors review the current state of omics in psychiatry and the future direction of psychopharmacology as it moves towards precision medicine. Expert commentary: Advances in precision medicine have been facilitated by achievements in multiple fields, including large-scale biological databases, powerful methods for characterizing patients (such as genomics, proteomics, metabolomics, diverse cellular assays, and even social networks and mobile health technologies), and computer-based tools for analyzing large amounts of data.

  3. Development and Validation of a Precise, Single HPLC Method for the Determination of Tolperisone Impurities in API and Pharmaceutical Dosage Forms

    OpenAIRE

    Raju, Thummala Veera Raghava; Seshadri, Raja Kumar; Arutla, Srinivas; Mohan, Tharlapu Satya Sankarsana Jagan; Rao, Ivaturi Mrutyunjaya; Nittala, Someswara Rao

    2012-01-01

    A novel, sensitive, stability-indicating HPLC method has been developed for the quantitative estimation of Tolperisone-related impurities in both bulk drugs and pharmaceutical dosage forms. Effective chromatographic separation was achieved on a C18 stationary phase with a simple mobile phase combination delivered in a simple gradient programme, and quantitation was by ultraviolet detection at 254 nm. The mobile phase consisted of a buffer and acetonitrile delivered at a flow rate of 1.0 ml/min. ...

  4. Precision synchrotron radiation detectors

    International Nuclear Information System (INIS)

    Levi, M.; Rouse, F.; Butler, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab

  5. A passion for precision

    CERN Multimedia

    CERN. Geneva

    2006-01-01

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  6. Quad precision delay generator

    International Nuclear Information System (INIS)

    Krishnan, Shanti; Gopalakrishnan, K.R.; Marballi, K.R.

    1997-01-01

    A Quad Precision Delay Generator delays a digital edge by a programmed amount of time, varying from nanoseconds to microseconds. The output of this generator has an amplitude of the order of tens of volts and rise time of the order of nanoseconds. This was specifically designed and developed to meet the stringent requirements of the plasma focus experiments. Plasma focus is a laboratory device for producing and studying nuclear fusion reactions in hot deuterium plasma. 3 figs

  7. Innovation and optimization of a method of pump-probe polarimetry with pulsed laser beams in view of a precise measurement of parity violation in atomic cesium

    International Nuclear Information System (INIS)

    Chauvat, D.

    1997-10-01

    While parity violation (PV) experiments on highly forbidden transitions have relied on the detection of fluorescence signals, our experiment uses a pump-probe scheme to detect the PV signal directly on a transmitted probe beam. A pulsed laser beam of linear polarization ε₁ excites the atoms on the 6S-7S cesium transition in a collinear electric field E ∥ k_ex. The probe beam (k_pr ∥ k_ex), of linear polarization ε₂ and tuned to the 7S-6P₃/₂ transition, is amplified. The small asymmetry (∼10⁻⁶) in the gain that depends on the handedness of the trihedron (E, ε₁, ε₂) is the manifestation of the PV effect. This is measured as an E-odd apparent rotation of the plane of polarization of the probe beam, using balanced-mode polarimetry. New selection criteria have been devised that allow us to distinguish the true PV signal from fake rotations due to electromagnetic interference, geometrical effects, polarization imperfections, or stray transverse electric and magnetic fields. These selection criteria exploit the symmetry of the PV rotation (a linear dichroism) and the revolution symmetry of the experiment. Using these criteria it is not only possible to reject fake signals, but also to elucidate the underlying physical mechanisms and to measure the relevant defects of the apparatus. The present signal-to-noise ratio allows PV measurements aiming at 10% statistical accuracy to be undertaken. A 1% measurement still requires improvements. Two methods have been demonstrated. The first exploits the amplification of the asymmetry at high gain, one major advantage of our detection method based on stimulated emission. The second method uses both a much higher incident intensity and a special dichroic component which magnifies tiny polarization rotations. (author)

  8. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid.

  9. A new method for the precise absolute calibration of polarization effects in spin-1/2-spin-0 scattering applied to p-α scattering at 25.68 MeV and θ_lab = 117.5°

    Energy Technology Data Exchange (ETDEWEB)

    Clajus, M.; Egun, P.; Grueebler, W.; Hautle, P.; Weber, A. (Eidgenoessische Technische Hochschule, Zurich (Switzerland). Inst. fuer Mittelenergiephysik); Schmelzbach, P.A. (Paul Scherrer Inst., Villigen (Switzerland)); Kretschmer, W.; Haller, M.; Prenzel, C.J.; Rauscher, A.; Schuster, W.; Weidmann, R. (Erlangen-Nuernberg Univ., Erlangen (Germany, F.R.). Physikalisches Inst.)

    1989-08-20

    A new general method for the precise calibration of beam polarization or analyzing power in spin-1/2-spin-0 elastic scattering has been developed. This absolute calibration method uses the double scattering technique in connection with modern polarized ion source technology. It is based on an incident beam with at least two different polarization states and is independent of beam energy and scattering angle. The application to p-α elastic scattering at 25.68 MeV and a lab angle of 117.5° is described. The result is a new determination of the analyzing power to an accuracy of better than 1%, i.e. A_y = 0.8119 ± 0.0076. Systematic errors are extensively discussed. (orig.).

  10. Precision Fit of Screw-Retained Implant-Supported Fixed Dental Prostheses Fabricated by CAD/CAM, Copy-Milling, and Conventional Methods.

    Science.gov (United States)

    de França, Danilo Gonzaga; Morais, Maria Helena; das Neves, Flávio D; Carreiro, Adriana Fonte; Barbosa, Gustavo As

    The aim of this study was to evaluate the effectiveness of fabrication methods (computer-aided design/computer-aided manufacture [CAD/CAM], copy-milling, and conventional casting) on the fit accuracy of three-unit, screw-retained fixed dental prostheses. Sixteen three-unit implant-supported screw-retained frameworks were fabricated to fit an in vitro model. Eight frameworks were fabricated using the CAD/CAM system, four in zirconia and four in cobalt-chromium. Four zirconia frameworks were fabricated using the copy-milled system, and four were cast in cobalt-chromium using conventional casting with premachined abutments. The vertical and horizontal misfit at the implant-framework interface was measured using scanning electron microscopy at ×250. The results for vertical misfit were analyzed using Kruskal-Wallis and Mann-Whitney tests. The horizontal misfits were categorized as underextended, equally extended, or overextended. Statistical analysis established differences between groups according to the chi-square test (α = .05). The mean vertical misfit was 5.9 ± 3.6 μm for CAD/CAM-fabricated zirconia, 1.2 ± 2.2 μm for CAD/CAM-fabricated cobalt-chromium frameworks, 7.6 ± 9.2 μm for copy-milling-fabricated zirconia frameworks, and 11.8 (9.8) μm for conventionally fabricated frameworks. The Mann-Whitney test revealed significant differences between all but the zirconia-fabricated frameworks. A significant association was observed between the horizontal misfits and the fabrication method. The percentage of horizontal misfits that were underextended and overextended was higher in milled zirconia (83.3%), CAD/CAM cobalt-chromium (66.7%), cast cobalt-chromium (58.3%), and CAD/CAM zirconia (33.3%) frameworks. CAD/CAM-fabricated frameworks exhibited smaller vertical misfit and lower variability compared with copy-milled and conventionally fabricated frameworks. The percentage of interfaces equally extended was higher when CAD/CAM and zirconia were used.
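
A sketch of how such an analysis can be assembled with scipy.stats, using entirely hypothetical misfit samples and contingency counts (the study's raw data are not available from the abstract): Kruskal-Wallis across groups, pairwise Mann-Whitney tests, and a chi-square test for the association between fabrication method and horizontal-misfit category.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical vertical-misfit samples (micrometres) for the four framework groups
groups = {
    "CAD/CAM zirconia":     rng.normal(5.9, 3.6, 12).clip(min=0),
    "CAD/CAM Co-Cr":        rng.normal(1.2, 2.2, 12).clip(min=0),
    "copy-milled zirconia": rng.normal(7.6, 9.2, 12).clip(min=0),
    "conventional Co-Cr":   rng.normal(11.8, 9.8, 12).clip(min=0),
}

# overall comparison of vertical misfit across fabrication methods
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# pairwise Mann-Whitney comparisons
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        _, p = stats.mannwhitneyu(groups[names[i]], groups[names[j]])
        print(f"{names[i]} vs {names[j]}: p = {p:.4f}")

# association between fabrication method and horizontal-misfit category (hypothetical counts)
table = np.array([[4, 8, 0],   # rows: groups; columns: under-, equally, over-extended
                  [2, 4, 6],
                  [6, 2, 4],
                  [3, 5, 4]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```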

  11. Accuracy and precision in thermoluminescence dosimetry

    International Nuclear Information System (INIS)

    Marshall, T.O.

    1984-01-01

    The question of accuracy and precision in thermoluminescent dosimetry, particularly in relation to lithium fluoride phosphor, is discussed. The more important sources of error, including those due to the detectors, the reader, annealing and dosemeter design, are identified and methods of reducing their effects on accuracy and precision to a minimum are given. Finally, the accuracy and precision achievable for three quite different applications are discussed, namely, for personal dosimetry, environmental monitoring and for the measurement of photon dose distributions in phantoms. (U.K.)

  12. Magnetic design and method of a superconducting magnet for muon g - 2/EDM precise measurements in a cylindrical volume with homogeneous magnetic field

    Science.gov (United States)

    Abe, M.; Murata, Y.; Iinuma, H.; Ogitsu, T.; Saito, N.; Sasaki, K.; Mibe, T.; Nakayama, H.

    2018-05-01

    A magnetic field design method for the placement of magneto-motive forces (coil blocks (CBs) and an iron yoke) for g-2/EDM measurements has been developed, and candidate placements were designed under superconducting limitations of a current density of 125 A/mm² and a maximum magnetic field on the CBs of less than 5.5 T. Placements of the CBs and an iron yoke with poles were determined by tuning SVD (singular value decomposition) eigenmode strengths. The SVD was applied to a response matrix from the magneto-motive forces to the magnetic field in the muon storage region, and two-dimensional (2D) placements of the magneto-motive forces were designed by tuning the strengths of the magnetic field eigenmodes. The tuning was performed iteratively. Magnetic field ripples in the azimuthal direction were minimized in the design. The candidate magnetic design had five CBs and an iron yoke with center iron poles. The magnet satisfied the homogeneity specifications (0.2 ppm peak-to-peak in the 2D (R, Z) plane, where R and Z are the radial and axial cylindrical coordinates, and less than 1.0 ppm ripple in the ring muon storage volume (0.318 m, 0.0 m)) for spiral muon injection through the iron yoke at the top.
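
A toy illustration of the SVD step described above (not the actual magnet geometry or field model): given a response matrix from coil-block currents to field samples in the storage region, the strongest eigenmodes are used to choose currents that reproduce a homogeneous target field. Matrix sizes and values below are hypothetical.

```python
import numpy as np

def design_currents(response, target_field, n_modes=None):
    """Choose coil-block currents (magneto-motive forces) that best reproduce the target field,
    keeping only the strongest SVD eigenmodes of the response matrix."""
    U, s, Vt = np.linalg.svd(response, full_matrices=False)
    if n_modes is None:
        n_modes = int(np.sum(s > 1e-10 * s[0]))
    strengths = U.T @ target_field                     # required strength of each field eigenmode
    currents = Vt[:n_modes].T @ (strengths[:n_modes] / s[:n_modes])
    return currents, s

# hypothetical toy problem: 5 coil blocks, field sampled at 200 points in the storage volume
rng = np.random.default_rng(3)
response = rng.normal(size=(200, 5))                   # field per unit current of each coil block
target = np.full(200, 3.0)                             # a perfectly homogeneous 3 T target
currents, modes = design_currents(response, target)
residual = response @ currents - target
print("singular values:", modes)
print("peak-to-peak field error (T):", np.ptp(residual))
```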

  13. Applied research and development of neutron activation analysis - Development of the precise analysis method for plastic materials by the use of NAA

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kil Yong; Sim, Sang Kwan; Yoon, Yoon Yeol; Chun, Sang Ki [Korea Institute of Geology, Mining and Materials, Taejon (Korea)

    2000-04-01

    The demand for inorganic analysis of plastics has increased significantly in the microelectronics, environmental, nuclear and resource-recycling fields. The difficulties of chemical analysis methods have led to the application of NAA, which has the great advantages of being non-destructive, blank-free and highly sensitive. The goal of the present work is to optimize and develop NAA procedures for the inorganic analysis of plastics. Even though NAA has unique advantages, it presents two problems for plastics. One is contamination by metallic utensils during sample treatment, and the other is destruction of the sample ampule due to pressure build-up by hydrogen and methane gas formed from the oxyhydrogenation reaction with neutrons. For the first problem, large plastics were cut into pieces after immersion in liquid nitrogen. The second problem was solved by making an aperture in the top side of the sample ampule. These results have been applied to the analysis of various plastic materials used in food and drug containers and in toys for children. Moreover, a Korean irradiation rabbit could be produced by applying these results, and plastic standard reference materials for XRF and ICP analysis could be produced. 36 refs., 6 figs., 37 tabs (Author)

  14. Non-precision approach in manual mode

    Directory of Open Access Journals (Sweden)

    М. В. Коршунов

    2013-07-01

    The method of flying a non-precision approach manually with a constant path angle is considered. The advantage of this method is that constructing the approach with a constant path angle provides a stable flight path. A detailed analysis of the feasibility of an approach flown by this method is also presented. The conclusions contain recommendations regarding the use of the described method of non-precision approach during training flights.
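
A quick arithmetic illustration of why a constant path angle yields a stable, predictable profile: the required vertical speed is simply the ground speed times the tangent of the path angle. The numbers below are generic, not taken from the article.

```python
import math

def descent_rate_fpm(ground_speed_kt, path_angle_deg=3.0):
    """Vertical speed (ft/min) needed to hold a constant descent path angle at a given ground speed."""
    along_track_ft_per_min = ground_speed_kt * 6076.12 / 60.0   # knots -> feet per minute
    return along_track_ft_per_min * math.tan(math.radians(path_angle_deg))

for gs in (100, 120, 140, 160):
    print(f"{gs} kt ground speed -> {descent_rate_fpm(gs):.0f} ft/min on a 3 degree path")
```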

  15. Precision electroweak measurements

    International Nuclear Information System (INIS)

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e⁺e⁻ and p anti-p colliders are presented. Some emphasis is placed on recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs boson mass are discussed

  16. Electroweak precision tests

    International Nuclear Information System (INIS)

    Monteil, St.

    2009-12-01

    This document aims at summarizing a dozen years of the author's research in High Energy Physics, in particular dealing with precision tests of the electroweak theory. Parity-violating asymmetry measurements at LEP with the ALEPH detector, together with global consistency checks of the Kobayashi-Maskawa paradigm within the CKM-fitter group, are gathered in the first part of the document. The second part deals with the unpublished instrumental work on the design, testing, production and commissioning of the elements of the Pre-Shower detector of the LHCb spectrometer at the LHC. Physics perspectives with LHCb are eventually discussed as a conclusion. (author)

  17. Ultra-precision bearings

    CERN Document Server

    Wardle, F

    2015-01-01

    Ultra-precision bearings can achieve extreme accuracy of rotation, making them ideal for use in numerous applications across a variety of fields, including hard disk drives, roundness measuring machines and optical scanners. Ultraprecision Bearings provides a detailed review of the different types of bearing and their properties, as well as an analysis of the factors that influence motion error, stiffness and damping. Following an introduction to basic principles of motion error, each chapter of the book is then devoted to the basic principles and properties of a specific type of bearin

  18. Precise computation of the direct and indirect topographic effects of Helmert's 2nd method of condensation using SRTM30 digital elevation model

    Science.gov (United States)

    Wang, Y.

    2011-01-01

    The direct topographic effect (DTE) and indirect topographic effect (ITE) of Helmert's 2nd method of condensation are computed globally using the digital elevation model (DEM) SRTM30 at 30 arc-second resolution. The computations assume a constant density of the topographic masses. Closed formulas are used in the inner zone of half a degree, and Nagy's formulas are used in the innermost column to treat the singularity of the integrals. To speed up the computations, the 1-dimensional fast Fourier transform (1D FFT) is applied in the outer zone computations. The computational accuracy is limited to 0.1 mGal and 0.1 cm for the direct and indirect effect, respectively. The mean value and standard deviation of the DTE are -0.8 and ±7.6 mGal over land areas. The extreme value of -274.3 mGal is located at latitude -13.579° and longitude 289.496°, at a height of 1426 m in the Andes Mountains. The ITE is negative everywhere and has its minimum of -235.9 cm at the peak of the Himalayas (8685 m). The standard deviation and mean value over land areas are ±15.6 cm and -6.4 cm, respectively. Because the Stokes kernel does not contain the zero- and first-degree spherical harmonics, the mean value of the ITE cannot be compensated through the remove-restore procedure under the Stokes-Helmert scheme, and careful treatment of the mean value of the ITE is required.

  19. Precision siting of a particle accelerator

    International Nuclear Information System (INIS)

    Cintra, Jorge Pimentel

    1996-01-01

    Precise siting is a specialized survey task that requires highly skilled work to avoid unrecoverable errors during project installation. Depending on the stage of the process, different specifications can be applied, calling for different instruments: theodolite, measurement tape, distance meter, invar wire. This paper, based on experience obtained during the installation of particle accelerator equipment, deals with the general principles of precise siting: definition of tolerances, techniques for increasing accuracy, scheduling of location work, sensitivity analysis, and quality control methods. (author)

  20. Precision lifetime measurements

    International Nuclear Information System (INIS)

    Tanner, C.E.

    1994-01-01

    Precision measurements of atomic lifetimes provide important information necessary for testing atomic theory. The authors employ resonant laser excitation of a fast atomic beam to measure excited-state lifetimes by observing the decay-in-flight of the emitted fluorescence. A similar technique was used by Gaupp et al., who reported measurements with precisions of less than 0.2%. Their program includes lifetime measurements of the low-lying p states in alkali and alkali-like systems. Motivation for this work comes from a need to test the atomic many-body perturbation theory (MBPT) that is necessary for the interpretation of parity nonconservation experiments in atomic cesium. The authors have measured the cesium 6p ²P₁/₂ and 6p ²P₃/₂ state lifetimes to be 34.934 ± 0.094 ns and 30.499 ± 0.070 ns, respectively. With minor changes to the apparatus, they have extended their measurements to include the lithium 2p ²P₁/₂ and 2p ²P₃/₂ states

  1. Fundamentals of precision medicine

    Science.gov (United States)

    Divaris, Kimon

    2018-01-01

    Imagine a world where clinicians make accurate diagnoses and provide targeted therapies to their patients according to well-defined, biologically-informed disease subtypes, accounting for individual differences in genetic make-up, behaviors, cultures, lifestyles and the environment. This is not as utopic as it may seem. Relatively recent advances in science and technology have led to an explosion of new information on what underlies health and what constitutes disease. These novel insights emanate from studies of the human genome and microbiome, their associated transcriptomes, proteomes and metabolomes, as well as epigenomics and exposomics—such ‘omics data can now be generated at unprecedented depth and scale, and at rapidly decreasing cost. Making sense and integrating these fundamental information domains to transform health care and improve health remains a challenge—an ambitious, laudable and high-yield goal. Precision dentistry is no longer a distant vision; it is becoming part of the rapidly evolving present. Insights from studies of the human genome and microbiome, their associated transcriptomes, proteomes and metabolomes, and epigenomics and exposomics have reached an unprecedented depth and scale. Much more needs to be done, however, for the realization of precision medicine in the oral health domain. PMID:29227115

  2. Precision measurements in supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Johnathan Lee [Stanford Univ., CA (United States)

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e⁺e⁻ linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  3. Precision Joining Center

    Science.gov (United States)

    Powell, J. W.; Westphal, D. A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10-12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of U.S. industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  4. [Precision Nursing: Individual-Based Knowledge Translation].

    Science.gov (United States)

    Chiang, Li-Chi; Yeh, Mei-Ling; Su, Sui-Lung

    2016-12-01

    U.S. President Obama announced a new era of precision medicine in the Precision Medicine Initiative (PMI). This initiative aims to accelerate the progress of personalized medicine in light of individual requirements for prevention and treatment in order to improve the state of individual and public health. The recent and dramatic development of large-scale biologic databases (such as the human genome sequence), powerful methods for characterizing patients (such as genomics, microbiome, diverse biomarkers, and even pharmacogenomics), and computational tools for analyzing big data are maximizing the potential benefits of precision medicine. Nursing science should follow and keep pace with this trend in order to develop empirical knowledge and expertise in the area of personalized nursing care. Nursing scientists must encourage, examine, and put into practice innovative research on precision nursing in order to provide evidence-based guidance to clinical practice. The applications in personalized precision nursing care include: explanations of personalized information such as the results of genetic testing; patient advocacy and support; anticipation of results and treatment; ongoing chronic monitoring; and support for shared decision-making throughout the disease trajectory. Further, attention must focus on the family and the ethical implications of taking a personalized approach to care. Nurses will need to embrace the paradigm shift to precision nursing and work collaboratively across disciplines to provide the optimal personalized care to patients. If realized, the full potential of precision nursing will provide the best chance for good health for all.

  5. Factors affecting the precision of bone mineral measurements

    International Nuclear Information System (INIS)

    Cormack, J.; Evil, C.A.

    1990-01-01

    This paper discusses some statistical aspects of absorptiometric bone mineral measurements. In particular, the contribution of photon counting statistics to overall precision is estimated, and methods available for carrying out statistical comparisons of bone loss and determining their precision are reviewed. The use of replicate measurements as a means of improving measurement precision is also discussed. 11 refs
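
As an illustration of the replicate-based approach mentioned above, the sketch below follows common densitometry conventions (RMS precision error from duplicate scans and a least significant change of 2.77 times that error); it is a generic example with hypothetical data, not necessarily the specific methods reviewed in the paper.

```python
import numpy as np

def precision_from_replicates(measurements):
    """measurements: (n_subjects, n_replicates) bone-mineral values.
    Returns the RMS precision error (same units), the RMS %CV, and the least significant
    change at 95% confidence (LSC = 2.77 x precision error)."""
    m = np.asarray(measurements, dtype=float)
    sd = m.std(axis=1, ddof=1)                              # per-subject replicate SD
    rms_sd = np.sqrt(np.mean(sd ** 2))                      # RMS precision error
    rms_cv = 100.0 * np.sqrt(np.mean((sd / m.mean(axis=1)) ** 2))
    return rms_sd, rms_cv, 2.77 * rms_sd

# usage: hypothetical duplicate BMD scans (g/cm^2) of 30 subjects
rng = np.random.default_rng(4)
bmd = rng.normal(1.0, 0.12, size=(30, 1)) + rng.normal(0.0, 0.01, size=(30, 2))
print(precision_from_replicates(bmd))
```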

  6. Precision Medicine in Cancer Treatment

    Science.gov (United States)

    Precision medicine helps doctors select cancer treatments that are most likely to help patients based on a genetic understanding of their disease. Learn about the promise of precision medicine and the role it plays in cancer treatment.

  7. Filling the toolbox of precision breeding methods

    NARCIS (Netherlands)

    Schaart, J.G.; Wiel, van de C.C.M.; Lotz, L.A.P.; Smulders, M.J.M.

    2016-01-01

    Plant breeding has resulted in numerous high-quality crop varieties being cultivated nowadays. Breeding based on crossing and selection remains an important and ongoing activity for crop improvement, but needs innovation to be able to address

  8. Mechanics and Physics of Precise Vacuum Mechanisms

    CERN Document Server

    Deulin, E. A; Panfilov, Yu V; Nevshupa, R. A

    2010-01-01

    In this book, Russian expertise in the field of the design of precise vacuum mechanics is summarized. A wide range of physical applications of mechanism design in the electronic, optical-electronic, chemical, and aerospace industries is presented in a comprehensible way. Topics treated include the method of regulating and determining microparticle flows in vacuum equipment and mechanisms for electronics; precise mechanisms of nanoscale precision based on magnetic and electric rheology; precise harmonic rotary and non-coaxial nut-screw linear-motion vacuum feedthroughs with technical parameters considered the best in the world; elastically deformed vacuum motion feedthroughs without friction couples; and a computer system for predicting vacuum mechanism failures. This English edition incorporates a number of features which should improve its usefulness as a textbook without changing the basic organization or the general philosophy of presentation of the subject matter of the original Russian work. Exper...

  9. Precision luminosity measurements at LHCb

    CERN Document Server

    Aaij, Roel; Adinolfi, Marco; Affolder, Anthony; Ajaltouni, Ziad; Akar, Simon; Albrecht, Johannes; Alessio, Federico; Alexander, Michael; Ali, Suvayu; Alkhazov, Georgy; Alvarez Cartelle, Paula; Alves Jr, Antonio Augusto; Amato, Sandra; Amerio, Silvia; Amhis, Yasmine; An, Liupan; Anderlini, Lucio; Anderson, Jonathan; Andreassen, Rolf; Andreotti, Mirco; Andrews, Jason; Appleby, Robert; Aquines Gutierrez, Osvaldo; Archilli, Flavio; Artamonov, Alexander; Artuso, Marina; Aslanides, Elie; Auriemma, Giulio; Baalouch, Marouen; Bachmann, Sebastian; Back, John; Badalov, Alexey; Baesso, Clarissa; Baldini, Wander; Barlow, Roger; Barschel, Colin; Barsuk, Sergey; Barter, William; Batozskaya, Varvara; Battista, Vincenzo; Bay, Aurelio; Beaucourt, Leo; Beddow, John; Bedeschi, Franco; Bediaga, Ignacio; Belogurov, Sergey; Belous, Konstantin; Belyaev, Ivan; Ben-Haim, Eli; Bencivenni, Giovanni; Benson, Sean; Benton, Jack; Berezhnoy, Alexander; Bernet, Roland; Bettler, Marc-Olivier; van Beuzekom, Martinus; Bien, Alexander; Bifani, Simone; Bird, Thomas; Bizzeti, Andrea; Bjørnstad, Pål Marius; Blake, Thomas; Blanc, Frédéric; Blouw, Johan; Blusk, Steven; Bocci, Valerio; Bondar, Alexander; Bondar, Nikolay; Bonivento, Walter; Borghi, Silvia; Borgia, Alessandra; Borsato, Martino; Bowcock, Themistocles; Bowen, Espen Eie; Bozzi, Concezio; Brambach, Tobias; Bressieux, Joël; Brett, David; Britsch, Markward; Britton, Thomas; Brodzicka, Jolanta; Brook, Nicholas; Brown, Henry; Bursche, Albert; Buytaert, Jan; Cadeddu, Sandro; Calabrese, Roberto; Calvi, Marta; Calvo Gomez, Miriam; Campana, Pierluigi; Campora Perez, Daniel; Carbone, Angelo; Carboni, Giovanni; Cardinale, Roberta; Cardini, Alessandro; Carson, Laurence; Carvalho Akiba, Kazuyoshi; Casse, Gianluigi; Cassina, Lorenzo; Castillo Garcia, Lucia; Cattaneo, Marco; Cauet, Christophe; Cenci, Riccardo; Charles, Matthew; Charpentier, Philippe; Chefdeville, Maximilien; Chen, Shanzhen; Cheung, Shu-Faye; Chiapolini, Nicola; Chrzaszcz, Marcin; Ciba, Krzystof; Cid Vidal, Xabier; Ciezarek, Gregory; Clarke, Peter; Clemencic, Marco; Cliff, Harry; Closier, Joel; Coco, Victor; Cogan, Julien; Cogneras, Eric; Cojocariu, Lucian; Collazuol, Gianmaria; Collins, Paula; Comerma-Montells, Albert; Contu, Andrea; Cook, Andrew; Coombes, Matthew; Coquereau, Samuel; Corti, Gloria; Corvo, Marco; Counts, Ian; Couturier, Benjamin; Cowan, Greig; Craik, Daniel Charles; Cruz Torres, Melissa Maria; Cunliffe, Samuel; Currie, Robert; D'Ambrosio, Carmelo; Dalseno, Jeremy; David, Pascal; David, Pieter; Davis, Adam; De Bruyn, Kristof; De Capua, Stefano; De Cian, Michel; De Miranda, Jussara; De Paula, Leandro; De Silva, Weeraddana; De Simone, Patrizia; Dean, Cameron Thomas; Decamp, Daniel; Deckenhoff, Mirko; Del Buono, Luigi; Déléage, Nicolas; Derkach, Denis; Deschamps, Olivier; Dettori, Francesco; Di Canto, Angelo; Dijkstra, Hans; Donleavy, Stephanie; Dordei, Francesca; Dorigo, Mirco; Dosil Suárez, Alvaro; Dossett, David; Dovbnya, Anatoliy; Dreimanis, Karlis; Dujany, Giulio; Dupertuis, Frederic; Durante, Paolo; Dzhelyadin, Rustem; Dziurda, Agnieszka; Dzyuba, Alexey; Easo, Sajan; Egede, Ulrik; Egorychev, Victor; Eidelman, Semen; Eisenhardt, Stephan; Eitschberger, Ulrich; Ekelhof, Robert; Eklund, Lars; El Rifai, Ibrahim; Elsasser, Christian; Ely, Scott; Esen, Sevda; Evans, Hannah Mary; Evans, Timothy; Falabella, Antonio; Färber, Christian; Farinelli, Chiara; Farley, Nathanael; Farry, Stephen; Fay, Robert; Ferguson, Dianne; Fernandez Albor, Victor; Ferreira Rodrigues, Fernando; Ferro-Luzzi, Massimiliano; 
Filippov, Sergey; Fiore, Marco; Fiorini, Massimiliano; Firlej, Miroslaw; Fitzpatrick, Conor; Fiutowski, Tomasz; Fol, Philip; Fontana, Marianna; Fontanelli, Flavio; Forty, Roger; Francisco, Oscar; Frank, Markus; Frei, Christoph; Frosini, Maddalena; Fu, Jinlin; Furfaro, Emiliano; Gallas Torreira, Abraham; Galli, Domenico; Gallorini, Stefano; Gambetta, Silvia; Gandelman, Miriam; Gandini, Paolo; Gao, Yuanning; García Pardiñas, Julián; Garofoli, Justin; Garra Tico, Jordi; Garrido, Lluis; Gascon, David; Gaspar, Clara; Gauld, Rhorry; Gavardi, Laura; Geraci, Angelo; Gersabeck, Evelina; Gersabeck, Marco; Gershon, Timothy; Ghez, Philippe; Gianelle, Alessio; Gianì, Sebastiana; Gibson, Valerie; Giubega, Lavinia-Helena; Gligorov, V.V.; Göbel, Carla; Golubkov, Dmitry; Golutvin, Andrey; Gomes, Alvaro; Gotti, Claudio; Grabalosa Gándara, Marc; Graciani Diaz, Ricardo; Granado Cardoso, Luis Alberto; Graugés, Eugeni; Graziani, Giacomo; Grecu, Alexandru; Greening, Edward; Gregson, Sam; Griffith, Peter; Grillo, Lucia; Grünberg, Oliver; Gui, Bin; Gushchin, Evgeny; Guz, Yury; Gys, Thierry; Hadjivasiliou, Christos; Haefeli, Guido; Haen, Christophe; Haines, Susan; Hall, Samuel; Hamilton, Brian; Hampson, Thomas; Han, Xiaoxue; Hansmann-Menzemer, Stephanie; Harnew, Neville; Harnew, Samuel; Harrison, Jonathan; He, Jibo; Head, Timothy; Heijne, Veerle; Hennessy, Karol; Henrard, Pierre; Henry, Louis; Hernando Morata, Jose Angel; van Herwijnen, Eric; Heß, Miriam; Hicheur, Adlène; Hill, Donal; Hoballah, Mostafa; Hombach, Christoph; Hulsbergen, Wouter; Hunt, Philip; Hussain, Nazim; Hutchcroft, David; Hynds, Daniel; Idzik, Marek; Ilten, Philip; Jacobsson, Richard; Jaeger, Andreas; Jalocha, Pawel; Jans, Eddy; Jaton, Pierre; Jawahery, Abolhassan; Jing, Fanfan; John, Malcolm; Johnson, Daniel; Jones, Christopher; Joram, Christian; Jost, Beat; Jurik, Nathan; Kandybei, Sergii; Kanso, Walaa; Karacson, Matthias; Karbach, Moritz; Karodia, Sarah; Kelsey, Matthew; Kenyon, Ian; Ketel, Tjeerd; Khanji, Basem; Khurewathanakul, Chitsanu; Klaver, Suzanne; Klimaszewski, Konrad; Kochebina, Olga; Kolpin, Michael; Komarov, Ilya; Koopman, Rose; Koppenburg, Patrick; Korolev, Mikhail; Kozlinskiy, Alexandr; Kravchuk, Leonid; Kreplin, Katharina; Kreps, Michal; Krocker, Georg; Krokovny, Pavel; Kruse, Florian; Kucewicz, Wojciech; Kucharczyk, Marcin; Kudryavtsev, Vasily; Kurek, Krzysztof; Kvaratskheliya, Tengiz; La Thi, Viet Nga; Lacarrere, Daniel; Lafferty, George; Lai, Adriano; Lambert, Dean; Lambert, Robert W; Lanfranchi, Gaia; Langenbruch, Christoph; Langhans, Benedikt; Latham, Thomas; Lazzeroni, Cristina; Le Gac, Renaud; van Leerdam, Jeroen; Lees, Jean-Pierre; Lefèvre, Regis; Leflat, Alexander; Lefrançois, Jacques; Leo, Sabato; Leroy, Olivier; Lesiak, Tadeusz; Leverington, Blake; Li, Yiming; Likhomanenko, Tatiana; Liles, Myfanwy; Lindner, Rolf; Linn, Christian; Lionetto, Federica; Liu, Bo; Lohn, Stefan; Longstaff, Iain; Lopes, Jose; Lopez-March, Neus; Lowdon, Peter; Lu, Haiting; Lucchesi, Donatella; Luo, Haofei; Lupato, Anna; Luppi, Eleonora; Lupton, Oliver; Machefert, Frederic; Machikhiliyan, Irina V; Maciuc, Florin; Maev, Oleg; Malde, Sneha; Malinin, Alexander; Manca, Giulia; Mancinelli, Giampiero; Mapelli, Alessandro; Maratas, Jan; Marchand, Jean François; Marconi, Umberto; Marin Benito, Carla; Marino, Pietro; Märki, Raphael; Marks, Jörg; Martellotti, Giuseppe; Martens, Aurelien; Martín Sánchez, Alexandra; Martinelli, Maurizio; Martinez Santos, Diego; Martinez Vidal, Fernando; Martins Tostes, Danielle; Massafferri, André; Matev, Rosen; Mathe, 
Zoltan; Matteuzzi, Clara; Maurin, Brice; Mazurov, Alexander; McCann, Michael; McCarthy, James; McNab, Andrew; McNulty, Ronan; McSkelly, Ben; Meadows, Brian; Meier, Frank; Meissner, Marco; Merk, Marcel; Milanes, Diego Alejandro; Minard, Marie-Noelle; Moggi, Niccolò; Molina Rodriguez, Josue; Monteil, Stephane; Morandin, Mauro; Morawski, Piotr; Mordà, Alessandro; Morello, Michael Joseph; Moron, Jakub; Morris, Adam Benjamin; Mountain, Raymond; Muheim, Franz; Müller, Katharina; Mussini, Manuel; Muster, Bastien; Naik, Paras; Nakada, Tatsuya; Nandakumar, Raja; Nasteva, Irina; Needham, Matthew; Neri, Nicola; Neubert, Sebastian; Neufeld, Niko; Neuner, Max; Nguyen, Anh Duc; Nguyen, Thi-Dung; Nguyen-Mau, Chung; Nicol, Michelle; Niess, Valentin; Niet, Ramon; Nikitin, Nikolay; Nikodem, Thomas; Novoselov, Alexey; O'Hanlon, Daniel Patrick; Oblakowska-Mucha, Agnieszka; Obraztsov, Vladimir; Oggero, Serena; Ogilvy, Stephen; Okhrimenko, Oleksandr; Oldeman, Rudolf; Onderwater, Gerco; Orlandea, Marius; Otalora Goicochea, Juan Martin; Owen, Patrick; Oyanguren, Maria Arantza; Pal, Bilas Kanti; Palano, Antimo; Palombo, Fernando; Palutan, Matteo; Panman, Jacob; Papanestis, Antonios; Pappagallo, Marco; Pappalardo, Luciano; Parkes, Christopher; Parkinson, Christopher John; Passaleva, Giovanni; Patel, Girish; Patel, Mitesh; Patrignani, Claudia; Pearce, Alex; Pellegrino, Antonio; Pepe Altarelli, Monica; Perazzini, Stefano; Perret, Pascal; Perrin-Terrin, Mathieu; Pescatore, Luca; Pesen, Erhan; Pessina, Gianluigi; Petridis, Konstantin; Petrolini, Alessandro; Picatoste Olloqui, Eduardo; Pietrzyk, Boleslaw; Pilař, Tomas; Pinci, Davide; Pistone, Alessandro; Playfer, Stephen; Plo Casasus, Maximo; Polci, Francesco; Poluektov, Anton; Polycarpo, Erica; Popov, Alexander; Popov, Dmitry; Popovici, Bogdan; Potterat, Cédric; Price, Eugenia; Price, Joseph David; Prisciandaro, Jessica; Pritchard, Adrian; Prouve, Claire; Pugatch, Valery; Puig Navarro, Albert; Punzi, Giovanni; Qian, Wenbin; Rachwal, Bartolomiej; Rademacker, Jonas; Rakotomiaramanana, Barinjaka; Rama, Matteo; Rangel, Murilo; Raniuk, Iurii; Rauschmayr, Nathalie; Raven, Gerhard; Redi, Federico; Reichert, Stefanie; Reid, Matthew; dos Reis, Alberto; Ricciardi, Stefania; Richards, Sophie; Rihl, Mariana; Rinnert, Kurt; Rives Molina, Vincente; Robbe, Patrick; Rodrigues, Ana Barbara; Rodrigues, Eduardo; Rodriguez Perez, Pablo; Roiser, Stefan; Romanovsky, Vladimir; Romero Vidal, Antonio; Rotondo, Marcello; Rouvinet, Julien; Ruf, Thomas; Ruiz, Hugo; Ruiz Valls, Pablo; Saborido Silva, Juan Jose; Sagidova, Naylya; Sail, Paul; Saitta, Biagio; Salustino Guimaraes, Valdir; Sanchez Mayordomo, Carlos; Sanmartin Sedes, Brais; Santacesaria, Roberta; Santamarina Rios, Cibran; Santovetti, Emanuele; Sarti, Alessio; Satriano, Celestina; Satta, Alessia; Saunders, Daniel Martin; Savrina, Darya; Schiller, Manuel; Schindler, Heinrich; Schlupp, Maximilian; Schmelling, Michael; Schmidt, Burkhard; Schneider, Olivier; Schopper, Andreas; Schubiger, Maxime; Schune, Marie Helene; Schwemmer, Rainer; Sciascia, Barbara; Sciubba, Adalberto; Semennikov, Alexander; Sepp, Indrek; Serra, Nicola; Serrano, Justine; Sestini, Lorenzo; Seyfert, Paul; Shapkin, Mikhail; Shapoval, Illya; Shcheglov, Yury; Shears, Tara; Shekhtman, Lev; Shevchenko, Vladimir; Shires, Alexander; Silva Coutinho, Rafael; Simi, Gabriele; Sirendi, Marek; Skidmore, Nicola; Skwarnicki, Tomasz; Smith, Anthony; Smith, Edmund; Smith, Eluned; Smith, Jackson; Smith, Mark; Snoek, Hella; Sokoloff, Michael; Soler, Paul; Soomro, Fatima; Souza, Daniel; 
Souza De Paula, Bruno; Spaan, Bernhard; Sparkes, Ailsa; Spradlin, Patrick; Sridharan, Srikanth; Stagni, Federico; Stahl, Marian; Stahl, Sascha; Steinkamp, Olaf; Stenyakin, Oleg; Stevenson, Scott; Stoica, Sabin; Stone, Sheldon; Storaci, Barbara; Stracka, Simone; Straticiuc, Mihai; Straumann, Ulrich; Stroili, Roberto; Subbiah, Vijay Kartik; Sun, Liang; Sutcliffe, William; Swientek, Krzysztof; Swientek, Stefan; Syropoulos, Vasileios; Szczekowski, Marek; Szczypka, Paul; Szumlak, Tomasz; T'Jampens, Stephane; Teklishyn, Maksym; Tellarini, Giulia; Teubert, Frederic; Thomas, Christopher; Thomas, Eric; van Tilburg, Jeroen; Tisserand, Vincent; Tobin, Mark; Tolk, Siim; Tomassetti, Luca; Tonelli, Diego; Topp-Joergensen, Stig; Torr, Nicholas; Tournefier, Edwige; Tourneur, Stephane; Tran, Minh Tâm; Tresch, Marco; Trisovic, Ana; Tsaregorodtsev, Andrei; Tsopelas, Panagiotis; Tuning, Niels; Ubeda Garcia, Mario; Ukleja, Artur; Ustyuzhanin, Andrey; Uwer, Ulrich; Vacca, Claudia; Vagnoni, Vincenzo; Valenti, Giovanni; Vallier, Alexis; Vazquez Gomez, Ricardo; Vazquez Regueiro, Pablo; Vázquez Sierra, Carlos; Vecchi, Stefania; Velthuis, Jaap; Veltri, Michele; Veneziano, Giovanni; Vesterinen, Mika; Viaud, Benoit; Vieira, Daniel; Vieites Diaz, Maria; Vilasis-Cardona, Xavier; Vollhardt, Achim; Volyanskyy, Dmytro; Voong, David; Vorobyev, Alexey; Vorobyev, Vitaly; Voß, Christian; de Vries, Jacco; Waldi, Roland; Wallace, Charlotte; Wallace, Ronan; Walsh, John; Wandernoth, Sebastian; Wang, Jianchun; Ward, David; Watson, Nigel; Websdale, David; Whitehead, Mark; Wicht, Jean; Wiedner, Dirk; Wilkinson, Guy; Williams, Matthew; Williams, Mike; Wilschut, Hans; Wilson, Fergus; Wimberley, Jack; Wishahi, Julian; Wislicki, Wojciech; Witek, Mariusz; Wormser, Guy; Wotton, Stephen; Wright, Simon; Wyllie, Kenneth; Xie, Yuehong; Xing, Zhou; Xu, Zhirui; Yang, Zhenwei; Yuan, Xuhao; Yushchenko, Oleg; Zangoli, Maria; Zavertyaev, Mikhail; Zhang, Liming; Zhang, Wen Chao; Zhang, Yanxi; Zhelezov, Alexey; Zhokhov, Anatoly; Zhong, Liang; Zvyagin, Alexander

    2014-12-05

    Measuring cross-sections at the LHC requires the luminosity to be determined accurately at each centre-of-mass energy $\\sqrt{s}$. In this paper results are reported from the luminosity calibrations carried out at the LHC interaction point 8 with the LHCb detector for $\\sqrt{s}$ = 2.76, 7 and 8 TeV (proton-proton collisions) and for $\\sqrt{s_{NN}}$ = 5 TeV (proton-lead collisions). Both the "van der Meer scan" and "beam-gas imaging" luminosity calibration methods were employed. It is observed that the beam density profile cannot always be described by a function that is factorizable in the two transverse coordinates. The introduction of a two-dimensional description of the beams improves significantly the consistency of the results. For proton-proton interactions at $\\sqrt{s}$ = 8 TeV a relative precision of the luminosity calibration of 1.47% is obtained using van der Meer scans and 1.43% using beam-gas imaging, resulting in a combined precision of 1.12%. Applying the calibration to the full data set determin...
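
For orientation, the van der Meer method infers the luminosity from the measured beam-overlap widths; assuming the overlap factorizes into Gaussian x and y profiles (precisely the assumption the paper shows is not always valid), the single-bunch relation is L = f_rev N1 N2 / (2π Σx Σy). A sketch with hypothetical LHC-like numbers:

```python
import numpy as np

def vdm_luminosity(n1, n2, f_rev, n_bunches, sigma_x, sigma_y):
    """Instantaneous luminosity from van der Meer scan overlap widths, assuming the overlap
    factorizes into Gaussian x and y profiles:  L = f_rev * n_b * N1 * N2 / (2 pi Sigma_x Sigma_y)."""
    return f_rev * n_bunches * n1 * n2 / (2.0 * np.pi * sigma_x * sigma_y)

# hypothetical LHC-like numbers: 1.1e11 protons per bunch, 11245 Hz revolution frequency,
# one colliding bunch pair, overlap widths of about 100 micrometres (given here in cm)
L = vdm_luminosity(1.1e11, 1.1e11, 11245.0, 1, 100e-4, 100e-4)
print(f"L = {L:.3e} cm^-2 s^-1")
```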

  10. Antihydrogen production and precision experiments

    International Nuclear Information System (INIS)

    Nieto, M.M.; Goldman, T.; Holzscheiter, M.H.

    1996-01-01

    The study of CPT invariance with the highest achievable precision in all particle sectors is of fundamental importance for physics. Equally important is the question of the gravitational acceleration of antimatter. In recent years, impressive progress has been achieved in capturing antiprotons in specially designed Penning traps, in cooling them to energies of a few milli-electron volts, and in storing them for hours in a small volume of space. Positrons have been accumulated in large numbers in similar traps, and low-energy positron or positronium beams have been generated. Finally, steady progress has been made in trapping and cooling neutral atoms. Thus the ingredients to form antihydrogen at rest are at hand. Once antihydrogen atoms have been captured at low energy, spectroscopic methods can be applied to interrogate their atomic structure with extremely high precision and compare it to its normal matter counterpart, the hydrogen atom. The 1S-2S transition in particular, with an excited-state lifetime of 122 ms and hence a relative natural linewidth of 5 parts in 10^16, offers in principle the possibility of directly comparing matter and antimatter properties at a level of 1 part in 10^16.
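
    The quoted figure of 5 parts in 10^16 follows directly from the relation between the excited-state lifetime and the natural linewidth. A minimal arithmetic check (the 1S-2S transition frequency used below is the approximate hydrogen literature value, about 2.466 x 10^15 Hz, and is not stated in the record above):

```python
import math

tau = 0.122                              # s, lifetime of the metastable 2S state quoted above
delta_nu = 1.0 / (2.0 * math.pi * tau)   # natural linewidth, ~1.3 Hz
nu_1s2s = 2.466e15                       # Hz, approximate hydrogen 1S-2S transition frequency

print(f"natural linewidth : {delta_nu:.2f} Hz")
print(f"relative linewidth: {delta_nu / nu_1s2s:.1e}")   # ~5e-16, i.e. 5 parts in 10^16
```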

  11. Precision Medicine in Cardiovascular Diseases

    Directory of Open Access Journals (Sweden)

    Yan Liu

    2017-02-01

    Full Text Available Since President Obama announced the Precision Medicine Initiative in the United States, more and more attention has been paid to precision medicine. However, clinicians have already used it to treat conditions such as cancer. Many cardiovascular diseases have a familial presentation, and genetic variants are associated with the prevention, diagnosis, and treatment of cardiovascular diseases, which are the basis for providing precise care to patients with cardiovascular diseases. Large-scale cohorts and multiomics are critical components of precision medicine. Here we summarize the application of precision medicine to cardiovascular diseases based on cohort and omic studies, and hope to elicit discussion about future health care.

  12. Precisely predictable Dirac observables

    CERN Document Server

    Cordes, Heinz Otto

    2006-01-01

    This work presents a "Clean Quantum Theory of the Electron", based on Dirac’s equation. "Clean" in the sense of a complete mathematical explanation of the well-known paradoxes of Dirac’s theory, and a connection to classical theory, including the motion of a magnetic moment (spin) in the given field, all for a charged particle (of spin ½) moving in a given electromagnetic field. This theory is relativistically covariant, and it may be regarded as a mathematically consistent quantum-mechanical generalization of the classical motion of such a particle, à la Newton and Einstein. Normally, our fields are time-independent, but also discussed is the time-dependent case, where slightly different features prevail. A "Schroedinger particle", such as a light quantum, experiences a very different (time-dependent) "Precise Predictability of Observables". An attempt is made to compare both cases. There is not the Heisenberg uncertainty of location and momentum; rather, location alone possesses a built-in uncertainty ...

  13. Prompt and Precise Prototyping

    Science.gov (United States)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  14. Precisely Tracking Childhood Death.

    Science.gov (United States)

    Farag, Tamer H; Koplan, Jeffrey P; Breiman, Robert F; Madhi, Shabir A; Heaton, Penny M; Mundel, Trevor; Ordi, Jaume; Bassat, Quique; Menendez, Clara; Dowell, Scott F

    2017-07-01

    Little is known about the specific causes of neonatal and under-five childhood death in high-mortality geographic regions due to a lack of primary data and dependence on inaccurate tools, such as verbal autopsy. To meet the ambitious new Sustainable Development Goal 3.2 to eliminate preventable child mortality in every country, better approaches are needed to precisely determine specific causes of death so that prevention and treatment interventions can be strengthened and focused. Minimally invasive tissue sampling (MITS) is a technique that uses needle-based postmortem sampling, followed by advanced histopathology and microbiology to definitively determine cause of death. The Bill & Melinda Gates Foundation is supporting a new surveillance system called the Child Health and Mortality Prevention Surveillance network, which will determine cause of death using MITS in combination with other information, and yield cause-specific population-based mortality rates, eventually in up to 12-15 sites in sub-Saharan Africa and south Asia. However, the Gates Foundation funding alone is not enough. We call on governments, other funders, and international stakeholders to expand the use of pathology-based cause of death determination to provide the information needed to end preventable childhood mortality.

  15. Precision cosmology and the landscape

    International Nuclear Information System (INIS)

    Bousso, Raphael

    2006-01-01

    After reviewing the cosmological constant problem--why is Lambda not huge?--I outline the two basic approaches that had emerged by the late 1980s, and note that each made a clear prediction. Precision cosmological experiments now indicate that the cosmological constant is nonzero. This result strongly favors the environmental approach, in which vacuum energy can vary discretely among widely separated regions in the universe. The need to explain this variation from first principles constitutes an observational constraint on fundamental theory. I review arguments that string theory satisfies this constraint, as it contains a dense discretuum of metastable vacua. The enormous landscape of vacua calls for novel, statistical methods of deriving predictions, and it prompts us to reexamine our description of spacetime on the largest scales. I discuss the effects of cosmological dynamics, and I speculate that weighting vacua by their entropy production may allow for prior-free predictions that do not resort to explicitly anthropic arguments

  16. The economic case for precision medicine.

    Science.gov (United States)

    Gavan, Sean P; Thompson, Alexander J; Payne, Katherine

    2018-01-01

    Introduction: The advancement of precision medicine into routine clinical practice has been highlighted as an agenda for national and international health care policy. A principal barrier to this advancement is in meeting requirements of the payer or reimbursement agency for health care. This special report aims to explain the economic case for precision medicine, by accounting for the explicit objectives defined by decision-makers responsible for the allocation of limited health care resources. Areas covered: The framework of cost-effectiveness analysis, a method of economic evaluation, is used to describe how precision medicine can, in theory, exploit identifiable patient-level heterogeneity to improve population health outcomes and the relative cost-effectiveness of health care. Four case studies are used to illustrate potential challenges when demonstrating the economic case for a precision medicine in practice. Expert commentary: The economic case for a precision medicine should be considered at an early stage during its research and development phase. Clinical and economic evidence can be generated iteratively and should be in alignment with the objectives and requirements of decision-makers. Programmes of further research, to demonstrate the economic case of a precision medicine, can be prioritized by the extent that they reduce the uncertainty expressed by decision-makers.

  17. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    Science.gov (United States)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low-precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work to extend the IPJR paradigm to building 3D structures at micron precision are also summarized.

  18. Precision medicine in myasthenia gravis: begin from the data precision

    Science.gov (United States)

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are currently far from individually precise, partly due to the rarity and heterogeneity of this disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity, and response to treatment, based on published references and our previous studies. Subgroups and quantitative traits of MG are discussed in terms of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed, to ensure better collection and analysis of MG data. PMID:27127759

  19. Microhartree precision in density functional theory calculations

    Science.gov (United States)

    Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia

    2018-04-01

    To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μHa, respectively. Spectacular agreement with the MRA is reached also for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α-iron we demonstrate the capability of LAPW + lo to reach μHa/atom precision also for periodic systems, which also allows for the distinction between the numerical precision and the accuracy of a given functional.

  20. Precision of refractometric methods for moisture analysis in honey (Precisão dos métodos refratométricos para análise de umidade em mel)

    Directory of Open Access Journals (Sweden)

    Cristiane Bonaldi Cano

    2007-06-01

    Full Text Available For the determination of the moisture content of honey, Brazilian legislation adopts the refractometric method proposed by the AOAC. In earlier work, however, the authors observed that crystallization interfered with the refractive-index measurement when the honey sample was crystallized, so that overestimated moisture contents were obtained. The European Honey Commission (EHC) adopts another refractometric method, which applies a pre-treatment to the sample when it is crystallized. The objective of this work was therefore to compare the precision of these refractometric methods by different statistical techniques and to establish the most suitable procedure for moisture analysis in honey. The results of t-tests at the 95.0% confidence level on the mean moisture contents of the honey samples suggested that significant differences between the two refractometric methods (AOAC and EHC) existed only for the crystallized samples. Analysis of the standard deviations by means of the F-test and the construction of confidence intervals showed that the EHC method was more precise than the AOAC method for crystallized honey samples. The adoption of the EHC refractometric method as the official method in Brazilian legislation can therefore be suggested, since it does not present systematic errors.
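
    The statistical comparison described above (a t-test on the mean moisture contents and an F-test on the standard deviations) can be reproduced in a few lines. The duplicate moisture values below are made up purely to illustrate the procedure and are not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical moisture results (% m/m) for the same crystallized honey samples
# measured by the two refractometric procedures (illustrative values only).
aoac = np.array([18.9, 19.4, 18.7, 19.8, 19.1, 19.6])
ehc  = np.array([18.3, 18.4, 18.2, 18.5, 18.3, 18.4])

# Paired t-test on the mean moisture contents (95% confidence level)
t_stat, t_p = stats.ttest_rel(aoac, ehc)

# F-test on the variances: larger variance in the numerator, two-sided p-value
f_stat = max(aoac.var(ddof=1), ehc.var(ddof=1)) / min(aoac.var(ddof=1), ehc.var(ddof=1))
df = len(aoac) - 1
f_p = 2.0 * (1.0 - stats.f.cdf(f_stat, df, df))

print(f"t = {t_stat:.2f} (p = {t_p:.3f}),  F = {f_stat:.2f} (p = {f_p:.3f})")
```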

  1. Precision production: enabling deterministic throughput for precision aspheres with MRF

    Science.gov (United States)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  2. Sampling plans in attribute mode with multiple levels of precision

    International Nuclear Information System (INIS)

    Franklin, M.

    1986-01-01

    This paper describes a method for deriving sampling plans for nuclear material inventory verification. The method presented is different from the classical approach which envisages two levels of measurement precision corresponding to NDA and DA. In the classical approach the precisions of the two measurement methods are taken as fixed parameters. The new approach is based on multiple levels of measurement precision. The design of the sampling plan consists of choosing the number of measurement levels, the measurement precision to be used at each level and the sample size to be used at each level
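
    The design question posed here, how many measurements to make at each precision level, can be illustrated with a generic cost-constrained allocation. The sketch below uses a Neyman-style rule (sample size proportional to sigma divided by the square root of unit cost) under equal stratum sizes; the level names, precisions and costs are invented for illustration, and this is not the derivation used in the paper.

```python
import math

# Hypothetical measurement levels: name -> (relative std. dev. per item, cost per measurement)
levels = {"NDA-coarse": (0.15, 1.0), "NDA-fine": (0.05, 4.0), "DA": (0.01, 20.0)}
budget = 200.0   # total measurement budget, arbitrary cost units

# Cost-optimal (Neyman-type) allocation for equal stratum sizes:
# sample size at each level proportional to sigma / sqrt(cost).
norm = sum(math.sqrt(cost) * sigma for sigma, cost in levels.values())
for name, (sigma, cost) in levels.items():
    n = budget * (sigma / math.sqrt(cost)) / norm
    print(f"{name:10s}: ~{n:5.1f} measurements (per-item RSD {sigma:.2f}, unit cost {cost:.0f})")
```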

  3. High precision detector robot arm system

    Science.gov (United States)

    Shu, Deming; Chu, Yong

    2017-01-31

    A method and high precision robot arm system are provided, for example, for X-ray nanodiffraction with an X-ray nanoprobe. The robot arm system includes duo-vertical-stages and a kinematic linkage system. A two-dimensional (2D) vertical plane ultra-precision robot arm supporting an X-ray detector provides positioning and manipulating of the X-ray detector. A vertical support for the 2D vertical plane robot arm includes spaced apart rails respectively engaging a first bearing structure and a second bearing structure carried by the 2D vertical plane robot arm.

  4. MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This document is an experimental version of a programmed text on measurement and precision. Part I contains 24 frames dealing with precision and significant figures encountered in various mathematical computations and measurements. Part II begins with a brief section on experimental data, covering such points as (1) establishing the zero point, (2)…

  5. Collaborative Study of an Indirect Enzymatic Method for the Simultaneous Analysis of 3-MCPD, 2-MCPD, and Glycidyl Esters in Edible Oils.

    Science.gov (United States)

    Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Egawa, Yoshitsugu; Kido, Hirotsugu; Kitta, Tadashi; Miyashita, Takashi; Nezu, Toru; Nohara, Hidenori; Sano, Takashi; Takahashi, Yukinari; Taniguchi, Hideji; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi

    2016-07-01

    A collaborative study was conducted to evaluate an indirect enzymatic method for the analysis of fatty acid esters of 3-monochloro-1,2-propanediol (3-MCPD), 2-monochloro-1,3-propanediol (2-MCPD), and glycidol (Gly) in edible oils and fats. The method is characterized by the use of Candida rugosa lipase, which hydrolyzes the esters at room temperature in 30 min. Hydrolysis and bromination steps convert esters of 3-MCPD, 2-MCPD, and glycidol to free 3-MCPD, 2-MCPD, and 3-monobromo-1,2-propanediol, respectively, which are then derivatized with phenylboronic acid and analyzed by gas chromatography-mass spectrometry. In a collaborative study involving 13 laboratories, liquid palm, solid palm, rapeseed, and rice bran oils spiked with 0.5-4.4 mg/kg of esters of 3-MCPD, 2-MCPD, and Gly were analyzed in duplicate. The repeatability (RSDr) was determined for the 3-MCPD, 2-MCPD, and Gly esters in edible oils.
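
    Repeatability and reproducibility relative standard deviations of the kind reported in this collaborative study are conventionally obtained from a one-way, by-laboratory analysis of variance of the duplicate results (as in ISO 5725-type evaluations). A minimal sketch with made-up duplicate data:

```python
import numpy as np

# Hypothetical duplicate results (mg/kg) from a collaborative study: one row per laboratory
results = np.array([
    [0.52, 0.55], [0.49, 0.50], [0.57, 0.53], [0.51, 0.54],
    [0.48, 0.47], [0.55, 0.58], [0.50, 0.52], [0.53, 0.51],
])
p, n = results.shape                                 # laboratories, replicates per laboratory

grand_mean = results.mean()
ms_within  = results.var(axis=1, ddof=1).mean()      # pooled within-laboratory variance
ms_between = n * results.mean(axis=1).var(ddof=1)    # between-laboratory mean square

s_r2 = ms_within                                     # repeatability variance
s_L2 = max((ms_between - ms_within) / n, 0.0)        # between-laboratory variance component
s_R2 = s_r2 + s_L2                                   # reproducibility variance

print(f"RSDr = {100 * np.sqrt(s_r2) / grand_mean:.1f} %")
print(f"RSDR = {100 * np.sqrt(s_R2) / grand_mean:.1f} %")
```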

  6. Precision medicine for nurses: 101.

    Science.gov (United States)

    Lemoine, Colleen

    2014-05-01

    To introduce the key concepts and terms associated with precision medicine and support understanding of future developments in the field by providing an overview and history of precision medicine, related ethical considerations, and nursing implications. Current nursing, medical and basic science literature. Rapid progress in understanding the oncogenic drivers associated with cancer is leading to a shift toward precision medicine, where treatment is based on targeting specific genetic and epigenetic alterations associated with a particular cancer. Nurses will need to embrace the paradigm shift to precision medicine, expend the effort necessary to learn the essential terminology, concepts and principles, and work collaboratively with physician colleagues to best position our patients to maximize the potential that precision medicine can offer. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Advanced bioanalytics for precision medicine.

    Science.gov (United States)

    Roda, Aldo; Michelini, Elisa; Caliceti, Cristiana; Guardigli, Massimo; Mirasoli, Mara; Simoni, Patrizia

    2018-01-01

    Precision medicine is a new paradigm that combines diagnostic, imaging, and analytical tools to produce accurate diagnoses and therapeutic interventions tailored to the individual patient. This approach stands in contrast to the traditional "one size fits all" concept, according to which researchers develop disease treatments and preventions for an "average" patient without considering individual differences. The "one size fits all" concept has led to many ineffective or inappropriate treatments, especially for pathologies such as Alzheimer's disease and cancer. Now, precision medicine is receiving massive funding in many countries, thanks to its social and economic potential in terms of improved disease prevention, diagnosis, and therapy. Bioanalytical chemistry is critical to precision medicine. This is because identifying an appropriate tailored therapy requires researchers to collect and analyze information on each patient's specific molecular biomarkers (e.g., proteins, nucleic acids, and metabolites). In other words, precision diagnostics is not possible without precise bioanalytical chemistry. This Trend article highlights some of the most recent advances, including massive analysis of multilayer omics, and new imaging technique applications suitable for implementing precision medicine. Graphical abstract Precision medicine combines bioanalytical chemistry, molecular diagnostics, and imaging tools for performing accurate diagnoses and selecting optimal therapies for each patient.

  8. Precision Oncology: Between Vaguely Right and Precisely Wrong.

    Science.gov (United States)

    Brock, Amy; Huang, Sui

    2017-12-01

    Precision Oncology seeks to identify and target the mutation that drives a tumor. Despite its straightforward rationale, concerns about its effectiveness are mounting. What is the biological explanation for the "imprecision"? First, Precision Oncology relies on indiscriminate sequencing of genomes in biopsies that barely represent the heterogeneous mix of tumor cells. Second, findings that defy the orthodoxy of oncogenic "driver mutations" are now accumulating: the ubiquitous presence of oncogenic mutations in silent premalignancies or the dynamic switching without mutations between various cell phenotypes that promote progression. Most troublesome is the observation that cancer cells that survive treatment still will have suffered cytotoxic stress and thereby enter a stem cell-like state, the seeds for recurrence. The benefit of "precision targeting" of mutations is inherently limited by this counterproductive effect. These findings confirm that there is no precise linear causal relationship between tumor genotype and phenotype, a reminder of logician Carveth Read's caution that being vaguely right may be preferable to being precisely wrong. An open-minded embrace of the latest inconvenient findings indicating nongenetic and "imprecise" phenotype dynamics of tumors as summarized in this review will be paramount if Precision Oncology is ultimately to lead to clinical benefits. Cancer Res; 77(23); 6473-9. ©2017 American Association for Cancer Research.

  9. Numerical precision control and GRACE

    International Nuclear Information System (INIS)

    Fujimoto, J.; Hamaguchi, N.; Ishikawa, T.; Kaneko, T.; Morita, H.; Perret-Gallix, D.; Tokura, A.; Shimizu, Y.

    2006-01-01

    The control of the numerical precision of large-scale computations like those generated by the GRACE system for automatic Feynman diagram calculations has become an intrinsic part of those packages. Recently, Hitachi Ltd. has developed a new FORTRAN library, HMLIB, for quadruple and octuple precision arithmetic in which the number of lost bits is made available. This library has been tested with success on the 1-loop radiative correction to e+e- → e+e-τ+τ-. It is shown that the approach followed by HMLIB provides an efficient way to track down the source of numerical significance losses and to deliver high-precision results while minimizing computing time

  10. Big Data's Role in Precision Public Health.

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts.

  11. A new, simple and precise method for measuring cyclotron proton beam energies using the activity vs. depth profile of zinc-65 in a thick target of stacked copper foils

    International Nuclear Information System (INIS)

    Asad, A.H.; Chan, S.; Cryer, D.; Burrage, J.W.; Siddiqui, S.A.; Price, R.I.

    2015-01-01

    The proton beam energy of an isochronous 18 MeV cyclotron was determined using a novel version of the stacked copper-foils technique. This simple method used stacked foils of natural copper forming ‘thick’ targets to produce Zn radioisotopes by the well-documented (p,x) monitor reactions. The primary beam energy was calculated using the 65Zn activity vs. depth profile in the target, with the results obtained using 62Zn and 63Zn (as comparators) in close agreement. Results from separate measurements using foil thicknesses of 100, 75, 50 or 25 µm to form the stacks also concurred closely. The energy was determined by iterative least-squares comparison of the normalized measured activity profile in a target stack with the equivalent calculated normalized profile, using ‘energy’ as the regression variable. The technique exploits the uniqueness of the shape of the activity vs. depth profile of the monitor isotope in the target stack for a specified incident energy. The energy using 65Zn activity profiles and 50-µm foils alone was 18.03±0.02 [SD] MeV (95%CI=17.98–18.08), and 18.06±0.12 MeV (95%CI=18.02–18.10; NS) when combining results from all isotopes and foil thicknesses. When the beam energy was re-measured using 65Zn and 50-µm foils only, following a major upgrade of the ion sources and nonmagnetic beam controls, the results were 18.11±0.05 MeV (95%CI=18.00–18.23; NS compared with ‘before’). Since measurement of only one Zn monitor isotope is required to determine the normalized activity profile, this indirect yet precise technique does not require a direct beam-current measurement or a gamma-spectroscopy efficiency calibrated with standard sources, though a characteristic photopeak must be identified. It has some advantages over published methods using the ratio of cross sections of monitor reactions, including the ability to determine energies across a broader range and without the need for customized beam degraders.
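
    The energy-extraction step described above, an iterative least-squares comparison of the normalized measured activity-versus-depth profile with calculated profiles, amounts to a one-parameter fit. In the sketch below, model_profile is only a stand-in: a real implementation would degrade the beam through each foil with stopping-power tables and integrate the 65Zn monitor-reaction excitation function, neither of which is reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model_profile(energy_mev, n_foils=10, foil_um=50.0):
    """Placeholder for the calculated, normalized 65Zn activity vs. depth profile.
    The shape used here is a toy; only the normalization and fitting logic matter."""
    depth = (np.arange(n_foils) + 0.5) * foil_um           # foil mid-depths, micrometres
    range_um = 60.0 * energy_mev                           # crude, assumed range scaling
    profile = np.clip(1.0 - depth / range_um, 0.0, None)
    return profile / profile.sum()                         # normalized profile

# "Measured" normalized activities per foil (synthetic data, for the sketch only)
rng = np.random.default_rng(1)
measured = model_profile(18.05) + rng.normal(0.0, 0.002, 10)
measured /= measured.sum()

# Iterative least-squares over the single regression variable: the beam energy
sse = lambda e: float(np.sum((model_profile(e) - measured) ** 2))
fit = minimize_scalar(sse, bounds=(15.0, 20.0), method="bounded")
print(f"fitted beam energy: {fit.x:.2f} MeV")
```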

  12. Optimization of an Indirect Enzymatic Method for the Simultaneous Analysis of 3-MCPD, 2-MCPD, and Glycidyl Esters in Edible Oils.

    Science.gov (United States)

    Koyama, Kazuo; Miyazaki, Kinuko; Abe, Kousuke; Ikuta, Keiich; Egawa, Yoshitsugu; Kitta, Tadashi; Kido, Hirotsugu; Sano, Takashi; Takahashi, Yukinari; Nezu, Toru; Nohara, Hidenori; Miyashita, Takashi; Yada, Hiroshi; Yamazaki, Kumiko; Watanabe, Yomi

    2015-01-01

    We developed a novel, indirect enzymatic method for the analysis of fatty acid esters of 3-monochloro-1,2-propanediol (3-MCPD), 2-monochloro-1,3-propanediol (2-MCPD), and glycidol (Gly) in edible oils and fats. Using this method, the ester analytes were rapidly cleaved by Candida rugosa lipase at room temperature for 0.5 h. As a result of the simultaneous hydrolysis and bromination steps, 3-MCPD esters, 2-MCPD esters, and glycidyl esters were converted to free 3-MCPD, 2-MCPD, and 3-monobromo-1,2-propanediol (3-MBPD), respectively. After the addition of internal standards, the mixtures were washed with hexane, derivatized with phenylboronic acid, and analyzed by gas chromatography-mass spectrometry (GC-MS). The analytical method was evaluated in preliminary and feasibility studies performed by 13 laboratories. The preliminary study from 4 laboratories showed the reproducibility (RSDR) of 3-MCPD and 2-MCPD in extra virgin olive (EVO) oil, semi-solid palm oil, and solid palm oil. However, the RSDR and recoveries of Gly in the palm oil samples were not satisfactory. The Gly content of refrigerated palm oil samples decreased, whereas the samples at room temperature were stable for three months; this may be due to the depletion of Gly during cold storage. The feasibility studies performed by all 13 laboratories were conducted based on modifications of the shaking conditions for ester cleavage, the conditions of Gly bromination, and the removal of gel formed by residual lipase. Satisfactory RSDR values were obtained for EVO oil samples spiked with standard esters (4.4% for 3-MCPD, 11.2% for 2-MCPD, and 6.6% for Gly).

  13. Lung Cancer Precision Medicine Trials

    Science.gov (United States)

    Patients with lung cancer are benefiting from the boom in targeted and immune-based therapies. With a series of precision medicine trials, NCI is keeping pace with the rapidly changing treatment landscape for lung cancer.

  14. Precision engineering: an evolutionary perspective.

    Science.gov (United States)

    Evans, Chris J

    2012-08-28

    Precision engineering is a relatively new name for a technology with roots going back over a thousand years; those roots span astronomy, metrology, fundamental standards, manufacturing and money-making (literally). Throughout that history, precision engineers have created links across disparate disciplines to generate innovative responses to society's needs and wants. This review combines historical and technological perspectives to illuminate precision engineering's current character and directions. It first provides a working definition of precision engineering and then reviews the subject's roots. Examples will be given showing the contributions of the technology to society, while simultaneously showing the creative tension between the technological convergence that spurs new directions and the vertical disintegration that optimizes manufacturing economics.

  15. How GNSS Enables Precision Farming

    Science.gov (United States)

    2014-12-01

    Precision farming: feeding a growing population and enabling those who feed the world. Immediate and ongoing needs: population growth (more people to feed) and urbanization (a decrease in arable land) mean that food production must double by 2050 to meet world demand. To meet thi...

  16. Artificial intelligence, physiological genomics, and precision medicine.

    Science.gov (United States)

    Williams, Anna Marie; Liu, Yong; Regner, Kevin R; Jotterand, Fabrice; Liu, Pengyuan; Liang, Mingyu

    2018-04-01

    Big data are a major driver in the development of precision medicine. Efficient analysis methods are needed to transform big data into clinically-actionable knowledge. To accomplish this, many researchers are turning toward machine learning (ML), an approach of artificial intelligence (AI) that utilizes modern algorithms to give computers the ability to learn. Much of the effort to advance ML for precision medicine has been focused on the development and implementation of algorithms and the generation of ever larger quantities of genomic sequence data and electronic health records. However, relevance and accuracy of the data are as important as quantity of data in the advancement of ML for precision medicine. For common diseases, physiological genomic readouts in disease-applicable tissues may be an effective surrogate to measure the effect of genetic and environmental factors and their interactions that underlie disease development and progression. Disease-applicable tissue may be difficult to obtain, but there are important exceptions such as kidney needle biopsy specimens. As AI continues to advance, new analytical approaches, including those that go beyond data correlation, need to be developed and ethical issues of AI need to be addressed. Physiological genomic readouts in disease-relevant tissues, combined with advanced AI, can be a powerful approach for precision medicine for common diseases.

  17. Precision Learning Assessment: An Alternative to Traditional Assessment Techniques.

    Science.gov (United States)

    Caltagirone, Paul J.; Glover, Christopher E.

    1985-01-01

    A continuous and curriculum-based assessment method, Precision Learning Assessment (PLA), which integrates precision teaching and norm-referenced techniques, was applied to a math computation curriculum for 214 third graders. The resulting districtwide learning curves defining average annual progress through the computation curriculum provided…

  18. PRECISE - pregabalin in addition to usual care: Statistical analysis plan

    NARCIS (Netherlands)

    S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)

    2016-01-01

    textabstractBackground: Sciatica is a severe, disabling condition that lacks high quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double

  19. High precision anatomy for MEG.

    Science.gov (United States)

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within-session and between-session head movements. Systematic errors in matching to the MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were <1.5 mm between sessions. We corroborated these scalp-based estimates by looking at the MEG data recorded over a 6-month period. We found that the between-session sensor variability of the subject's evoked response was of the order of the within-session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. © 2013. Published by Elsevier Inc. All rights reserved.

  20. High precision anatomy for MEG☆

    Science.gov (United States)

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-01-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  1. Nanomaterials for Cancer Precision Medicine.

    Science.gov (United States)

    Wang, Yilong; Sun, Shuyang; Zhang, Zhiyuan; Shi, Donglu

    2018-04-01

    Medical science has recently advanced to the point where diagnosis and therapeutics can be carried out with high precision, even at the molecular level. A new field of "precision medicine" has consequently emerged with specific clinical implications and challenges that can be well-addressed by newly developed nanomaterials. Here, a nanoscience approach to precision medicine is provided, with a focus on cancer therapy, based on a new concept of "molecularly-defined cancers." "Next-generation sequencing" is introduced to identify the oncogene that is responsible for a class of cancers. This new approach is fundamentally different from all conventional cancer therapies that rely on diagnosis of the anatomic origins where the tumors are found. To treat cancers at molecular level, a recently developed "microRNA replacement therapy" is applied, utilizing nanocarriers, in order to regulate the driver oncogene, which is the core of cancer precision therapeutics. Furthermore, the outcome of the nanomediated oncogenic regulation has to be accurately assessed by the genetically characterized, patient-derived xenograft models. Cancer therapy in this fashion is a quintessential example of precision medicine, presenting many challenges to the materials communities with new issues in structural design, surface functionalization, gene/drug storage and delivery, cell targeting, and medical imaging. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Precision Medicine and Men's Health.

    Science.gov (United States)

    Mata, Douglas A; Katchi, Farhan M; Ramasamy, Ranjith

    2017-07-01

    Precision medicine can greatly benefit men's health by helping to prevent, diagnose, and treat prostate cancer, benign prostatic hyperplasia, infertility, hypogonadism, and erectile dysfunction. For example, precision medicine can facilitate the selection of men at high risk for prostate cancer for targeted prostate-specific antigen screening and chemoprevention administration, as well as assist in identifying men who are resistant to medical therapy for prostatic hyperplasia, who may instead require surgery. Precision medicine-trained clinicians can also let couples know whether their specific cause of infertility should be bypassed by sperm extraction and in vitro fertilization to prevent abnormalities in their offspring. Though precision medicine's role in the management of hypogonadism has yet to be defined, it could be used to identify biomarkers associated with individual patients' responses to treatment so that appropriate therapy can be prescribed. Last, precision medicine can improve erectile dysfunction treatment by identifying genetic polymorphisms that regulate response to medical therapies and by aiding in the selection of patients for further cardiovascular disease screening.

  3. Precision Medicine in Gastrointestinal Pathology.

    Science.gov (United States)

    Wang, David H; Park, Jason Y

    2016-05-01

    -Precision medicine is the promise of individualized therapy and management of patients based on their personal biology. There are now multiple global initiatives to perform whole-genome sequencing on millions of individuals. In the United States, an early program was the Million Veteran Program, and a more recent proposal in 2015 by the president of the United States is the Precision Medicine Initiative. To implement precision medicine in routine oncology care, genetic variants present in tumors need to be matched with effective clinical therapeutics. When we focus on the current state of precision medicine for gastrointestinal malignancies, it becomes apparent that there is a mixed history of success and failure. -To present the current state of precision medicine using gastrointestinal oncology as a model. We will present currently available targeted therapeutics, promising new findings in clinical genomic oncology, remaining quality issues in genomic testing, and emerging oncology clinical trial designs. -Review of the literature including clinical genomic studies on gastrointestinal malignancies, clinical oncology trials on therapeutics targeted to molecular alterations, and emerging clinical oncology study designs. -Translating our ability to sequence thousands of genes into meaningful improvements in patient survival will be the challenge for the next decade.

  4. Fiber Scrambling for High Precision Spectrographs

    Science.gov (United States)

    Kaplan, Zachary; Spronck, J. F. P.; Fischer, D.

    2011-05-01

    The detection of Earth-like exoplanets with the radial velocity method requires extreme Doppler precision and long-term stability in order to measure tiny reflex velocities in the host star. Recent planet searches have led to the detection of so-called "super-Earths" (up to a few Earth masses) that induce radial velocity changes of about 1 m/s. However, the detection of true Earth analogs requires a precision of 10 cm/s. One of the largest factors limiting Doppler precision is variation in the Point Spread Function (PSF) from observation to observation due to changes in the illumination of the slit and spectrograph optics. Thus, this stability has become a focus of current instrumentation work. Fiber optics have been used since the 1980s to couple telescopes to high-precision spectrographs, initially for simpler mechanical design and control. However, fiber optics are also naturally efficient scramblers. Scrambling refers to a fiber's ability to produce an output beam independent of input. Our research is focused on characterizing the scrambling properties of several types of fibers, including circular, square and octagonal fibers. By measuring the intensity distribution after the fiber as a function of input beam position, we can simulate guiding errors that occur at an observatory. Through this, we can determine which fibers produce the most uniform outputs for the severest guiding errors, improving the PSF and allowing sub-m/s precision. However, extensive testing of fibers of supposedly identical core diameter, length and shape from the same manufacturer has revealed the "personality" of individual fibers. Personality describes differing intensity patterns for supposedly duplicate fibers illuminated identically. Here, we present our results on scrambling characterization as a function of fiber type, while studying individual fiber personality.
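
    A common figure of merit for the measurement described here is the scrambling gain: the normalized input-spot displacement divided by the normalized shift of the output intensity centroid. The sketch below only shows the bookkeeping on synthetic output images; the gain definition is the usual one from the fiber-scrambling literature, and the numbers are invented, since the abstract does not quote a specific metric.

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (x, y) of a 2-D intensity image."""
    y, x = np.indices(image.shape)
    total = image.sum()
    return (x * image).sum() / total, (y * image).sum() / total

def scrambling_gain(d_in, fiber_diam, d_out, spot_diam):
    """SG = (input shift / fiber diameter) / (output centroid shift / output spot diameter)."""
    return (d_in / fiber_diam) / (d_out / spot_diam)

# Synthetic output images for two input-spot positions (illustrative only): a smooth
# spot whose centre moves slightly when the input beam is offset on the fiber face.
yy, xx = np.indices((64, 64))
img_centered = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2 * 12.0 ** 2))
img_offset   = np.exp(-((xx - 32.2) ** 2 + (yy - 32.0) ** 2) / (2 * 12.0 ** 2))

(cx0, cy0), (cx1, cy1) = centroid(img_centered), centroid(img_offset)
d_out = np.hypot(cx1 - cx0, cy1 - cy0)   # output centroid shift, pixels
sg = scrambling_gain(d_in=10.0, fiber_diam=100.0, d_out=d_out, spot_diam=64.0)
print(f"output centroid shift: {d_out:.3f} px, scrambling gain: {sg:.1f}")
```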

  5. Spike timing precision of neuronal circuits.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2018-04-17

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
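
    The synaptic-integration result quoted above (the integrator's output phase carries the variance of the sample average of its input phases) is easy to check numerically for independent inputs, where it reduces to sigma^2/N. A minimal sketch, assuming independent and identically distributed input phase jitter:

```python
import numpy as np

rng = np.random.default_rng(42)
n_inputs, n_trials, sigma = 20, 100_000, 1.0   # assumed jitter (std. dev.) per input, in ms

# Each trial: N noisy input spike phases; an idealized integrator responds at their mean
input_phases = rng.normal(0.0, sigma, size=(n_trials, n_inputs))
output_phase = input_phases.mean(axis=1)

print(f"input jitter variance : {input_phases.var():.4f}")
print(f"output jitter variance: {output_phase.var():.4f} "
      f"(theory: sigma^2 / N = {sigma ** 2 / n_inputs:.4f})")
```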

  6. Sensing technologies for precision irrigation

    CERN Document Server

    Ćulibrk, Dubravko; Minic, Vladan; Alonso Fernandez, Marta; Alvarez Osuna, Javier; Crnojevic, Vladimir

    2014-01-01

    This brief provides an overview of state-of-the-art sensing technologies relevant to the problem of precision irrigation, an emerging field within the domain of precision agriculture. Applications of wireless sensor networks, satellite data and geographic information systems in the domain are covered. This brief presents the basic concepts of the technologies and emphasizes the practical aspects that enable the implementation of intelligent irrigation systems. The authors target a broad audience interested in this theme and organize the content in five chapters, each concerned with a specific technology needed to address the problem of optimal crop irrigation. Professionals and researchers will find the text a thorough survey with practical applications.

  7. Precision measurement with atom interferometry

    International Nuclear Information System (INIS)

    Wang Jin

    2015-01-01

    The development of atom interferometry and its application in precision measurement are reviewed in this paper. The principle, features and implementation of atom interferometers are introduced, and the recent progress of precision measurement with atom interferometry, including determination of the gravitational constant and the fine-structure constant, measurement of gravity, gravity gradient and rotation, tests of the weak equivalence principle, proposals for gravitational wave detection, and measurement of the quadratic Zeeman shift, is reviewed in detail. Determination of the gravitational redshift, the new definition of the kilogram, and the measurement of weak forces with atom interferometry are also briefly introduced. (topical review)

  8. ELECTROWEAK PHYSICS AND PRECISION STUDIES

    International Nuclear Information System (INIS)

    MARCIANO, W.

    2005-01-01

    The utility of precision electroweak measurements for predicting the Standard Model Higgs mass via quantum loop effects is discussed. Current values of m_W, sin^2θ_W(m_Z) in the MS-bar scheme, and m_t imply a relatively light Higgs which is below the direct experimental bound but possibly consistent with Supersymmetry expectations. The existence of Supersymmetry is further suggested by a 2σ discrepancy between experiment and theory for the muon anomalous magnetic moment. Constraints from precision studies on other types of "New Physics" are also briefly described

  9. Universal precision sine bar attachment

    Science.gov (United States)

    Mann, Franklin D. (Inventor)

    1989-01-01

    This invention relates to an attachment for a sine bar which can be used to perform measurements during lathe operations or other types of machining operations. The attachment can be used for setting precision angles on vises, dividing heads, rotary tables and angle plates. It can also be used in the inspection of machined parts, when close tolerances are required, and in the layout of precision hardware. The novelty of the invention is believed to reside in a specific versatile sine bar attachment for measuring a variety of angles on a number of different types of equipment.

  10. STANFORD (SLAC): Precision electroweak result

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    Precision testing of the electroweak sector of the Standard Model has intensified with the recent publication of results from the SLD collaboration's 1993 run on the Stanford Linear Collider, SLC. Using a highly polarized electron beam colliding with an unpolarized positron beam, SLD physicists measured the left-right asymmetry at the Z boson resonance with dramatically improved accuracy over 1992

  11. Spin and precision electroweak physics

    Energy Technology Data Exchange (ETDEWEB)

    Marciano, W.J. [Brookhaven National Lab., Upton, NY (United States)

    1994-12-01

    A perspective on fundamental parameters and precision tests of the Standard Model is given. Weak neutral current reactions are discussed with emphasis on those processes involving (polarized) electrons. The role of electroweak radiative corrections in determining the top quark mass and probing for "new physics" is described.

  12. Spin and precision electroweak physics

    International Nuclear Information System (INIS)

    Marciano, W.J.

    1993-01-01

    A perspective on fundamental parameters and precision tests of the Standard Model is given. Weak neutral current reactions are discussed with emphasis on those processes involving (polarized) electrons. The role of electroweak radiative corrections in determining the top quark mass and probing for "new physics" is described

  13. Precision surveying system for PEP

    International Nuclear Information System (INIS)

    Gunn, J.; Lauritzen, T.; Sah, R.; Pellisier, P.F.

    1977-01-01

    A semi-automatic precision surveying system is being developed for PEP. Reference elevations for vertical alignment will be provided by a liquid level. The short range surveying will be accomplished using a Laser Surveying System featuring automatic data acquisition and analysis

  14. Precision medicine at the crossroads.

    Science.gov (United States)

    Olson, Maynard V

    2017-10-11

    There are bioethical, institutional, economic, legal, and cultural obstacles to creating the robust, precompetitive data resource that will be required to advance the vision of "precision medicine," the ability to use molecular data to target therapies to patients for whom they offer the most benefit at the least risk. Creation of such an "information commons" was the central recommendation of the 2011 report Toward Precision Medicine issued by a committee of the National Research Council of the USA (Committee on a Framework for Development of a New Taxonomy of Disease; National Research Council. Toward precision medicine: building a knowledge network for biomedical research and a new taxonomy of disease. 2011). In this commentary, I review the rationale for creating an information commons and the obstacles to doing so; then, I endorse a path forward based on the dynamic consent of research subjects interacting with researchers through trusted mediators. I assert that the advantages of the proposed system overwhelm alternative ways of handling data on the phenotypes, genotypes, and environmental exposures of individual humans; hence, I argue that its creation should be the central policy objective of early efforts to make precision medicine a reality.

  15. Proton gyromagnetic precision measurement system

    International Nuclear Information System (INIS)

    Zhu, Deming

    1991-01-01

    A computerized control and measurement system used in the proton gyromagnetic precision measurement is described. It adopts the CAMAC data acquisition equipment, using on-line control and analysis with the HP85 and PDP-11/60 computer systems. It also adopts the RSX11M operating system, and the control software is written in FORTRAN language

  16. Distributed Control Architectures for Precision Spacecraft Formations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — LaunchPoint Technologies, Inc. (LaunchPoint) proposes to develop synthesis methods and design architectures for distributed control systems in precision spacecraft...

  17. Estimating maneuvers for precise relative orbit determination using GPS

    Science.gov (United States)

    Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs

    2017-01-01

    Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. In order to generate consistent and precise orbit products, a strategy for maneuver handling is mandatory in order to avoid discontinuities or precision degradation before, after and during maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy which can be used, in addition, to improve the precision of maneuver estimates by drawing upon the use of differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.

  18. Precise Point Positioning with Partial Ambiguity Fixing.

    Science.gov (United States)

    Li, Pan; Zhang, Xiaohong

    2015-06-10

    Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which elevation and standard-deviation criteria are first used to remove the low-precision ambiguity estimates for AR. Subsequently, the success rate and ratio-test are simultaneously used in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities which can be fixed with high confidence. One can apply the proposed PAR method to try to achieve an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate that the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. In addition, with the PAR method the average fixing rate can be increased from 83.5% to 98.2% for static PPP and from 80.1% to 95.2% for kinematic PPP, respectively. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to a higher fixing rate.
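
    The selection loop described in the abstract, first discarding low-precision float ambiguities and then iteratively shrinking the candidate subset until both the success rate and the ratio test pass, can be written down compactly. The thresholds and the ordering criterion below are illustrative assumptions, and the success-rate and ratio-test routines are stub callables rather than calls into a real GNSS ambiguity-resolution library.

```python
from typing import Callable, Sequence

def partial_ambiguity_resolution(
    ambiguities: Sequence[int],          # candidate indices, pre-filtered by elevation/std. dev.
    precision: Callable[[int], float],   # std. dev. of each float ambiguity estimate
    success_rate: Callable[[list], float],
    ratio_test: Callable[[list], float],
    min_success: float = 0.999,
    min_ratio: float = 3.0,
    min_subset: int = 4,
):
    """Return the largest subset of ambiguities that can be fixed with high confidence, or None."""
    subset = sorted(ambiguities, key=precision)      # most precise float estimates first
    while len(subset) >= min_subset:
        if success_rate(subset) >= min_success and ratio_test(subset) >= min_ratio:
            return subset                            # fix this subset; the rest stay float
        subset.pop()                                 # drop the least precise remaining ambiguity
    return None                                      # fall back to the ambiguity-float solution
```

    In a real PPP filter the success_rate and ratio_test callables would come from the integer least-squares (for example LAMBDA-style) machinery; here they are placeholders so that only the control flow of the PAR iteration is shown.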

  19. Precise Documentation: The Key to Better Software

    Science.gov (United States)

    Parnas, David Lorge

    The prime cause of the sorry “state of the art” in software development is our failure to produce good design documentation. Poor documentation is the cause of many errors and reduces efficiency in every phase of a software product's development and use. Most software developers believe that “documentation” refers to a collection of wordy, unstructured, introductory descriptions, thousands of pages that nobody wanted to write and nobody trusts. In contrast, engineers in more traditional disciplines think of precise blueprints, circuit diagrams, and mathematical specifications of component properties. Software developers do not know how to produce precise documents for software. Software developers also think that documentation is something written after the software has been developed. In other fields of engineering, much of the documentation is written before and during the development. It represents forethought, not afterthought. Among the benefits of better documentation would be: easier reuse of old designs, better communication about requirements, more useful design reviews, easier integration of separately written modules, more effective code inspection, more effective testing, and more efficient corrections and improvements. This paper explains how to produce and use precise software documentation and illustrates the methods with several examples.

  20. Joint Estimation of Multiple Precision Matrices with Common Structures.

    Science.gov (United States)

    Lee, Wonyul; Liu, Yufeng

    Estimation of inverse covariance matrices, known as precision matrices, is important in various areas of statistical analysis. In this article, we consider estimation of multiple precision matrices sharing some common structures. In this setting, estimating each precision matrix separately can be suboptimal as it ignores potential common structures. This article proposes a new approach to parameterize each precision matrix as a sum of common and unique components and estimate multiple precision matrices in a constrained l1 minimization framework. We establish both estimation and selection consistency of the proposed estimator in the high dimensional setting. The proposed estimator achieves a faster convergence rate for the common structure in certain cases. Our numerical examples demonstrate that our new estimator can perform better than several existing methods in terms of the entropy loss and Frobenius loss. An application to a glioblastoma cancer data set reveals some interesting gene networks across multiple cancer subtypes.
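
    A schematic of the kind of decomposition and constrained l1 criterion described (notation purely illustrative; the paper's exact objective, constraints and tuning parameters may differ), writing each precision matrix as a common part plus a class-specific part, \Omega_k = \Omega_0 + \Delta_k:

      \min_{\Omega_0,\,\Delta_1,\dots,\Delta_K}\ \|\Omega_0\|_1 + \lambda \sum_{k=1}^{K} \|\Delta_k\|_1
      \quad \text{subject to} \quad \big\|\hat{\Sigma}_k(\Omega_0 + \Delta_k) - I\big\|_\infty \le \tau_k, \qquad k = 1,\dots,K,

    where \hat{\Sigma}_k is the sample covariance matrix of class k and \lambda, \tau_k are tuning constants; the estimate of the k-th precision matrix is then \hat{\Omega}_k = \hat{\Omega}_0 + \hat{\Delta}_k.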

  1. Precision and accuracy in radiotherapy

    International Nuclear Information System (INIS)

    Brenner, J.D.

    1989-01-01

    The required precision due to random errors in the delivery of a fractionated dose regime is considered. It is argued that suggestions that 1-3% precision is needed may be unnecessarily conservative. It is further suggested that random and systematic errors should not be combined with equal weight to yield an overall target uncertainty in dose delivery, systematic errors being of greater significance. The authors conclude that imprecise dose delivery and inaccurate dose delivery affect patient-cure results differently. Whereas, for example, a 10% inaccuracy in dose delivery would be quite catastrophic in the case considered here, a corresponding imprecision would have a much smaller effect on overall success rates. (author). 14 refs.; 2 figs

  2. Precision electroweak physics at LEP

    Energy Technology Data Exchange (ETDEWEB)

    Mannelli, M.

    1994-12-01

    Copious event statistics, a precise understanding of the LEP energy scale, and a favorable experimental situation at the Z0 resonance have allowed the LEP experiments both to provide dramatic confirmation of the Standard Model of strong and electroweak interactions and to place substantially improved constraints on the parameters of the model. The author concentrates on those measurements relevant to the electroweak sector. It will be seen that the precision of these measurements sensitively probes the structure of the Standard Model at the one-loop level, where the calculation of the observables measured at LEP is affected by the value chosen for the top quark mass. One finds that the LEP measurements are consistent with the Standard Model, but only if the mass of the top quark is measured to be within a restricted range of about 20 GeV.

  3. Precise object tracking under deformation

    International Nuclear Information System (INIS)

    Saad, M.H

    2010-01-01

    Precise object tracking is an essential issue in several serious applications such as robot vision, automated surveillance (civil and military), inspection, biomedical image analysis, video coding, motion segmentation, human-machine interface, visualization, medical imaging, traffic systems, satellite imaging etc. This framework focuses on precise object tracking under deformation such as scaling, rotation, noise, blurring and change of illumination. This research is an attempt to solve these serious problems in visual object tracking, by which the quality of the overall system will be improved. A three-dimensional (3D) geometrical model is developed to determine the current pose of an object and predict its future location based on an FIR model learned by OLS. This framework presents a robust ranging technique to track a visual target instead of the traditional expensive ranging sensors. The presented research work is applied to a real video stream and achieves high precision results.
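
    A minimal sketch of the kind of predictor mentioned above, an FIR model whose taps are learned by ordinary least squares; the one-dimensional trajectory, model order and data are hypothetical stand-ins for the tracked object coordinates:

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(200)
      pos = np.sin(0.05 * t) + 0.01 * rng.normal(size=t.size)   # toy 1-D trajectory

      order = 5                                                  # number of FIR taps
      # each row holds `order` past positions; the target is the next position
      X = np.column_stack([pos[i:len(pos) - order + i] for i in range(order)])
      y = pos[order:]
      taps, *_ = np.linalg.lstsq(X, y, rcond=None)               # OLS fit of the taps

      next_pos = pos[-order:] @ taps                              # one-step-ahead forecast
      print("forecast of next position:", round(float(next_pos), 3))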

  4. High precision 3D coordinates location technology for pellet

    International Nuclear Information System (INIS)

    Fan Yong; Zhang Jiacheng; Zhou Jingbin; Tang Jun; Xiao Decheng; Wang Chuanke; Dong Jianjun

    2010-01-01

    In the inertial confinement fusion (ICF) system, the pellet has traditionally been collimated manually, which is time-consuming and poorly automated. A new method based on binocular vision is proposed, which places the prospecting apparatus on the public diagnostic platform to reach the relevant engineering targets and uses a high precision two-dimensional calibration board. An iterative method is adopted to achieve 0.1 pixel corner extraction precision. Furthermore, SVD decomposition is used to remove singular corners, and an improved Zhang calibration method is applied to increase the camera calibration precision. Experiments indicate that the RMS of the three-dimensional coordinate measurement precision is 25 μm, and the maximum system RMS of the distance measurement is better than 100 μm, satisfying the system index requirement. (authors)
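
    The ingredients named in this record (sub-pixel corner refinement, Zhang-style calibration, two-view triangulation) map onto standard OpenCV calls; the sketch below is generic, with placeholder file names and board geometry rather than the authors' actual setup:

      import cv2
      import numpy as np

      pattern = (9, 6)                                    # inner corners of the board (placeholder)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_pts, img_pts = [], []
      for fname in ["calib_000.png", "calib_001.png"]:    # placeholder image files
          gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
          ok, corners = cv2.findChessboardCorners(gray, pattern)
          if not ok:
              continue
          corners = cv2.cornerSubPix(                     # iterative sub-pixel corner refinement
              gray, corners, (11, 11), (-1, -1),
              (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
          obj_pts.append(objp)
          img_pts.append(corners)

      # Zhang's method as implemented in OpenCV; rms is the reprojection error in pixels
      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_pts, img_pts, gray.shape[::-1], None, None)
      print("reprojection RMS [pixel]:", rms)

      # Given the projection matrices P1, P2 of two calibrated cameras and matched
      # pixel coordinates x1, x2 of the target, the 3D point follows from
      #   Xh = cv2.triangulatePoints(P1, P2, x1, x2);  X = (Xh[:3] / Xh[3]).ravel()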

  5. Fit to Electroweak Precision Data

    International Nuclear Information System (INIS)

    Erler, Jens

    2006-01-01

    A brief review of electroweak precision data from LEP, SLC, the Tevatron, and low energies is presented. The global fit to all data including the most recent results on the masses of the top quark and the W boson reinforces the preference for a relatively light Higgs boson. I will also give an outlook on future developments at the Tevatron Run II, CEBAF, the LHC, and the ILC

  6. Precise Object Tracking under Deformation

    International Nuclear Information System (INIS)

    Saad, M.H.

    2010-01-01

    Precise object tracking is an essential issue in several serious applications such as robot vision, automated surveillance (civil and military), inspection, biomedical image analysis, video coding, motion segmentation, human-machine interface, visualization, medical imaging, traffic systems, satellite imaging etc. This framework focuses on precise object tracking under deformation such as scaling, rotation, noise, blurring and change of illumination. This research is an attempt to solve these serious problems in visual object tracking, by which the quality of the overall system will be improved. A three-dimensional (3D) geometrical model is developed to determine the current pose of an object and predict its future location based on an FIR model learned by OLS. This framework presents a robust ranging technique to track a visual target instead of the traditional expensive ranging sensors. The presented research work is applied to a real video stream and achieves high precision results.

  7. Precision measurements of electroweak parameters

    CERN Document Server

    Savin, Alexander

    2017-01-01

    A set of selected precise measurements of the SM parameters from the LHC experiments is discussed. Results on the W-mass measurement and on the forward-backward asymmetry in the production of Drell-Yan events in both the dielectron and dimuon decay channels are presented, together with results on the effective mixing angle measurements. Electroweak production of vector bosons in association with two jets is discussed.

  8. Precision titration mini-calorimeter

    International Nuclear Information System (INIS)

    Ensor, D.; Kullberg, L.; Choppin, G.

    1977-01-01

    The design and testing of a small-volume calorimeter of high precision and simple construction are described. The calorimeter operates with solution sample volumes in the range of 3 to 5 ml. The results of experiments on the entropy changes for two standard reactions, (1) the reaction of tris(hydroxymethyl)aminomethane with hydrochloric acid and (2) the reaction between mercury(II) and bromide ions, are reported to confirm the accuracy and overall performance of the calorimeter

  9. Knowledge of Precision Farming Beneficiaries

    Directory of Open Access Journals (Sweden)

    A.V. Greena

    2016-05-01

    Full Text Available Precision Farming is one of the many advanced farming practices that make production more efficient through better resource management and reduced wastage. TN-IAMWARM is a World Bank funded project that aims to improve farm productivity and income through better water management. The present study was carried out in the Kambainallur sub-basin of Dharmapuri district with 120 TN-IAMWARM beneficiaries as respondents. The results indicated that more than three fourths (76.67%) of the respondents had a high level of knowledge of precision farming technologies, which was made possible by the implementation of the TN-IAMWARM project. The study further revealed that educational status, occupational status and exposure to agricultural messages made a positive and significant contribution to the knowledge level of the respondents at the 0.01 level of probability, whereas experience in precision farming and social participation made a positive and significant contribution at the 0.05 level of probability.

  10. A precise technique for manufacturing correction coil

    International Nuclear Information System (INIS)

    Schieber, L.

    1992-01-01

    An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed, while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire® technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC

  11. Environmental Testing for Precision Parts and Instruments

    International Nuclear Information System (INIS)

    Choi, Man Yong; Park, Jeong Hak; Yun, Kyu Tek

    2001-01-01

    Precision parts and instruments are tested to evaluate their performance during the development process and at the product stage, in order to prevent potential defects due to design failures. In this paper, environmental test technology, which is the basis of reliability analysis, is introduced with examples of test criteria and test methods for products such as encoders and traffic signal controllers, and for measuring instruments. As the importance of environmental test technology has recently been recognised, it is proposed that the training of test technicians and the technologies of jig design and failure analysis are essential

  12. The Lanczos and Conjugate Gradient Algorithms in Finite Precision Arithmetic

    Czech Academy of Sciences Publication Activity Database

    Meurant, G.; Strakoš, Zdeněk

    2006-01-01

    Vol. 15 (2006), pp. 471-542. ISSN 0962-4929. R&D Projects: GA AV ČR 1ET400300415. Institutional research plan: CEZ:AV0Z10300504. Keywords: Lanczos method * conjugate gradient method * finite precision arithmetic * numerical stability * iterative methods. Subject RIV: BA - General Mathematics

  13. Constraining supersymmetry with precision data

    International Nuclear Information System (INIS)

    Pierce, D.M.; Erler, J.

    1997-01-01

    We discuss the results of a global fit to precision data in supersymmetric models. We consider both gravity- and gauge-mediated models. As the superpartner spectrum becomes light, the global fit to the data typically results in larger values of χ². We indicate the regions of parameter space which are excluded by the data. We discuss the additional effect of the B(B→X_s γ) measurement. Our analysis excludes chargino masses below M_Z in the simplest gauge-mediated model with μ>0, with stronger constraints for larger values of tanβ. copyright 1997 American Institute of Physics

  14. High precision Standard Model Physics

    International Nuclear Information System (INIS)

    Magnin, J.

    2009-01-01

    The main goal of the LHCb experiment, one of the four large experiments of the Large Hadron Collider, is to try to answer the question of why Nature prefers matter over antimatter. This will be done by studying the decay of b quarks and their antimatter partners, b-bar, which will be produced by the billions in 14 TeV p-p collisions at the LHC. In addition, as 'beauty' particles mainly decay into charm particles, an interesting program of charm physics will be carried out, allowing quantities such as the D0-D0bar mixing to be measured with incredible precision.

  15. Electroweak precision measurements in CMS

    CERN Document Server

    Dordevic, Milos

    2017-01-01

    An overview of recent results on electroweak precision measurements from the CMS Collaboration is presented. Studies of the weak boson differential transverse momentum spectra, Z boson angular coefficients, forward-backward asymmetry of Drell-Yan lepton pairs and charge asymmetry of W boson production are made in comparison to the state-of-the-art Monte Carlo generators and theoretical predictions. The results show a good agreement with the Standard Model. As a proof of principle for future W mass measurements, a W-like analysis of the Z boson mass is performed.

  16. Precision proton spectrometers for CMS

    CERN Document Server

    Albrow, Michael

    2013-01-01

    We plan to add high precision tracking- and timing-detectors at z = +/- 240 m to CMS to study exclusive processes p + p → p + X + p at high luminosity. This enables the LHC to be used as a tagged photon-photon collider, with X = l+l- and W+W-, and as a "tagged" gluon-gluon collider (with a spectator gluon) for QCD studies with jets. A second stage at z = 240 m would allow observations of exclusive Higgs boson production.

  17. Precise Analysis of String Expressions

    DEFF Research Database (Denmark)

    Christensen, Aske Simon; Møller, Anders; Schwartzbach, Michael Ignatieff

    2003-01-01

    We perform static analysis of Java programs to answer a simple question: which values may occur as results of string expressions? The answers are summarized for each expression by a regular language that is guaranteed to contain all possible values. We present several applications of this analysis, including statically checking the syntax of dynamically generated expressions, such as SQL queries. Our analysis constructs flow graphs from class files and generates a context-free grammar with a nonterminal for each string expression. The language of this grammar is then widened into a regular language ... are automatically produced. We present extensive benchmarks demonstrating that the analysis is efficient and produces results of useful precision.

  18. The Many Faces of Precision

    Directory of Open Access Journals (Sweden)

    Andy eClark

    2013-05-01

    Full Text Available An appreciation of the many roles of ‘precision-weighting’ (upping the gain on select populations of prediction error units) opens the door to better accounts of planning and ‘offline simulation’, makes suggestive contact with large bodies of work on embodied and situated cognition, and offers new perspectives on the ‘active brain’. Combined with the complex affordances of language and culture, and operating against the essential backdrop of a variety of more biologically basic ploys and stratagems, the result is a maximally context-sensitive, restless, constantly self-reconfiguring architecture.

  19. Thin films for precision optics

    International Nuclear Information System (INIS)

    Araujo, J.F.; Maurici, N.; Castro, J.C. de

    1983-01-01

    The technology of producing dielectric and/or metallic thin films for high precision optical components is discussed. Computer programs were developed in order to calculate and register, graphically, reflectance and transmittance spectra of multi-layer films. The technology of vacuum evaporation of several materials was implemented in our thin-films laboratory; various films for optics were then developed. The possibility of first calculating film characteristics and then producing the film is of great advantage, since it reduces the time required to produce a new type of film and also reduces the cost of the project. (C.L.B.) [pt

  20. Usefulness of Models in Precision Nutrient Management

    DEFF Research Database (Denmark)

    Plauborg, Finn; Manevski, Kiril; Zhenjiang, Zhou

    Modern agriculture increasingly applies new methods and technologies to increase production and nutrient use efficiencies and at the same time reduce leaching of nutrients and greenhouse gas emissions. GPS based ECa-measurement equipment, ER or EM instrumentations, are used to spatially characterise ... and mineral composition. Mapping of crop status and the spatial-temporal variability within fields with red-infrared reflection is used to support decisions on split fertilisation and more precise dosing. The interpretation and use of these various data in precise nutrient management is not straightforward ... of mineralisation. However, whether the crop would benefit from this depended to a large extent on soil hydraulic conductivity within the range of natural variation when testing the model. In addition, the initialisation of the distribution of soil total carbon and nitrogen into conceptual model compartments...

  1. Laser precision microfabrication in Japan

    Science.gov (United States)

    Miyamoto, Isamu; Ooie, Toshihiko; Takeno, Shozui

    2000-11-01

    Electronic devices such as handy phones and micro computers have been rapidly expanding their market in recent years due to their enhanced performance, downsizing and cost reduction. This has been realized by innovation in the precision micro-fabrication technology of semiconductors and printed wiring circuit boards (PWB), where laser technologies such as lithography, drilling, trimming, welding and soldering play an important role. In photolithography, for instance, KrF excimer lasers with a resolution of 0.18 micrometers have been used in production instead of mercury lamps. Laser drilling of PWBs has increased to over 1000 holes per second, and approximately 800 laser drilling systems for PWBs are expected to be delivered to the world market this year; most of these laser processing systems are manufactured in Japan. Trends in laser micro-fabrication in Japanese industry are described, along with recent R&D topics, government supported projects and future tasks of industrial laser precision micro-fabrication, on the basis of a survey conducted by the Japan Laser Processing Society.

  2. Precision experiments in electroweak interactions

    International Nuclear Information System (INIS)

    Swartz, M.L.

    1990-03-01

    The electroweak theory of Glashow, Weinberg, and Salam (GWS) has become one of the twin pillars upon which our understanding of all particle physics phenomena rests. It is a brilliant achievement that qualitatively and quantitatively describes all of the vast quantity of experimental data that have been accumulated over some forty years. Note that the word quantitatively must be qualified. The low energy limiting cases of the GWS theory, Quantum Electrodynamics and the V-A Theory of Weak Interactions, have withstood rigorous testing. The high energy synthesis of these ideas, the GWS theory, has not yet been subjected to comparably precise scrutiny. The recent operation of a new generation of proton-antiproton (p p-bar) and electron-positron (e+e-) colliders has made it possible to produce and study large samples of the electroweak gauge bosons W± and Z0. We expect that these facilities will enable very precise tests of the GWS theory to be performed in the near future. In keeping with the theme of this Institute, Physics at the 100 GeV Mass Scale, these lectures will explore the current status and the near-future prospects of these experiments

  3. Laser fusion and precision engineering

    International Nuclear Information System (INIS)

    Nakai, Sadao

    1989-01-01

    The development of laser nuclear fusion energy, aimed at attaining energy self-sufficiency for Japan and establishing the nation's future perspective, rests on a wide range of high-level science and technology. Its promotion is therefore expected to act as a powerful driver for the development of creative science and technology, which is particularly necessary in Japan. Research on laser nuclear fusion is advancing steadily in the elucidation of the physics of pellet implosion, its basic concept, and of compressed plasma parameters. In September 1986, neutron generation of 10^13 was achieved, and in October 1988, high density compression to 600 times solid density was achieved. Based on these results, laser nuclear fusion research is now in a position to pursue the attainment of ignition conditions and the realization of break-even. Optical components, high power laser technology, fuel pellet production, high resolution measurement, the simulation of implosion using a supercomputer and so on are closely related to precision engineering. In this report, the mechanism of laser nuclear fusion, the present status of its research, and the related basic technologies and precision engineering are described. (K.I.)

  4. Validating precision--how many measurements do we need?

    Science.gov (United States)

    ÅSberg, Arne; Solem, Kristine Bodal; Mikkelsen, Gustav

    2015-10-01

    A quantitative analytical method should be sufficiently precise, i.e. the imprecision measured as a standard deviation should be less than the numerical definition of the acceptable standard deviation. We propose that the entire 90% confidence interval for the true standard deviation shall lie below the numerical definition of the acceptable standard deviation in order to assure that the analytical method is sufficiently precise. We also present power function curves to ease the decision on the number of measurements to make. Computer simulation was used to calculate the probability that the upper limit of the 90% confidence interval for the true standard deviation was equal to or exceeded the acceptable standard deviation. Power function curves were constructed for different scenarios. The probability of failure to assure that the method is sufficiently precise increases with decreasing number of measurements and with increasing standard deviation when the true standard deviation is well below the acceptable standard deviation. For instance, the probability of failure is 42% for a precision experiment of 40 repeated measurements in one analytical run and 7% for 100 repeated measurements, when the true standard deviation is 80% of the acceptable standard deviation. Compared to the CLSI guidelines, validating precision according to the proposed principle is more reliable, but demands considerably more measurements. Using power function curves may help when planning studies to validate precision.
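
    A small Monte Carlo sketch of the proposed criterion (the entire two-sided 90% confidence interval for the standard deviation must lie below the acceptable value, here normalised to 1.0); with the standard chi-square interval assumed, it reproduces failure probabilities close to the 42% and 7% quoted above:

      import numpy as np
      from scipy import stats

      def p_fail(n, true_sd_ratio=0.8, conf=0.90, n_sim=100_000, seed=1):
          """Probability that the upper limit of the two-sided CI for the SD
          reaches the acceptable SD, i.e. that the experiment fails the criterion."""
          rng = np.random.default_rng(seed)
          s2 = rng.normal(0.0, true_sd_ratio, size=(n_sim, n)).var(axis=1, ddof=1)
          chi2_low = stats.chi2.ppf((1.0 - conf) / 2.0, n - 1)   # lower chi-square quantile
          upper_sd = np.sqrt((n - 1) * s2 / chi2_low)
          return float(np.mean(upper_sd >= 1.0))

      for n in (40, 100):
          print(n, round(p_fail(n), 2))   # roughly 0.42 and 0.07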

  5. The Precision Field Lysimeter Concept

    Science.gov (United States)

    Fank, J.

    2009-04-01

    The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria, such as the type of soil block used (monolithic or reconstructed), the drainage method (by gravity, by vacuum, or with a maintained water table), or whether they are weighing or non-weighing lysimeters. In 2004, experimental investigations were set up to assess the impact of different farming systems on groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well known shortcomings of lysimeter investigations: the lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m2 surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracer experiments were carried out to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters. The results show that such effects can be neglected. Precisely weighing load cells are used to constantly determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent

  6. Precision pharmacology for Alzheimer's disease.

    Science.gov (United States)

    Hampel, Harald; Vergallo, Andrea; Aguilar, Lisi Flores; Benda, Norbert; Broich, Karl; Cuello, A Claudio; Cummings, Jeffrey; Dubois, Bruno; Federoff, Howard J; Fiandaca, Massimo; Genthon, Remy; Haberkamp, Marion; Karran, Eric; Mapstone, Mark; Perry, George; Schneider, Lon S; Welikovitch, Lindsay A; Woodcock, Janet; Baldacci, Filippo; Lista, Simone

    2018-04-01

    The complex multifactorial nature of polygenic Alzheimer's disease (AD) presents significant challenges for drug development. AD pathophysiology is progressing in a non-linear dynamic fashion across multiple systems levels - from molecules to organ systems - and through adaptation, to compensation, and decompensation to systems failure. Adaptation and compensation maintain homeostasis: a dynamic equilibrium resulting from the dynamic non-linear interaction between genome, epigenome, and environment. An individual vulnerability to stressors exists on the basis of individual triggers, drivers, and thresholds accounting for the initiation and failure of adaptive and compensatory responses. Consequently, the distinct pattern of AD pathophysiology in space and time must be investigated on the basis of the individual biological makeup. This requires the implementation of systems biology and neurophysiology to facilitate Precision Medicine (PM) and Precision Pharmacology (PP). The regulation of several processes at multiple levels of complexity from gene expression to cellular cycle to tissue repair and system-wide network activation has different time delays (temporal scale) according to the affected systems (spatial scale). The initial failure might originate and occur at every level potentially affecting the whole dynamic interrelated systems within an organism. Unraveling the spatial and temporal dynamics of non-linear pathophysiological mechanisms across the continuum of hierarchical self-organized systems levels and from systems homeostasis to systems failure is key to understand AD. Measuring and, possibly, controlling space- and time-scaled adaptive and compensatory responses occurring during AD will represent a crucial step to achieve the capacity to substantially modify the disease course and progression at the best suitable timepoints, thus counteracting disrupting critical pathophysiological inputs. This approach will provide the conceptual basis for effective

  7. High Precision GNSS Guidance for Field Mobile Robots

    Directory of Open Access Journals (Sweden)

    Ladislav Jurišica

    2012-11-01

    Full Text Available In this paper, we discuss GNSS (Global Navigation Satellite System) guidance for field mobile robots. Several GNSS systems and receivers, as well as multiple measurement methods and principles of GNSS systems, are examined. We focus mainly on sources of errors and investigate diverse approaches for precise measurement and effective use of GNSS systems for real-time robot localization. The main body of the article compares two GNSS receivers and their measurement methods. We design, implement and evaluate several mathematical methods for precise robot localization.

  8. GPS Precision Timing at CERN

    CERN Document Server

    Beetham, C G

    1999-01-01

    For the past decade, the Global Positioning System (GPS) has been used to provide precise time, frequency and position co-ordinates world-wide. Recently, equipment has become available specialising in providing extremely accurate timing information, referenced to Universal Time Co-ordinates (UTC). This feature has been used at CERN to provide time of day information for systems that have been installed in the Proton Synchrotron (PS), Super Proton Synchrotron (SPS) and the Large Electron Positron (LEP) machines. The different systems are described as well as the planned developments, particularly with respect to optical transmission and the Inter-Range Instrumentation Group IRIG-B standard, for future use in the Large Hadron Collider (LHC).

  9. Longitudinal interfacility precision in single-energy quantitative CT

    International Nuclear Information System (INIS)

    Morin, R.L.; Gray, J.E.; Wahner, H.W.; Weekes, R.G.

    1987-01-01

    The authors investigated the precision of single-energy quantitative CT measurements between two facilities over 3 months. An anthropomorphic phantom with calcium hydroxyapatite inserts (60, 100, and 160 mg/cc) was used with the Cann-Gennant method to measure bone mineral density. The same model CT scanner, anthropomorphic phantom, quantitative CT standard and analysis package were utilized at each facility. Acquisition and analysis techniques were identical to those used in patient studies. At one facility, 28 measurements yielded an average precision of 6.1% (5.0%-8.5%). The average precision for 39 measurements at the other facility was 4.3% (3.2%-8.1%). Successive scans with phantom repositioning between scanning yielded an average precision of about 3% (1%-4% without repositioning). Despite differences in personnel, scanners, standards, and phantoms, the variation between facilities was about 2%, which was within the intrafacility variation of about 5% at each location
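
    For reference, the precision figures quoted above are presumably coefficients of variation over repeated scans (the record does not spell out the formula); a minimal computation of that statistic, with made-up BMD readings, would be:

      import numpy as np

      readings = np.array([101.3, 99.2, 104.8, 97.5, 102.1])       # hypothetical BMD values, mg/cc
      cv_percent = 100.0 * readings.std(ddof=1) / readings.mean()  # precision as %CV
      print(f"precision (%CV): {cv_percent:.1f}")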

  10. Precision medicine for cancer with next-generation functional diagnostics.

    Science.gov (United States)

    Friedman, Adam A; Letai, Anthony; Fisher, David E; Flaherty, Keith T

    2015-12-01

    Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.

  11. Contacting nanowires and nanotubes with atomic precision for electronic transport

    KAUST Repository

    Qin, Shengyong; Hellstrom, Sondra; Bao, Zhenan; Boyanov, Boyan; Li, An-Ping

    2012-01-01

    Making contacts to nanostructures with atomic precision is an important process in the bottom-up fabrication and characterization of electronic nanodevices. Existing contacting techniques use top-down lithography and chemical etching, but lack atomic precision and introduce the possibility of contamination. Here, we report that a field-induced emission process can be used to make local contacts onto individual nanowires and nanotubes with atomic spatial precision. The gold nano-islands are deposited onto nanostructures precisely by using a scanning tunneling microscope tip, which provides a clean and controllable method to ensure both electrically conductive and mechanically reliable contacts. To demonstrate the wide applicability of the technique, nano-contacts are fabricated on silicide atomic wires, carbon nanotubes, and copper nanowires. The electrical transport measurements are performed in situ by utilizing the nanocontacts to bridge the nanostructures to the transport probes. © 2012 American Institute of Physics.

  12. French Meteor Network for High Precision Orbits of Meteoroids

    Science.gov (United States)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available from the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with the theoretical evolution model. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits of these meteoroids. The ideal spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and the temporal resolution at the same time and computational problems in finding the meteor position, are illustrated.

  13. Applications of an automated stem measurer for precision forestry

    Science.gov (United States)

    N. Clark

    2001-01-01

    Accurate stem measurements are required for the determination of many silvicultural prescriptions, i.e., decisions about what to do with a stand of trees. This need would only be amplified in a precision forestry context. Many methods have been proposed as optimal ways to evaluate stems for a variety of characteristics. These methods usually involve the acquisition of total...

  14. Precision and reproducibility in AMS radiocarbon measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Hotchkis, M A; Fink, D; Hua, Q; Jacobsen, G E; Lawson, E M; Smith, A M; Tuniz, C [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1997-12-31

    Accelerator Mass Spectrometry (AMS) is a technique by which rare radioisotopes such as ¹⁴C can be measured at environmental levels with high efficiency. Instead of detecting radioactivity, which is very weak for long-lived environmental radioisotopes, atoms are counted directly. The sample is placed in an ion source, from which a negative ion beam of the atoms of interest is extracted, mass analysed, and injected into a tandem accelerator. After stripping to positive charge states in the accelerator HV terminal, the ions are further accelerated, analysed with magnetic and electrostatic devices and counted in a detector. An isotopic ratio is derived from the number of radioisotope atoms counted in a given time and the beam current of a stable isotope of the same element, measured after the accelerator. For radiocarbon, ¹⁴C/¹³C ratios are usually measured, and the ratio of an unknown sample is compared to that of a standard. The achievable precision for such ratio measurements is limited primarily by ¹⁴C counting statistics and also by a variety of factors related to accelerator and ion source stability. At the ANTARES AMS facility at Lucas Heights Research Laboratories we are currently able to measure ¹⁴C with 0.5% precision. In the two years since becoming operational, more than 1000 ¹⁴C samples have been measured. Recent improvements in precision for ¹⁴C have been achieved with the commissioning of a 59 sample ion source. The measurement system, from sample changing to data acquisition, is under common computer control. These developments have allowed a new regime of automated multi-sample processing which has impacted both on the system throughput and the measurement precision. We have developed data evaluation methods at ANTARES which cross-check the self-consistency of the statistical analysis of our data. Rigorous data evaluation is invaluable in assessing the true reproducibility of the measurement system and aids in
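
    The counting-statistics floor mentioned in the record can be made concrete with a one-line estimate: the relative precision contributed by counting N atoms of ¹⁴C scales as 1/sqrt(N) (instrumental error sources add to this), so 0.5% requires on the order of 40,000 counts:

      import math

      for n_counts in (10_000, 40_000, 160_000):
          print(n_counts, f"{100.0 / math.sqrt(n_counts):.2f} %")   # 1/sqrt(N) in percent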

  15. Precision and reproducibility in AMS radiocarbon measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Hotchkis, M.A.; Fink, D.; Hua, Q.; Jacobsen, G.E.; Lawson, E. M.; Smith, A.M.; Tuniz, C. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1996-12-31

    Accelerator Mass Spectrometry (AMS) is a technique by which rare radioisotopes such as ¹⁴C can be measured at environmental levels with high efficiency. Instead of detecting radioactivity, which is very weak for long-lived environmental radioisotopes, atoms are counted directly. The sample is placed in an ion source, from which a negative ion beam of the atoms of interest is extracted, mass analysed, and injected into a tandem accelerator. After stripping to positive charge states in the accelerator HV terminal, the ions are further accelerated, analysed with magnetic and electrostatic devices and counted in a detector. An isotopic ratio is derived from the number of radioisotope atoms counted in a given time and the beam current of a stable isotope of the same element, measured after the accelerator. For radiocarbon, ¹⁴C/¹³C ratios are usually measured, and the ratio of an unknown sample is compared to that of a standard. The achievable precision for such ratio measurements is limited primarily by ¹⁴C counting statistics and also by a variety of factors related to accelerator and ion source stability. At the ANTARES AMS facility at Lucas Heights Research Laboratories we are currently able to measure ¹⁴C with 0.5% precision. In the two years since becoming operational, more than 1000 ¹⁴C samples have been measured. Recent improvements in precision for ¹⁴C have been achieved with the commissioning of a 59 sample ion source. The measurement system, from sample changing to data acquisition, is under common computer control. These developments have allowed a new regime of automated multi-sample processing which has impacted both on the system throughput and the measurement precision. We have developed data evaluation methods at ANTARES which cross-check the self-consistency of the statistical analysis of our data. Rigorous data evaluation is invaluable in assessing the true reproducibility of the measurement system and aids in

  16. Development of sensor guided precision sprayers

    NARCIS (Netherlands)

    Nieuwenhuizen, A.T.; Zande, van de J.C.

    2013-01-01

    Sensor guided precision sprayers were developed to automate the spray process with a focus on emission reduction and identical or increased efficacy, with the precision agriculture concept in mind. Within the project “Innovations2” sensor guided precision sprayers were introduced to leek,

  17. Glass ceramic ZERODUR enabling nanometer precision

    Science.gov (United States)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees the manufacturing of devices with single-digit nanometer critical dimensions, calling for nanometer positioning accuracy and hence sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), with the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® in order to fulfil the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® Lithography Roadmap on CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented which predict the long term CTE behavior of ZERODUR® components, in order to optimize the dimensional stability of precision positioning devices. CTE data of several low thermal expansion materials are compared regarding their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfils the tight CTE tolerance of +/- 10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.
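
    To put the quoted tolerance in perspective, a rough back-of-the-envelope figure (values illustrative): a CTE uncertainty of 10 ppb/K on a 1 m component translates to 10 nm of unmodelled length change per kelvin of temperature excursion, which is exactly the scale of the positioning budgets discussed above:

      cte_tol = 10e-9      # CTE tolerance, 1/K (+/- 10 ppb/K)
      length = 1.0         # component length, m (illustrative)
      delta_t = 1.0        # temperature excursion, K (illustrative)
      print(f"{cte_tol * length * delta_t * 1e9:.1f} nm length uncertainty")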

  18. Ultra-wideband ranging precision and accuracy

    International Nuclear Information System (INIS)

    MacGougan, Glenn; O'Keefe, Kyle; Klukas, Richard

    2009-01-01

    This paper provides an overview of ultra-wideband (UWB) in the context of ranging applications and assesses the precision and accuracy of UWB ranging from both a theoretical perspective and a practical perspective using real data. The paper begins with a brief history of UWB technology and the most current definition of what constitutes an UWB signal. The potential precision of UWB ranging is assessed using Cramer–Rao lower bound analysis. UWB ranging methods are described and potential error sources are discussed. Two types of commercially available UWB ranging radios are introduced which are used in testing. Actual ranging accuracy is assessed from line-of-sight testing under benign signal conditions by comparison to high-accuracy electronic distance measurements and to ranges derived from GPS real-time kinematic positioning. Range measurements obtained in outdoor testing with line-of-sight obstructions and strong reflection sources are compared to ranges derived from classically surveyed positions. The paper concludes with a discussion of the potential applications for UWB ranging
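
    The Cramer-Rao bound referred to above has a compact standard form for time-of-arrival estimation, var(tau) >= 1/(8*pi^2*SNR*beta^2), with beta the effective (RMS) signal bandwidth; a quick evaluation with illustrative numbers (not the paper's radios) shows the centimetre-level potential precision of UWB ranging:

      import math

      c = 299_792_458.0                       # m/s
      beta = 500e6                            # effective bandwidth, Hz (illustrative)
      for snr_db in (0, 10, 20):
          snr = 10 ** (snr_db / 10.0)
          sigma_tau = 1.0 / (2.0 * math.pi * beta * math.sqrt(2.0 * snr))
          print(f"{snr_db} dB SNR -> range std {100.0 * c * sigma_tau:.2f} cm")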

  19. High precision timing in a FLASH

    Energy Technology Data Exchange (ETDEWEB)

    Hoek, Matthias; Cardinali, Matteo; Dickescheid, Michael; Schlimme, Soeren; Sfienti, Concettina; Spruck, Bjoern; Thiel, Michaela [Institut fuer Kernphysik, Johannes Gutenberg-Universitaet Mainz (Germany)

    2016-07-01

    A segmented highly precise start counter (FLASH) was designed and constructed at the Institute for Nuclear Physics in Mainz. Besides determining a precise reference time, a Time-of-Flight measurement can be performed with two identical FLASH units. Thus, particle identification can be provided for mixed hadron beam environments. The detector design is based on the detection of Cherenkov light produced in fused silica radiator bars with fast multi-anode MCP-PMTs. The segmentation of the radiator improves the timing resolution while allowing a coarse position resolution along one direction. Both the arrival time and the Time-over-Threshold are determined by the readout electronics, which enables walk correction of the arrival time. The performance of two FLASH units was investigated in test experiments at the Mainz Microtron (MAMI) using an electron beam with an energy of 855 MeV and at CERN's PS T9 beam line with a mixed hadron beam with momenta between 3 and 8 GeV/c. Effective time-walk correction methods based on Time-over-Threshold were developed for the data analysis. The achieved Time-of-Flight resolution after applying all corrections was found to be 70 ps. Furthermore, the PID and position resolution capabilities are discussed in this contribution.
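
    A toy illustration of Time-over-Threshold based walk correction of the kind described (synthetic data and a simple 1/sqrt(ToT) walk model, not the FLASH calibration itself): fit the measured arrival time against a function of ToT, then subtract the fitted walk from each hit:

      import numpy as np

      rng = np.random.default_rng(2)
      tot = rng.uniform(5.0, 40.0, 2000)                                    # ns, Time-over-Threshold
      t_meas = 12.0 + 1.8 / np.sqrt(tot) + rng.normal(0.0, 0.05, tot.size)  # ns, measured arrival time

      slope, offset = np.polyfit(1.0 / np.sqrt(tot), t_meas, 1)   # fit t = a/sqrt(ToT) + b
      t_corr = t_meas - slope / np.sqrt(tot)                      # walk-corrected arrival times
      print("sigma before:", t_meas.std().round(3), "ns, after:", t_corr.std().round(3), "ns")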

  20. Precision of hyaline cartilage thickness measurements

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, K.; Buckwalter, K.; Helvie, M.; Niklason, L.; Martel, W. (Univ. of Michigan Hospitals, Ann Arbor, MI (United States). Dept. of Radiology)

    1992-05-01

    Measurement of cartilage thickness in vivo is an important indicator of the status of a joint as the various degenerative and inflammatory arthritides directly affect the condition of the cartilage. In order to assess the precision of thickness measurements of hyaline articular cartilage, we undertook a pilot study using MR imaging, plain radiography, and ultrasonography (US). We measured the cartilage of the hip and knee joints in 10 persons (4 healthy volunteers and 6 patients). The joints in each patient were examined on two separate occasions using each modality. In the hip as well as the knee joints, the most precise measuring method was plain film radiography. For radiographs of the knees obtained in the standing position, the coefficient of variation was 6.5%; in the hips this figure was 6.34%. US of the knees and MR imaging of the hips were the second best modalities in the measurement of cartilage thickness. In addition, MR imaging enabled the most complete visualization of the joint cartilage. (orig.).

  1. Precision measurements with atom interferometry

    Science.gov (United States)

    Schubert, Christian; Abend, Sven; Schlippert, Dennis; Ertmer, Wolfgang; Rasel, Ernst M.

    2017-04-01

    Interferometry with matter waves enables precise measurements of rotations, accelerations, and differential accelerations [1-5]. This is exploited for determining fundamental constants [2], in fundamental science as e.g. testing the universality of free fall [3], and is applied for gravimetry [4], and gravity gradiometry [2,5]. At the Institut für Quantenoptik in Hannover, different approaches are pursued. A large scale device is designed and currently being set up to investigate the gain in precision for gravimetry, gradiometry, and fundamental tests on large baselines [6]. For field applications, a compact and transportable device is being developed. Its key feature is an atom chip source providing a collimated high flux of atoms which is expected to mitigate systematic uncertainties [7,8]. The atom chip technology and miniaturization benefits from microgravity experiments in the drop tower in Bremen and sounding rocket experiments [8,9] which act as pathfinders for space borne operation [10]. This contribution will report about our recent results. The presented work is supported by the CRC 1227 DQ-mat, the CRC 1128 geo-Q, the RTG 1729, the QUEST-LFS, and by the German Space Agency (DLR) with funds provided by the Federal Ministry of Economic Affairs and Energy (BMWi) due to an enactment of the German Bundestag under Grant No. DLR 50WM1552-1557. [1] P. Berg et al., Phys. Rev. Lett., 114, 063002, 2015; I. Dutta et al., Phys. Rev. Lett., 116, 183003, 2016. [2] J. B. Fixler et al., Science 315, 74 (2007); G. Rosi et al., Nature 510, 518, 2014. [3] D. Schlippert et al., Phys. Rev. Lett., 112, 203002, 2014. [4] A. Peters et al., Nature 400, 849, 1999; A. Louchet-Chauvet et al., New J. Phys. 13, 065026, 2011; C. Freier et al., J. of Phys.: Conf. Series 723, 012050, 2016. [5] J. M. McGuirk et al., Phys. Rev. A 65, 033608, 2002; P. Asenbaum et al., arXiv:1610.03832. [6] J. Hartwig et al., New J. Phys. 17, 035011, 2015. [7] H. Ahlers et al., Phys. Rev. Lett. 116, 173601

  2. Precision is in their nature

    CERN Multimedia

    Antonella Del Rosso

    2014-01-01

    There are more than 100 of them in the LHC ring and they have a total of about 400 degrees of freedom. Each one has 4 motors and the newest ones have their own beam-monitoring pickups. Their jaws constrain the relativistic, high-energy particles to a very small transverse area and protect the machine aperture. We are speaking about the LHC collimators, those ultra-precise instruments that leave escaping unstable particles no chance.   The internal structure of a new LHC collimator featuring (see red arrow) one of the beam position monitor's pickups. Designed at CERN but mostly produced by very specialised manufacturers in Europe, the LHC collimators are among the most complex elements of the accelerator. Their job is to control and safely dispose of the halo particles that are produced by unavoidable beam losses from the circulating beam core. “The LHC collimation system has been designed to ensure that beam losses in superconducting magnets remain below quench limits in al...

  3. The Age of Precision Cosmology

    Science.gov (United States)

    Chuss, David T.

    2012-01-01

    In the past two decades, our understanding of the evolution and fate of the universe has increased dramatically. This "Age of Precision Cosmology" has been ushered in by measurements that have both elucidated the details of the Big Bang cosmology and set the direction for future lines of inquiry. Our universe appears to consist of 5% baryonic matter; 23% of the universe's energy content is dark matter which is responsible for the observed structure in the universe; and 72% of the energy density is so-called "dark energy" that is currently accelerating the expansion of the universe. In addition, our universe has been measured to be geometrically flat to 1%. These observations and related details of the Big Bang paradigm have hinted that the universe underwent an epoch of accelerated expansion known as "inflation" early in its history. In this talk, I will review the highlights of modern cosmology, focusing on the contributions made by measurements of the cosmic microwave background, the faint afterglow of the Big Bang. I will also describe new instruments designed to measure the polarization of the cosmic microwave background in order to search for evidence of cosmic inflation.

  4. High precision redundant robotic manipulator

    International Nuclear Information System (INIS)

    Young, K.K.D.

    1998-01-01

    A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space is disclosed. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each of the embodiments utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven degree of freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven degree of freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced, which together with properly designed joint-based servo controllers provides an end point repeatability of less than 10 microns. 3 figs

  5. Studying antimatter with laser precision

    CERN Multimedia

    Katarina Anthony

    2012-01-01

    The next generation of antihydrogen trapping devices, ALPHA-2, is moving into CERN’s Antiproton Decelerator (AD) hall. This brand-new experiment will allow the ALPHA collaboration to conduct studies of antimatter with greater precision. ALPHA spokesperson Jeffrey Hangst was recently awarded a grant by the Carlsberg Foundation, which will be used to purchase equipment for the new experiment.   A 3-D view of the new magnet (in blue) and cryostat. The red lines show the paths of laser beams. LHC-type current leads for the superconducting magnets are visible on the top-right of the image. The ALPHA collaboration has been working to trap and study antihydrogen since 2006. Using antiprotons provided by CERN’s Antiproton Decelerator (AD), ALPHA was the first experiment to trap antihydrogen and to hold it long enough to study its properties. “The new ALPHA-2 experiment will use integrated lasers to probe the trapped antihydrogen,” explains Jeffrey Hangst, ALP...

  6. Precision cosmology the first half million years

    CERN Document Server

    Jones, Bernard J T

    2017-01-01

    Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students i...

  7. Sfermion Precision Measurements at a Linear Collider

    CERN Document Server

    Freitas, A.; Ananthanarayan, B.; Bartl, A.; Blair, G.A.; Blochinger, C.; Boos, E.; Brandenburg, A.; Datta, A.; Djouadi, A.; Fraas, H.; Guasch, J.; Hesselbach, S.; Hidaka, K.; Hollik, W.; Kernreiter, T.; Maniatis, M.; von Manteuffel, A.; Martyn, H.U.; Miller, D.J.; Moortgat-Pick, Gudrid A.; Muhlleitner, M.; Nauenberg, U.; Kluge, Hannelies; Porod, W.; Sola, J.; Sopczak, A.; Stahl, A.; Weber, M.M.; Zerwas, P.M.

    2002-01-01

    At future e+e- linear colliders, the event rates and clean signals of scalar fermion production - in particular for the scalar leptons - allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected for the third generation sfermions opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan(beta) from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses.

  8. Precise delay measurement through combinatorial logic

    Science.gov (United States)

    Burke, Gary R. (Inventor); Chen, Yuan (Inventor); Sheldon, Douglas J. (Inventor)

    2010-01-01

    A high resolution circuit and method for facilitating precise measurement of on-chip delays for FPGAs for reliability studies. The circuit embeds a pulse generator on an FPGA chip having one or more groups of LUTS (the "LUT delay chain"), also on-chip. The circuit also embeds a pulse width measurement circuit on-chip, and measures the duration of the generated pulse through the delay chain. The pulse width of the output pulse represents the delay through the delay chain without any I/O delay. The pulse width measurement circuit uses an additional asynchronous clock autonomous from the main clock and the FPGA propagation delay can be displayed on a hex display continuously for testing purposes.
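
    The delay extraction described above reduces to simple arithmetic: the measured pulse width equals the propagation delay through the LUT chain, and dividing by the chain length gives a per-LUT figure. A minimal sketch, with clock frequency, count and chain length chosen purely for illustration (they are not values from the patent):

```python
# Sketch: convert a pulse width measured by counting an asynchronous clock
# into a per-LUT propagation delay. All numbers are hypothetical.
f_async = 400e6          # asynchronous measurement clock, Hz (assumed)
counts = 183             # clock cycles counted while the pulse was high (assumed)
n_luts = 256             # LUTs in the on-chip delay chain (assumed)

pulse_width = counts / f_async          # seconds; equals the chain delay, no I/O delay
delay_per_lut = pulse_width / n_luts    # average propagation delay per LUT

print(f"chain delay = {pulse_width*1e9:.2f} ns, per-LUT delay = {delay_per_lut*1e12:.1f} ps")
```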

  9. Digitalization of highly precise fluxgate magnetometers

    DEFF Research Database (Denmark)

    Cerman, Ales; Kuna, A.; Ripka, P.

    2005-01-01

    This paper describes the theory behind all three known ways of digitalizing the fluxgate magnetometers: analogue magnetometers with digitalized output using high resolution ADC, application of the delta-sigma modulation to the sensor feedback loop and fully digital signal detection. At present time...... the Delta-Sigma ADCs are mostly used for the digitalization of the highly precise fluxgate magnetometers. The relevant part of the paper demonstrates some pitfalls of their application studied during the design of the magnetometer for the new Czech scientific satellite MIMOSA. The part discussing...... the application of the Δ-Σ modulation to the sensor feedback loop theoretically derives the main advantage of this method - increasing the modulation order - and shows its real potential compared to the analog magnetometer with consequential digitalization. The comparison is realized on the modular magnetometer...
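
    As a rough illustration of the delta-sigma principle discussed in this record (not the magnetometer's actual feedback electronics), a first-order ΔΣ modulator can be written in a few lines; the averaged 1-bit output tracks the analogue input:

```python
import numpy as np

def first_order_delta_sigma(x):
    """Minimal first-order delta-sigma modulator (1-bit quantizer, unit feedback)."""
    v, y_prev, bits = 0.0, 0.0, []
    for sample in x:
        v += sample - y_prev            # integrate the error between input and feedback
        y_prev = 1.0 if v >= 0 else -1.0
        bits.append(y_prev)
    return np.array(bits)

# A slow sine well inside [-1, 1]: averaging (decimating) the bitstream recovers it.
t = np.linspace(0.0, 1.0, 4096)
x = 0.5 * np.sin(2 * np.pi * 4 * t)
bits = first_order_delta_sigma(x)
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")   # crude decimation filter
print("mean reconstruction error:", np.mean(np.abs(recovered - x)))
```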

  10. Atomic physics precise measurements and ultracold matter

    CERN Document Server

    Inguscio, Massimo

    2013-01-01

    Atomic Physics provides an expert guide to two spectacular new landscapes in physics: precision measurements, which have been revolutionized by the advent of the optical frequency comb, and atomic physics, which has been revolutionized by laser cooling. These advances are not incremental but transformative: they have generated a consilience between atomic and many-body physics, precipitated an explosion of scientific and technological applications, opened new areas of research, and attracted a brilliant generation of younger scientists. The research is advancing so rapidly, the barrage of applications is so dazzling, that students can be bewildered. For both students and experienced scientists, this book provides an invaluable description of basic principles, experimental methods, and scientific applications.

  11. Micropropulsion Systems for Precision Controlled Space Flight

    DEFF Research Database (Denmark)

    Larsen, Jack

    . This project is thus concentrating on developing a method by which an entire, efficient control system compensating for the disturbances from the space environment and thereby enabling precision formation flight can be realized. The space environment is initially studied and the knowledge gained is used......Space science is subject to a constantly increasing demand for larger coherence lengths or apertures of the space observation systems, which in turn translates into a demand for increased dimensions and subsequently cost and complexity of the systems. When this increasing demand reaches...... the practical limitations of increasing the physical dimensions of the spacecrafts, the observation platforms will have to be distributed on more spacecrafts flying in very accurate formations. Consequently, the observation platform becomes much more sensitive to disturbances from the space environment...

  12. Precision Cosmology: The First Half Million Years

    Science.gov (United States)

    Jones, Bernard J. T.

    2017-06-01

    Cosmology seeks to characterise our Universe in terms of models based on well-understood and tested physics. Today we know our Universe with a precision that once would have been unthinkable. This book develops the entire mathematical, physical and statistical framework within which this has been achieved. It tells the story of how we arrive at our profound conclusions, starting from the early twentieth century and following developments up to the latest data analysis of big astronomical datasets. It provides an enlightening description of the mathematical, physical and statistical basis for understanding and interpreting the results of key space- and ground-based data. Subjects covered include general relativity, cosmological models, the inhomogeneous Universe, physics of the cosmic background radiation, and methods and results of data analysis. Extensive online supplementary notes, exercises, teaching materials, and exercises in Python make this the perfect companion for researchers, teachers and students in physics, mathematics, and astrophysics.

  13. Sfermion precision measurements at a linear collider

    International Nuclear Information System (INIS)

    Freitas, A.; Ananthanarayan, B.; Bartl, A.; Blair, G.; Bloechinger, C.; Boos, E.; Brandenburg, A.; Datta, A.; Djouadi, A.; Fraas, H.; Guasch, J.; Hesselbach, S.; Hidaka, K.; Hollik, W.; Kernreiter, T.; Maniatis, M.; Manteuffel, A. von; Martyn, H.-U.; Miller, D.J.; Moortgat-Pick, G.; Muehlleitner, M.; Nauenberg, U.; Nowak, H.; Porod, W.; Sola, J.; Sopczak, A.; Stahl, A.; Weber, M.M.; Zerwas, P.M.

    2003-01-01

    At prospective e±e- linear colliders, the large cross-sections and clean signals of scalar fermion production - in particular for the scalar leptons - allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected in the third generation opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan β from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses

  14. Sfermion precision measurements at a linear collider

    International Nuclear Information System (INIS)

    Freitas, A.

    2003-01-01

    At future e+e- linear colliders, the event rates and clean signals of scalar fermion production--in particular for the scalar leptons--allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected for the third generation sfermions opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan β from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses

  15. Precision phase estimation based on weak-value amplification

    Science.gov (United States)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak-value of a polarization operator in the experimental range. Our results compete well with wide-spectrum-light phase weak measurements and outperform the standard homodyne phase detection technique.

  16. Precision half-life measurement of 11C: The most precise mirror transition F t value

    Science.gov (United States)

    Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.

    2018-03-01

    Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of V_ud in nuclear β decays. 11C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t_1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t_1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new 11C world average half-life allows the calculation of an Ft(mirror) value that is now the most precise value for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow for the determination of V_ud from this decay.
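
    The world-average step quoted above is an inverse-variance weighted mean of the individual half-life measurements. A sketch of that arithmetic, where only the new 1220.27(26) s value is taken from the abstract and the other inputs are placeholders:

```python
import numpy as np

# Inverse-variance weighted average of half-life measurements (seconds).
# 1220.27(26) s is the new value quoted above; the other entries are placeholders.
values = np.array([1220.27, 1220.9, 1221.8])
errors = np.array([0.26, 0.7, 1.0])

weights = 1.0 / errors**2
mean = np.sum(weights * values) / np.sum(weights)
sigma = 1.0 / np.sqrt(np.sum(weights))

# PDG-style scale factor inflates the error if the inputs scatter more than expected.
chi2 = np.sum(weights * (values - mean) ** 2)
scale = max(1.0, np.sqrt(chi2 / (len(values) - 1)))

print(f"world average = {mean:.2f} +/- {sigma * scale:.2f} s (scale factor {scale:.2f})")
```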

  17. Computer-determined assay time based on preset precision

    International Nuclear Information System (INIS)

    Foster, L.A.; Hagan, R.; Martin, E.R.; Wachter, J.R.; Bonner, C.A.; Malcom, J.E.

    1994-01-01

    Most current assay systems for special nuclear materials (SNM) operate on the principle of a fixed assay time which provides acceptable measurement precision without sacrificing the required throughput of the instrument. Waste items to be assayed for SNM content can contain a wide range of nuclear material. Counting all items for the same preset assay time results in a wide range of measurement precision and wastes time at the upper end of the calibration range. A short time sample taken at the beginning of the assay could optimize the analysis time on the basis of the required measurement precision. To illustrate the technique of automatically determining the assay time, measurements were made with a segmented gamma scanner at the Plutonium Facility of Los Alamos National Laboratory with the assay time for each segment determined by counting statistics in that segment. Segments with very little SNM were quickly determined to be below the lower limit of the measurement range and the measurement was stopped. Segments with significant SNM were optimally assayed to the preset precision. With this method the total assay time for each item is determined by the desired preset precision. This report describes the precision-based algorithm and presents the results of measurements made to test its validity
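
    The counting-statistics reasoning behind such a precision-based algorithm can be sketched directly: for Poisson counting the relative precision after time t is roughly 1/sqrt(R·t), so a short pre-count fixes the rate R and hence the time needed to reach the preset precision. All rates, targets and limits below are hypothetical, not instrument settings:

```python
import math

def required_assay_time(pre_counts, pre_time, target_rel_sigma, max_time=600.0):
    """Estimate counting time needed to reach a preset relative precision.

    pre_counts / pre_time from a short initial sample gives the rate R;
    Poisson statistics then give rel_sigma ~ 1 / sqrt(R * t).
    Returns None if the segment is effectively below the measurement range.
    """
    rate = pre_counts / pre_time
    if rate <= 0:
        return None
    t_needed = 1.0 / (rate * target_rel_sigma**2)
    return min(t_needed, max_time)

# Example: 120 counts in a 5 s pre-count, 1% target precision (all values assumed).
print(f"assay time: {required_assay_time(120, 5.0, 0.01):.0f} s")
```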

  18. Precision and Accuracy in PDV and VISAR

    Energy Technology Data Exchange (ETDEWEB)

    Ambrose, W. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-22

    This is a technical report discussing our current level of understanding of a wide and varying distribution of uncertainties in velocity results from Photonic Doppler Velocimetry in its application to gas gun experiments. Using propagation of errors methods with statistical averaging of photon number fluctuation in the detected photocurrent and subsequent addition of electronic recording noise, we learn that the velocity uncertainty in VISAR can be written in closed form. For PDV, the non-linear frequency transform and peak fitting methods employed make propagation of errors estimates notoriously more difficult to write down in closed form except in the limit of constant velocity and low time resolution (large analysis-window width). An alternative method of error propagation in PDV is to use Monte Carlo methods with a simulation of the time domain signal based on results from the spectral domain. A key problem for Monte Carlo estimation for an experiment is a correct estimate of that portion of the time-domain noise associated with the peak-fitting region of interest in the spectral domain. Using short-time Fourier transformation spectral analysis and working with the phase dependent real and imaginary parts allows removal of amplitude-noise cross terms that invariably show up when working with correlation-based methods or FFT power spectra. Estimation of the noise associated with a given spectral region of interest is then possible. At this level of progress, we learn that Monte Carlo trials with random recording noise and initial (uncontrolled) phase yield velocity uncertainties that are not as large as those observed. In a search for additional noise sources, a speckle-interference modulation contribution with off axis rays was investigated, and was found to add a velocity variation beyond that from the recording noise (due to random interference between off axis rays), but in our experiments the speckle modulation precision was not as important as the
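
    A toy Monte Carlo in the spirit of the approach described here: simulate a constant-velocity PDV beat signal, add recording noise, estimate the beat frequency in each trial from the spectrum, and take the spread of the resulting velocities as the uncertainty. The wavelength, sampling rate, window width and noise level are all assumed values, and the simple peak interpolation stands in for the report's full peak-fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1550e-9        # probe wavelength (assumed), m
v_true = 5000.0      # constant target velocity (assumed), m/s
f_beat = 2 * v_true / lam            # PDV beat frequency, Hz
fs, T = 80e9, 50e-9                  # sampling rate and analysis-window width (assumed)
t = np.arange(0.0, T, 1.0 / fs)
window = np.hanning(t.size)

def velocity_estimate(noise_rms):
    sig = np.cos(2 * np.pi * f_beat * t + rng.uniform(0, 2 * np.pi))
    sig += rng.normal(0.0, noise_rms, t.size)          # recording noise
    spec = np.abs(np.fft.rfft(sig * window))
    k = int(np.argmax(spec))
    # Parabolic interpolation around the peak for sub-bin frequency resolution.
    y0, y1, y2 = spec[k - 1], spec[k], spec[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    f_est = (k + delta) * fs / t.size
    return f_est * lam / 2.0

vels = np.array([velocity_estimate(noise_rms=0.3) for _ in range(500)])
print(f"mean = {vels.mean():.1f} m/s, std = {vels.std():.2f} m/s")
```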

  19. Characterisation of surface roughness for ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Li Huifen; Cheung, C F; Lee, W B; To, S; Jiang, X Q

    2005-01-01

    Ultra-precision freeform surfaces are widely used in many advanced optics applications which demand surface roughness down to the nanometer range. Although a lot of research work has been reported on the study of surface generation, reconstruction and surface characterization such as MOTIF and fractal analysis, most of it is focused on axially symmetric surfaces such as aspheric surfaces. Relatively little research work has been found on the characterization of surface roughness in ultra-precision freeform surfaces. In this paper, a novel Robust Gaussian Filtering (RGF) method is proposed for the characterisation of surface roughness for ultra-precision freeform surfaces with a known mathematical model or a cloud of discrete points. A series of computer simulation and measurement experiments were conducted to verify the capability of the proposed method. The experimental results were found to agree well with the theoretical results
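
    The separation underlying such roughness characterisation can be illustrated with a plain (non-robust) Gaussian filter: the filtered profile approximates the form/waviness and the residual is the roughness. The robust variant proposed in the paper additionally down-weights outliers, which this sketch does not reproduce; the profile and cutoff below are synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic 1-D surface profile: long-wavelength form plus fine roughness (both invented).
x = np.linspace(0.0, 1.0, 2000)                        # position, mm
rng = np.random.default_rng(0)
profile = 0.5 * np.sin(2 * np.pi * x) + 0.02 * rng.normal(size=x.size)   # height, micrometres

cutoff_mm = 0.08
dx = x[1] - x[0]
sigma_samples = 0.4 * cutoff_mm / dx      # filter width as a fraction of the cutoff (assumed)

waviness = gaussian_filter1d(profile, sigma_samples)   # form + waviness
roughness = profile - waviness                         # residual roughness

ra = np.mean(np.abs(roughness - roughness.mean()))     # arithmetic mean roughness
print(f"Ra ~ {ra:.4f} micrometres")
```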

  20. An Empirical Study of Precise Interprocedural Array Analysis

    Directory of Open Access Journals (Sweden)

    Michael Hind

    1994-01-01

    Full Text Available In this article we examine the role played by the interprocedural analysis of array accesses in the automatic parallelization of Fortran programs. We use the PTRAN system to provide measurements of several benchmarks to compare different methods of representing interprocedurally accessed arrays. We examine issues concerning the effectiveness of automatic parallelization using these methods and the efficiency of a precise summarization method.

  1. [Progress in precision medicine: a scientific perspective].

    Science.gov (United States)

    Wang, B; Li, L M

    2017-01-10

    Precision medicine is a new strategy for disease prevention and treatment that takes into account differences in genetics, environment and lifestyle among individuals and makes precise disease classification and diagnosis, which can provide patients with personalized, targeted prevention and treatment. Large-scale population cohort studies are fundamental for precision medicine research, and could produce the best evidence for precision medicine practices. Current criticisms of precision medicine mainly focus on the very small proportion of patients who benefit, the neglect of social determinants of health, and the possible waste of limited medical resources. In spite of this, precision medicine is still a most promising research area, and may become a health care practice model in the future.

  2. Precision Medicine, Cardiovascular Disease and Hunting Elephants.

    Science.gov (United States)

    Joyner, Michael J

    2016-01-01

    Precision medicine postulates improved prediction, prevention, diagnosis and treatment of disease based on patient specific factors especially DNA sequence (i.e., gene) variants. Ideas related to precision medicine stem from the much anticipated "genetic revolution in medicine" arising seamlessly from the human genome project (HGP). In this essay I deconstruct the concept of precision medicine and raise questions about the validity of the paradigm in general and its application to cardiovascular disease. Thus far precision medicine has underperformed based on the vision promulgated by enthusiasts. While niche successes for precision medicine are likely, the promises of broad based transformation should be viewed with skepticism. Open discussion and debate related to precision medicine are urgently needed to avoid misapplication of resources, hype, iatrogenic interventions, and distraction from established approaches with ongoing utility. Failure to engage in such debate will lead to negative unintended consequences from a revolution that might never come. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. GTCBio's Precision Medicine Conference (July 7-8, 2016 - Boston, Massachusetts, USA).

    Science.gov (United States)

    Cole, P

    2016-09-01

    GTCBio's Precision Medicine Conference met this year to outline the many steps forward that precision medicine and individualized genomics has made and the challenges it still faces in technological, modeling, and standards development, interoperability and compatibility advancements, and methods of economic and societal adoption. The conference was split into four sections, 'Overcoming Challenges in the Commercialization of Precision Medicine', 'Implementation of Precision Medicine: Strategies & Technologies', 'Integrating & Interpreting Personal Genomics, Big Data, & Bioinformatics' and 'Incentivizing Precision Medicine: Regulation & Reimbursement', with this report focusing on the final two subjects. Copyright 2016 Prous Science, S.A.U. or its licensors. All rights reserved.

  4. The Development of Precise Engineering Surveying Technology

    Directory of Open Access Journals (Sweden)

    LI Guangyun

    2017-10-01

    Full Text Available With the construction of big science projects in China, precise engineering surveying technology developed rapidly in the 21st century. Firstly, the paper summarizes the current state of development of precise engineering surveying instruments and theory. Then three typical cases of precise engineering surveying practice are discussed: accelerator alignment, industrial measurement and high-speed railway surveying technology.

  5. Modeling and control of precision actuators

    CERN Document Server

    Kiong, Tan Kok

    2013-01-01

    Introduction; Growing Interest in Precise Actuators; Types of Precise Actuators; Applications of Precise Actuators; Nonlinear Dynamics and Modeling; Hysteresis; Creep; Friction; Force Ripples; Identification and Compensation of Preisach Hysteresis in Piezoelectric Actuators; SVD-Based Identification and Compensation of Preisach Hysteresis; High-Bandwidth Identification and Compensation of Hysteretic Dynamics in Piezoelectric Actuators; Concluding Remarks; Identification and Compensation of Frict

  6. Precise GPS orbits for geodesy

    Science.gov (United States)

    Colombo, Oscar L.

    1994-01-01

    The Global Positioning System (GPS) has become, in recent years, the main space-based system for surveying and navigation in many military, commercial, cadastral, mapping, and scientific applications. Better receivers, interferometric techniques (DGPS), and advances in post-processing methods have made it possible to position fixed or moving receivers with sub-decimeter accuracies in a global reference frame. Improved methods for obtaining the orbits of the GPS satellites have played a major role in these achievements; this paper gives a personal view of the main developments in GPS orbit determination.

  7. The High Road to Astronomical Photometric Precision : Differential Photometry

    NARCIS (Netherlands)

    Milone, E. F.; Pel, Jan Willem

    2011-01-01

    Differential photometry offers the most precise method for measuring the brightness of astronomical objects. We attempt to demonstrate why this should be the case, and then describe how well it has been done through a review of the application of differential techniques from the earliest visual
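
    The core operation of differential photometry is a magnitude difference between the target and a comparison star measured in the same frame, so that common atmospheric and instrumental variations cancel. A minimal sketch with made-up fluxes:

```python
import numpy as np

# Instrumental fluxes (ADU) of the target and a comparison star in the same frames (made up).
f_target = np.array([15230.0, 15180.0, 15400.0, 15050.0])
f_comp = np.array([30110.0, 30010.0, 30450.0, 29760.0])

# Differential magnitude: common transparency/airmass variations cancel in the flux ratio.
dmag = -2.5 * np.log10(f_target / f_comp)
print("differential magnitudes:", np.round(dmag, 4))
print("scatter (mag):", np.round(dmag.std(ddof=1), 4))
```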

  8. Precision tests of quantum chromodynamics and the standard model

    International Nuclear Information System (INIS)

    Brodsky, S.J.; Lu, H.J.

    1995-06-01

    The authors discuss three topics relevant to testing the Standard Model to high precision: commensurate scale relations, which relate observables to each other in perturbation theory without renormalization scale or scheme ambiguity, the relationship of compositeness to anomalous moments, and new methods for measuring the anomalous magnetic and quadrupole moments of the W and Z

  9. A precision nutrient variability study of an experimental plot in ...

    African Journals Online (AJOL)

    DR F O ADEKAYODE

    reported (Sadeghi et al., 2006; Shah et al., 2013). The objective of the research was to use the GIS kriging technique to produce precision soil nutrient concentration and fertility maps of a 2.5-ha experimental plot at the Mukono Agricultural Research and Development Institute, Mukono, Uganda. MATERIALS AND METHODS.
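
    For orientation, ordinary kriging of scattered soil samples can be hand-rolled in a few lines of numpy; the study itself used GIS kriging tools, and the spherical-variogram parameters and sample values below are invented:

```python
import numpy as np

def spherical_gamma(h, nugget=0.0, sill=1.0, rng_a=50.0):
    """Spherical semivariogram (parameters assumed, not fitted to real data)."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h < rng_a, g, nugget + sill)

def ordinary_kriging(xy, z, xy0):
    """Ordinary kriging estimate at location xy0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)          # kriging weights plus Lagrange multiplier
    return float(w[:n] @ z)

# Invented soil samples (x, y in metres; z in mg/kg) on a small plot.
xy = np.array([[10.0, 20.0], [60.0, 35.0], [120.0, 80.0], [40.0, 140.0], [150.0, 150.0]])
z = np.array([12.1, 14.8, 9.7, 11.3, 13.0])
print(ordinary_kriging(xy, z, np.array([80.0, 90.0])))
```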

  10. Precision of Points Computed from Intersections of Lines or Planes

    DEFF Research Database (Denmark)

    Cederholm, Jens Peter

    2004-01-01

    estimates the precision of the points. When using laser scanning a similar problem appears. A laser scanner captures a 3-D point cloud, not the points of real interest. The suggested method can be used to compute three-dimensional coordinates of the intersection of three planes estimated from the point...
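
    The three-plane intersection mentioned in this record is a small linear solve: each plane contributes an equation n·x = d, and the point's precision follows from propagating the plane-parameter uncertainties. A simplified sketch with illustrative plane parameters, propagating only the offset uncertainties:

```python
import numpy as np

# Three estimated planes: unit normals n_i and offsets d_i (n_i . x = d_i). Values invented.
N = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.05, 0.05, 0.998]])
d = np.array([2.000, 3.000, 1.500])
sigma_d = np.array([0.002, 0.002, 0.003])   # std. dev. of each plane offset (assumed), m

x = np.linalg.solve(N, d)                   # intersection point

# First-order propagation of the offset uncertainties only (normals treated as exact).
Ninv = np.linalg.inv(N)
cov_x = Ninv @ np.diag(sigma_d**2) @ Ninv.T
print("point:", np.round(x, 4))
print("std devs:", np.round(np.sqrt(np.diag(cov_x)), 4))
```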

  11. Compound-Specific Chlorine Isotope Analysis of Tetrachloromethane and Trichloromethane by Gas Chromatography-Isotope Ratio Mass Spectrometry vs Gas Chromatography-Quadrupole Mass Spectrometry: Method Development and Evaluation of Precision and Trueness.

    Science.gov (United States)

    Heckel, Benjamin; Rodríguez-Fernández, Diana; Torrentó, Clara; Meyer, Armin; Palau, Jordi; Domènech, Cristina; Rosell, Mònica; Soler, Albert; Hunkeler, Daniel; Elsner, Martin

    2017-03-21

    Compound-specific chlorine isotope analysis of tetrachloromethane (CCl4) and trichloromethane (CHCl3) was explored by both gas chromatography-isotope ratio mass spectrometry (GC-IRMS) and GC-quadrupole MS (GC-qMS), where GC-qMS was validated in an interlaboratory comparison between Munich and Neuchâtel with the same type of commercial GC-qMS instrument. GC-IRMS measurements analyzed CCl isotopologue ions, whereas GC-qMS analyzed the isotopologue ions CCl3, CCl2, CCl (of CCl4) and CHCl3, CHCl2, CHCl (of CHCl3), respectively. Lowest amount dependence (good linearity) was obtained (i) in H-containing fragment ions where interference of 35Cl- to 37Cl-containing ions was avoided; (ii) with tuning parameters favoring one predominant rather than multiple fragment ions in the mass spectra. Optimized GC-qMS parameters (dwell time 70 ms, 2 most abundant ions) resulted in standard deviations of 0.2‰ (CHCl3) and 0.4‰ (CCl4), which are only about twice as large as 0.1‰ and 0.2‰ for GC-IRMS. To compare also the trueness of both methods and laboratories, samples from CCl4 and CHCl3 degradation experiments were analyzed and calibrated against isotopically different reference standards for both CCl4 and CHCl3 (two of each). Excellent agreement confirms that true results can be obtained by both methods provided that a consistent set of isotopically characterized reference materials is used.
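
    Downstream of either instrument, chlorine isotope ratios are reported in delta notation relative to a standard. A minimal calculation is shown below; the ratios are invented, and in practice the GC-qMS evaluation derives the 37Cl/35Cl ratio from isotopologue ion intensities rather than measuring it directly:

```python
# Delta notation for compound-specific chlorine isotope analysis (values invented).
R_sample = 0.31978      # 37Cl/35Cl ratio measured for the sample (assumed)
R_standard = 0.31963    # ratio of the calibration standard (assumed)

delta37Cl_permil = (R_sample / R_standard - 1.0) * 1000.0
print(f"delta 37Cl = {delta37Cl_permil:+.2f} permil")
```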

  12. The impact of different cone beam computed tomography and multi-slice computed tomography scan parameters on virtual three-dimensional model accuracy using a highly precise ex vivo evaluation method.

    Science.gov (United States)

    Matta, Ragai-Edward; von Wilmowsky, Cornelius; Neuhuber, Winfried; Lell, Michael; Neukam, Friedrich W; Adler, Werner; Wichmann, Manfred; Bergauer, Bastian

    2016-05-01

    Multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT) are indispensable imaging techniques in advanced medicine. The possibility of creating virtual and corporal three-dimensional (3D) models enables detailed planning in craniofacial and oral surgery. The objective of this study was to evaluate the impact of different scan protocols for CBCT and MSCT on virtual 3D model accuracy using a software-based evaluation method that excludes human measurement errors. MSCT and CBCT scans with different manufacturers' predefined scan protocols were obtained from a human lower jaw and were superimposed with a master model generated by an optical scan of an industrial noncontact scanner. To determine the accuracy, the mean and standard deviations were calculated, and t-tests were used for comparisons between the different settings. Averaged over 10 repeated X-ray scans per method and 19 measurement points per scan (n = 190), it was found that the MSCT scan protocol 140 kV delivered the most accurate virtual 3D model, with a mean deviation of 0.106 mm compared to the master model. Only the CBCT scans with 0.2-voxel resolution delivered a similar accurate 3D model (mean deviation 0.119 mm). Within the limitations of this study, it was demonstrated that the accuracy of a 3D model of the lower jaw depends on the protocol used for MSCT and CBCT scans. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  13. Precise Plan in the analysis of volume precision in SynergyTM conebeam CT image

    International Nuclear Information System (INIS)

    Bai Sen; Xu Qingfeng; Zhong Renming; Jiang Xiaoqin; Jiang Qingfeng; Xu Feng

    2007-01-01

    Objective: To establish a method for checking the volume precision of Synergy TM conebeam CT images. Methods: Known phantoms (large, medium and small spheres, cubes and a cuneiform cavity) were scanned with conebeam CT at different positions (the CBCT centre and 5, 8 and 10 cm off-centre along the accelerator G-T direction), and the phantom volumes were measured from the reconstructed images. The volumes measured with Synergy TM conebeam CT were then compared with fanbeam CT results and with the nominal values. Results: The medium spheres showed a 1.5% discrepancy between nominal and mean measured values at the CBCT centre and at 5 and 8 cm off-centre along the accelerator G-T direction. The small spheres showed a discrepancy of 8.1%, the large cube 0.8% and the small cube 2.9%, between nominal and mean measured values at the CBCT centre and at 5, 8 and 10 cm off-centre along the accelerator G-T direction. Conclusion: Within the valid scan range of Synergy TM conebeam CT, the reconstruction precision is independent of the distance from the centre. (authors)
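
    The reported discrepancies are simple relative differences between mean measured and nominal volumes; for instance (sphere diameter and measured volumes below are invented):

```python
import numpy as np

# Nominal volume of a sphere phantom and repeated CBCT volume measurements (values invented).
diameter_mm = 30.0
v_nominal = (4.0 / 3.0) * np.pi * (diameter_mm / 2.0) ** 3      # mm^3
v_measured = np.array([14350.0, 14280.0, 14410.0, 14190.0])     # mm^3, assumed

discrepancy_pct = abs(v_measured.mean() - v_nominal) / v_nominal * 100.0
print(f"nominal = {v_nominal:.0f} mm^3, discrepancy = {discrepancy_pct:.1f}%")
```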

  14. Applications of Laser Precisely Processing Technology in Solar Cells

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Following the design method for the laser resonator cavity, we optimized the primary resonator parameters and used a symmetric LD-array pumping arrangement to obtain high-brightness laser output in our laser cutter. The cutter was then applied to precision cutting of the conductive film of CuInSe2 solar cells, to cutting the electrode grooves of buried-contact silicon solar cells, and to perforating the wafers used in emitter-wrap-through silicon solar cells. The laser processing precision was better than 40 μm, the results met the requirements of solar cell fabrication technology, and the conversion efficiency of the buried-contact cells was finally improved from 18% to 21%.

  15. Precise fabrication of X-band accelerating structure

    International Nuclear Information System (INIS)

    Higo, T.; Sakai, H.; Higashi, Y.; Koike, S.; Takatomi, T.

    1994-01-01

    An accelerating structure with a/λ=0.16 is being fabricated to study a precise fabrication method. A frequency control of each cell better than 10 -4 level is required to realize a detuned structure. The present machining level is nearly 1 MHz/11.4 GHz in relative frequency error, which just satisfies the above requirement. To keep this machining precision, the diffusion bonding technique is found preferable to join the cells. Various diffusion conditions were tried. The frequency change can be less than 1 MHz/11.4 GHz and it can be controlled well better than that. (author)

  16. Innovation and optimization of a method of pump-probe polarimetry with pulsed laser beams in view of a precise measurement of parity violation in atomic cesium; Innovation et optimisation d'une methode de polarimetrie pompe-sonde avec des faisceaux laser impulsionnels en vue d'une mesure precise de violation de la parite dans l'atome de cesium

    Energy Technology Data Exchange (ETDEWEB)

    Chauvat, D

    1997-10-15

    While Parity Violation (PV) experiments on highly forbidden transitions have been using detection of fluorescence signals, our experiment uses a pump-probe scheme to detect the PV signal directly on a transmitted probe beam. A pulsed laser beam of linear polarisation ε_1 excites the atoms on the 6S-7S cesium transition in a collinear electric field E || k(ex). The probe beam (k(pr) || k(ex)) of linear polarisation ε_2 tuned to the transition 7S-6P(3/2) is amplified. The small asymmetry (≈ 10^-6) in the gain that depends on the handedness of the trihedron (E, ε_1, ε_2) is the manifestation of the PV effect. It is measured as an E-odd apparent rotation of the plane of polarization of the probe beam, using balanced-mode polarimetry. New selection criteria have been devised that allow us to distinguish the true PV signal from fake rotations due to electromagnetic interference, geometrical effects, polarization imperfections, or stray transverse electric and magnetic fields. These selection criteria exploit the symmetry of the PV rotation - a linear dichroism - and the revolution symmetry of the experiment. Using these criteria it is not only possible to reject fake signals, but also to elucidate the underlying physical mechanisms and to measure the relevant defects of the apparatus. The present signal-to-noise ratio allows us to embark on PV measurements aiming at 10% statistical accuracy. A 1% measurement still requires improvements. Two methods have been demonstrated. The first exploits the amplification of the asymmetry at high gain - a major advantage provided by our detection method based on stimulated emission. The second uses both a much higher incident intensity and a special dichroic component which magnifies tiny polarization rotations. (author)

  17. Determination of the acid value of instant noodles: interlaboratory study.

    Science.gov (United States)

    Hakoda, Akiko; Sakaida, Kenichi; Suzuki, Tadanao; Yasui, Akemi

    2006-01-01

    An interlaboratory study was performed to evaluate the method for determining the acid value of instant noodles, based on the Japanese Agricultural Standard (JAS), with extraction of lipid using petroleum ether at a volume of 100 mL to the test portion of 25 g. Thirteen laboratories participated and analyzed 5 test samples as blind duplicates. Statistical treatment revealed the repeatability (RSDr) of the acid value. The acid value was also calculated from the free fatty acid content of the lipid extracted from the noodles per unit weight, using the equation [acid value = percent free fatty acids (as oleic) x 1.99] and the extracted lipid contents. This method was shown to have acceptable precision by the present study.
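
    The conversion quoted in this abstract is direct arithmetic; a sketch with hypothetical free fatty acid percentages:

```python
# Acid value from the free fatty acid content (as oleic) of the extracted lipid,
# using the equation quoted above. The input percentages are hypothetical.
def acid_value(percent_ffa_as_oleic):
    return percent_ffa_as_oleic * 1.99

for ffa in (0.25, 0.50, 1.00):
    print(f"%FFA = {ffa:.2f}  ->  acid value = {acid_value(ffa):.2f}")
```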

  18. About the problems and perspectives of making precision compressor blades

    Directory of Open Access Journals (Sweden)

    V. E. Galiev

    2014-01-01

    Full Text Available The problems of manufacturing blades with high-precision profile geometry are considered in the article. The variant of the technology under development rules out the use of mechanical processing methods for the blade airfoil. The article consists of an introduction and six short sections. The introduction sets out the requirements for modern aircraft engines, lists the problems that arise in their manufacture, and notes the relevance of the work. The first section analyzes the existing technology for precision blades. An illustration shows the stages of the process, and their advantages and disadvantages are noted. The second section provides an illustration showing the basing schemes used for the blades in the manufacturing process and a model of the workpiece used in the technology being developed. An analysis of each basing scheme is presented. The third section lists the existing methods for controlling the geometrical parameters of blade airfoils and presents the measurement errors of the devices. Special attention is paid to the impossibility of controlling the accuracy of the geometrical parameters of precision blades. The fourth section presents the advantages of the electrochemical machining method with consistent vibration of the tool-electrode and pulsed technology current over the traditional method. The article presents the accuracy and surface roughness of the blade airfoil achieved by precision electrochemical machining, and illustrates the machines that implement this processing method and the components manufactured on them. The fifth section describes the steps of the developed process with justification for the use of the proposed operations. Based on the analysis, the author argues that applying the proposed process to the manufacture of precision compressor blades ensures production of items that meet the requirements of the drawing.

  19. Light Microscopy at Maximal Precision

    Directory of Open Access Journals (Sweden)

    Matthew Bierbaum

    2017-10-01

    Full Text Available Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI. As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10–100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.

  20. Light Microscopy at Maximal Precision

    Science.gov (United States)

    Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.

    2017-10-01

    Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
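
    In miniature, the fitting idea behind PERI can be illustrated by regressing a generative image model against pixel data. The sketch below fits a 2-D Gaussian "particle" to a synthetic noisy image and recovers its sub-pixel centre; the real method fits a detailed physical optics model of the microscope, which is far richer than this toy:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_spot(coords, x0, y0, amp, sigma, bg):
    """Generative model: a single Gaussian spot on a flat background."""
    x, y = coords
    return bg + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

# Synthetic 32x32 image of one particle with Poisson noise (all parameters assumed).
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:32, 0:32]
true = dict(x0=15.37, y0=16.82, amp=200.0, sigma=2.5, bg=10.0)
img = rng.poisson(gaussian_spot((xx, yy), **true)).astype(float)

p0 = (16.0, 16.0, 150.0, 3.0, 5.0)   # rough initial guess
popt, pcov = curve_fit(gaussian_spot, (xx.ravel(), yy.ravel()), img.ravel(), p0=p0)
err = np.sqrt(np.diag(pcov))
print(f"x0 = {popt[0]:.3f} +/- {err[0]:.3f} px (true {true['x0']})")
print(f"y0 = {popt[1]:.3f} +/- {err[1]:.3f} px (true {true['y0']})")
```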

  1. Precision measurements at a muon collider

    International Nuclear Information System (INIS)

    Dawson, S.

    1995-01-01

    We discuss the potential for making precision measurements of M_W and M_T at a muon collider and the motivations for each measurement. A comparison is made with the precision measurements expected at other facilities. The measurement of the top quark decay width is also discussed

  2. Visual thread quality for precision miniature mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Gillespie, L.K.

    1981-04-01

    Threaded features have eight visual appearance factors which can affect their function in precision miniature mechanisms. The Bendix practice in deburring, finishing, and accepting these conditions on miniature threads is described as is their impact in assemblies of precision miniature electromechanical assemblies.

  3. Enabling Precision Cardiology Through Multiscale Biology and Systems Medicine

    Directory of Open Access Journals (Sweden)

    Kipp W. Johnson, BS

    2017-06-01

    Full Text Available Summary: The traditional paradigm of cardiovascular disease research derives insight from large-scale, broadly inclusive clinical studies of well-characterized pathologies. These insights are then put into practice according to standardized clinical guidelines. However, stagnation in the development of new cardiovascular therapies and variability in therapeutic response implies that this paradigm is insufficient for reducing the cardiovascular disease burden. In this state-of-the-art review, we examine 3 interconnected ideas we put forth as key concepts for enabling a transition to precision cardiology: (1) precision characterization of cardiovascular disease with machine learning methods; (2) the application of network models of disease to embrace disease complexity; and (3) using insights from the previous 2 ideas to enable pharmacology and polypharmacology systems for more precise drug-to-patient matching and patient-disease stratification. We conclude by exploring the challenges of applying a precision approach to cardiology, which arise from a deficit of the required resources and infrastructure, and emerging evidence for the clinical effectiveness of this nascent approach. Key Words: cardiology, clinical informatics, multi-omics, precision medicine, translational bioinformatics

  4. An Assessment of Imaging Informatics for Precision Medicine in Cancer.

    Science.gov (United States)

    Chennubhotla, C; Clarke, L P; Fedorov, A; Foran, D; Harris, G; Helton, E; Nordstrom, R; Prior, F; Rubin, D; Saltz, J H; Shalley, E; Sharma, A

    2017-08-01

    Objectives: Precision medicine requires the measurement, quantification, and cataloging of medical characteristics to identify the most effective medical intervention. However, the amount of available data exceeds our current capacity to extract meaningful information. We examine the informatics needs to achieve precision medicine from the perspective of quantitative imaging and oncology. Methods: The National Cancer Institute (NCI) organized several workshops on the topic of medical imaging and precision medicine. The observations and recommendations are summarized herein. Results: Recommendations include: use of standards in data collection and clinical correlates to promote interoperability; data sharing and validation of imaging tools; clinician's feedback in all phases of research and development; use of open-source architecture to encourage reproducibility and reusability; use of challenges which simulate real-world situations to incentivize innovation; partnership with industry to facilitate commercialization; and education in academic communities regarding the challenges involved with translation of technology from the research domain to clinical utility and the benefits of doing so. Conclusions: This article provides a survey of the role and priorities for imaging informatics to help advance quantitative imaging in the era of precision medicine. While these recommendations were drawn from oncology, they are relevant and applicable to other clinical domains where imaging aids precision medicine. Georg Thieme Verlag KG Stuttgart.

  5. Precision tests of CPT invariance with single trapped antiprotons

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, Stefan [RIKEN, Ulmer Initiative Research Unit, Wako, Saitama (Japan); Collaboration: BASE-Collaboration

    2015-07-01

    The reason for the striking imbalance of matter and antimatter in our Universe has yet to be understood. This is the motivation and inspiration to conduct high precision experiments comparing the fundamental properties of matter and antimatter equivalents at lowest energies and with greatest precision. According to theory, the most sensitive tests of CPT invariance are measurements of antihydrogen ground-state hyperfine splitting as well as comparisons of proton and antiproton magnetic moments. Within the BASE collaboration we target the latter. By using a double Penning trap we recently performed the first direct high-precision measurement of the proton magnetic moment. The achieved fractional precision of 3.3 ppb improves the currently accepted literature value by a factor of 2.5. Application of the method to a single trapped antiproton will improve the precision of the particle's magnetic moment by more than a factor of 1000, thus providing one of the most stringent tests of CPT invariance. In my talk I report on the status and future perspectives of our efforts.

  6. An aberrant precision account of autism.

    Directory of Open Access Journals (Sweden)

    Rebecca P Lawson

    2014-05-01

    Full Text Available Autism is a neurodevelopmental disorder characterised by problems with social-communication, restricted interests and repetitive behaviour. A recent and controversial article presented a compelling normative explanation for the perceptual symptoms of autism in terms of a failure of Bayesian inference (Pellicano and Burr, 2012). In response, we suggested that when Bayesian inference is grounded in its neural instantiation – namely, predictive coding – many features of autistic perception can be attributed to aberrant precision (or beliefs about precision) within the context of hierarchical message passing in the brain (Friston et al., 2013). Here, we unpack the aberrant precision account of autism. Specifically, we consider how empirical findings – that speak directly or indirectly to neurobiological mechanisms – are consistent with the aberrant encoding of precision in autism; in particular, an imbalance of the precision ascribed to sensory evidence relative to prior beliefs.

  7. Precision surveying the principles and geomatics practice

    CERN Document Server

    Ogundare, John Olusegun

    2016-01-01

    A comprehensive overview of high precision surveying, including recent developments in geomatics and their applications This book covers advanced precision surveying techniques, their proper use in engineering and geoscience projects, and their importance in the detailed analysis and evaluation of surveying projects. The early chapters review the fundamentals of precision surveying: the types of surveys; survey observations; standards and specifications; and accuracy assessments for angle, distance and position difference measurement systems. The book also covers network design and 3-D coordinating systems before discussing specialized topics such as structural and ground deformation monitoring techniques and analysis, mining surveys, tunneling surveys, and alignment surveys. Precision Surveying: The Principles and Geomatics Practice: * Covers structural and ground deformation monitoring analysis, advanced techniques in mining and tunneling surveys, and high precision alignment of engineering structures *...

  8. Precision fiducialization of transport components

    International Nuclear Information System (INIS)

    Fischer, G.E.; Bressler, V.E.; Cobb, J.K.; Jensen, D.R.; Ruland, R.E.; Walz, H.V.; Williams, S.H.

    1992-03-01

    The Final Focus Test Beam (FFTB) is a transport line designed to test both concept and advanced technology for application to future linear colliders. It is currently under construction at SLAC in the central beam line. Most of the quadrupoles of the FFTB have ab initio alignment tolerances of less than 30 microns, if the planned-for beam-based alignment tuning procedure is to converge. For such placement tolerances to have any meaning requires that the coordinates of the effective centers, seen by the beam particles, be transferred to tooling (that can be reached by mechanical or optical alignment methods) located on the outside of the components to comparable or better values. We have constructed an apparatus that simultaneously locates, to micron tolerances, the effective magnetic center of focusing lenses, as well as the electrical center of beam position monitors (BPM) imbedded therein, and, once located, transfers these coordinates to specially mounted tooling frames that support the external retroreflectors used in a laser tracker based alignment of the beam line. Details of construction as well as experimental results from the method are presented

  9. PRECISION COSMOGRAPHY WITH STACKED VOIDS

    International Nuclear Information System (INIS)

    Lavaux, Guilhem; Wandelt, Benjamin D.

    2012-01-01

    We present a purely geometrical method for probing the expansion history of the universe from the observation of the shape of stacked voids in spectroscopic redshift surveys. Our method is an Alcock-Paczyński (AP) test based on the average sphericity of voids posited on the local isotropy of the universe. It works by comparing the temporal extent of cosmic voids along the line of sight with their angular, spatial extent. We describe the algorithm that we use to detect and stack voids in redshift shells on the light cone and test it on mock light cones produced from N-body simulations. We establish a robust statistical model for estimating the average stretching of voids in redshift space and quantify the contamination by peculiar velocities. Finally, assuming that the void statistics that we derive from N-body simulations is preserved when considering galaxy surveys, we assess the capability of this approach to constrain dark energy parameters. We report this assessment in terms of the figure of merit (FoM) of the dark energy task force and in particular of the proposed Euclid mission which is particularly suited for this technique since it is a spectroscopic survey. The FoM due to stacked voids from the Euclid wide survey may double that of all other dark energy probes derived from Euclid data alone (combined with Planck priors). In particular, voids seem to outperform baryon acoustic oscillations by an order of magnitude. This result is consistent with simple estimates based on mode counting. The AP test based on stacked voids may be a significant addition to the portfolio of major dark energy probes and its potentialities must be studied in detail.
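
    The geometry of the test can be made concrete: for a sphere of comoving radius R at redshift z, the redshift extent is dz = R H(z)/c and the angular extent is dθ = R/D_M(z), so their ratio depends only on the assumed cosmology. A sketch with astropy, where the void radius and cosmological parameters are illustrative:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u
from astropy.constants import c

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # fiducial cosmology (assumed)
z = 0.5                                   # void redshift (assumed)
R = 20.0 * u.Mpc                          # comoving void radius (assumed)

dz = (R * cosmo.H(z) / c).decompose()                  # extent along the line of sight
dtheta = (R / cosmo.comoving_distance(z)) * u.rad       # transverse angular extent (flat universe)

# For the fiducial cosmology this Alcock-Paczynski ratio is unity by construction;
# analysing data with a wrong H(z) D_M(z) would distort the stacked void shape.
ap_ratio = (c * dz / (cosmo.H(z) * cosmo.comoving_distance(z) * dtheta.to(u.rad).value)).decompose()
print(f"dz = {dz:.4f}, dtheta = {dtheta.to(u.arcmin):.2f}, AP ratio = {ap_ratio:.3f}")
```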

  10. Toward precision medicine in Alzheimer's disease.

    Science.gov (United States)

    Reitz, Christiane

    2016-03-01

    In Western societies, Alzheimer's disease (AD) is the most common form of dementia and the sixth leading cause of death. In recent years, the concept of precision medicine, an approach for disease prevention and treatment that is personalized to an individual's specific pattern of genetic variability, environment and lifestyle factors, has emerged. While for some diseases, in particular select cancers and a few monogenetic disorders such as cystic fibrosis, significant advances in precision medicine have been made over the past years, for most other diseases precision medicine is only in its beginning. To advance the application of precision medicine to a wider spectrum of disorders, governments around the world are starting to launch Precision Medicine Initiatives, major efforts to generate the extensive scientific knowledge needed to integrate the model of precision medicine into every day clinical practice. In this article we summarize the state of precision medicine in AD, review major obstacles in its development, and discuss its benefits in this highly prevalent, clinically and pathologically complex disease.

  11. Design of precision position adjustable scoop

    International Nuclear Information System (INIS)

    Li Zhili; Zhang Kai; Dong Jinping

    2014-01-01

    Among isotope separation technologies, the centrifuge method is now the most widely used. The separation performance of centrifuges is greatly influenced by their internal flow field, and the position of the scoops in the centrifuges has a significant influence on this flow field. To obtain better flow field characteristics and find the best scoop position, a position-adjustable scoop system was studied. A micro stage and a linear encoder were used in the system to improve the position accuracy of the scoop. Eddy current sensors were used in a position calibration measurement. The measurement results showed that the sensitivity and stability of the positioning system could meet the performance expectations, but the steel wire and pulley used as the driving means limited the control precision. On the basis of this scheme, an ultrasonic motor was then used as the driving means, and experimental results showed that the control accuracy was improved. This scheme lays a foundation for obtaining the internal flow field parameters of the centrifuge and finding the optimal feeding tube position. (authors)

  12. Precision kaon and hadron physics with KLOE

    International Nuclear Information System (INIS)

    Bossi, F.; De Lucia, E.; Lee-Franzini, J.; Miscetti, S.; Palutan, M.

    2008-01-01

    We describe the KLOE detector at DAΦNE, the Frascati φ, and its physics program. We begin with a brief description of the detector design and operation. Kaon physics is a major topic of investigation with KLOE thanks in part to the unique availability of pure K_S, K_L, K± beams at a φ. We have measured all significant branching ratios of all kaon species, the K_L and K± lifetimes and the K → π form factor's t dependence. From the measurements we verify the validity of Cabibbo unitarity and lepton universality. We have studied properties of light scalar and pseudoscalar mesons with unprecedented accuracy. We have measured the e+e- → π+π- cross-section necessary for computing the major part of the hadronic contribution to the muon anomaly. The methods employed in all the above measurements as well as the φ leptonic width, precision mass measurements and searches for forbidden or extremely rare decays of kaons and η-mesons are described. The impact of our results on flavor and hadron physics to date, as well as an outlook for further improvement in the near future, are discussed

  13. Precision Electrophile Tagging in Caenorhabditis elegans.

    Science.gov (United States)

    Long, Marcus J C; Urul, Daniel A; Chawla, Shivansh; Lin, Hong-Yu; Zhao, Yi; Haegele, Joseph A; Wang, Yiran; Aye, Yimon

    2018-01-16

    Adduction of an electrophile to privileged sensor proteins and the resulting phenotypically dominant responses are increasingly appreciated as being essential for metazoan health. Functional similarities between the biological electrophiles and electrophilic pharmacophores commonly found in covalent drugs further fortify the translational relevance of these small-molecule signals. Genetically encodable or small-molecule-based fluorescent reporters and redox proteomics have revolutionized the observation and profiling of cellular redox states and electrophile-sensor proteins, respectively. However, precision mapping between specific redox-modified targets and specific responses has only recently begun to be addressed, and systems tractable to both genetic manipulation and on-target redox signaling in vivo remain largely limited. Here we engineer transgenic Caenorhabditis elegans expressing functional HaloTagged fusion proteins and use this system to develop a generalizable light-controlled approach to tagging a prototypical electrophile-sensor protein with native electrophiles in vivo. The method circumvents issues associated with low uptake/distribution and toxicity/promiscuity. Given the validated success of C. elegans in aging studies, this optimized platform offers a new lens with which to scrutinize how on-target electrophile signaling influences redox-dependent life span regulation.

  14. Design of Janus Nanoparticles with Atomic Precision

    Science.gov (United States)

    Sun, Qiang; Wang, Qian; Jena, Puru; Kawazoe, Yoshi

    2008-03-01

    Janus nanoparticles, characterized by their anisotropic structure and interactions have added a new dimension to nanoscience because of their potential applications in biomedicine, sensors, catalysis and assembled materials. The technological applications of these nanoparticles, however, have been limited as the current chemical, physical, and biosynthetic methods lack sufficient size and shape selectivity. We report a technique where gold clusters doped with tungsten can serve as a seed that facilitates the natural growth of anisotropic nanostructures whose size and shape can be controlled with atomic precision. Using ab initio simulated annealing and molecular dynamics calculations on AunW (n>12) clusters, we discovered that the W@Au12 cage cluster forms a very stable core with the remaining Au atoms forming patchy structures on its surface. The anisotropic geometry gives rise to anisotropies in vibrational spectra, charge distributions, electronic structures, and reactivity, thus making it useful to have dual functionalities. In particular, the core-patch structure is shown to possess a hydrophilic head and a hydrophobic tail. The W@Au12 clusters can also be used as building blocks of a nano-ring with novel properties.

  15. Precision for B-meson matrix elements

    International Nuclear Information System (INIS)

    Guazzini, D.; Sommer, R.; Tantalo, N.

    2007-10-01

    We demonstrate how HQET and the Step Scaling Method for B-physics, pioneered by the Tor Vergata group, can be combined to reach a further improved precision. The observables considered are the mass of the b-quark and the B_s-meson decay constant. The demonstration is carried out in quenched lattice QCD. We start from a small volume, where one can use a standard O(a)-improved relativistic action for the b-quark, and compute two step scaling functions which relate the observables to the large volume ones. In all steps we extrapolate to the continuum limit, separately in HQET and in QCD for masses below m_b. The physical point m_b is then reached by an interpolation of the continuum results in 1/m. The essential, expected and verified, feature is that the step scaling functions have a weak mass-dependence resulting in an easy interpolation to the physical point. With r_0 = 0.5 fm and the experimental B_s and K masses as input, we find F_Bs = 191(6) MeV and the renormalization group invariant mass M_b = 6.88(10) GeV, translating into anti-m_b(anti-m_b) = 4.42(6) GeV in the MS-bar scheme. This approach seems very promising for full QCD. (orig.)
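
    The final step described here, interpolating continuum results in 1/m between the static point at 1/m = 0 and relativistic data below m_b, is numerically straightforward. A sketch with invented data points (not the paper's results):

```python
import numpy as np

# Invented example: an observable computed in the static limit (1/m = 0) and at
# several quark masses below m_b, then interpolated linearly in 1/m to the physical point.
inv_m = np.array([0.0, 0.12, 0.18, 0.25])      # 1/m in GeV^-1 (static point + QCD points)
obs = np.array([0.232, 0.221, 0.216, 0.210])   # observable values (made up)

coeffs = np.polyfit(inv_m, obs, 1)             # weak mass dependence -> low-order fit suffices
inv_mb = 1.0 / 4.9                             # physical 1/m_b in GeV^-1 (illustrative)
print(f"interpolated value at 1/m_b: {np.polyval(coeffs, inv_mb):.4f}")
```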

  16. Are torque values of preadjusted brackets precise?

    Directory of Open Access Journals (Sweden)

    Alessandra Motta Streva

    Full Text Available OBJECTIVE: The aim of the present study was to verify the torque precision of metallic brackets with MBT prescription, using canine brackets from six commercial brands as the representative sample. MATERIAL AND METHODS: Twenty maxillary and 20 mandibular canine brackets of each of the following commercial brands were selected: 3M Unitek, Abzil, American Orthodontics, TP Orthodontics, Morelli and Ortho Organizers. The torque angle, established by reference points and lines, was measured by an operator using an optical microscope coupled to a computer. The values were compared to those established by the MBT prescription. RESULTS: For the maxillary canine brackets, only the Morelli torque (-3.33º) presented a statistically significant difference from the proposed value (-7º). For the mandibular canines, American Orthodontics (-6.34º) and Ortho Organizers (-6.25º) presented statistically significant differences from the standard (-6º). Comparing the brands, Morelli presented statistically significant differences from all the other brands for maxillary canine brackets. For the mandibular canine brackets, there was no statistically significant difference between the brands. CONCLUSIONS: There are significant variations in the torque values of some of the brackets assessed, which would clinically compromise the buccolingual positioning of the tooth at the end of orthodontic treatment.

  17. Micro Machining Enhances Precision Fabrication

    Science.gov (United States)

    2007-01-01

    Advanced thermal systems developed for the Space Station Freedom project are now in use on the International Space Station. These thermal systems employ evaporative ammonia as their coolant, and though they employ the same series of chemical reactions as terrestrial refrigerators, the space-bound coolers are significantly smaller. Under two Small Business Innovation Research (SBIR) contracts between Creare Inc. of Hanover, NH, and Johnson Space Center, an ammonia evaporator was developed for thermal management systems aboard Freedom. The principal investigator at Creare Inc. formed Mikros Technologies Inc. to commercialize the work. Mikros Technologies then developed an advanced form of micro-electrical discharge machining (micro-EDM) to make tiny holes in the ammonia evaporator. Mikros Technologies has had great success applying this method to the fabrication of micro-nozzle array systems for industrial ink jet printing systems. The company is currently the world leader in fabrication of stainless steel micro-nozzles for this market, and in 2001 the company was awarded two SBIR research contracts from Goddard Space Flight Center to advance micro-fabrication and high-performance thermal management technologies.

  18. Advances in Precision Medicine: Tailoring Individualized Therapies.

    Science.gov (United States)

    Matchett, Kyle B; Lynam-Lennon, Niamh; Watson, R William; Brown, James A L

    2017-10-25

    The traditional bench-to-bedside pipeline involves using model systems and patient samples to provide insights into pathways deregulated in cancer. This discovery reveals new biomarkers and therapeutic targets, ultimately stratifying patients and informing cohort-based treatment options. Precision medicine (molecular profiling of individual tumors combined with established clinical-pathological parameters) reveals, in real time, an individual patient's diagnostic and prognostic risk profile, informing tailored and tumor-specific treatment plans. Here we discuss advances in precision medicine presented at the Irish Association for Cancer Research Annual Meeting, highlighting examples where personalized medicine approaches have led to precision discovery in individual tumors, informing customized treatment programs.

  19. High-speed steel for precise cast tools

    International Nuclear Information System (INIS)

    Karwiarz, J.; Mazur, A.

    2001-01-01

    The test results of high-vanadium high-speed steel (SWV9) for precise cast tools are presented. Face-milling cutters of the NFCa80A type have been tested in industrial operating conditions. The average lifetime of the SWV9 steel tools was 3-10 times longer compared to conventional high-speed milling cutters. Metallography of the SWV9 precise cast steel revealed a distribution of primary vanadium carbides in the steel matrix that is beneficial for tool properties. The presented results should be a good argument for wide application of high-vanadium high-speed steel for precise cast tools. (author)

  20. Precision medicine: opportunities, possibilities, and challenges for patients and providers.

    Science.gov (United States)

    Adams, Samantha A; Petersen, Carolyn

    2016-07-01

    Precision medicine approaches disease treatment and prevention by taking patients' individual variability in genes, environment, and lifestyle into account. Although the ideas underlying precision medicine are not new, opportunities for its more widespread use in practice have been enhanced by the development of large-scale databases, new methods for categorizing and representing patients, and computational tools for analyzing large datasets. New research methods may create uncertainty for both healthcare professionals and patients. In such situations, frameworks that address ethical, legal, and social challenges can be instrumental for facilitating trust between patients and providers, but must protect patients while not stifling progress or overburdening healthcare professionals. In this perspective, we outline several ethical, legal, and social issues related to the Precision Medicine Initiative's proposed changes to current institutions, values, and frameworks. This piece is not an exhaustive overview, but is intended to highlight areas meriting further study and action, so that precision medicine's goal of facilitating systematic learning and research at the point of care does not overshadow healthcare's goal of providing care to patients. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

  1. Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization

    Directory of Open Access Journals (Sweden)

    Jianjun Tang

    2014-01-01

    Full Text Available Assembly precision optimization of complex products is of great benefit in improving product quality. Because a variety of deviation sources couple with one another, the target of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, a sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of the assembly dimension variation to the deviation source dimension variation. Second, according to the assembly constraint relations, assembly sequences, and locating scheme, deviation transmission paths are established by identifying the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created from the deviation transmission paths, and the corresponding scalar equations for each dimension are established. Then, the assembly deviation source sensitivities are calculated using a first-order Taylor expansion and a matrix transformation method. Finally, taking the assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
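
    The sensitivity definition used above (the first-order ratio of assembly dimension variation to deviation source variation) can be sketched numerically as follows. This is an illustrative sketch, not the paper's implementation: the assembly closure function and its nominal dimensions are hypothetical, and the partial derivatives of the first-order Taylor expansion are approximated by central finite differences.

    import numpy as np

    def assembly_dimension(x):
        """Hypothetical closure equation of an assembly vector loop: the assembly
        dimension as a function of the deviation source dimensions x."""
        # e.g. a gap depending on three stacked lengths and a small angle (rad)
        return x[0] - x[1] * np.cos(x[3]) - x[2]

    nominal = np.array([50.0, 30.0, 19.5, 0.02])  # placeholder nominal values

    def sensitivities(f, x0, h=1e-6):
        """Central-difference estimate of dA/dx_i for each deviation source i."""
        s = np.zeros_like(x0)
        for i in range(len(x0)):
            xp, xm = x0.copy(), x0.copy()
            xp[i] += h
            xm[i] -= h
            s[i] = (f(xp) - f(xm)) / (2.0 * h)
        return s

    print("deviation source sensitivities:", sensitivities(assembly_dimension, nominal))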

  2. Big Data’s Role in Precision Public Health

    Science.gov (United States)

    Dolley, Shawn

    2018-01-01

    Precision public health is an emerging practice to more granularly predict and understand public health risks and customize treatments for more specific and homogeneous subpopulations, often using new data, technologies, and methods. Big data is one element that has consistently helped to achieve these goals, through its ability to deliver to practitioners a volume and variety of structured or unstructured data not previously possible. Big data has enabled more widespread and specific research and trials of stratifying and segmenting populations at risk for a variety of health problems. Examples of success using big data are surveyed in surveillance and signal detection, predicting future risk, targeted interventions, and understanding disease. Using novel big data or big data approaches has risks that remain to be resolved. The continued growth in volume and variety of available data, decreased costs of data capture, and emerging computational methods mean big data success will likely be a required pillar of precision public health into the future. This review article aims to identify the precision public health use cases where big data has added value, identify classes of value that big data may bring, and outline the risks inherent in using big data in precision public health efforts. PMID:29594091

  3. Agricultural experts’ attitude towards precision agriculture: Evidence from Guilan Agricultural Organization, Northern Iran

    OpenAIRE

    Mohammad Sadegh Allahyari; Masoumeh Mohammadzadeh; Stefanos A. Nastis

    2016-01-01

    Identifying factors that influence the attitudes of agricultural experts regarding precision agriculture plays an important role in developing, promoting and establishing precision agriculture. The aim of this study was to identify factors affecting the attitudes of agricultural experts regarding the implementation of precision agriculture. A descriptive research design was employed as the research method. A research-made questionnaire was used to examine the agricultural experts’ attitude to...

  4. Precision mechatronics based on high-precision measuring and positioning systems and machines

    Science.gov (United States)

    Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert

    2007-06-01

    Precision mechatronics is defined in the paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with uncertainties as small as possible is discussed. The integration of several optical and tactile nanoprobes makes the 3D-nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free-form surface measurement, as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.

  5. Research on Ship Trajectory Tracking with High Precision Based on LOS

    Directory of Open Access Journals (Sweden)

    Hengzhi Liu

    2018-01-01

    Full Text Available Aiming at high-precision ship trajectory tracking based on line-of-sight (LOS) guidance, a method is proposed. The method combines the advantages of LOS guidance (simplicity and intuitiveness, easy parameter setting, and good convergence) with the features of generalized predictive control (GPC): reference softening, multi-step prediction, rolling optimization, and excellent controllability and robustness. In order to verify the effectiveness of the method, it is simulated in Matlab. The simulation results show that the method tracks the ship trajectory with high precision.
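
    For reference, a minimal sketch of a lookahead-based LOS guidance law is given below; the GPC controller that the abstract combines it with is omitted, and the waypoint coordinates and lookahead distance are illustrative assumptions rather than parameters from the paper.

    import math

    def los_desired_heading(pos, wp_prev, wp_next, lookahead=50.0):
        """Desired heading (rad) that steers the ship back onto the straight
        path segment from wp_prev to wp_next."""
        path_angle = math.atan2(wp_next[1] - wp_prev[1], wp_next[0] - wp_prev[0])
        # Signed cross-track error: lateral distance of the ship from the path.
        dx, dy = pos[0] - wp_prev[0], pos[1] - wp_prev[1]
        cross_track = -dx * math.sin(path_angle) + dy * math.cos(path_angle)
        return path_angle + math.atan2(-cross_track, lookahead)

    # Example: ship 20 m off a due-east path segment; the commanded heading
    # points it gently back toward the path.
    print(math.degrees(los_desired_heading((100.0, 20.0), (0.0, 0.0), (500.0, 0.0))))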

  6. Precision test method by x-ray absorbent clay

    International Nuclear Information System (INIS)

    Nakadai, Toru; Matsukawa, Hideyuki; Sekita, Jun-ichiro; Murakoshi, Atsushi.

    1982-01-01

    In X-ray penetration radiography of objects such as welds with reinforcement and castings of complex shape, the X-ray absorbent clay, developed to eliminate various disadvantages of the conventional absorbents, was further studied for better application. The results of its use are as follows. Because the X-ray absorbent is clay, it is flexible in form and adheres well to test objects. For the welds and castings mentioned, it is effective in reducing scattered radiation and accordingly yields superior images. The following matters are described: contrast in radiographs, the general requirements for X-ray absorbents, the properties of the absorbent (absorption coefficient, consistency, density), and improvement of radiographs by means of the X-ray absorbent clay (wall-thickness compensation, masking, and application together with narrow-field irradiation photography). (Mori, K.)

  7. Method

    Directory of Open Access Journals (Sweden)

    Ling Fiona W.M.

    2017-01-01

    Full Text Available Rapid prototyping of microchannels has gained considerable attention from researchers along with the rapid development of microfluidic technology. The conventional methods carry several disadvantages, such as high cost, long processing times, high operating pressures and temperatures, and the need for expertise in operating the equipment. In this work, a new method adapting xurography is introduced to replace the conventional methods of microchannel fabrication. The novelty of this study lies in replacing the adhesive film with a clear plastic film, on which the microchannel design was cut, as this material is more suitable for fabricating more complex microchannel designs. The microchannel was then molded using polydimethylsiloxane (PDMS) and bonded to a clean glass substrate to produce a closed microchannel. The microchannel produced had clean edges, indicating that a good master mold was produced using the cutting plotter, and the bonding between the PDMS and glass was good, as no leakage was observed. The materials used in this method are cheap, and the total time consumed is less than 5 hours, making this method suitable for rapid prototyping of microchannels.

  8. High precision spectrophotometric analysis of thorium

    International Nuclear Information System (INIS)

    Palmieri, H.E.L.

    1984-01-01

    An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of thorium during processing. After an extensive literature search on this subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution as titrant and alizarin S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added, and the titration was completed with less than 5 mL of 0.0025 M EDTA solution. The end-point is usually located graphically, by plotting added titrant versus absorbance. Here it was determined instead by a non-linear least-squares fit, using the Fletcher and Powell minimization method and a computer program. Besides the equivalence point, other parameters of the titration were determined: the indicator concentration, the absorbance of the metal-indicator complex, and the stability constants of the metal-indicator and metal-EDTA complexes. (Author) [pt
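
    The end-point determination can be sketched as a non-linear least-squares problem. The example below is a simplification: it fits only a two-segment absorbance model to synthetic data in order to locate the equivalence volume, whereas the work described above also fits the indicator concentration, the metal-indicator absorbance, and the stability constants, and uses the Fletcher-Powell minimizer rather than SciPy.

    import numpy as np
    from scipy.optimize import curve_fit

    def two_segment(v, v_eq, a_eq, slope1, slope2):
        """Absorbance modeled as two straight lines meeting at the equivalence
        volume v_eq."""
        return np.where(v < v_eq,
                        a_eq + slope1 * (v - v_eq),
                        a_eq + slope2 * (v - v_eq))

    # Synthetic titration data with a change of slope near v_eq = 4.2 mL.
    v = np.linspace(0.0, 8.0, 40)
    rng = np.random.default_rng(0)
    absorbance = two_segment(v, 4.2, 0.35, -0.080, -0.005) + rng.normal(0.0, 0.002, v.size)

    popt, _ = curve_fit(two_segment, v, absorbance, p0=[4.0, 0.3, -0.1, 0.0])
    print(f"fitted equivalence volume: {popt[0]:.3f} mL")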

  9. Thorium spectrophotometric analysis with high precision

    International Nuclear Information System (INIS)

    Palmieri, H.E.L.

    1983-06-01

    An accurate and precise determination of thorium is proposed. A precision of about 0.1% is required for the determination of macroquantities of thorium processed. After an extensive literature search on this subject, spectrophotometric titration was chosen, using disodium ethylenediaminetetraacetate (EDTA) solution as titrant and alizarin S as indicator. In order to obtain such precision, a precisely measured amount of 0.025 M EDTA solution was added, and the titration was completed with less than 5 mL of 0.0025 M EDTA solution. The end-point is usually located graphically, by plotting added titrant versus absorbance. Here it was determined instead by a non-linear least-squares fit, using the Fletcher and Powell minimization method and a computer program. (author)

  10. Cardiovascular Precision Medicine in the Genomics Era

    Directory of Open Access Journals (Sweden)

    Alexandra M. Dainis, BS

    2018-04-01

    Full Text Available Summary: Precision medicine strives to delineate disease using multiple data sources—from genomics to digital health metrics—in order to be more precise and accurate in our diagnoses, definitions, and treatments of disease subtypes. By defining disease at a deeper level, we can treat patients based on an understanding of the molecular underpinnings of their presentations, rather than grouping patients into broad categories with one-size-fits-all treatments. In this review, the authors examine how precision medicine, specifically that surrounding genetic testing and genetic therapeutics, has begun to make strides in both common and rare cardiovascular diseases in the clinic and the laboratory, and how these advances are beginning to enable us to more effectively define risk, diagnose disease, and deliver therapeutics for each individual patient. Key Words: genome sequencing, genomics, precision medicine, targeted therapeutics

  11. Equity and Value in 'Precision Medicine'.

    Science.gov (United States)

    Gray, Muir; Lagerberg, Tyra; Dombrádi, Viktor

    2017-04-01

    Precision medicine carries huge potential in the treatment of many diseases, particularly those with high-penetrance monogenic underpinnings. However, precision medicine through genomic technologies also has ethical implications. We will define allocative, personal, and technical value ('triple value') in healthcare and how this relates to equity. Equity is here taken to be implicit in the concept of triple value in countries that have publicly funded healthcare systems. It will be argued that precision medicine risks concentrating resources on those who already experience greater access to healthcare and power in society, nationally as well as globally. Healthcare payers, clinicians, and patients must all be involved in optimising the potential of precision medicine, without reducing equity. Throughout, the discussion will refer to the NHS RightCare Programme, which is a national initiative aiming to improve value and equity in the context of NHS England.

  12. The forthcoming era of precision medicine.

    Science.gov (United States)

    Gamulin, Stjepan

    2016-11-01

    The aim of this essay is to present the definition and principles of personalized or precision medicine, the perspective and barriers to its development and clinical application. The implementation of precision medicine in health care requires the coordinated efforts of all health care stakeholders (the biomedical community, government, regulatory bodies, patients' groups). Particularly, translational research with the integration of genomic and comprehensive data from all levels of the organism ("big data"), development of bioinformatics platforms enabling network analysis of disease etiopathogenesis, development of a legislative framework for handling personal data, and new paradigms of medical education are necessary for successful application of the concept of precision medicine in health care. In the present and future era of precision medicine, the collaboration of all participants in health care is necessary for its realization, resulting in improvement of diagnosis, prevention and therapy, based on a holistic, individually tailored approach. Copyright © 2016 by Academy of Sciences and Arts of Bosnia and Herzegovina.

  13. Epistemology, Ethics, and Progress in Precision Medicine.

    Science.gov (United States)

    Hey, Spencer Phillips; Barsanti-Innes, Brianna

    2016-01-01

    The emerging paradigm of precision medicine strives to leverage the tools of molecular biology to prospectively tailor treatments to the individual patient. Fundamental to the success of this movement is the discovery and validation of "predictive biomarkers," which are properties of a patient's biological specimens that can be assayed in advance of therapy to inform the treatment decision. Unfortunately, research into biomarkers and diagnostics for precision medicine has fallen well short of expectations. In this essay, we examine the portfolio of research activities into the excision repair cross complement group 1 (ERCC1) gene as a predictive biomarker for precision lung cancer therapy as a case study in elucidating the epistemological and ethical obstacles to developing new precision medicines.

  14. A Note on "Accuracy" and "Precision"

    Science.gov (United States)

    Stallings, William M.; Gillmore, Gerald M.

    1971-01-01

    Advocates the use of "precision" rather than "accuracy" in defining reliability. These terms are consistently differentiated in certain sciences. A review of the psychological and measurement literature reveals, however, interchangeable usage of the terms in defining reliability. (Author/GS)

  15. Precision axial translator with high stability.

    Science.gov (United States)

    Bösch, M A

    1979-08-01

    We describe a new type of translator which is inherently stable against torsion and twisting. This concentric translator is also ideally suited for precise axial motion with clearance of the center line.

  16. Precision Munition Electro-Sciences Facility

    Data.gov (United States)

    Federal Laboratory Consortium — This facility allows the characterization of the electro-magnetic environment produced by a precision weapon in free flight. It can measure the radiofrequency (RF)...

  17. Precision electroweak physics at the Tevatron

    International Nuclear Information System (INIS)

    James, Eric B.

    2006-01-01

    An overview of Tevatron electroweak measurements performed by the CDF and DØ experiments is presented. The current status and future prospects for high-precision measurements of electroweak parameters and detailed studies of boson production are highlighted. (author)

  18. Precision Guidance with Impact Angle Requirements

    National Research Council Canada - National Science Library

    Ford, Jason

    2001-01-01

    This paper examines a weapon system precision guidance problem in which the objective is to guide a weapon onto a non-manoeuvring target so that a particular desired angle of impact is achieved using...

  19. Precise subtyping for synchronous multiparty sessions

    Directory of Open Access Journals (Sweden)

    Mariangiola Dezani-Ciancaglini

    2016-02-01

    Full Text Available The notion of subtyping has gained an important role both in theoretical and applicative domains: in lambda and concurrent calculi as well as in programming languages. The soundness and the completeness, together referred to as the preciseness of subtyping, can be considered from two different points of view: operational and denotational. The former preciseness has been recently developed with respect to type safety, i.e. the safe replacement of a term of a smaller type when a term of a bigger type is expected. The latter preciseness is based on the denotation of a type which is a mathematical object that describes the meaning of the type in accordance with the denotations of other expressions from the language. The result of this paper is the operational and denotational preciseness of the subtyping for a synchronous multiparty session calculus. The novelty of this paper is the introduction of characteristic global types to prove the operational completeness.

  20. Prospects for Precision Neutrino Cross Section Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Deborah A. [Fermilab

    2016-01-28

    The need for precision cross section measurements is more urgent now than ever before, given the central role neutrino oscillation measurements play in the field of particle physics. The definition of precision is something worth considering, however. In order to build the best model for an oscillation experiment, cross section measurements should span a broad range of energies, neutrino interaction channels, and target nuclei. Precision might better be defined not in the final uncertainty associated with any one measurement but rather with the breadth of measurements that are available to constrain models. Current experience shows that models are better constrained by 10 measurements across different processes and energies with 10% uncertainties than by one measurement of one process on one nucleus with a 1% uncertainty. This article describes the current status of and future prospects for the field of precision cross section measurements considering the metric of how many processes, energies, and nuclei have been studied.