WorldWideScience

Sample records for high accuracy verification

  1. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

    The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control. The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from the theory; and (3) discuss their advantages and drawbacks and areas of applicability, and give recommendations and examples.

  2. High-level verification

    CERN Document Server

    Lerner, Sorin; Kundu, Sudipta

    2011-01-01

    Given the growing size and heterogeneity of Systems on Chip (SOC), the design process from initial specification to chip fabrication has become increasingly complex. This growing complexity provides incentive for designers to use high-level languages such as C, SystemC, and SystemVerilog for system-level design. While a major goal of these high-level languages is to enable verification at a higher level of abstraction, allowing early exploration of system-level designs, the focus so far for validation purposes has been on traditional testing techniques such as random testing and scenario-based

  3. Dose delivery verification and accuracy assessment of stereotaxy in stereotactic radiotherapy and radiosurgery

    International Nuclear Information System (INIS)

    Pelagade, S.M.; Bopche, T.T.; Namitha, K.; Munshi, M.; Bhola, S.; Sharma, H.; Patel, B.K.; Vyas, R.K.

    2008-01-01

    The outcome of stereotactic radiotherapy (SRT) and stereotactic radiosurgery (SRS) in both benign and malignant tumors within the cranial region highly depends on precision in dosimetry, dose delivery and the accuracy assessment of the stereotaxy associated with the unit. The BRW (Brown-Roberts-Wells) and GTC (Gill-Thomas-Cosman) frames can facilitate accurate patient positioning as well as precise targeting of tumours. The implementation of this technique may result in a significant benefit as compared to conventional therapy. As the target localization accuracy is improved, the demand for treatment planning accuracy of a TPS also increases. The accuracy of the stereotactic X-Knife treatment planning system has two components to verify: (i) dose delivery verification and the accuracy assessment of stereotaxy; and (ii) ensuring that the associated Cartesian coordinate system is well established within the TPS for accurate determination of a target position. Both dose delivery verification and target positional accuracy affect the dose delivery accuracy to a defined target. Hence there is a need to verify these two components in a quality assurance protocol. The main intention of this paper is to present our dose delivery verification procedure using a cylindrical wax phantom and the accuracy assessment (target position) of stereotaxy using a geometric phantom on Elekta's Precise linear accelerator for the stereotactic installation.

  4. Dosimetric accuracy of Kodak EDR2 film for IMRT verifications.

    Science.gov (United States)

    Childress, Nathan L; Salehpour, Mohammad; Dong, Lei; Bloch, Charles; White, R Allen; Rosen, Isaac I

    2005-02-01

    Patient-specific intensity-modulated radiotherapy (IMRT) verifications require an accurate two-dimensional dosimeter that is not labor-intensive. We assessed the precision and reproducibility of film calibrations over time, measured the elemental composition of the film, measured the intermittency effect, and measured the dosimetric accuracy and reproducibility of calibrated Kodak EDR2 film for single-beam verifications in a solid water phantom and for full-plan verifications in a Rexolite phantom. Repeated measurements of the film sensitometric curve in a single experiment yielded overall uncertainties in dose of 2.1% local and 0.8% relative to 300 cGy. 547 film calibrations over an 18-month period, exposed to a range of doses from 0 to a maximum of 240 MU or 360 MU and using 6 MV or 18 MV energies, had optical density (OD) standard deviations that were 7%-15% of their average values. This indicates that daily film calibrations are essential when EDR2 film is used to obtain absolute dose results. An elemental analysis of EDR2 film revealed that it contains 60% as much silver and 20% as much bromine as Kodak XV2 film. EDR2 film also has an unusual 1.69:1 silver:halide molar ratio, compared with the XV2 film's 1.02:1 ratio, which may affect its chemical reactions. To test EDR2's intermittency effect, the OD generated by a single 300 MU exposure was compared to the ODs generated by exposing the film 1 MU, 2 MU, and 4 MU at a time to a total of 300 MU. An ion chamber recorded the relative dose of all intermittency measurements to account for machine output variations. Using small MU bursts to expose the film resulted in delivery times of 4 to 14 minutes and lowered the film's OD by approximately 2% for both 6 and 18 MV beams. This effect may result in EDR2 film underestimating absolute doses for patient verifications that require long delivery times. After using a calibration to convert EDR2 film's OD to dose values, film measurements agreed within 2% relative
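
    The workflow above hinges on converting measured optical density (OD) to dose through a daily sensitometric calibration. The abstract does not give the functional form used, so the sketch below is only a minimal illustration of that step, assuming a low-order polynomial fit and entirely hypothetical calibration points (cal_od, cal_dose).

    ```python
    import numpy as np

    # Hypothetical daily calibration points: net optical density vs. delivered dose (cGy).
    cal_od   = np.array([0.10, 0.45, 0.85, 1.20, 1.50, 1.75])
    cal_dose = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])

    # Fit dose as a cubic polynomial of net OD (one of several plausible forms).
    coeffs = np.polyfit(cal_od, cal_dose, deg=3)

    def od_to_dose(net_od):
        """Convert a measured net OD (film minus fog) to dose using today's calibration."""
        return np.polyval(coeffs, net_od)

    # Example: convert a measured film OD to dose and compare with the planned dose.
    measured_od = 1.32
    planned_dose = 170.0
    film_dose = od_to_dose(measured_od)
    print(f"film dose = {film_dose:.1f} cGy, "
          f"difference = {100 * (film_dose - planned_dose) / planned_dose:+.1f}%")
    ```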

  5. Determining the Accuracy of Crowdsourced Tweet Verification for Auroral Research

    Directory of Open Access Journals (Sweden)

    Nathan A. Case

    2016-12-01

    The Aurorasaurus project harnesses volunteer crowdsourcing to identify sightings of an aurora (the “northern/southern lights”) posted by citizen scientists on Twitter. Previous studies have demonstrated that aurora sightings can be mined from Twitter, with the caveat that there is a large background level of non-sighting tweets, especially during periods of low auroral activity. Aurorasaurus attempts to mitigate this, and thus increase the quality of its Twitter sighting data, by using volunteers to sift through a pre-filtered list of geolocated tweets to verify real-time aurora sightings. In this study, the current implementation of this crowdsourced verification system, including the process of geolocating tweets, is described and its accuracy (which, overall, is found to be 68.4%) is determined. The findings suggest that citizen science volunteers are able to accurately filter out unrelated, spam-like Twitter data but struggle when filtering out somewhat related, yet undesired, data. The citizen scientists particularly struggle with determining the real-time nature of the sightings, so care must be taken when relying on crowdsourced identification.

  6. Impact of the frequency of online verifications on the patient set-up accuracy and set-up margins

    Directory of Open Access Journals (Sweden)

    Mohamed Adel

    2011-08-01

    Purpose: The purpose of the study was to evaluate the patient set-up error of different anatomical sites, to estimate the effect of different frequencies of online verifications on the patient set-up accuracy, and to calculate margins to accommodate for the patient set-up error (ICRU set-up margin, SM). Methods and materials: Alignment data of 148 patients treated with inversely planned intensity-modulated radiotherapy (IMRT) or three-dimensional conformal radiotherapy (3D-CRT) of the head and neck (n = 31), chest (n = 72), abdomen (n = 15), and pelvis (n = 30) were evaluated. The patient set-up accuracy was assessed using orthogonal megavoltage electronic portal images of 2328 fractions of 173 planning target volumes (PTV). In 25 patients, two PTVs were analyzed, where the PTVs were located in different anatomical sites and treated in two different radiotherapy courses. The patient set-up error and the corresponding SM were retrospectively determined assuming no online verification, online verification once a week and online verification every other day. Results: The SM could be effectively reduced with increasing frequency of online verifications. However, a significant frequency of relevant set-up errors remained even after online verification every other day. For example, residual set-up errors larger than 5 mm were observed on average in 18% to 27% of all fractions of patients treated in the chest, abdomen and pelvis, and in 10% of fractions of patients treated in the head and neck after online verification every other day. Conclusion: In patients where high set-up accuracy is desired, daily online verification is highly recommended.

  7. Impact of the frequency of online verifications on the patient set-up accuracy and set-up margins

    International Nuclear Information System (INIS)

    Rudat, Volker; Hammoud, Mohamed; Pillay, Yogin; Alaradi, Abdul Aziz; Mohamed, Adel; Altuwaijri, Saleh

    2011-01-01

    The purpose of the study was to evaluate the patient set-up error of different anatomical sites, to estimate the effect of different frequencies of online verifications on the patient set-up accuracy, and to calculate margins to accommodate for the patient set-up error (ICRU set-up margin, SM). Alignment data of 148 patients treated with inversely planned intensity-modulated radiotherapy (IMRT) or three-dimensional conformal radiotherapy (3D-CRT) of the head and neck (n = 31), chest (n = 72), abdomen (n = 15), and pelvis (n = 30) were evaluated. The patient set-up accuracy was assessed using orthogonal megavoltage electronic portal images of 2328 fractions of 173 planning target volumes (PTV). In 25 patients, two PTVs were analyzed, where the PTVs were located in different anatomical sites and treated in two different radiotherapy courses. The patient set-up error and the corresponding SM were retrospectively determined assuming no online verification, online verification once a week and online verification every other day. The SM could be effectively reduced with increasing frequency of online verifications. However, a significant frequency of relevant set-up errors remained even after online verification every other day. For example, residual set-up errors larger than 5 mm were observed on average in 18% to 27% of all fractions of patients treated in the chest, abdomen and pelvis, and in 10% of fractions of patients treated in the head and neck after online verification every other day. In patients where high set-up accuracy is desired, daily online verification is highly recommended.
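
    Neither abstract spells out how the set-up margin is computed from the measured errors. A widely used recipe (van Herk's 2.5Σ + 0.7σ, named here only as an assumption about what an SM calculation can look like, not as the authors' method) combines the systematic component Σ and the random component σ of the set-up error; the sketch below derives both from hypothetical per-fraction alignment data for one axis.

    ```python
    import numpy as np

    # Hypothetical per-patient set-up errors (mm) in one axis: one row per patient,
    # one column per imaged fraction.
    errors = np.array([
        [ 1.2, -0.5,  2.1,  0.8],
        [-2.0, -1.1, -0.4, -1.8],
        [ 0.3,  1.9,  0.6,  1.1],
    ])

    patient_means = errors.mean(axis=1)                         # per-patient mean (systematic) error
    Sigma = patient_means.std(ddof=1)                           # population systematic error Σ
    sigma = np.sqrt((errors.std(axis=1, ddof=1) ** 2).mean())   # pooled random error σ

    # van Herk margin recipe (an assumption here, not necessarily what the authors used):
    margin = 2.5 * Sigma + 0.7 * sigma
    print(f"Σ = {Sigma:.2f} mm, σ = {sigma:.2f} mm, margin ≈ {margin:.2f} mm")
    ```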

  8. 40 CFR 1065.305 - Verifications for accuracy, repeatability, and noise.

    Science.gov (United States)

    2010-07-01

    Section 1065.305 (Title 40, Protection of Environment, Environmental Protection Agency): Verifications for accuracy, repeatability, and noise. (a) This section describes how to determine the accuracy, repeatability, and noise of an instrument. Table 1 of § 1065.205 specifies recommended values for individual...

  9. Verification of Positional Accuracy of ZVS3003 Geodetic Control ...

    African Journals Online (AJOL)

    The International GPS Service (IGS) has provided GPS orbit products to the scientific community with increased precision and timeliness. Many users interested in geodetic positioning have adopted the IGS precise orbits to achieve centimeter level accuracy and ensure long-term reference frame stability. Positioning with ...

  10. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    OpenAIRE

    Clark, T Justin; ter Riet, Gerben; Coomarasamy, Aravinthan; Khan, Khalid S

    2004-01-01

    Background: To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods: Systematic reviews of all published research on the accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 ha...

  11. Bias associated with delayed verification in test accuracy studies: accuracy of tests for endometrial hyperplasia may be much higher than we think!

    Directory of Open Access Journals (Sweden)

    Coomarasamy Aravinthan

    2004-05-01

    Background: To empirically evaluate bias in estimation of accuracy associated with delay in verification of diagnosis among studies evaluating tests for predicting endometrial hyperplasia. Methods: Systematic reviews of all published research on the accuracy of miniature endometrial biopsy and endometrial ultrasonography for diagnosing endometrial hyperplasia identified 27 test accuracy studies (2,982 subjects). Of these, 16 had immediate histological verification of diagnosis while 11 had verification delayed > 24 hrs after testing. The effect of delay in verification of diagnosis on estimates of accuracy was evaluated using meta-regression with the diagnostic odds ratio (dOR) as the accuracy measure. This analysis was adjusted for study quality and type of test (miniature endometrial biopsy or endometrial ultrasound). Results: Compared to studies with immediate verification of diagnosis (dOR 67.2, 95% CI 21.7–208.8), those with delayed verification (dOR 16.2, 95% CI 8.6–30.5) underestimated the diagnostic accuracy by 74% (95% CI 7%–99%; P value = 0.048). Conclusion: Among studies of miniature endometrial biopsy and endometrial ultrasound, diagnostic accuracy is considerably underestimated if there is a delay in histological verification of diagnosis.
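
    For readers unfamiliar with the accuracy measure, the diagnostic odds ratio is defined from the usual 2x2 counts, and the reported 74% underestimation is close to what the ratio of the two pooled dORs gives before adjustment (the published figure comes from the adjusted meta-regression):

    ```latex
    \mathrm{dOR} = \frac{TP \times TN}{FP \times FN}, \qquad
    \frac{\mathrm{dOR}_{\text{delayed}}}{\mathrm{dOR}_{\text{immediate}}}
      = \frac{16.2}{67.2} \approx 0.24,
    ```

    i.e. roughly a 76% lower dOR for studies with delayed verification, of the same order as the adjusted 74% estimate quoted above.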

  12. Verification of Kaplan turbine cam curves realization accuracy at power plant

    Directory of Open Access Journals (Sweden)

    Džepčeski Dane

    2016-01-01

    Sustainability of an approximately constant value of Kaplan turbine efficiency over relatively large net head changes is a result of the turbine runner's variable geometry. The dependence of the runner blade position on the guide vane opening represents the turbine cam curve. The accuracy of the cam curve realization is of great importance for the efficient and proper exploitation of turbines and, consequently, of complete units. For these reasons, special attention has been given to the tests designed for cam curve verification. The goal of this paper is to describe the methodology and the results of the tests performed in the process of Kaplan turbine cam curve verification.

  13. Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification

    Energy Technology Data Exchange (ETDEWEB)

    Blottner, F.G.; Lopez, A.R.

    1998-10-01

    This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure, as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
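
    As a concrete companion to the Richardson extrapolation discussed above, the sketch below estimates the observed order of accuracy and an extrapolated "exact" value from solutions on three systematically refined meshes. The function name, the constant refinement ratio and the sample values are assumptions for illustration, not data from the report.

    ```python
    import math

    def richardson(f_coarse, f_medium, f_fine, r):
        """Observed order p and extrapolated value from solutions on three meshes
        with constant refinement ratio r (h_coarse = r*h_medium = r**2*h_fine)."""
        p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
        f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
        return p, f_exact

    # Example with made-up, grid-converging values of some integral quantity.
    p, f_exact = richardson(0.9500, 0.9825, 0.9906, r=2.0)
    print(f"observed order ≈ {p:.2f}, extrapolated value ≈ {f_exact:.4f}")
    ```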

  14. High current high accuracy IGBT pulse generator

    International Nuclear Information System (INIS)

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse-to-pulse power losses. The rack-mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  15. Verification of the Time Accuracy of a Magnetometer by Using a GPS Pulse Generator

    Directory of Open Access Journals (Sweden)

    Yasuhiro Minamoto

    2011-05-01

    The time accuracy of geomagnetic data is an important specification for one-second data distributions. We tested a procedure to verify the time accuracy of a fluxgate magnetometer by using a GPS pulse generator. The magnetometer was equipped with a high time resolution (100 Hz) output, so the data delay could be checked directly. The delay detected from one-second data by a statistical method was larger than those from the 0.1 s and 0.01 s resolution data. The test of the time accuracy revealed the larger delay and was useful for verifying the quality of the data.
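
    The abstract does not state how the delay was extracted from the 100 Hz channel. One simple way to do it, shown below purely as a hedged sketch rather than the authors' procedure, is to cross-correlate the recorded GPS reference pulses with the magnetometer output and read the delay off the lag of the correlation peak; the sampling rate, pulse shape and 30 ms delay are synthetic.

    ```python
    import numpy as np

    fs = 100.0                        # sampling rate of the high-resolution output (Hz)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)

    # Synthetic GPS reference pulse train and a magnetometer channel delayed by 30 ms.
    true_delay_s = 0.030
    gps = (np.mod(t, 1.0) < 0.01).astype(float)            # one-sample pulse each second
    mag = np.interp(t - true_delay_s, t, gps) + rng.normal(0, 0.01, t.size)

    # Cross-correlate and locate the peak lag.
    corr = np.correlate(mag - mag.mean(), gps - gps.mean(), mode="full")
    lags = np.arange(-t.size + 1, t.size)
    delay_samples = lags[np.argmax(corr)]
    print(f"estimated delay ≈ {delay_samples / fs * 1e3:.0f} ms")
    ```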

  16. SU-C-207A-04: Accuracy of Acoustic-Based Proton Range Verification in Water

    International Nuclear Information System (INIS)

    Jones, KC; Sehgal, CM; Avery, S; Vander Stappen, F

    2016-01-01

    Purpose: To determine the accuracy and dose required for acoustic-based proton range verification (protoacoustics) in water. Methods: Proton pulses with 17 µs FWHM and instantaneous currents of 480 nA (5.6 × 10⁷ protons/pulse, 8.9 cGy/pulse) were generated by a clinical, hospital-based cyclotron at the University of Pennsylvania. The protoacoustic signal generated in a water phantom by the 190 MeV proton pulses was measured with a hydrophone placed at multiple known positions surrounding the dose deposition. The background random noise was measured. The protoacoustic signal was simulated to compare to the experiments. Results: The maximum protoacoustic signal amplitude at 5 cm distance was 5.2 mPa per 1 × 10⁷ protons (1.6 cGy at the Bragg peak). The background random noise of the measurement was 27 mPa. Comparison between simulation and experiment indicates that the hydrophone introduced a delay of 2.4 µs. For acoustic data collected with a signal-to-noise ratio (SNR) of 21, deconvolution of the protoacoustic signal with the proton pulse provided the most precise time-of-flight range measurement (standard deviation of 2.0 mm), but a systematic error (−4.5 mm) was observed. Conclusion: Based on water phantom measurements at a clinical hospital-based cyclotron, protoacoustics is a potential technique for measuring the proton Bragg peak range with 2.0 mm standard deviation. Simultaneous use of multiple detectors is expected to reduce the standard deviation, but calibration is required to remove systematic error. Based on the measured background noise and protoacoustic amplitude, an SNR of 5.3 is projected for a deposited dose of 2 Gy.
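
    As a back-of-the-envelope companion to the deconvolution-based time-of-flight analysis described above (a simplification, not the paper's actual processing), the Bragg-peak-to-detector distance follows from the acoustic arrival time and the speed of sound in water, once the detector delay is removed:

    ```latex
    d = c_{\mathrm{water}}\,\bigl(t_{\mathrm{arrival}} - t_{\mathrm{detector}}\bigr),
    \qquad c_{\mathrm{water}} \approx 1.48\ \mathrm{mm/\mu s},
    ```

    so an uncorrected delay of 2.4 µs by itself maps to roughly 3.6 mm of apparent range shift, comparable in magnitude to the −4.5 mm systematic error reported.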

  17. Verification of the accuracy of Doppler broadened, self-shielded multigroup cross sections for fast power reactor applications

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1988-01-01

    Verification results for Doppler broadening and self-shielding are presented. One of the important results presented is that the original SIGMA1 method of numerical Doppler broadening has now been demonstrated to be inaccurate and not capable of producing results to within required accuracies. Fortunately, due to this study, the SIGMA1 method has been significantly improved and the new SIGMA1 is now capable of producing results to within required accuracies. Although this paper presents results based upon using only one code system, it is important to realize that the original SIGMA1 method is presently used in many cross-section processing code systems; the results of this paper indicate that unless these other code systems are updated to include the new SIGMA1 method, the results produced by these code systems could be very inaccurate. The objectives of the IAEA nuclear data processing code verification project are reviewed as well as the requirements for the accuracy of calculation of Doppler coefficients and the present status of these calculations. The initial results of Doppler broadening and self-shielding calculations are presented and the inconsistency of the results which led to the discovery of errors in the original SIGMA1 method of Doppler broadening are pointed out. Analysis of the errors found and improvements in the SIGMA1 method are presented. Improved results are presented in order to demonstrate that the new SIGMA1 method can produce results within required accuracies. Guidelines are presented to limit the uncertainty introduced due to cross-section processing in order to balance available computer resources to accuracy requirements. Finally cross-section processing code users are invited to participate in the IAEA processing code verification project in order to verify the accuracy of their calculated results. (author)

  18. Two high accuracy digital integrators for Rogowski current transducers

    Science.gov (United States)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is limited mainly by the analog integrators, which suffer from problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output because the DC component in the original signal accumulates, leading to output DC drift. Unknown initial conditions can also result in a DC offset in the integrator output. This paper proposes two improved digital integrators for use in Rogowski current transducers, instead of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response and obtaining an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
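
    The abstract names the ingredients (an Al-Alaoui integrator modified with an attenuation coefficient and PID feedback) but not the difference equations. The sketch below is a much simpler stand-in that captures only the drift-suppression idea: a trapezoidal digital integrator made slightly "leaky" by an attenuation coefficient close to one, with a slow feedback term bleeding off accumulated DC. All coefficient values are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def leaky_integrator(x, fs, alpha=0.999, k_dc=0.01):
        """Trapezoidal integration of x (sampled at fs) with attenuation alpha and
        a slow DC-removal feedback k_dc. A simplified stand-in for the improved
        digital integrators described in the abstract."""
        dt = 1.0 / fs
        y = np.zeros_like(x, dtype=float)
        dc = 0.0
        for n in range(1, len(x)):
            y[n] = alpha * y[n - 1] + 0.5 * dt * (x[n] + x[n - 1]) - k_dc * dc * dt
            dc += y[n] * dt                  # running estimate of the output DC level
        return y

    # Example: integrate di/dt from a Rogowski coil for a 50 Hz current with a small offset.
    fs, f = 10_000.0, 50.0
    t = np.arange(0, 0.2, 1 / fs)
    didt = 2 * np.pi * f * np.cos(2 * np.pi * f * t) + 0.05   # derivative signal + DC offset
    i_rec = leaky_integrator(didt, fs)
    print(f"peak of reconstructed current ≈ {np.abs(i_rec[len(i_rec)//2:]).max():.3f} (ideal 1.0)")
    ```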

  19. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  20. High accuracy FIONA-AFM hybrid imaging

    International Nuclear Information System (INIS)

    Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.

    2011-01-01

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≥8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.

  1. Accuracy of automated measurement and verification (M&V) techniques for energy savings in commercial buildings

    International Nuclear Information System (INIS)

    Granderson, Jessica; Touzani, Samir; Custodio, Claudine; Sohn, Michael D.; Jump, David; Fernandes, Samuel

    2016-01-01

    Highlights: • A testing procedure and metrics to assess the performance of whole-building M&V methods are presented. • The accuracy of ten baseline models is evaluated on measured data from 537 commercial buildings. • The impact of reducing the training period from 12 months to shorter time horizons is examined. - Abstract: Trustworthy savings calculations are critical to convincing investors in energy efficiency projects of the benefit and cost-effectiveness of such investments and of their ability to replace or defer supply-side capital investments. However, today’s methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of efficiency projects. They also require time-consuming manual data acquisition and often do not deliver results until years after the program period has ended. The rising availability of “smart” meters, combined with new analytical approaches to quantifying savings, has opened the door to conducting M&V more quickly and at lower cost, with comparable or improved accuracy. These meter- and software-based approaches, increasingly referred to as “M&V 2.0”, are the subject of surging industry interest, particularly in the context of utility energy efficiency programs. Program administrators, evaluators, and regulators are asking how M&V 2.0 compares with more traditional methods, how proprietary software can be transparently performance tested, and how these techniques can be integrated into the next generation of whole-building focused efficiency programs. This paper expands recent analyses of public-domain whole-building M&V methods, focusing on more novel M&V 2.0 modeling approaches that are used in commercial technologies, as well as approaches that are documented in the literature and/or developed by the academic building research community. We present a testing procedure and metrics to assess the performance of whole-building M&V methods. We then illustrate the test procedure
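
    The baseline models themselves are not reproduced in the abstract; the sketch below only illustrates the generic whole-building M&V workflow such models sit in: fit a baseline on pre-measure interval data, project it over the post-measure period, and report avoided energy use as savings. The plain outdoor-temperature regression and all numbers are assumptions for illustration.

    ```python
    import numpy as np

    def fit_baseline(temp, energy):
        """Fit a simple linear baseline: energy = a + b * temp (ordinary least squares)."""
        A = np.vstack([np.ones_like(temp), temp]).T
        coeff, *_ = np.linalg.lstsq(A, energy, rcond=None)
        return coeff

    def predict(coeff, temp):
        return coeff[0] + coeff[1] * temp

    # Hypothetical daily data: 12 months pre-retrofit, then a post-retrofit period.
    rng = np.random.default_rng(0)
    temp_pre = rng.uniform(0, 30, 365)
    energy_pre = 200 + 8 * temp_pre + rng.normal(0, 10, 365)     # kWh/day before the measure
    temp_post = rng.uniform(0, 30, 90)
    energy_post = 180 + 7 * temp_post + rng.normal(0, 10, 90)    # kWh/day after the measure

    coeff = fit_baseline(temp_pre, energy_pre)
    avoided = predict(coeff, temp_post) - energy_post            # daily avoided energy use
    print(f"estimated savings ≈ {avoided.sum():.0f} kWh over the post period")
    ```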

  2. High accuracy 3-D laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    We have developed a mono-static staring 3-D laser radar based on gated viewing, with range accuracy below 1 mm at 10 m and 1 cm at 100 m. We use a high sensitivity, fast, intensified CCD camera and a Nd:YAG passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera...

  3. Verification of examination procedures in clinical laboratory for imprecision, trueness and diagnostic accuracy according to ISO 15189:2012: a pragmatic approach.

    Science.gov (United States)

    Antonelli, Giorgia; Padoan, Andrea; Aita, Ada; Sciacovelli, Laura; Plebani, Mario

    2017-08-28

    Background: The International Standard ISO 15189 is recognized as a valuable guide in ensuring high quality clinical laboratory services and promoting the harmonization of accreditation programmes in laboratory medicine. Examination procedures must be verified in order to guarantee that their performance characteristics are congruent with the intended scope of the test. The aim of the present study was to propose a practice model for implementing the procedures employed for the verification of validated examination procedures already used for at least 2 years in our laboratory, in agreement with the ISO 15189 requirement in Section 5.5.1.2. Methods: In order to identify the operative procedure to be used, approved documents were identified, together with the definition of the performance characteristics to be evaluated for the different methods; the examination procedures used in the laboratory were analyzed and checked against the performance specifications reported by manufacturers. Operative flow charts were then identified to compare the laboratory performance characteristics with those declared by manufacturers. Results: The choice of performance characteristics for verification was based on the approved documents used as guidance and on the specific purpose of the tests undertaken, with consideration given to: imprecision and trueness for quantitative methods; diagnostic accuracy for qualitative methods; and imprecision together with diagnostic accuracy for semi-quantitative methods. Conclusions: The described approach, which balances technological possibilities, risks and costs while assuring the fundamental component of result accuracy, appears promising as an easily applicable and flexible procedure helping laboratories to comply with the ISO 15189 requirements.
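
    As a concrete illustration of the imprecision and trueness checks described for quantitative methods (not the laboratory's actual protocol), the sketch below computes repeatability (CV%) from replicate quality-control results and bias against an assigned value, then compares both with claimed specifications; every number and acceptance limit is a placeholder.

    ```python
    import statistics

    # Hypothetical replicate QC results (mmol/L) and the assigned target value.
    replicates = [5.02, 4.97, 5.10, 5.05, 4.99, 5.03, 5.08, 4.96]
    assigned_value = 5.00

    mean = statistics.mean(replicates)
    cv_percent = 100 * statistics.stdev(replicates) / mean          # imprecision
    bias_percent = 100 * (mean - assigned_value) / assigned_value   # trueness

    # Manufacturer / goal specifications (placeholders).
    claimed_cv, claimed_bias = 2.0, 1.5
    print(f"CV = {cv_percent:.2f}% (claim {claimed_cv}%), "
          f"bias = {bias_percent:+.2f}% (claim ±{claimed_bias}%)")
    print("verification passed" if cv_percent <= claimed_cv and abs(bias_percent) <= claimed_bias
          else "verification failed")
    ```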

  4. The use of measurement uncertainty in nuclear materials accountancy and verification

    International Nuclear Information System (INIS)

    Alique, O.; Vaccaro, S.; Svedkauskaite, J.

    2015-01-01

    EURATOM nuclear safeguards are based on the nuclear operators’ accounting for and declaring of the amounts of nuclear materials in their possession, as well as on the European Commission verifying the correctness and completeness of such declarations by means of conformity assessment practices. Both the accountancy and the verification processes comprise measurements of the amounts and characteristics of nuclear materials. The uncertainties associated with these measurements play an important role in the reliability of the results of nuclear material accountancy and verification. The document “JCGM 100:2008 Evaluation of measurement data – Guide to the expression of uncertainty in measurement”, issued jointly by the International Bureau of Weights and Measures (BIPM) and international organisations for metrology, standardisation and accreditation in chemistry, physics and electrotechnology, describes a universal, internally consistent, transparent and applicable method for the evaluation and expression of uncertainty in measurements. This paper discusses different processes of nuclear materials accountancy and verification where measurement uncertainty plays a significant role. It also suggests ways in which measurement uncertainty could be used to enhance the reliability of the results of the nuclear materials accountancy and verification processes.

  5. Verification of Data Accuracy in Japan Congenital Cardiovascular Surgery Database Including Its Postprocedural Complication Reports.

    Science.gov (United States)

    Takahashi, Arata; Kumamaru, Hiraku; Tomotaki, Ai; Matsumura, Goki; Fukuchi, Eriko; Hirata, Yasutaka; Murakami, Arata; Hashimoto, Hideki; Ono, Minoru; Miyata, Hiroaki

    2018-03-01

    The Japan Congenital Cardiovascular Surgical Database (JCCVSD) is a nationwide registry whose data are used for health quality assessment and clinical research in Japan. We evaluated the completeness of case registration and the accuracy of recorded data components, including postprocedural mortality and complications, in the database via on-site data adjudication. We validated the records from JCCVSD 2010 to 2012 containing congenital cardiovascular surgery data from 111 facilities throughout Japan. We randomly chose nine facilities for site visits by the auditor team and conducted on-site data adjudication. We assessed whether the records in JCCVSD matched the data in the source materials. We identified 1,928 cases of eligible surgeries performed at the facilities, of which 1,910 were registered (99.1% completeness), with 6 cases of duplication and 1 inappropriate case registration. Data components including gender, age, and surgery time (hours) were highly accurate with 98% to 100% concordance. Mortality at discharge and at 30 and 90 postoperative days was 100% accurate. Among the five complications studied, reoperation was the most frequently observed, with 16 and 21 cases recorded in the database and source materials, respectively, giving a sensitivity of 0.67 and a specificity of 0.99. Validation of the JCCVSD database showed high registration completeness and high accuracy, especially in the categorical data components. Adjudicated mortality was 100% accurate. While limited in numbers, the recorded cases of postoperative complications all had high specificities but lower sensitivities (0.67-1.00). Continued activities for data quality improvement and assessment are necessary for optimizing the utility of these registries.
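
    To put the reoperation figures in context: with 21 true cases in the source materials, a sensitivity of 0.67 implies roughly 14 of them were captured in the registry (14/21 ≈ 0.67), and with only a handful of false positives among the roughly 1,900 registered surgeries without reoperation, the specificity stays near 0.99. The definitions are the standard ones:

    ```latex
    \text{sensitivity} = \frac{TP}{TP + FN}, \qquad
    \text{specificity} = \frac{TN}{TN + FP}.
    ```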

  6. Personal Verification/Identification via Analysis of the Peripheral ECG Leads: Influence of the Personal Health Status on the Accuracy

    Directory of Open Access Journals (Sweden)

    Irena Jekova

    2015-01-01

    Traditional means of identity validation (PIN codes, passwords) and physiological and behavioral biometric characteristics (fingerprint, iris, and speech) are susceptible to hacker attacks and/or falsification. This paper presents a method for person verification/identification based on the correlation of present-to-previous limb ECG leads I (rI) and II (rII), of the first principal ECG component calculated from them (rPCA), and of linear and nonlinear combinations of rI, rII, and rPCA. For the verification task, a one-to-one scenario is applied and threshold values for rI, rII, and rPCA and their combinations are derived. The identification task supposes a one-to-many scenario, and the tested subject is identified according to the maximal correlation with a previously recorded ECG in a database. The population-based ECG-ILSA database of 540 patients (147 healthy subjects, 175 patients with cardiac diseases, and 218 with hypertension) has been considered. In addition, a common reference PTB dataset (14 healthy individuals with a short time interval between the two acquisitions) has been taken into account. The results on the ECG-ILSA database were satisfactory with healthy people, and there was not a significant decrease with non-healthy patients, demonstrating the robustness of the proposed method. With the PTB database, the method provides an identification accuracy of 92.9% and a verification sensitivity and specificity of 100% and 89.9%.
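
    The verification rule sketched in the abstract amounts to thresholding a present-to-previous correlation. The snippet below shows that core step for a single lead, using Pearson correlation and a placeholder threshold; the real method combines rI, rII and rPCA, and its thresholds are derived from the databases described, so nothing here should be read as the published algorithm.

    ```python
    import numpy as np

    def verify_identity(ecg_now, ecg_enrolled, threshold=0.90):
        """Accept the identity claim if the correlation between the present and the
        enrolled single-lead ECG (time-aligned, same length) exceeds the threshold.
        The threshold here is a placeholder, not one derived in the paper."""
        r = np.corrcoef(ecg_now, ecg_enrolled)[0, 1]
        return r, r >= threshold

    # Synthetic example: an enrolled beat and a noisy re-acquisition of the same beat.
    t = np.linspace(0, 1, 500)
    enrolled = np.exp(-((t - 0.5) ** 2) / 0.001)              # crude R-wave-like template
    present = enrolled + 0.05 * np.random.default_rng(0).normal(size=t.size)
    r, accepted = verify_identity(present, enrolled)
    print(f"r = {r:.3f}, accepted = {accepted}")
    ```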

  7. High accuracy satellite drag model (HASDM)

    Science.gov (United States)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  8. Fast and High Accuracy Wire Scanner

    CERN Document Server

    Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A

    2009-01-01

    Scanning of a high intensity particle beam imposes challenging requirements on a wire scanner system. It is expected to reach a scanning speed of 20 m·s⁻¹ with a position accuracy of the order of 1 μm. In addition, a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, and radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent magnet rotor installed inside the vacuum chamber and the stator installed outside. The accurate position sensor will be mounted on the rotary shaft inside the vacuum chamber and has to resist a bake-out temperature of 200°C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...

  9. Verification and validation guidelines for high integrity systems. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Hecht, H.; Hecht, M.; Dinsmore, G.; Hecht, S.; Tang, D. [SoHaR, Inc., Beverly Hills, CA (United States)

    1995-03-01

    High integrity systems include all protective (safety and mitigation) systems for nuclear power plants, and also systems for which comparable reliability requirements exist in other fields, such as in the process industries, in air traffic control, and in patient monitoring and other medical systems. Verification aims at determining that each stage in the software development completely and correctly implements requirements that were established in a preceding phase, while validation determines that the overall performance of a computer system completely and correctly meets system requirements. Volume I of the report reviews existing classifications for high integrity systems and for the types of errors that may be encountered, and makes recommendations for verification and validation procedures, based on assumptions about the environment in which these procedures will be conducted. The final chapter of Volume I deals with a framework for standards in this field. Volume II contains appendices dealing with specific methodologies for system classification, for dependability evaluation, and for two software tools that can automate otherwise very labor intensive verification and validation activities.

  10. Verification and validation guidelines for high integrity systems. Volume 1

    International Nuclear Information System (INIS)

    Hecht, H.; Hecht, M.; Dinsmore, G.; Hecht, S.; Tang, D.

    1995-03-01

    High integrity systems include all protective (safety and mitigation) systems for nuclear power plants, and also systems for which comparable reliability requirements exist in other fields, such as in the process industries, in air traffic control, and in patient monitoring and other medical systems. Verification aims at determining that each stage in the software development completely and correctly implements requirements that were established in a preceding phase, while validation determines that the overall performance of a computer system completely and correctly meets system requirements. Volume I of the report reviews existing classifications for high integrity systems and for the types of errors that may be encountered, and makes recommendations for verification and validation procedures, based on assumptions about the environment in which these procedures will be conducted. The final chapter of Volume I deals with a framework for standards in this field. Volume II contains appendices dealing with specific methodologies for system classification, for dependability evaluation, and for two software tools that can automate otherwise very labor intensive verification and validation activities

  11. SSME Alternate Turbopump Development Program: Design verification specification for high-pressure fuel turbopump

    Science.gov (United States)

    1989-01-01

    The design and verification requirements appropriate to hardware at the detail, subassembly, component, and engine levels are defined, and these requirements are correlated to the development demonstrations which provide verification that design objectives are achieved. The high pressure fuel turbopump requirements verification matrix provides correlation between design requirements and the tests required to verify that the requirements have been met.

  12. Electron ray tracing with high accuracy

    International Nuclear Information System (INIS)

    Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.

    1986-01-01

    An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens

  13. Elasto-plastic benchmark calculations. Step 1: verification of the numerical accuracy of the computer programs

    International Nuclear Information System (INIS)

    Corsi, F.

    1985-01-01

    In connection with the design of nuclear reactor components operating at elevated temperature, design criteria need a level of realism in the prediction of inelastic structural behaviour. This concept leads to the necessity of developing non-linear computer programmes and, as a consequence, to the problems of verification and qualification of these tools. Benchmark calculations allow these two actions to be carried out, providing at the same time an increased level of confidence in the analysis of complex phenomena and in inelastic design calculations. With the financial and programmatic support of the Commission of the European Communities (CEE), a programme of elasto-plastic benchmark calculations relevant to the design of structural components for the LMFBR has been undertaken by those Member States which are developing a fast reactor project. Four principal progressive aims were initially pointed out, which led to the decision to subdivide the benchmark effort into a series of four sequential calculation steps: Step 1 to Step 4. The present document summarizes Step 1 of the benchmark exercise, derives some conclusions on Step 1 by comparing the results obtained with the various codes, and offers some concluding comments on the first action. It should be pointed out that even though the work was designed to test the capabilities of the computer codes, another aim was to increase the skill of the users concerned.

  14. Accuracy verification of PET-CT image fusion and its utilization in target delineation of radiotherapy

    International Nuclear Information System (INIS)

    Wang Xuetao; Yu Jinming; Yang Guoren; Gong Heyi

    2005-01-01

    Objective: To evaluate the accuracy of online co-registration of PET and CT (PET-CT) images with a phantom, and to apply it to patients to provide clinical evidence for target delineation in radiotherapy. Methods: A phantom with markers and cylinders of different volumes was filled with various concentrations of 18F-FDG and scanned at 4 mm by PET and CT respectively. After being transferred to GE eNTEGRA and treatment planning system (TPS) workstations, the images were fused and reconstructed. The distances between the markers and the corresponding errors were measured on the PET and CT images respectively. The volume of each cylinder in the PET and CT images was measured and compared using a fixed pixel-value proportion deduction method. The same procedure was performed on the pulmonary tumor images of ten patients. Results: The eNTEGRA and TPS workstations had good length linearity, but the fusion error of the latter was markedly greater than that of the former. Tumors of different volumes filled with varying concentrations of 18F-FDG required different pixel deduction proportions. The cylinder volumes on the PET and CT images were almost the same, as were those of the pulmonary tumor images of the ten patients. Conclusions: The accuracy of online PET-CT image co-registration may fulfil clinical demands. The pixel-value proportion deduction method can be used for target delineation on PET images. (authors)

  15. Accuracy of self-reported drinking: observational verification of 'last occasion' drink estimates of young adults.

    Science.gov (United States)

    Northcote, Jeremy; Livingston, Michael

    2011-01-01

    As a formative step towards determining the accuracy of the self-reported drinking levels commonly used for estimating population alcohol use, the validity of a 'last occasion' self-reporting approach is tested against corresponding field observations of participants' drinking quantity. This study is the first known attempt to validate the accuracy of self-reported alcohol consumption using data from a natural setting. A total of 81 young adults (aged 18-25 years) were purposively selected in Perth, Western Australia. Participants were asked to report the number of alcoholic drinks consumed at nightlife venues 1-2 days after being observed by peer-based researchers on 239 occasions. Complete observation data and self-report estimates were available for 129 sessions, which were fitted with multi-level models assessing the relationship between observed and reported consumption. Participants accurately estimated their consumption when engaging in light to moderate drinking (eight or fewer drinks in a single session), with no significant difference between the mean reported consumption and the mean observed consumption. In contrast, participants underestimated their own consumption by increasing amounts when engaging in heavy drinking of more than eight drinks. It is suggested that recent-recall methods in self-report surveys are potentially reasonably accurate measures of actual drinking levels for light to moderate drinkers, but that underestimation of alcohol consumption increases with heavy consumption. Some of the possible reasons for underestimation of heavy drinking are discussed, with both cognitive and socio-cultural factors considered.

  16. Property-driven functional verification technique for high-speed vision system-on-chip processor

    Science.gov (United States)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. The complexity of vision chip verification is also related to the fact that in most vision chip design cycles, extensive effort is focused on how to optimize chip metrics such as performance, power, and area, while design functional verification is not explicitly considered at an earlier stage, at which the most sound decisions are made. In this paper, we propose a semi-automatic property-driven verification technique in which the implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing simulation and debugging overheads.

  17. High accuracy in silico sulfotransferase models.

    Science.gov (United States)

    Cook, Ian; Wang, Ting; Falany, Charles N; Leyh, Thomas S

    2013-11-29

    Predicting enzymatic behavior in silico is an integral part of our efforts to understand biology. Hundreds of millions of compounds lie in targeted in silico libraries waiting for their metabolic potential to be discovered. In silico "enzymes" capable of accurately determining whether compounds can inhibit or react is often the missing piece in this endeavor. This problem has now been solved for the cytosolic sulfotransferases (SULTs). SULTs regulate the bioactivities of thousands of compounds--endogenous metabolites, drugs and other xenobiotics--by transferring the sulfuryl moiety (SO3) from 3'-phosphoadenosine 5'-phosphosulfate to the hydroxyls and primary amines of these acceptors. SULT1A1 and 2A1 catalyze the majority of sulfation that occurs during human Phase II metabolism. Here, recent insights into the structure and dynamics of SULT binding and reactivity are incorporated into in silico models of 1A1 and 2A1 that are used to identify substrates and inhibitors in a structurally diverse set of 1,455 high value compounds: the FDA-approved small molecule drugs. The SULT1A1 models predict 76 substrates. Of these, 53 were known substrates. Of the remaining 23, 21 were tested, and all were sulfated. The SULT2A1 models predict 22 substrates, 14 of which are known substrates. Of the remaining 8, 4 were tested, and all are substrates. The models proved to be 100% accurate in identifying substrates and made no false predictions at Kd thresholds of 100 μM. In total, 23 "new" drug substrates were identified, and new linkages to drug inhibitors are predicted. It now appears to be possible to accurately predict Phase II sulfonation in silico.

  18. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I; TOPICAL

    International Nuclear Information System (INIS)

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.

  19. Fission product model for BWR analysis with improved accuracy in high burnup

    International Nuclear Information System (INIS)

    Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira

    1998-01-01

    A new fission product (FP) chain model has been studied for use in BWR lattice calculations. In establishing the model, two requirements, i.e. accuracy in predicting burnup reactivity and ease of practical application, were considered simultaneously. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo-nuclides having absorption cross sections independent of burnup history and fuel composition. For verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions have been carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model can offer a high degree of accuracy for FP representation in BWR lattice calculations. (author)

  20. High accuracy autonomous navigation using the global positioning system (GPS)

    Science.gov (United States)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.

  1. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy of a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and higher-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  2. SU-D-BRC-03: Development and Validation of an Online 2D Dose Verification System for Daily Patient Plan Delivery Accuracy Check

    International Nuclear Information System (INIS)

    Zhao, J; Hu, W; Xing, Y; Wu, X; Li, Y

    2016-01-01

    Purpose: All plan verification systems for particle therapy are designed to do plan verification before treatment. However, the actual dose distributions during patient treatment are not known. This study develops an online 2D dose verification tool to check the daily dose delivery accuracy. Methods: A Siemens particle treatment system with a modulated scanning spot beam is used in our center. In order to do online dose verification, we made a program to reconstruct the delivered 2D dose distributions based on the daily treatment log files and depth dose distributions. From the log files we can get the focus size, position and particle number for each spot. A gamma analysis is used to compare the reconstructed dose distributions with the dose distributions from the TPS to assess the daily dose delivery accuracy. To verify the dose reconstruction algorithm, we compared the reconstructed dose distributions to dose distributions measured using a PTW 729XDR ion chamber matrix for 13 real patient plans. Then we analyzed 100 treatment beams (58 carbon and 42 proton) for prostate, lung, ACC, NPC and chordoma patients. Results: For algorithm verification, the gamma passing rate was 97.95% for the 3%/3mm and 92.36% for the 2%/2mm criteria. For patient treatment analysis, the results were 97.7%±1.1% and 91.7%±2.5% for carbon and 89.9%±4.8% and 79.7%±7.7% for proton using the 3%/3mm and 2%/2mm criteria, respectively. The reason for the lower passing rate for the proton beam is that the focus size deviations were larger than for the carbon beam. The average focus size deviations were −14.27% and −6.73% for proton and −5.26% and −0.93% for carbon in the x and y directions, respectively. Conclusion: The verification software meets our requirements to check for daily dose delivery discrepancies. Such tools can enhance the current treatment plan and delivery verification processes and improve the safety of clinical treatments.
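
    The gamma comparison the record relies on combines a dose-difference criterion with a distance-to-agreement criterion. The following is a minimal, brute-force sketch of a global 2D gamma passing-rate calculation on matched grids; the grid spacing, low-dose threshold and random test data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dd=0.03, dta_mm=3.0, threshold=0.1):
    """Brute-force global 2D gamma analysis; returns the passing rate in percent.
    dose_eval/dose_ref: 2D arrays on the same grid; spacing_mm: pixel size."""
    ny, nx = dose_ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    norm = dose_ref.max()
    passed, evaluated = 0, 0
    for iy in range(ny):
        for ix in range(nx):
            if dose_ref[iy, ix] < threshold * norm:
                continue  # skip low-dose points, as is common practice
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            ddose2 = (dose_eval - dose_ref[iy, ix]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + ddose2 / (dd * norm) ** 2
            passed += gamma2.min() <= 1.0
            evaluated += 1
    return 100.0 * passed / max(evaluated, 1)

# Example: identical planes pass everywhere.
ref = np.random.default_rng(0).random((20, 20)) + 1.0
print(gamma_pass_rate(ref.copy(), ref, spacing_mm=2.0))  # -> 100.0
```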

  3. SU-D-BRC-03: Development and Validation of an Online 2D Dose Verification System for Daily Patient Plan Delivery Accuracy Check

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, J; Hu, W [Fudan University Shanghai Cancer Center, Shanghai (China)]; Xing, Y [Fudan University Shanghai Proton and Heavy Ion Center, Shanghai (China)]; Wu, X [Fudan University Shanghai Proton and Heavy Ion Center, Shanghai (China)]; Li, Y [Department of Medical Physics, Shanghai Proton and Heavy Ion Center, Shanghai (China)]

    2016-06-15

    Purpose: All plan verification systems for particle therapy are designed to do plan verification before treatment. However, the actual dose distributions during patient treatment are not known. This study develops an online 2D dose verification tool to check the daily dose delivery accuracy. Methods: A Siemens particle treatment system with a modulated scanning spot beam is used in our center. In order to do online dose verification, we made a program to reconstruct the delivered 2D dose distributions based on the daily treatment log files and depth dose distributions. From the log files we can get the focus size, position and particle number for each spot. A gamma analysis is used to compare the reconstructed dose distributions with the dose distributions from the TPS to assess the daily dose delivery accuracy. To verify the dose reconstruction algorithm, we compared the reconstructed dose distributions to dose distributions measured using a PTW 729XDR ion chamber matrix for 13 real patient plans. Then we analyzed 100 treatment beams (58 carbon and 42 proton) for prostate, lung, ACC, NPC and chordoma patients. Results: For algorithm verification, the gamma passing rate was 97.95% for the 3%/3mm and 92.36% for the 2%/2mm criteria. For patient treatment analysis, the results were 97.7%±1.1% and 91.7%±2.5% for carbon and 89.9%±4.8% and 79.7%±7.7% for proton using the 3%/3mm and 2%/2mm criteria, respectively. The reason for the lower passing rate for the proton beam is that the focus size deviations were larger than for the carbon beam. The average focus size deviations were −14.27% and −6.73% for proton and −5.26% and −0.93% for carbon in the x and y directions, respectively. Conclusion: The verification software meets our requirements to check for daily dose delivery discrepancies. Such tools can enhance the current treatment plan and delivery verification processes and improve the safety of clinical treatments.

  4. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    Science.gov (United States)

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using k-mer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
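
    As a rough illustration of the fast distance estimation step, the sketch below computes a simple k-mer based distance between two protein sequences (the fraction of k-mers not shared). This is a generic approximation of the idea, not MUSCLE's exact k-mer distance transform.

```python
from collections import Counter

def kmer_distance(seq_a, seq_b, k=3):
    """Fraction of k-mers not shared between two sequences (0 = identical k-mer multisets)."""
    kmers = lambda s: Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ca, cb = kmers(seq_a), kmers(seq_b)
    shared = sum((ca & cb).values())                      # multiset intersection
    total = min(sum(ca.values()), sum(cb.values()))
    return 1.0 - shared / total if total else 1.0

print(kmer_distance("MKTAYIAKQR", "MKTAYIQKQR"))          # -> 0.375 for these toy sequences
```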

  5. Verification & Validation of High-Order Short-Characteristics-Based Deterministic Transport Methodology on Unstructured Grids

    International Nuclear Information System (INIS)

    Azmy, Yousry; Wang, Yaqi

    2013-01-01

    The research team has developed a practical, high-order, discrete-ordinates, short characteristics neutron transport code for three-dimensional configurations represented on unstructured tetrahedral grids that can be used for realistic reactor physics applications at both the assembly and core levels. This project will perform a comprehensive verification and validation of this new computational tool against both a continuous-energy Monte Carlo simulation (e.g. MCNP) and experimentally measured data, an essential prerequisite for its deployment in reactor core modeling. Verification is divided into three phases. The team will first conduct spatial mesh and expansion order refinement studies to monitor convergence of the numerical solution to reference solutions. This is quantified by convergence rates that are based on integral error norms computed from the cell-by-cell difference between the code's numerical solution and its reference counterpart. The latter is either analytic or very fine-mesh numerical solutions from independent computational tools. For the second phase, the team will create a suite of code-independent benchmark configurations to enable testing the theoretical order of accuracy of any particular discretization of the discrete ordinates approximation of the transport equation. For each tested case (i.e. mesh and spatial approximation order), researchers will execute the code and compare the resulting numerical solution to the exact solution on a per-cell basis to determine the distribution of the numerical error. The final activity comprises a comparison to continuous-energy Monte Carlo solutions for zero-power critical configuration measurements at Idaho National Laboratory's Advanced Test Reactor (ATR). Results of this comparison will allow the investigators to distinguish between modeling errors and the above-listed discretization errors introduced by the deterministic method, and to separate the sources of uncertainty.
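
    The convergence-rate computation described above reduces to fitting e ≈ C·h^p from error norms on successively refined meshes. A minimal sketch with hypothetical error values:

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed convergence rate p from errors on two meshes, assuming e ~ C * h**p."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Hypothetical L2 error norms from a mesh-refinement study (halving h each time).
print(observed_order(4.0e-3, 1.0e-3))  # -> 2.0, i.e. second-order convergence
```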

  6. High accuracy wavelength calibration for a scanning visible spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Scotti, Filippo; Bell, Ronald E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor-controlled sine drive, an accuracy of ≈0.25 Å has been demonstrated. With the addition of a high-resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (≈0.005 Å) is possible, allowing absolute velocity measurements within ≈0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
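
    In a sine-drive monochromator the selected wavelength varies with the sine of the drive angle, so a small set of reference lines fixes the model parameters. The sketch below fits lambda = C·sin(theta + phi) by linear least squares; the angles and wavelengths are synthetic numbers chosen only to illustrate the fit, not measured calibration data, and the single-term model is an assumed simplification of the multi-parameter fit the record describes.

```python
import numpy as np

# Sine-drive model lambda = C*sin(theta + phi), expanded as a*sin(theta) + b*cos(theta),
# so the calibration reduces to linear least squares over known reference lines.
theta = np.array([0.30, 0.35, 0.40, 0.45])            # drive angles (rad), synthetic
lam = np.array([4045.7, 4694.3, 5331.1, 5954.7])      # matching wavelengths (Angstrom), synthetic
A = np.column_stack([np.sin(theta), np.cos(theta)])
(a, b), *_ = np.linalg.lstsq(A, lam, rcond=None)
C, phi = np.hypot(a, b), np.arctan2(b, a)
print(f"C = {C:.1f} Angstrom, phi = {phi:.5f} rad")
theta_new = 0.42
print(f"predicted wavelength at {theta_new} rad: {a*np.sin(theta_new) + b*np.cos(theta_new):.2f} Angstrom")
```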

  7. Verification and validation benchmarks.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  8. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  9. Verification and validation benchmarks

    International Nuclear Information System (INIS)

    Oberkampf, William L.; Trucano, Timothy G.

    2008-01-01

    Verification and validation (V and V) are the primary means to assess the accuracy and reliability of computational simulations. V and V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V and V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the

  10. Accuracy of cell calculation methods used for analysis of high conversion light water reactor lattice

    International Nuclear Information System (INIS)

    Jeong, Chang-Joon; Okumura, Keisuke; Ishiguro, Yukio; Tanaka, Ken-ichi

    1990-01-01

    Validation tests were performed to assess the accuracy of the cell calculation methods used in analyses of tight lattices of a mixed-oxide (MOX) fuel core in a high conversion light water reactor (HCLWR). A series of cell calculations was carried out for lattices taken from an international HCLWR benchmark comparison, with emphasis placed on the resonance calculation methods: the NR and IR approximations and the collision probability method with ultra-fine energy groups. Verification was also performed for the geometrical modelling (hexagonal versus cylindrical cell) and the boundary condition (mirror versus white reflection). In the calculations, important reactor physics parameters, such as the neutron multiplication factor, the conversion ratio and the void coefficient, were evaluated using the above methods for various HCLWR lattices with different moderator-to-fuel volume ratios, fuel materials and fissile plutonium enrichments. The calculated results were compared with each other, and the accuracy and applicability of each method were clarified by comparison with continuous-energy Monte Carlo calculations. It was verified that the accuracy of the IR approximation became worse as the neutron spectrum became harder. It was also concluded that the cylindrical cell model with the white boundary condition was not as suitable for MOX-fuelled lattices as it is for UO2-fuelled lattices. (author)

  11. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    Science.gov (United States)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy, whereas optical methods gather high-density data of the whole object in a short time but with accuracy at least one order of magnitude lower than contact measurements. Thus the drawback of contact methods is the low density of data, while for non-contact methods it is the low accuracy. In this paper a method is presented for fusing data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation mapping the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud, as sketched below. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
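
    A common way to realize the alignment step described above is to estimate a rigid-body transformation from the paired marker points by a least-squares (Kabsch/Procrustes) fit and then apply it to the whole optical cloud. The sketch below is a generic illustration of that idea with made-up coordinates; it is not the authors' implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical corresponding markers: optical (low accuracy) vs contact/CMM (reference).
optical = np.array([[0.0, 0.0, 0.0], [10.0, 0.1, 0.0], [0.1, 10.0, 0.0], [10.0, 10.0, 0.2]])
cmm = optical + np.array([0.05, -0.03, 0.10])            # pretend a pure shift for the demo
R, t = rigid_transform(optical, cmm)
corrected_cloud = (R @ optical.T).T + t                   # apply to the whole HDLA cloud
print(np.allclose(corrected_cloud, cmm, atol=1e-6))       # -> True
```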

  12. High-accuracy measurements of the normal specular reflectance

    International Nuclear Information System (INIS)

    Voarino, Philippe; Piombini, Herve; Sabary, Frederic; Marteau, Daniel; Dubard, Jimmy; Hameury, Jacques; Filtz, Jean Remy

    2008-01-01

    The French Laser Megajoule (LMJ) is designed and constructed by the French Commissariat à l'Énergie Atomique (CEA). Its amplifying section needs highly reflective multilayer mirrors for the flash lamps. To monitor and improve the coating process, the reflectors have to be characterized to high accuracy. The described spectrophotometer is designed to measure normal specular reflectance with high repeatability by using a small spot size of 100 μm. Results are compared with ellipsometric measurements. The instrument can also perform spatial characterization to detect coating nonuniformity.

  13. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1996-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  14. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, Eric M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  15. Why is a high accuracy needed in dosimetry

    International Nuclear Information System (INIS)

    Lanzl, L.H.

    1976-01-01

    Dose and exposure intercomparisons on a national or international basis have become an important component of quality assurance in the practice of good radiotherapy. A high degree of accuracy of γ and x radiation dosimetry is essential in our international society, where medical information is so readily exchanged and used. The value of accurate dosimetry lies mainly in the avoidance of complications in normal tissue and an optimal degree of tumor control

  16. Achieving High Accuracy in Calculations of NMR Parameters

    DEFF Research Database (Denmark)

    Faber, Rasmus

    quantum chemical methods have been developed, the calculation of NMR parameters with quantitative accuracy is far from trivial. In this thesis I address some of the issues that make accurate calculation of NMR parameters so challenging, with the main focus on SSCCs. High accuracy quantum chemical......, but no programs were available to perform such calculations. As part of this thesis the CFOUR program has therefore been extended to allow the calculation of SSCCs using the CC3 method. CC3 calculations of SSCCs have then been performed for several molecules, including some difficult cases. These results show...... vibrations must be included. The calculation of vibrational corrections to NMR parameters has been reviewed as part of this thesis. A study of the basis set convergence of vibrational corrections to nuclear shielding constants has also been performed. The basis set error in vibrational correction

  17. Optical Verification Laboratory Demonstration System for High Security Identification Cards

    Science.gov (United States)

    Javidi, Bahram

    1997-01-01

    Document fraud, including unauthorized duplication of identification cards and credit cards, is a serious problem facing the government, banks, businesses, and consumers. In addition, counterfeit products such as computer chips and compact discs are arriving on our shores in great numbers. With the rapid advances in computers, CCD technology, image processing hardware and software, printers, scanners, and copiers, it is becoming increasingly easy to reproduce pictures, logos, symbols, paper currency, or patterns. These problems have stimulated an interest in research, development and publications in security technology. Some ID cards, credit cards and passports currently use holograms as a security measure to thwart copying. The holograms are inspected by the human eye. In theory, the hologram cannot be reproduced by an unauthorized person using commercially available optical components; in practice, however, technology has advanced to the point where the holographic image can be acquired from a credit card (photographed or captured by a CCD camera) and a new hologram synthesized using commercially available optical components or hologram-producing equipment. Therefore, a pattern that can be read by a conventional light source and a CCD camera can be reproduced. An optical security and anti-copying device that provides significant security improvements over existing security technology was demonstrated. The system can be applied for security verification of credit cards, passports, and other IDs so that they cannot easily be reproduced. We have used a new scheme of complex phase/amplitude patterns that cannot be seen and cannot be copied by an intensity-sensitive detector such as a CCD camera. A random phase mask is bonded to a primary identification pattern which could also be phase encoded. The pattern could be a fingerprint, a picture of a face, or a signature. The proposed optical processing device is designed to identify both the random phase mask and the

  18. Accuracy of noninvasive quantification of brain NAA concentrations using PRESS sequence: verification in a swine model with external standard.

    Science.gov (United States)

    Wu, R H; Lin, R; Li, H; Xiao, Z W; Rao, H B; Luo, W H; Guo, G; Huang, K; Zhang, X G; Lang, Z J

    2005-01-01

    Metabolite ratios have long been employed in MR spectroscopy (MRS). Their main drawback is that ratio results are not comparable with absolute in vivo metabolite concentrations. The purpose of this study was to examine the accuracy of noninvasive quantification of brain N-acetylaspartate (NAA) concentrations using a previously reported MR external standard method. Eight swine were scanned on a GE 1.5 T scanner with a standard head coil. The external standard method was used with a sphere filled with NAA, GABA, glutamine, glutamate, creatine, choline chloride, and myo-inositol. The point-resolved spectroscopy (PRESS) sequence was used with TE = 135 ms, TR = 1500 ms, and 128 scan averages. The MRS analysis was done with the SAGE/IDL program. The in vivo NAA concentration was obtained using the equation S = N · exp(−TE/T2) · [1 − exp(−TR/T1)]. The in vitro NAA concentration was measured by high performance liquid chromatography (HPLC). In the MRS group, the mean concentration of NAA was 10.03 ± 0.74 mmol/kg. In the HPLC group, the mean concentration of NAA was 9.22 ± 0.55 mmol/kg. There was no significant difference between the two groups (p = 0.46). However, slightly higher values were observed in the MRS group (7 of 8 swine) compared with the HPLC group, with differences ranging between 0.02 and 2.05 mmol/kg. The MRS external reference method could be more accurate than the internal reference method. 1H MRS does not distinguish the N-acetyl resonance of NAA from those of other N-acetylated amino acids.
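
    The quoted relation links the measured signal S to the fully relaxed signal N through T1 and T2 attenuation, so quantification amounts to dividing out the attenuation factor. A minimal sketch, in which the NAA relaxation times and the measured peak area are assumed placeholder values rather than numbers from the study:

```python
import math

def relaxation_corrected(signal, te_ms, tr_ms, t1_ms, t2_ms):
    """Invert S = N * exp(-TE/T2) * (1 - exp(-TR/T1)) to recover the relaxation-free signal N."""
    attenuation = math.exp(-te_ms / t2_ms) * (1.0 - math.exp(-tr_ms / t1_ms))
    return signal / attenuation

# Hypothetical NAA relaxation times (ms) and a measured peak area (arbitrary units).
n_corrected = relaxation_corrected(signal=100.0, te_ms=135.0, tr_ms=1500.0, t1_ms=1400.0, t2_ms=300.0)
print(round(n_corrected, 1))
```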

  19. A high accuracy land use/cover retrieval system

    Directory of Open Access Journals (Sweden)

    Alaa Hefnawy

    2012-03-01

    The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established for improving the accuracy of land use/cover classification during the past 40 years, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution fast adaptive content-based retrieval system for satellite images. In the proposed system, we apply a super-resolution technique to the Landsat-TM images to obtain a high-resolution dataset. The human-computer interactive system is based on modified radial basis functions for retrieval of satellite database images. We apply a backpropagation supervised artificial neural network classifier to both the multi-resolution and single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.

  20. High Accuracy Piezoelectric Kinemometer; Cinemometro piezoelectrico de alta exactitud (VUAE)

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez Martinez, F. J.; Frutos, J. de; Pastor, C.; Vazquez Rodriguez, M.

    2012-07-01

    We have developed a portable, computerized, low-consumption measurement system called the High Accuracy Piezoelectric Kinemometer (VUAE). The high accuracy obtained by the VUAE makes it suitable for providing reference measurements for vehicle speed measurement systems; it could therefore be used as reference equipment to estimate the error of installed kinemometers. The VUAE was built with n (n=2) ultrasonic transmitter-receiver pairs (E-Rult). The transmitters of the n E-Rult pairs generate n ultrasonic barriers, and the receivers capture the echoes produced when the vehicle crosses the barriers. Digital processing of the echo signals yields usable signals, and cross-correlation techniques then give a highly exact estimate of the vehicle speed from the logged crossing times and the known distance between the n ultrasonic barriers. VUAE speed measurements were compared to a speed reference system based on piezoelectric cables. (Author) 11 refs.
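
    Speed follows from the time lag between the two barrier signals and the known barrier spacing, and the lag itself can be located at the peak of their cross-correlation. The sketch below demonstrates this on synthetic echo envelopes; the sampling rate, pulse shapes and 1 m barrier spacing are illustrative assumptions, not VUAE parameters.

```python
import numpy as np

def speed_from_barriers(sig1, sig2, barrier_spacing_m, fs_hz):
    """Estimate vehicle speed from the cross-correlation lag between two barrier signals."""
    corr = np.correlate(sig2 - sig2.mean(), sig1 - sig1.mean(), mode="full")
    lag_samples = corr.argmax() - (len(sig1) - 1)
    delay_s = lag_samples / fs_hz
    return barrier_spacing_m / delay_s

# Synthetic example: the same echo envelope arrives 25 ms later at the second barrier.
fs = 10_000.0
t = np.arange(0, 0.2, 1 / fs)
pulse = np.exp(-((t - 0.05) / 0.005) ** 2)
delayed = np.exp(-((t - 0.075) / 0.005) ** 2)
print(speed_from_barriers(pulse, delayed, barrier_spacing_m=1.0, fs_hz=fs))  # ~40 m/s
```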

  1. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed. copyright 1997 American Institute of Physics

  2. Motivation, Critical Thinking and Academic Verification of High School Students' Information-seeking Behavior

    Directory of Open Access Journals (Sweden)

    Z Hidayat

    2017-06-01

    High school students are known as Gen Y or Z, and their media use can be understood through their information-seeking behavior. The purposes of this research were: (1) to analyze the students' motivation; (2) to analyze their critical thinking and academic verification; and (3) to analyze their information-seeking behavior. This study used a quantitative approach through a survey of 1125 respondents in nine clusters, i.e. Central, East, North, West, and South Jakarta, Tangerang, Bekasi, Depok, and Bogor. Schools were sampled based on the government's "best schools" ranking, while respondents were selected by accidental sampling in each school. The questionnaire measured motivation, critical thinking and academic verification, and information-seeking behavior. The results showed that Internet use was motivated mainly by the habit of interacting and being entertained, while academic needs were still relatively small but increasing significantly. Students' self-efficacy, performance and achievement goals tended to be strong motives, whereas science learning value and learning-environment stimulation were weaker motives. High school students indicated that they think critically about the content they encounter, primarily in social media, but are less critical of academic information. Unfortunately, high school students did not conduct academic verification of data and information and tended toward plagiarism. Key words: Student motivation, critical thinking, academic verification, information-seeking behavior, digital generation.

  3. An angle encoder for super-high resolution and super-high accuracy using SelfA

    Science.gov (United States)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-06-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science & Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period after
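
    The quoted signal period for the 2^21 count-per-revolution encoder can be checked with a one-line calculation (one revolution is 360 × 3600 arc-seconds):

```python
# One revolution is 360 * 3600 = 1,296,000 arc-seconds; divide by the number of counts.
period_arcsec = 360 * 3600 / 2 ** 21
print(f"{period_arcsec:.4f} arc-seconds per count")  # ~0.6180", consistent with the quoted 0.62" period
```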

  4. An angle encoder for super-high resolution and super-high accuracy using SelfA

    International Nuclear Information System (INIS)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-01-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science and Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period

  5. High-accuracy mass spectrometry for fundamental studies.

    Science.gov (United States)

    Kluge, H-Jürgen

    2010-01-01

    Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview is given of the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation, the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino; (ii) the Experimental Cooler-Storage Ring at GSI, a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses; and (iii) the Penning trap facility SHIPTRAP at GSI, the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future addresses the HITRAP project at GSI for fundamental studies with highly charged ions.

  6. Read-only high accuracy volume holographic optical correlator

    Science.gov (United States)

    Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2011-10-01

    A read-only volume holographic correlator (VHC) is proposed. After all of the correlation database pages have been recorded by angular multiplexing, a stand-alone read-only high-accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since two different lasers are employed for recording and readout, the optical alignment tolerance of the laser illumination on the SLM is very tight. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. An experimental demonstration of the proposed read-only VHC is introduced and discussed.

  7. Partial verification bias and incorporation bias affected accuracy estimates of diagnostic studies for biomarkers that were part of an existing composite gold standard.

    Science.gov (United States)

    Karch, Annika; Koch, Armin; Zapf, Antonia; Zerr, Inga; Karch, André

    2016-10-01

    To investigate how the choice of gold standard biases estimates of sensitivity and specificity in studies reassessing the diagnostic accuracy of biomarkers that are already part of a lifetime composite gold standard (CGS). We performed a simulation study based on the real-life example of the biomarker "protein 14-3-3" used for diagnosing Creutzfeldt-Jakob disease. Three different types of gold standard were compared: the perfect gold standard "autopsy" (available in a small fraction only; prone to partial verification bias), the lifetime CGS (including the biomarker under investigation; prone to incorporation bias), and the "best available" gold standard (autopsy if available, otherwise CGS). Sensitivity was unbiased when comparing 14-3-3 with autopsy but overestimated when using CGS or the "best available" gold standard. Specificity of 14-3-3 was underestimated in scenarios comparing 14-3-3 with autopsy (by up to 24%). In contrast, overestimation (up to 20%) was observed for specificity compared with CGS; this could be reduced to 0-10% when using the "best available" gold standard. The choice of gold standard considerably affects estimates of diagnostic accuracy. Using the "best available" gold standard (autopsy where available, otherwise CGS) leads to valid estimates of specificity, whereas sensitivity is estimated best when tested against autopsy alone. Copyright © 2016 Elsevier Inc. All rights reserved.
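
    The incorporation bias the study describes can be reproduced with a toy simulation: when the biomarker itself enters an OR-type composite gold standard, every composite-negative subject is automatically biomarker-negative, so the apparent specificity is inflated, and the apparent sensitivity shifts as well. The prevalence, accuracy values and OR-rule below are illustrative assumptions, not the parameters of the cited study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
disease = rng.random(n) < 0.3                          # assumed true prevalence
biomarker = np.where(disease, rng.random(n) < 0.85,    # assumed true sensitivity 0.85
                     rng.random(n) < 0.10)             # 1 - assumed true specificity 0.90
clinical = np.where(disease, rng.random(n) < 0.70, rng.random(n) < 0.01)

# Composite gold standard that incorporates the biomarker (positive if either test is positive).
composite = biomarker | clinical

print("sensitivity vs truth:      %.3f" % biomarker[disease].mean())
print("sensitivity vs composite:  %.3f" % biomarker[composite].mean())
print("specificity vs truth:      %.3f" % (~biomarker[~disease]).mean())
print("specificity vs composite:  %.3f" % (~biomarker[~composite]).mean())  # exactly 1.0 under this OR-rule
```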

  8. Synchrotron accelerator technology for proton beam therapy with high accuracy

    International Nuclear Information System (INIS)

    Hiramoto, Kazuo

    2009-01-01

    Proton beam therapy was initially applied to head and neck cancers, but it has now been extended to prostate, lung and liver cancers, so the need for a pencil beam scanning method is increasing. With this method, the dose-concentration property of the proton beam is further intensified. The Hitachi group supplied its first pencil beam scanning therapy system to the M. D. Anderson Hospital in the United States, and it has been operational since May 2008. The Hitachi group has been developing its proton therapy system toward high-accuracy treatment that concentrates the dose in diseased regions located at various depths and often having complicated shapes. The author describes here the synchrotron accelerator technology that is an important element of the proton therapy system. (K.Y.)

  9. High-Resolution Fast-Neutron Spectrometry for Arms Control and Treaty Verification

    Energy Technology Data Exchange (ETDEWEB)

    David L. Chichester; James T. Johnson; Edward H. Seabury

    2012-07-01

    Many nondestructive nuclear analysis techniques have been developed to support the measurement needs of arms control and treaty verification, including gross photon and neutron counting, low- and high-resolution gamma spectrometry, time-correlated neutron measurements, and photon and neutron imaging. One notable measurement technique that has not been extensively studied to date for these applications is high-resolution fast-neutron spectrometry (HRFNS). Applied for arms control and treaty verification, HRFNS has the potential to serve as a complementary measurement approach to these other techniques by providing a means to either qualitatively or quantitatively determine the composition and thickness of non-nuclear materials surrounding neutron-emitting materials. The technique uses the normally-occurring neutrons present in arms control and treaty verification objects of interest as an internal source of neutrons for performing active-interrogation transmission measurements. Most low-Z nuclei of interest for arms control and treaty verification, including 9Be, 12C, 14N, and 16O, possess fast-neutron resonance features in their absorption cross sections in the 0.5- to 5-MeV energy range. Measuring the selective removal of source neutrons over this energy range, assuming for example a fission-spectrum starting distribution, may be used to estimate the stoichiometric composition of intervening materials between the neutron source and detector. At a simpler level, determination of the emitted fast-neutron spectrum may be used for fingerprinting 'known' assemblies for later use in template-matching tests. As with photon spectrometry, automated analysis of fast-neutron spectra may be performed to support decision making and reporting systems protected behind information barriers. This paper will report recent work at Idaho National Laboratory to explore the feasibility of using HRFNS for arms control and treaty verification applications, including simulations

  10. SU-F-J-129: Verification of Geometric and Dosimetric Accuracy of Respiratory Management Systems Using Homemade Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Goksel, E; Kucucuk, H; Senkesen, O [Acibadem Kozyatgi Hospital, Istanbul (Turkey); Tezcanli, E [Acibadem University, Istanbul (Turkey)

    2016-06-15

    Purpose: Different placements of infrared cameras (IRC) in CT and treatment rooms can cause gating window level (GWL) variations, leading to differences between the GWL used for planning and for treatment. Although the Varian Clinac DHX-OBI system and the CT are equipped with the same kind of IRC, the Truebeam STx (TB) has a different type of IRC known as the banana type. In this study, the geometric and dosimetric accuracy of the respiratory management system (RPM) for different machines was investigated with a special homemade phantom. Methods: The special phantom was placed on the respiratory simulator machine and a CT data set was obtained at the end-of-expiration phase (EOE). Conformal and IMRT plans were generated on the EOE CT image series for both the DHX-OBI and TB LINACs, while a VMAT plan was generated only for the TB. The respiratory graphs acquired in the CT were sent directly to the DHX-OBI system and were converted with software before being sent to the TB. EBT3 films were placed inside the phantom and were irradiated using the RPM system on both machines for the different plans. Planar dose distributions were compared with the gamma analysis (GA) method (3 mm, 3%) to evaluate planned-measured dose differences. In addition, a radio-opaque marker was placed in the center of the phantom to evaluate the geometric accuracy of the treatment field with gated fluoroscopy (GF). Results: No shifts were detected between the planning and treatment GWL for either the DHX-OBI or the TB. The difference on the GF image between the digital graticule and the radio-opaque marker was <1 mm for the TB and 1 mm for the DHX-OBI. GA agreement was 97% for the conformal and IMRT techniques and 96% for the VMAT technique in the TB, while in the DHX-OBI it was 98% for the conformal technique and 95% for IMRT. Conclusion: This study showed that the RPM can be used accurately in spite of different IRC placements or different types of IRC.

  11. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  12. Mass Spectrometry-based Assay for High Throughput and High Sensitivity Biomarker Verification

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Xuejiang; Tang, Keqi

    2017-06-14

    Searching for disease-specific biomarkers has become a major undertaking in the biomedical research field, as the effective diagnosis, prognosis and treatment of many complex human diseases are largely determined by the availability and the quality of the biomarkers. A successful biomarker, as an indicator of a specific biological or pathological process, is usually selected from a large group of candidates by a strict verification and validation process. To be clinically useful, the validated biomarkers must be detectable and quantifiable by the selected testing techniques in their related tissues or body fluids. Due to its easy accessibility, protein biomarkers would ideally be identified in blood plasma or serum. However, most disease-related protein biomarkers in blood exist at very low concentrations (<1 ng/mL) and are "masked" by many non-significant species at concentrations orders of magnitude higher. The extreme requirements on measurement sensitivity, dynamic range and specificity make method development extremely challenging. Current clinical protein biomarker measurement relies primarily on antibody-based immunoassays, such as ELISA. Although the technique is sensitive and highly specific, the development of high-quality protein antibodies is both expensive and time consuming. The limited capability of assay multiplexing also makes the measurement an extremely low-throughput one, rendering it impractical when hundreds to thousands of potential biomarkers need to be quantitatively measured across multiple samples. Mass spectrometry (MS)-based assays have recently been shown to be a viable alternative for high-throughput and quantitative candidate protein biomarker verification. Among them, the triple quadrupole MS based assay is the most promising one. When it is coupled with liquid chromatography (LC) separation and an electrospray ionization (ESI) source, a triple quadrupole mass spectrometer operating in a special selected reaction monitoring (SRM) mode

  13. Verification of Accuracy of CyberKnife Tumor-tracking Radiation Therapy Using Patient-specific Lung Phantoms

    International Nuclear Information System (INIS)

    Jung, Jinhong; Song, Si Yeol; Yoon, Sang Min; Kwak, Jungwon; Yoon, KyoungJun; Choi, Wonsik; Jeong, Seong-Yun; Choi, Eun Kyung; Cho, Byungchul

    2015-01-01

    Purpose: To investigate the accuracy of the CyberKnife Xsight Lung Tracking System (XLTS) compared with that of a fiducial-based target tracking system (FTTS) using patient-specific lung phantoms. Methods and Materials: Three-dimensional printing technology was used to make individualized lung phantoms that closely mimicked the lung anatomy of actual patients. Based on planning computed tomographic data from 6 lung cancer patients who underwent stereotactic ablative radiation therapy using the CyberKnife, the volume above a certain Hounsfield unit (HU) was assigned as the structure to be filled uniformly with polylactic acid material by a 3-dimensional printer (3D Edison, Lokit, Korea). We evaluated the discrepancies between the measured and modeled target positions, representing the total tracking error, using 3 log files that were generated during each treatment for both the FTTS and the XLTS. We also analyzed the γ index between the film dose measured under the FTTS and XLTS. Results: The overall mean values and standard deviations of total tracking errors for the FTTS were 0.36 ± 0.39 mm, 0.15 ± 0.64 mm, and 0.15 ± 0.62 mm for the craniocaudal (CC), left–right (LR), and anteroposterior (AP) components, respectively. Those for the XLTS were 0.38 ± 0.54 mm, 0.13 ± 0.18 mm, and 0.14 ± 0.37 mm for the CC, LR, and AP components, respectively. The average of γ passing rates was 100% for the criteria of 3%, 3 mm; 99.6% for the criteria of 2%, 2 mm; and 86.8% for the criteria of 1%, 1 mm. Conclusions: The XLTS has segmentation accuracy comparable with that of the FTTS and small total tracking errors

  14. Verification of Accuracy of CyberKnife Tumor-tracking Radiation Therapy Using Patient-specific Lung Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jinhong [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Department of Radiation Oncology, Kyung Hee University Medical Center, Kyung Hee University School of Medicine, Seoul (Korea, Republic of); Song, Si Yeol, E-mail: coocoori@gmail.com [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Yoon, Sang Min; Kwak, Jungwon; Yoon, KyoungJun [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Wonsik [Department of Radiation Oncology, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung (Korea, Republic of); Jeong, Seong-Yun [Asan Institute for Life Science, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Eun Kyung; Cho, Byungchul [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of)

    2015-07-15

    Purpose: To investigate the accuracy of the CyberKnife Xsight Lung Tracking System (XLTS) compared with that of a fiducial-based target tracking system (FTTS) using patient-specific lung phantoms. Methods and Materials: Three-dimensional printing technology was used to make individualized lung phantoms that closely mimicked the lung anatomy of actual patients. Based on planning computed tomographic data from 6 lung cancer patients who underwent stereotactic ablative radiation therapy using the CyberKnife, the volume above a certain Hounsfield unit (HU) was assigned as the structure to be filled uniformly with polylactic acid material by a 3-dimensional printer (3D Edison, Lokit, Korea). We evaluated the discrepancies between the measured and modeled target positions, representing the total tracking error, using 3 log files that were generated during each treatment for both the FTTS and the XLTS. We also analyzed the γ index between the film dose measured under the FTTS and XLTS. Results: The overall mean values and standard deviations of total tracking errors for the FTTS were 0.36 ± 0.39 mm, 0.15 ± 0.64 mm, and 0.15 ± 0.62 mm for the craniocaudal (CC), left–right (LR), and anteroposterior (AP) components, respectively. Those for the XLTS were 0.38 ± 0.54 mm, 0.13 ± 0.18 mm, and 0.14 ± 0.37 mm for the CC, LR, and AP components, respectively. The average of γ passing rates was 100% for the criteria of 3%, 3 mm; 99.6% for the criteria of 2%, 2 mm; and 86.8% for the criteria of 1%, 1 mm. Conclusions: The XLTS has segmentation accuracy comparable with that of the FTTS and small total tracking errors.

  15. High accuracy magnetic field mapping of the LEP spectrometer magnet

    CERN Document Server

    Roncarolo, F

    2000-01-01

    The Large Electron Positron accelerator (LEP) is a storage ring which has been operated since 1989 at the European Laboratory for Particle Physics (CERN), located in the Geneva area. It is intended to experimentally verify the Standard Model theory and in particular to measure with high accuracy the masses of the electroweak force bosons. Electrons and positrons are accelerated inside the LEP ring in opposite directions and forced to collide at four locations once they reach an energy high enough for the experimental purposes. During head-to-head collisions the leptons lose all their energy and a huge amount of energy is concentrated in a small region. In this condition the energy is quickly converted into other particles, which tend to move away from the interaction point. The higher the energy of the leptons before the collisions, the higher the mass of the particles that can escape. At LEP four large experimental detectors are accommodated. All detectors are multi-purpose detectors covering a solid angle of alm...

  16. Motivation, Critical Thinking and Academic Verification of High School Students' Information-seeking Behavior

    Directory of Open Access Journals (Sweden)

    Z Hidayat

    2018-01-01

    High school students are known as Gen Y or Z, and their media use can be understood through their information-seeking behavior. The purposes of this research were: (1) to analyze the students' motivation; (2) to analyze their critical thinking and academic verification; and (3) to analyze their information-seeking behavior. This study used a quantitative approach through a survey of 1125 respondents in nine clusters, i.e. Central, East, North, West, and South Jakarta, Tangerang, Bekasi, Depok, and Bogor. Schools were sampled based on the government's "best schools" ranking, while respondents were selected by accidental sampling in each school. The questionnaire measured motivation, critical thinking and academic verification, and information-seeking behavior. The results showed that Internet use was motivated mainly by the habit of interacting and being entertained, while academic needs were still relatively small but increasing significantly. Students' self-efficacy, performance and achievement goals tended to be strong motives, whereas science learning value and learning-environment stimulation were weaker motives. High school students indicated that they think critically about the content they encounter, primarily in social media, but are less critical of academic information. Unfortunately, high school students did not conduct academic verification of data and information and tended toward plagiarism.

  17. Accuracy assessment of high-rate GPS measurements for seismology

    Science.gov (United States)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  18. Accuracy assessment of cadastral maps using high resolution aerial photos

    Directory of Open Access Journals (Sweden)

    Alwan Imzahim

    2018-01-01

    Full Text Available A cadastral map is a map that shows the boundaries and ownership of land parcels. Some cadastral maps show additional details, such as survey district names, unique identifying numbers for parcels, certificate of title numbers, positions of existing structures, section or lot numbers and their respective areas, adjoining and adjacent street names, selected boundary dimensions and references to prior maps. In Iraq / Baghdad Governorate, the main problem is that the cadastral maps are georeferenced to a local geodetic datum known as Clark 1880, while the widely used reference system for navigation purposes (GPS and GNSS) uses the World Geodetic System 1984 (WGS84) as its base reference datum. The objective of this paper is to produce a cadastral map at a scale of 1:500 (metric scale) by using aerial photographs from 2009 with a high ground spatial resolution of 10 cm, referenced to the WGS84 system. The accuracy of the cadastral map updating approach for urban large-scale cadastral maps (1:500-1:1000) was ± 0.115 meters, which complies with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards.

  19. Determination of UAV position using high accuracy navigation platform

    Directory of Open Access Journals (Sweden)

    Ireneusz Kubicki

    2016-07-01

    Full Text Available The choice of navigation system for a mini UAV is very important because of its application and exploitation, particularly when a synthetic aperture radar installed on it requires highly precise information about the object's position. The presented exemplary solution of such a system draws attention to the possible problems associated with the use of appropriate technology, sensors, and devices, or with a complete navigation system. Position and spatial orientation errors of the measurement platform affect the obtained SAR imaging. Both turbulence and maneuvers performed during flight cause changes in the position of the airborne object, resulting in deterioration or loss of the SAR images. Consequently, it is necessary to reduce or eliminate the impact of sensor errors on the UAV position accuracy, seeking compromise solutions between newer, better technologies and improvements in software. Keywords: navigation systems, unmanned aerial vehicles, sensors integration

  20. Modified sine bar device measures small angles with high accuracy

    Science.gov (United States)

    Thekaekara, M.

    1968-01-01

    Modified sine bar device measures small angles with enough accuracy to calibrate precision optical autocollimators. The sine bar is a massive bar of steel supported by two cylindrical rods at one end and one at the other.
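
    The sine-bar relation behind such a device is simple to state; the sketch below (with assumed, illustrative dimensions) shows how a gauge-block height difference maps to a small tilt angle:

```python
# Sine-bar relation: with roller spacing L and a gauge-block height difference
# h under one end, the bar tilts by theta = arcsin(h / L); for the small
# angles used to calibrate autocollimators, theta is approximately h / L.
# The dimensions below are illustrative assumptions.
import math

L_mm = 250.0          # assumed distance between the support rollers
h_mm = 0.050          # assumed gauge-block height difference
theta_rad = math.asin(h_mm / L_mm)
theta_arcsec = math.degrees(theta_rad) * 3600.0
approx_arcsec = math.degrees(h_mm / L_mm) * 3600.0
print(f"tilt = {theta_arcsec:.2f} arcsec (small-angle approx: {approx_arcsec:.2f} arcsec)")
```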

  1. Towards real-time VMAT verification using a prototype, high-speed CMOS active pixel sensor.

    Science.gov (United States)

    Zin, Hafiz M; Harris, Emma J; Osmond, John P F; Allinson, Nigel M; Evans, Philip M

    2013-05-21

    This work investigates the feasibility of using a prototype complementary metal oxide semiconductor active pixel sensor (CMOS APS) for real-time verification of volumetric modulated arc therapy (VMAT) treatment. The prototype CMOS APS used region-of-interest readout on the chip to allow fast imaging of up to 403.6 frames per second (f/s). The sensor was made larger (5.4 cm × 5.4 cm) using recent advances in photolithographic techniques but retains its fast imaging speed through the regional readout. There is a paradigm shift in radiotherapy treatment verification with the advent of advanced treatment techniques such as VMAT. This work has demonstrated that the APS can track multileaf collimator (MLC) leaves moving at 18 mm/s with an automatic edge-tracking algorithm at an accuracy better than 1.0 mm, even at the fastest imaging speed. The measured fluence distribution for an example VMAT delivery sampled at 50.4 f/s was shown to agree well with the planned fluence distribution, with an average gamma pass rate of 96% at 3%/3 mm. The MLC leaf motion and linac pulse rate variation delivered throughout the VMAT treatment can also be measured. The results demonstrate the potential of CMOS APS technology as a real-time radiotherapy dosimeter for the delivery of complex treatments such as VMAT.
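
    One plausible way to realise such automatic edge tracking (a sketch under an assumed sensor geometry, not the authors' algorithm) is to locate the 50%-intensity crossing of an imager row and interpolate to sub-pixel precision:

```python
# Hedged sketch of a leaf-edge tracking step: find where a single row of the
# imager crosses 50% of its intensity range and interpolate linearly between
# the two neighbouring pixels.  Pixel pitch and the toy profile are assumed.
import numpy as np

def leaf_edge_position(profile, pixel_pitch_mm=0.25):
    """Return the position (mm) where the row profile crosses 50% of its range."""
    half = 0.5 * (profile.max() + profile.min())
    above = profile >= half
    i = np.argmax(above[:-1] != above[1:])          # first crossing index
    # linear interpolation between pixel i and i+1
    frac = (half - profile[i]) / (profile[i + 1] - profile[i])
    return (i + frac) * pixel_pitch_mm

row = np.clip(np.linspace(-5.0, 5.0, 200), 0.0, 1.0)   # toy penumbra-like profile
print(f"edge at {leaf_edge_position(row):.2f} mm")
```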

  2. Online pretreatment verification of high-dose rate brachytherapy using an imaging panel

    Science.gov (United States)

    Fonseca, Gabriel P.; Podesta, Mark; Bellezzo, Murillo; Van den Bosch, Michiel R.; Lutgens, Ludy; Vanneste, Ben G. L.; Voncken, Robert; Van Limbergen, Evert J.; Reniers, Brigitte; Verhaegen, Frank

    2017-07-01

    Brachytherapy is employed to treat a wide variety of cancers. However, an accurate treatment verification method is currently not available. This study describes a pre-treatment verification system that uses an imaging panel (IP) to verify important aspects of the treatment plan. Detailed modelling of the IP was only possible with an extensive calibration performed using a robotic arm. Irradiations were performed with a high dose rate (HDR) 192Ir source within a water phantom. An empirical fit was applied to measure the distance between the source and the detector, so that 3D Cartesian coordinates of the dwell positions can be obtained using a single panel. The IP acquires images at 7.14 frames per second to verify the dwell times, dwell positions and air kerma strength (Sk). A gynecological applicator was used to create a treatment plan that was registered with a CT image of the water phantom used during the experiments for verification purposes. Errors (shifts, exchanged connections and wrong dwell times) were simulated to verify the proposed verification system. Cartesian source positions (panel measurement plane) have a standard deviation of about 0.02 cm. The measured distance between the source and the panel (z-coordinate) has a standard deviation of up to 0.16 cm and a maximum absolute error of ≈0.6 cm if the signal is close to the sensitivity limit of the panel. The average response of the panel is very linear with Sk; therefore, Sk measurements can be performed with relatively small errors. The measured dwell times show a maximum error of 0.2 s, which is consistent with the acquisition rate of the panel. All simulated errors were clearly identified by the proposed system. The use of IPs is not common in brachytherapy; however, it provides considerable advantages. It was demonstrated that the IP can accurately measure Sk, dwell times and dwell positions.

  3. Measurement system with high accuracy for laser beam quality.

    Science.gov (United States)

    Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming

    2015-05-20

    At present, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of a mirror to change the laser beam propagation direction, so that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measurement deviation of the M2 factor is less than 0.6%.

  4. Verification of the plan dosimetry for high dose rate brachytherapy using metal-oxide-semiconductor field effect transistor detectors

    International Nuclear Information System (INIS)

    Qi Zhenyu; Deng Xiaowu; Huang Shaomin; Lu Jie; Lerch, Michael; Cutajar, Dean; Rosenfeld, Anatoly

    2007-01-01

    The feasibility of a recently designed metal-oxide-semiconductor field effect transistor (MOSFET) dosimetry system for dose verification of high dose rate (HDR) brachytherapy treatment planning was investigated. MOSFET detectors were calibrated with a 0.6 cm³ NE-2571 Farmer-type ionization chamber in water. Key characteristics of the MOSFET detectors that affect phantom measurements with HDR 192Ir sources, such as the energy dependence, were measured. The MOSFET detector was then applied to verify the dosimetric accuracy of HDR brachytherapy treatments in a custom-made water phantom. Three MOSFET detectors were calibrated independently, with calibration factors ranging from 0.187 to 0.215 cGy/mV. A distance-dependent energy response was observed, significant within 2 cm from the source. The new MOSFET detector has good reproducibility. It was observed that the MOSFET detectors had a linear response to dose until the threshold voltage reached approximately 24 V for 192Ir source measurements. Further comparison of phantom measurements using MOSFET detectors with dose calculations by a commercial treatment planning system for computed tomography-based brachytherapy treatment plans showed that the mean relative deviation was 2.2±0.2% for dose points 1 cm away from the source and 2.0±0.1% for dose points located 2 cm away. The percentage deviations between the measured doses and the planned doses were below 5% for all the measurements. The MOSFET detector, with its advantages of small physical size and ease of use, is a reliable tool for quality assurance of HDR brachytherapy. The phantom verification method described here is universal and can be applied to other HDR brachytherapy treatments
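
    The basic conversion from a MOSFET reading to dose, and the comparison against the planning system, can be sketched as follows (the calibration factor is taken within the reported range; the voltage shift and planned dose are hypothetical):

```python
# Illustrative conversion from a MOSFET threshold-voltage shift to dose using
# a calibration factor within the reported 0.187-0.215 cGy/mV range, followed
# by the relative deviation from the planned (TPS) dose.  Numbers are made up.
cal_factor_cGy_per_mV = 0.200      # assumed calibration factor
delta_V_mV = 1250.0                # assumed measured threshold-voltage shift
planned_dose_cGy = 245.0           # assumed TPS dose at the same point

measured_dose_cGy = cal_factor_cGy_per_mV * delta_V_mV
deviation_pct = 100.0 * (measured_dose_cGy - planned_dose_cGy) / planned_dose_cGy
print(f"measured {measured_dose_cGy:.1f} cGy, deviation {deviation_pct:+.1f}%")
```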

  5. Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients

    International Nuclear Information System (INIS)

    Iyengar, S.S.; Morgan-Hughes, G.; Ukoumunne, O.; Clayton, B.; Davies, E.J.; Nikolaou, V.; Hyde, C.J.; Shore, A.C.; Roobottom, C.A.

    2016-01-01

    Aim: To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Materials and methods: Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥50%) stenosis and secondarily as the presence of severe (≥70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. Results: No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3%, for severe stenosis, on a per-patient basis. Conclusion: The combination of HD-CTCA and experienced readers applied to a high-risk population, results in high diagnostic accuracy comparable to ICA. Modern generation CT systems in experienced hands might be considered for an expanded role. - Highlights: • Diagnostic accuracy of High-Definition CT Angiography (HD-CTCA) has been assessed. • Invasive Coronary angiography (ICA) is the reference standard. • Diagnostic accuracy of HD-CTCA is comparable to ICA. • Diagnostic accuracy is not affected by coronary calcium or stents. • HD-CTCA provides a non-invasive alternative in high-risk patients.
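
    For readers unfamiliar with how such per-patient figures are derived, the sketch below computes sensitivity, specificity, PPV and NPV from a 2x2 table; the counts are hypothetical, since the record reports only the summary rates:

```python
# Per-patient diagnostic-accuracy metrics derived from a 2x2 table of
# HD-CTCA results against the ICA reference standard.  Counts are invented
# for illustration only.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

counts = dict(tp=200, fp=2, fn=6, tn=92)   # hypothetical 300-patient split
for name, value in diagnostic_metrics(**counts).items():
    print(f"{name}: {100 * value:.1f}%")
```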

  6. High-accuracy user identification using EEG biometrics.

    Science.gov (United States)

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
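
    A generic version of the "dimensionality reduction followed by classification" evaluation can be sketched with scikit-learn on synthetic ERP epochs; the feature extraction, reduction technique and classifiers used in the paper may differ:

```python
# Sketch of a dimensionality-reduction + classifier pipeline evaluated by
# cross-validation on synthetic "ERP epochs".  Data, PCA dimension and the
# LDA classifier are assumptions for illustration, not the paper's setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_subjects, epochs_per_subject, n_features = 25, 40, 14 * 100  # 14 ch x 100 samples
X = rng.normal(size=(n_subjects * epochs_per_subject, n_features))
y = np.repeat(np.arange(n_subjects), epochs_per_subject)
X += 0.5 * y[:, None] * rng.normal(size=(1, n_features))  # subject-specific signature

pipe = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
scores = cross_val_score(pipe, X, y, cv=5)
print(f"identification accuracy: {100 * scores.mean():.1f}%")
```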

  7. High Accuracy Nonlinear Control and Estimation for Machine Tool Systems

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios

    Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more...... sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear....... The purpose is to ensure that full accuracy is maintained between service intervals and to advice when overhaul is needed. The thesis argues that quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low level control architectures...

  8. Methodology for GPS Synchronization Evaluation with High Accuracy

    OpenAIRE

    Li Zan; Braun Torsten; Dimitrova Desislava

    2015-01-01

    Clock synchronization in the order of nanoseconds is one of the critical factors for time based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper we are particularly interested in GPS based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Ou...

  9. Methodology for GPS Synchronization Evaluation with High Accuracy

    OpenAIRE

    Li, Zan; Braun, Torsten; Dimitrova, Desislava Cvetanova

    2015-01-01

    Clock synchronization in the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. O...

  10. Development and verification of a high performance multi-group SP3 transport capability in the ARTEMIS core simulator

    International Nuclear Information System (INIS)

    Van Geemert, Rene

    2008-01-01

    For satisfaction of future global customer needs, dedicated efforts are being coordinated internationally and pursued continuously at AREVA NP. The currently ongoing CONVERGENCE project is committed to the development of the ARCADIA® next generation core simulation software package. ARCADIA® will be put to global use by all AREVA NP business regions, for the entire spectrum of core design processes, licensing computations and safety studies. As part of the currently ongoing trend towards more sophisticated neutronics methodologies, an SP3 nodal transport concept has been developed for ARTEMIS, which is the steady-state and transient core simulation part of ARCADIA®. For enabling a high computational performance, the SPN calculations are accelerated by applying multi-level coarse mesh re-balancing. In the current implementation, SP3 is about 1.4 times as expensive computationally as SP1 (diffusion). The developed SP3 solution concept is foreseen as the future computational workhorse for many-group 3D pin-by-pin full core computations by ARCADIA®. With the entire numerical workload being highly parallelizable through domain decomposition techniques, associated CPU-time requirements that adhere to the efficiency needs in the nuclear industry can be expected to become feasible in the near future. The accuracy enhancement obtainable by using SP3 instead of SP1 has been verified by a detailed comparison of ARTEMIS 16-group pin-by-pin SPN results with KAERI's DeCart reference results for the 2D pin-by-pin Purdue UO2/MOX benchmark. This article presents the accuracy enhancement verification and quantifies the achieved ARTEMIS-SP3 computational performance for a number of 2D and 3D multi-group and multi-box (up to pin-by-pin) core computations. (authors)

  11. A New Approach for High Pressure Pixel Polar Distribution on Off-line Signature Verification

    Directory of Open Access Journals (Sweden)

    Jesús F. Vargas

    2010-06-01

    Full Text Available Features representing information about High Pressure Points from a static image of a handwritten signature are analyzed for an offline verification system. From grayscale images, a new approach for High Pressure threshold estimation is proposed. Two images, one containing the extracted High Pressure Points and the other a binary version of the original signature, are transformed to polar coordinates, where a pixel density ratio between them is calculated. The polar space is divided into angular and radial segments, which permit a local analysis of the high pressure distribution. Finally, two vectors containing the density distribution ratio are calculated for the points nearest to and farthest from the geometric center of the original signature image. Experiments were carried out using a database containing signatures from 160 individuals. The robustness of the analyzed system against simple forgeries is tested with Support Vector Machine models. For the sake of completeness, a comparison of the results obtained by the proposed approach with similar published works is presented.
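
    The pixel-density-ratio feature described above can be sketched as follows; the segment counts and toy images are assumptions for illustration only:

```python
# Sketch of the polar pixel-density ratio: both binary images are mapped to
# polar coordinates about the geometric centre of the signature and, per
# angular/radial segment, the ratio of high-pressure pixels to signature
# pixels is taken.  Segment counts and the toy images are assumed.
import numpy as np

def polar_density_ratio(high_pressure, signature, n_ang=12, n_rad=4):
    ys, xs = np.nonzero(signature)
    cy, cx = ys.mean(), xs.mean()                      # geometric centre
    r = np.hypot(ys - cy, xs - cx)
    a = np.arctan2(ys - cy, xs - cx)
    r_edges = np.linspace(0.0, r.max() + 1e-9, n_rad + 1)
    a_edges = np.linspace(-np.pi, np.pi + 1e-9, n_ang + 1)
    hp = high_pressure[ys, xs].astype(bool)
    ratio = np.zeros((n_ang, n_rad))
    for i in range(n_ang):
        for j in range(n_rad):
            in_seg = ((a >= a_edges[i]) & (a < a_edges[i + 1]) &
                      (r >= r_edges[j]) & (r < r_edges[j + 1]))
            if in_seg.any():
                ratio[i, j] = hp[in_seg].mean()        # HP pixels / signature pixels
    return ratio.ravel()                               # feature vector

sig = np.zeros((64, 64), dtype=bool); sig[20:40, 10:55] = True   # toy signature mask
hp = np.zeros_like(sig); hp[28:33, 30:45] = True                 # toy high-pressure mask
print(polar_density_ratio(hp, sig).shape)
```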

  12. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    Science.gov (United States)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy testing evaluates the accuracy of the images based mostly on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and therefore difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in the horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing images and for the expansion of their services. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. The method uses testing points with different accuracy and reliability, derived from high-accuracy reference data and from field measurements. The new method solves the horizontal accuracy assessment of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
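
    One plausible way (an assumption, not necessarily the authors' exact formulation) to combine testing points of different reliability is to weight each point's squared horizontal error by the inverse variance of its reference accuracy:

```python
# Inverse-variance weighted horizontal RMSE over check points whose reference
# coordinates have different accuracies.  Offsets and accuracies are made up.
import numpy as np

def weighted_horizontal_rmse(dx, dy, sigma_ref):
    """dx, dy: image-minus-reference offsets; sigma_ref: per-point reference accuracy."""
    w = 1.0 / np.asarray(sigma_ref) ** 2
    e2 = np.asarray(dx) ** 2 + np.asarray(dy) ** 2
    return np.sqrt(np.sum(w * e2) / np.sum(w))

dx = [0.12, -0.08, 0.20, -0.15]        # metres, hypothetical check-point offsets
dy = [0.05, 0.10, -0.18, 0.07]
sigma = [0.05, 0.05, 0.20, 0.20]       # field-measured vs. reference-data points
print(f"weighted horizontal RMSE: {weighted_horizontal_rmse(dx, dy, sigma):.3f} m")
```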

  13. Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's future science and exploratory missions will require much lighter, smaller, and longer life rate sensors that can provide high accuracy navigational...

  14. High Accuracy Positioning using Jet Thrusters for Quadcopter

    Directory of Open Access Journals (Sweden)

    Pi ChenHuan

    2018-01-01

    Full Text Available A quadcopter is equipped with four additional jet thrusters on its horizontal plane, perpendicular to each other, in order to improve the maneuverability and positioning accuracy of the quadcopter. A dynamic model of the quadcopter with jet thrusters is derived, and two controllers are implemented in simulation: a dual-loop state feedback controller for pose control and an auxiliary jet thruster controller for accurate positioning. Step response simulations showed that the jet thrusters can control the quadcopter with less overshoot than the conventional configuration. In a 10 s loiter simulation with disturbance, the quadcopter with jet thrusters reduced the RMS horizontal position error by 85% compared to a conventional quadcopter with only the dual-loop state feedback controller. The jet thruster controller thus shows the potential for more accurate quadcopter positioning.

  15. High-accuracy contouring using projection moiré

    Science.gov (United States)

    Sciammarella, Cesar A.; Lamberti, Luciano; Sciammarella, Federico M.

    2005-09-01

    Shadow and projection moiré are the oldest forms of moiré to be used in actual technical applications. In spite of this fact and the extensive number of papers that have been published on this topic, the use of shadow moiré as an accurate tool that can compete with alternative devices poses very many problems that go to the very essence of the mathematical models used to obtain contour information from fringe pattern data. In this paper some recent developments on the projection moiré method are presented. Comparisons between the results obtained with the projection method and the results obtained by mechanical devices that operate with contact probes are presented. These results show that the use of projection moiré makes it possible to achieve the same accuracy that current mechanical touch probe devices can provide.

  16. Verification of nuclear material balances: General theory and application to a highly enriched uranium fabrication plant

    International Nuclear Information System (INIS)

    Avenhaus, R.; Beedgen, R.; Neu, H.

    1980-08-01

    In the theoretical part it is shown that, under the assumption that in case of diversion the operator falsifies all data by a class-specific amount, it is optimal in the sense of the probability of detection to use the difference MUF-D as the test statistic. However, as there are arguments for keeping the two tests separate, and furthermore, as it is not clear that the combined test statistic is optimal for every diversion strategy, the overall guaranteed probability of detection for the bivariate test is determined. A numerical example is given applying the theoretical part. Using the material balance data of a Highly Enriched Uranium fabrication plant, the variances of MUF, D (no diversion) and MUF-D are calculated with the help of the standard deviations of operator and inspector measurements. The two inventories of the material balance are stratified. The sample sizes of the strata and the total inspection effort for data verification are determined by game-theoretical methods (attribute sampling). On the basis of these results, the overall detection probability of the combined system (data verification and material accountancy) is determined both for the MUF-D test and for the bivariate (D,MUF) test as a function of the goal quantity. The results of both tests are evaluated for different diversion strategies. (orig./HP) [de
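
    The single-test relation underlying such detection-probability curves can be sketched as follows; the standard deviation and goal quantities below are illustrative assumptions, not values from the report:

```python
# Standard single-test relation for material accountancy: with test-statistic
# standard deviation sigma, false-alarm probability alpha and a diverted
# amount equal to the goal quantity M, the detection probability is
# 1 - Phi(z_(1-alpha) - M / sigma).  All numbers below are assumed.
import math

def phi(x):                     # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def detection_probability(goal_quantity, sigma, z_alpha=1.645):  # alpha = 0.05
    return 1.0 - phi(z_alpha - goal_quantity / sigma)

sigma_mufd = 2.1                # assumed std. dev. of the MUF-D statistic (kg)
for M in (2.0, 5.0, 8.0):       # assumed goal quantities (kg)
    print(f"M = {M:.0f} kg: P(detect) = {detection_probability(M, sigma_mufd):.2f}")
```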

  17. Advanced Collimators for Verification of the Pu Isotopic Composition in Fresh Fuel by High Resolution Gamma Spectrometry

    International Nuclear Information System (INIS)

    Lebrun, Alain; Berlizov, Andriy

    2013-06-01

    IAEA verification of the nuclear material contained in fresh nuclear fuel assemblies is usually based on neutron coincidence counting (NCC). In the case of uranium fuel, active NCC provides the total content of uranium-235 per unit of length which, combined with active length verification, fully supports the verification. In the case of plutonium fuel, passive NCC provides the plutonium-240 equivalent content, which needs to be associated with a measurement of the isotopic composition and an active length measurement to complete the verification. Plutonium isotopic composition is verified by high resolution gamma spectrometry (HRGS) applied to fresh fuel assemblies, assuming all fuel rods are fabricated from the same plutonium batch. For particular verifications when such an assumption cannot be reasonably made, there is a need to optimize the HRGS measurement so that contributions of internal rods to the recorded spectrum are maximized, thus providing equally strong verification of the internal fuel rods. This paper reports on simulation work carried out to design special collimators aimed at reducing the relative contribution of external fuel rods while enhancing the signal recorded from internal rods. Both cases of square lattices (e.g. 17x17 pressurized water reactor (PWR) fuel) and hexagonal compact lattices (e.g. BN800 fast neutron reactor (FNR) fuel) have been addressed. In the case of PWR lattices, the relatively large optical path to internal pins compensates for low plutonium concentrations and the large size of the fuel assemblies. A special collimator based on multiple, asymmetrical, vertical slots allows recording a spectrum from internal rods only when needed. In the FNR case, the triangular lattice is much more compact and the optical path to internal rods is very narrow. However, higher plutonium concentration and use of high energy ranges allow the verification of internal rods to be significantly strengthened. Encouraging results from the simulation

  18. Compact, High Accuracy CO2 Monitor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This Small Business Innovative Research Phase I proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...

  19. Compact, High Accuracy CO2 Monitor, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This Small Business Innovative Research Phase II proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...

  20. High-accuracy Subdaily ERPs from the IGS

    Science.gov (United States)

    Ray, J. R.; Griffiths, J.

    2012-04-01

    Since November 2000 the International GNSS Service (IGS) has published Ultra-rapid (IGU) products for near real-time (RT) and true real-time applications. They include satellite orbits and clocks, as well as Earth rotation parameters (ERPs) for a sliding 48-hr period. The first day of each update is based on the most recent GPS and GLONASS observational data from the IGS hourly tracking network. At the time of release, these observed products have an initial latency of 3 hr. The second day of each update consists of predictions. So the predictions between about 3 and 9 hr into the second half are relevant for true RT uses. Originally updated twice daily, the IGU products since April 2004 have been issued every 6 hr, at 3, 9, 15, and 21 UTC. Up to seven Analysis Centers (ACs) contribute to the IGU combinations. Two sets of ERPs are published with each IGU update, observed values at the middle epoch of the first half and predicted values at the middle epoch of the second half. The latency of the near RT ERPs is 15 hr while the predicted ERPs, based on projections of each AC's most recent determinations, are issued 9 hr ahead of their reference epoch. While IGU ERPs are issued every 6 hr, each set represents an integrated estimate over the surrounding 24 hr. So successive values are temporally correlated with about 75% of the data being common; this fact should be taken into account in user assimilations. To evaluate the accuracy of these near RT and predicted ERPs, they have been compared to the IGS Final ERPs, available about 11 to 17 d after data collection. The IGU products improved dramatically in the earlier years but since about 2008.0 the performance has been stable and excellent. During the last three years, RMS differences for the observed IGU ERPs have been about 0.036 mas and 0.0101 ms for each polar motion component and LOD respectively. (The internal precision of the reference IGS ERPs over the same period is about 0.016 mas for polar motion and 0

  1. Accuracy of Handheld Blood Glucose Meters at High Altitude

    NARCIS (Netherlands)

    de Mol, Pieter; Krabbe, Hans G.; de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.

    2010-01-01

    Background: Due to increasing numbers of people with diabetes taking part in extreme sports (e. g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior

  2. Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This project aims to develop a compact, highly innovative Inertial Reference/Measurement Unit (IRU/IMU) that pushes the state-of-the-art in high accuracy performance...

  3. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering a high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator can attain a simulation speed within a factor of 35 of hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications showed that the difference between the simulated and measured timing performance is within 10%, which in the past could only be attained by cycle-accurate models.

  4. Impact of a highly detailed emission inventory on modeling accuracy

    Science.gov (United States)

    Taghavi, M.; Cautenet, S.; Arteta, J.

    2005-03-01

    During the Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions (ESCOMPTE) campaign (June 10 to July 14, 2001), two pollution events observed during an intensive measurement period (IOP2a and IOP2b) have been simulated. The comprehensive Regional Atmospheric Modeling System (RAMS), version 4.3, coupled online with a chemical module including 29 species, is used to follow the chemistry of a polluted zone over Southern France. This online method takes advantage of a parallel code and of the powerful SGI 3800 computer. Runs are performed with two emission inventories: the Emission Pre Inventory (EPI) and the Main Emission Inventory (MEI). The latter is more recent and has a higher resolution. The redistribution of simulated chemical species (ozone and nitrogen oxides) is compared with aircraft and surface station measurements for both runs at the regional scale. We show that the MEI inventory is more efficient than the EPI in retrieving the redistribution of chemical species in space (three dimensions) and time. At surface stations, MEI is superior especially for primary species such as nitrogen oxides. The ozone pollution peaks obtained from an inventory such as EPI have a large uncertainty. To understand the realistic geographical distribution of pollutants and to obtain a good order of magnitude of the ozone concentration (in space and time), a high-resolution inventory like MEI is necessary. Coupling RAMS-Chemistry with MEI provides a very efficient tool able to simulate pollution plumes even in a region with complex circulations, such as the ESCOMPTE zone.

  5. Switched-capacitor techniques for high-accuracy filter and ADC design

    NARCIS (Netherlands)

    Quinn, P.J.; Roermund, van A.H.M.

    2007-01-01

    Switched capacitor (SC) techniques are well proven to be excellent candidates for implementing critical analogue functions with high accuracy, surpassing other analogue techniques when embedded in mixed-signal CMOS VLSI. Conventional SC circuits are primarily limited in accuracy by a) capacitor

  6. High accuracy laboratory spectroscopy to support active greenhouse gas sensing

    Science.gov (United States)

    Long, D. A.; Bielska, K.; Cygan, A.; Havey, D. K.; Okumura, M.; Miller, C. E.; Lisak, D.; Hodges, J. T.

    2011-12-01

    Recent carbon dioxide (CO2) remote sensing missions have set precision targets as demanding as 0.25% (1 ppm) in order to elucidate carbon sources and sinks [1]. These ambitious measurement targets will require the most precise body of spectroscopic reference data ever assembled. Active sensing missions will be especially susceptible to subtle line shape effects as the narrow bandwidth of these measurements will greatly limit the number of spectral transitions which are employed in retrievals. In order to assist these remote sensing missions we have employed frequency-stabilized cavity ring-down spectroscopy (FS-CRDS) [2], a high-resolution, ultrasensitive laboratory technique, to measure precise line shape parameters for transitions of O2, CO2, and other atmospherically-relevant species within the near-infrared. These measurements have led to new HITRAN-style line lists for both 16O2 [3] and rare isotopologue [4] transitions in the A-band. In addition, we have performed detailed line shape studies of CO2 transitions near 1.6 μm under a variety of broadening conditions [5]. We will address recent measurements in these bands as well as highlight recent instrumental improvements to the FS-CRDS spectrometer. These improvements include the use of the Pound-Drever-Hall locking scheme, a high bandwidth servo which enables measurements to be made at rates greater than 10 kHz [6]. In addition, an optical frequency comb will be utilized as a frequency reference, which should allow for transition frequencies to be measured with uncertainties below 10 kHz (3×10-7 cm-1). [1] C. E. Miller, D. Crisp, P. L. DeCola, S. C. Olsen, et al., J. Geophys. Res.-Atmos. 112, D10314 (2007). [2] J. T. Hodges, H. P. Layer, W. W. Miller, G. E. Scace, Rev. Sci. Instrum. 75, 849-863 (2004). [3] D. A. Long, D. K. Havey, M. Okumura, C. E. Miller, et al., J. Quant. Spectrosc. Radiat. Transfer 111, 2021-2036 (2010). [4] D. A. Long, D. K. Havey, S. S. Yu, M. Okumura, et al., J. Quant. Spectrosc

  7. The accuracy of QCD perturbation theory at high energies

    CERN Document Server

    Dalla Brida, Mattia; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2016-01-01

    We discuss the determination of the strong coupling $\\alpha_\\mathrm{\\overline{MS}}^{}(m_\\mathrm{Z})$ or equivalently the QCD $\\Lambda$-parameter. Its determination requires the use of perturbation theory in $\\alpha_s(\\mu)$ in some scheme, $s$, and at some energy scale $\\mu$. The higher the scale $\\mu$ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the $\\Lambda$-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to $\\alpha_s = 0.1$ and below. We find that perturbation theory is very accurate there, yielding a three percent error in the $\\Lambda$-parameter, while data around $\\alpha_s \\approx 0.2$ is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.

  8. A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers

    Science.gov (United States)

    Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang

    1990-02-01

    In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed, which includes a new type of three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.

  9. High-Precision Attitude Post-Processing and Initial Verification for the ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    Xinming Tang

    2014-12-01

    Full Text Available Attitude data, which are strongly correlated with the geometric accuracy of optical remote sensing satellite images, are generally obtained using a real-time Extended Kalman Filter (EKF) with star-tracker and gyro data for current high-resolution satellites, such as Orb-view, IKONOS, Quickbird, Pleiades, and ZY-3. We propose a forward-backward Unscented Kalman Filter (UKF) for post-processing; the proposed method employs the UKF to suppress noise by using an unscented transformation (UT) rather than an EKF in a nonlinear attitude system. Moreover, this method makes full use of the data collected in the fixed interval and of the computational resources available on the ground, and it determines optimal attitude results by forward-backward filtering and weighted smoothing with the raw star-tracker and gyro data collected over a fixed period. In this study, the principle and implementation of the proposed method are described. The post-processed attitude was compared with the on-board attitude, and the absolute accuracy was evaluated by two methods. One method compares the positioning accuracy of the object space coordinates with the post-processed and on-board attitude data without using ground control points (GCPs). The other method compares the tie-point residuals of the image coordinates after a free-net adjustment. In addition, the internal and external parameters of the camera were accurately calibrated before use, for an objective evaluation of the attitude accuracy. The experimental results reveal that the accuracy of the post-processed attitude is superior to that of the on-board processed attitude. This method has been applied to the ZiYuan-3 satellite system for daily processing of the raw star-tracker and gyro data.
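
    The weighted smoothing step can be illustrated in its simplest, scalar form: a forward-filtered and a backward-filtered estimate of the same state are fused with inverse-covariance weights. The values below are assumed; the actual method operates on full attitude states with UKF covariances:

```python
# Minimal scalar sketch of fusing forward- and backward-filtered estimates of
# the same quantity by inverse-variance weighting.  All numbers are assumed.
import numpy as np

def fuse(x_fwd, p_fwd, x_bwd, p_bwd):
    """Inverse-variance weighted combination of two independent estimates."""
    p_s = 1.0 / (1.0 / p_fwd + 1.0 / p_bwd)
    x_s = p_s * (x_fwd / p_fwd + x_bwd / p_bwd)
    return x_s, p_s

x_f, p_f = 0.00123, 4e-8     # forward estimate of one attitude angle (rad) and variance
x_b, p_b = 0.00119, 9e-8     # backward estimate at the same epoch
x_s, p_s = fuse(x_f, p_f, x_b, p_b)
print(f"smoothed angle = {x_s:.6f} rad, std = {np.sqrt(p_s):.2e} rad")
```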

  10. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available A high-speed CMOS camera is a new kind of transducer for videogrammetric measurement, used to monitor the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the accuracy of the three-dimensional coordinates of the shaking table structure acquired with the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. Through the accuracy analysis, submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of the shaking table structure.

  11. High-accuracy dosimetry study for intensity-modulated radiation therapy(IMRT) commissioning

    International Nuclear Information System (INIS)

    Jeong, Hae Sun

    2010-02-01

    Intensity-modulated radiation therapy (IMRT), an advanced modality of high-precision radiotherapy, allows for an increase in dose to the tumor volume without increasing the dose to nearby critical organs. In order to successfully achieve the treatment, intensive dosimetry with accurate dose verification is necessary. Dosimetry for IMRT, however, is a challenging task due to dosimetrically unfavorable phenomena such as dramatic changes of the dose at the field boundaries, dis-equilibrium of the electrons, non-uniformity between the detector and the phantom materials, and distortion of scanner-read doses. In the present study, therefore, a LEGO-type multi-purpose dosimetry phantom was developed and used for studies on dose measurements and correction. Phantom materials for muscle, fat, bone, and lung tissue were selected after considering mass density, atomic composition, effective atomic number, and photon interaction coefficients. The phantom also includes dosimeter holders for several different types of detectors including films, which accommodates the construction of different phantom designs as necessary. In order to evaluate its performance, the developed phantom was tested by measuring the point dose and the percent depth dose (PDD) for small fields under several heterogeneous conditions. However, the measurements with the two types of dosimeter did not agree well for field sizes less than 1 x 1 cm² in muscle and bone, and less than 3 x 3 cm² in air cavity. Thus, it was recognized that several studies on small-field dosimetry and correction methods for the calculation with a PMCEPT code are needed. The under-estimated values from the ion chamber were corrected with a convolution method employed to eliminate the volume effect of the chamber. As a result, the discrepancies between the EBT film and the ion chamber measurements were significantly decreased, from 14% to 1% (1 x 1 cm²), 10% to 1% (0.7 x 0.7 cm²), and 42% to 7% (0.5 x 0

  12. High-accuracy dosimetry study for intensity-modulated radiation therapy(IMRT) commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hae Sun

    2010-02-15

    Intensity-modulated radiation therapy (IMRT), an advanced modality of high-precision radiotherapy, allows for an increase in dose to the tumor volume without increasing the dose to nearby critical organs. In order to successfully achieve the treatment, intensive dosimetry with accurate dose verification is necessary. Dosimetry for IMRT, however, is a challenging task due to dosimetrically unfavorable phenomena such as dramatic changes of the dose at the field boundaries, dis-equilibrium of the electrons, non-uniformity between the detector and the phantom materials, and distortion of scanner-read doses. In the present study, therefore, a LEGO-type multi-purpose dosimetry phantom was developed and used for studies on dose measurements and correction. Phantom materials for muscle, fat, bone, and lung tissue were selected after considering mass density, atomic composition, effective atomic number, and photon interaction coefficients. The phantom also includes dosimeter holders for several different types of detectors including films, which accommodates the construction of different phantom designs as necessary. In order to evaluate its performance, the developed phantom was tested by measuring the point dose and the percent depth dose (PDD) for small fields under several heterogeneous conditions. However, the measurements with the two types of dosimeter did not agree well for field sizes less than 1 x 1 cm² in muscle and bone, and less than 3 x 3 cm² in air cavity. Thus, it was recognized that several studies on small-field dosimetry and correction methods for the calculation with a PMCEPT code are needed. The under-estimated values from the ion chamber were corrected with a convolution method employed to eliminate the volume effect of the chamber. As a result, the discrepancies between the EBT film and the ion chamber measurements were significantly decreased, from 14% to 1% (1 x 1 cm²), 10% to 1% (0.7 x 0.7 cm²), and 42

  13. Development and Verification of Tritium Analyses Code for a Very High Temperature Reactor

    International Nuclear Information System (INIS)

    Oh, Chang H.; Kim, Eung S.

    2009-01-01

    A tritium permeation analyses code (TPAC) has been developed by Idaho National Laboratory for the purpose of analyzing tritium distributions in VHTR systems, including integrated hydrogen production systems. The MATLAB SIMULINK software package was used for development of the code. The TPAC is based on the mass balance equations of tritium-containing species and various forms of hydrogen (i.e., HT, H2, HTO, HTSO4, and TI) coupled with a variety of tritium source, sink, and permeation models. In the TPAC, ternary fission and neutron reactions with 6Li, 7Li, 10B, and 3He were taken into consideration as tritium sources. Purification and leakage models were implemented as the main tritium sinks. Permeation of HT and H2 through pipes, vessels, and heat exchangers was considered as the main tritium transport path. In addition, electrolyzer and isotope exchange models were developed for analyzing hydrogen production systems, including both high-temperature electrolysis and the sulfur-iodine process. The TPAC offers great flexibility in system configuration and provides easy drag-and-drop model building through a graphical user interface. Verification of the code has been performed by comparisons with analytical solutions and with experimental data based on the Peach Bottom reactor design. The preliminary results calculated with a former tritium analyses code, THYTAN, which was developed in Japan and adopted by the Japan Atomic Energy Agency, were also compared with the TPAC solutions. This report contains descriptions of the basic tritium pathways, theory, a simple user guide, verifications, sensitivity studies, sample cases, and code tutorials. Tritium behavior in a very high temperature reactor/high temperature steam electrolysis system has been analyzed with the TPAC, based on the reference indirect parallel configuration proposed by Oh et al. (2007). This analysis showed that only 0.4% of the tritium released from the core is transferred to the product hydrogen
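
    A toy version of the kind of single-volume mass balance such a code builds on is shown below; the source term and all rate constants are assumptions, not values from the TPAC report:

```python
# Single control volume tritium inventory N(t) with a constant source S and
# first-order losses to decay, purification, leakage and permeation:
#   dN/dt = S - (lambda + k_purif + k_leak + k_perm) * N
# solved analytically.  All rate constants and S below are assumed.
import numpy as np

LAMBDA = np.log(2) / (12.32 * 365 * 24 * 3600)   # tritium decay constant, 1/s

def tritium_inventory(t, S, k_purif, k_leak, k_perm, N0=0.0):
    k = LAMBDA + k_purif + k_leak + k_perm
    return N0 * np.exp(-k * t) + (S / k) * (1.0 - np.exp(-k * t))

t = 3600.0 * 24 * 365                            # one year of operation, s
print(tritium_inventory(t, S=1.0e-9, k_purif=1e-6, k_leak=1e-9, k_perm=1e-8))
```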

  14. High-accuracy determination for optical indicatrix rotation in ferroelectric DTGS

    OpenAIRE

    O.S.Kushnir; O.A.Bevz; O.G.Vlokh

    2000-01-01

    Optical indicatrix rotation in deuterated ferroelectric triglycine sulphate is studied with the high-accuracy null-polarimetric technique. The behaviour of the effect in ferroelectric phase is referred to quadratic spontaneous electrooptics.

  15. Environmental Technology Verification Coatings and Coating Equipment Program (ETV CCEP). High Transfer Efficiency Spray Equipment - Generic Verification Protocol (Revision 0)

    Science.gov (United States)

    2006-09-30

    High-Pressure Waterjet • CO2 Pellet/Turbine Wheel • Ultrahigh-Pressure Waterjet • Process Water Reuse/Recycle • Cross-Flow Microfiltration ... documented on a process or laboratory form. Corrective action will involve taking all necessary steps to restore a measuring system to proper working order ... In all cases, a nonconformance will be rectified before sample processing and analysis continues. If corrective action does not restore the

  16. Verification of the DUCT-III for calculation of high energy neutron streaming

    Energy Technology Data Exchange (ETDEWEB)

    Masukawa, Fumihiro; Nakano, Hideo; Nakashima, Hiroshi; Sasamoto, Nobuo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Tayama, Ryu-ichi; Handa, Hiroyuki; Hayashi, Katsumi [Hitachi Engineering Co., Ltd., Hitachi, Ibaraki (Japan); Hirayama, Hideo [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Shin, Kazuo [Kyoto Univ., Kyoto (Japan)

    2003-03-01

    A large number of radiation streaming calculations under a variety of conditions are required as part of the shielding design for a high energy proton accelerator facility. Since sophisticated methods are very time consuming, simplified methods are employed in many cases. To evaluate the accuracy of the simplified code DUCT-III for high energy neutron streaming calculations, two kinds of benchmark problems based on experiments were analyzed. Through comparison of the DUCT-III calculations with both the measurements and the sophisticated Monte Carlo calculations, DUCT-III was found to be reliable enough for application to the shielding design of the Intense Proton Accelerator Facility. (author)

  17. Verification of the DUCT-III for calculation of high energy neutron streaming

    CERN Document Server

    Masukawa, F; Hayashi, K; Hirayama, H; Nakano, H; Nakashima, H; Sasamoto, N; Shin, K; Tayama, R I

    2003-01-01

    A large number of radiation streaming calculations under a variety of conditions are required as part of the shielding design for a high energy proton accelerator facility. Since sophisticated methods are very time consuming, simplified methods are employed in many cases. To evaluate the accuracy of the simplified code DUCT-III for high energy neutron streaming calculations, two kinds of benchmark problems based on experiments were analyzed. Through comparison of the DUCT-III calculations with both the measurements and the sophisticated Monte Carlo calculations, DUCT-III was found to be reliable enough for application to the shielding design of the Intense Proton Accelerator Facility.

  18. High frequency electromagnetic impedance measurements for characterization, monitoring and verification efforts. 1998 annual progress report

    International Nuclear Information System (INIS)

    Becker, A.; Lee, K.H.; Pellerin, L.

    1998-01-01

    Non-invasive, high-resolution imaging of the shallow subsurface is needed for delineation of buried waste, detection of unexploded ordnance, verification and monitoring of containment structures, and other environmental applications. Electromagnetic measurements at frequencies between 1 and 100 MHz are important for such applications, because the induction number of many targets is small and it is possible to determine the dielectric permittivity in addition to the electrical conductivity of the subsurface. Earlier workers were successful in developing systems for detecting anomalous areas, but no quantifiable information was accurately determined. For high-resolution imaging, accurate measurements are necessary so the field data can be mapped into the space of the subsurface parameters. The authors are developing a non-invasive method for accurately imaging the electrical conductivity and dielectric permittivity of the shallow subsurface using the plane wave impedance approach, known at low frequencies as the magnetotelluric (MT) method. Electric and magnetic sensors are being tested in a known area against theoretical predictions, thereby ensuring that the data collected with the high-frequency impedance (HFI) system will support high-resolution, multi-dimensional imaging techniques. The summary of the work to date is divided into three sections: equipment procurement, instrumentation, and theoretical developments. For most earth materials, the frequency range from 1 to 100 MHz encompasses a very difficult transition zone between the wave propagation of displacement currents and the diffusive behavior of conduction currents. Test equipment, such as signal generators and amplifiers, does not cover the entire range except at great expense. Hence the authors have divided the range of investigation into three sub-ranges: 1-10 MHz, 10-30 MHz, and 30-100 MHz. Results to date are in the lowest frequency range of 1-10 MHz. Even though conduction currents

  19. The effect of pattern overlap on the accuracy of high resolution electron backscatter diffraction measurements

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Vivian, E-mail: v.tong13@imperial.ac.uk [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Jiang, Jun [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Wilkinson, Angus J. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Britton, T. Ben [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2015-08-15

    High-resolution, cross-correlation-based electron backscatter diffraction (EBSD) measures the variation of elastic strains and lattice rotations from a reference state. Regions near grain boundaries are often of interest, but overlap of patterns from the two grains could reduce the accuracy of the cross-correlation analysis. To explore this concern, patterns from the interior of two grains have been mixed to simulate the interaction volume crossing a grain boundary so that the effect on the accuracy of the cross-correlation results can be tested. It was found that the accuracy of HR-EBSD strain measurements performed in a FEG-SEM on zirconium remains good until the incident beam is less than 18 nm from a grain boundary. A simulated microstructure was used to measure how often pattern overlap occurs at any given EBSD step size, and a simple relation was found linking the probability of overlap with step size. - Highlights: • Pattern overlap occurs at grain boundaries and reduces HR-EBSD accuracy. • A test is devised to measure the accuracy of HR-EBSD in the presence of overlap. • High pass filters can sometimes, but not generally, improve HR-EBSD measurements. • Accuracy of HR-EBSD remains high until the reference pattern intensity is <72%. • 9% of points near a grain boundary will have significant error for a 200 nm step size in Zircaloy-4.

  20. Independent verification of the delivered dose in High-Dose Rate (HDR) brachytherapy

    International Nuclear Information System (INIS)

    Portillo, P.; Feld, D.; Kessler, J.

    2009-01-01

    An important aspect of a Quality Assurance program in Clinical Dosimetry is an independent verification of the dosimetric calculation done by the Treatment Planning System for each radiation treatment. The present paper is aimed at creating a spreadsheet for the verification of the dose recorded at a point of an implant with radioactive sources and HDR in gynecological lesions. A GammaMedplus (Varian Medical Systems) HDR remote afterloading unit with a 192Ir source, installed at the Angel H. Roffo Oncology Institute, has been used. The planning system implemented for obtaining the dose distribution is BraquiVision. The source coordinates, as well as those of the calculation point (rectum), are entered into the Excel verification spreadsheet, assuming the existence of a point source at each of the applicator dwell positions. This calculation point has been selected because the rectum is an organ at risk and therefore constrains the treatment planning. The dose verification is performed at points whose distance from the sources is at least twice the active length of the sources, so that they may be regarded as point sources. Most of the 192Ir sources used in HDR brachytherapy have a 5 mm active length for all equipment brands; consequently, the dose verification distance must be at least 10 mm. (author)
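
    A point-source approximation of the kind such a spreadsheet typically implements (a sketch, not the authors' exact formulas; radial dose and anisotropy functions are set to 1) is shown below:

```python
# Point-source check: dose rate ~ Sk * Lambda / r^2 per dwell position, and the
# dose at the rectum point is the sum over dwell positions of rate x time.
# Source strength, geometry and dwell times below are assumed; the dose-rate
# constant is a nominal value for 192Ir.
import numpy as np

LAMBDA_192IR = 1.109          # dose-rate constant for 192Ir, cGy h^-1 U^-1 (nominal)

def point_dose_cGy(sk_U, dwell_xyz_cm, dwell_times_s, point_cm):
    dwell_xyz_cm = np.asarray(dwell_xyz_cm, dtype=float)
    r = np.linalg.norm(dwell_xyz_cm - np.asarray(point_cm, dtype=float), axis=1)
    rate_cGy_per_h = sk_U * LAMBDA_192IR / r ** 2      # per dwell position
    return float(np.sum(rate_cGy_per_h * np.asarray(dwell_times_s) / 3600.0))

dwells = [(0.0, 0.0, z) for z in (0.0, 0.5, 1.0)]      # assumed dwell positions, cm
times = [15.0, 20.0, 15.0]                             # assumed dwell times, s
print(f"rectum-point dose = {point_dose_cGy(40700.0, dwells, times, (3.0, 0.0, 0.5)):.1f} cGy")
```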

  1. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control.

    Science.gov (United States)

    Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias

    2017-10-01

    Assistance of robotic systems in the operating room promises higher accuracy and, hence, demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding the accuracy, they lead to a lower structural stiffness and, thus, to an additional error source. The aim of this contribution is to examine whether an accuracy needed for demanding interventions can be achieved by such a system or not. Feasible accuracy results of the robot-assisted process depend on each work-flow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated by drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at entrance and [Formula: see text] at target point excluding imaging was achieved. With admittance feed control an accuracy of [Formula: see text] at target point was realised. In a third set twelve holes were drilled in artificial temporal bone phantoms including imaging. In this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
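
    A minimal sketch of an admittance feed law of the kind described, mapping the force the user applies to the robot structure to a feed velocity along the planned drill axis; the gain, dead band and clamp values are hypothetical, not taken from the paper.

      import numpy as np

      def admittance_feed_velocity(f_sensor, drill_axis, admittance=0.002,
                                   deadband=1.0, v_max=0.003):
          """Map the user-applied force to a feed velocity along the planned axis.

          f_sensor   : measured force vector at the tool (N), e.g. from joint torques
          drill_axis : unit vector of the planned drill trajectory
          admittance : (m/s)/N gain, hypothetical value
          deadband   : forces below this (N) produce no motion
          v_max      : clamp on the commanded feed speed (m/s)
          """
          f_axial = float(np.dot(f_sensor, drill_axis))   # project force onto the axis
          if abs(f_axial) < deadband:
              return np.zeros(3)
          v = np.clip(admittance * f_axial, -v_max, v_max)
          return v * drill_axis                           # motion restricted to the axis

      # Example: pushing with ~8 N roughly along the axis commands a slow axial feed.
      axis = np.array([0.0, 0.0, 1.0])
      print(admittance_feed_velocity(np.array([0.5, 0.2, 8.0]), axis))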

  2. Adaptive sensor-based ultra-high accuracy solar concentrator tracker

    Science.gov (United States)

    Brinkley, Jordyn; Hassanzadeh, Ali

    2017-09-01

    Conventional solar trackers use information about the sun's position, obtained either by direct sensing or by GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow tracking method with a two-stage concentrating solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.

  3. High-precision prostate cancer irradiation by clinical application of an offline patient setup verification procedure, using portal imaging

    International Nuclear Information System (INIS)

    Bel, Arjan; Vos, Pieter H.; Rodrigus, Patrick T. R.; Creutzberg, Carien L.; Visser, Andries G.; Stroom, Joep C.; Lebesque, Joos V.

    1996-01-01

    Purpose: To investigate in three institutions, The Netherlands Cancer Institute (Antoni van Leeuwenhoek Huis [AvL]), Dr. Daniel den Hoed Cancer Center (DDHC), and Dr. Bernard Verbeeten Institute (BVI), how much the patient setup accuracy for irradiation of prostate cancer can be improved by an offline setup verification and correction procedure, using portal imaging. Methods and Materials: The verification procedure consisted of two stages. During the first stage, setup deviations were measured during a number (N max ) of consecutive initial treatment sessions. The length of the average three dimensional (3D) setup deviation vector was compared with an action level for corrections, which shrank with the number of setup measurements. After a correction was applied, N max measurements had to be performed again. Each institution chose different values for the initial action level (6, 9, and 10 mm) and N max (2 and 4). The choice of these parameters was based on a simulation of the procedure, using as input preestimated values of random and systematic deviations in each institution. During the second stage of the procedure, with weekly setup measurements, the AvL used a different criterion ('outlier detection') for corrective actions than the DDHC and the BVI ('sliding average'). After each correction the first stage of the procedure was restarted. The procedure was tested for 151 patients (62 in AvL, 47 in DDHC, and 42 in BVI) treated for prostate carcinoma. Treatment techniques and portal image acquisition and analysis were different in each institution. Results: The actual distributions of random and systematic deviations without corrections were estimated by eliminating the effect of the corrections. The percentage of mean (systematic) 3D deviations larger than 5 mm was 26% for the AvL and the DDHC, and 36% for the BVI. The setup accuracy after application of the procedure was considerably improved (percentage of mean 3D deviations larger than 5 mm was 1.6% in the
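
    A sketch of the first-stage decision rule described above. The shrinking action level is assumed here to follow the commonly used form alpha/sqrt(N); the initial action level and the example deviations are illustrative only.

      import numpy as np

      def needs_correction(deviations_mm, alpha_mm=9.0):
          """First-stage rule of an offline shrinking-action-level protocol (sketch).

          deviations_mm : (N, 3) array of measured setup deviations (mm) so far
          alpha_mm      : initial action level (the institutions above used 6-10 mm)
          Returns (correct?, suggested 3D correction vector).
          The action level is assumed to shrink as alpha / sqrt(N).
          """
          d = np.asarray(deviations_mm, dtype=float)
          n = d.shape[0]
          mean_dev = d.mean(axis=0)                    # estimate of the systematic component
          action_level = alpha_mm / np.sqrt(n)         # shrinking action level
          return np.linalg.norm(mean_dev) > action_level, -mean_dev

      # Example with two hypothetical measurements (mm): a correction is triggered if the
      # mean 3D vector exceeds 9/sqrt(2) ~ 6.4 mm.
      print(needs_correction([[4.0, -3.0, 5.0], [6.0, -2.0, 4.0]]))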

  4. Evaluation of Gafchromic EBT-XD film, with comparison to EBT3 film, and application in high dose radiotherapy verification

    Science.gov (United States)

    Palmer, Antony L.; Dimitriadis, Alexis; Nisbet, Andrew; Clark, Catharine H.

    2015-11-01

    There is renewed interest in film dosimetry for the verification of dose delivery of complex treatments, particularly small fields, compared to treatment planning system calculations. A new radiochromic film, Gafchromic EBT-XD, is available for high-dose treatment verification and we present the first published evaluation of its use. We evaluate the new film for MV photon dosimetry, including calibration curves, performance with single- and triple-channel dosimetry, and comparison to existing EBT3 film. In the verification of a typical 25 Gy stereotactic radiotherapy (SRS) treatment, compared to TPS planned dose distribution, excellent agreement was seen with EBT-XD using triple-channel dosimetry, in isodose overlay, maximum 1.0 mm difference over 200-2400 cGy, and gamma evaluation, mean passing rate 97% at 3% locally-normalised, 1.5 mm criteria. In comparison to EBT3, EBT-XD gave improved evaluation results for the SRS-plan, had improved calibration curve gradients at high doses, and had reduced lateral scanner effect. The dimensions of the two films are identical. The optical density of EBT-XD is lower than EBT3 for the same dose. The effective atomic number for both may be considered water-equivalent in MV radiotherapy. We have validated the use of EBT-XD for high-dose, small-field radiotherapy, for routine QC and a forthcoming multi-centre SRS dosimetry intercomparison.
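
    For reference, the gamma evaluation quoted above (3% locally normalised, 1.5 mm) can be sketched in a few lines of Python. This brute-force implementation is illustrative only and is not the film analysis software used in the study; the array contents, pixel size and threshold are hypothetical.

      import numpy as np

      def gamma_pass_rate(dose_ref, dose_eval, pixel_mm, dose_crit=0.03, dta_mm=1.5,
                          threshold=0.10):
          """Brute-force local-normalisation gamma analysis of two equal-shape 2D dose maps.

          dose_ref  : reference (e.g. TPS-calculated) dose map
          dose_eval : evaluated (e.g. film-measured) dose map
          Points below `threshold` of the reference maximum are excluded.
          NOTE: O(N^2) sketch intended for small arrays only.
          """
          ny, nx = dose_ref.shape
          yy, xx = np.mgrid[0:ny, 0:nx]
          coords = np.stack([yy.ravel(), xx.ravel()], axis=1) * pixel_mm
          ref = dose_ref.ravel()
          ev = dose_eval.ravel()
          mask = ref > threshold * ref.max()
          gammas = []
          for i in np.flatnonzero(mask):
              dist2 = np.sum((coords - coords[i])**2, axis=1) / dta_mm**2
              ddose2 = ((ev - ref[i]) / (dose_crit * ref[i]))**2   # local normalisation
              gammas.append(np.sqrt(np.min(dist2 + ddose2)))
          gammas = np.array(gammas)
          return 100.0 * np.mean(gammas <= 1.0)

      # Tiny illustrative example: a 1% uniform difference passes at 3%/1.5 mm.
      d = np.outer(np.hanning(32), np.hanning(32)) * 2400.0   # cGy
      print(gamma_pass_rate(d, d * 1.01, pixel_mm=0.5))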

  5. Evaluation of Gafchromic EBT-XD film, with comparison to EBT3 film, and application in high dose radiotherapy verification

    International Nuclear Information System (INIS)

    Palmer, Antony L; Dimitriadis, Alexis; Nisbet, Andrew; Clark, Catharine H

    2015-01-01

    There is renewed interest in film dosimetry for the verification of dose delivery of complex treatments, particularly small fields, compared to treatment planning system calculations. A new radiochromic film, Gafchromic EBT-XD, is available for high-dose treatment verification and we present the first published evaluation of its use. We evaluate the new film for MV photon dosimetry, including calibration curves, performance with single- and triple-channel dosimetry, and comparison to existing EBT3 film. In the verification of a typical 25 Gy stereotactic radiotherapy (SRS) treatment, compared to TPS planned dose distribution, excellent agreement was seen with EBT-XD using triple-channel dosimetry, in isodose overlay, maximum 1.0 mm difference over 200–2400 cGy, and gamma evaluation, mean passing rate 97% at 3% locally-normalised, 1.5 mm criteria. In comparison to EBT3, EBT-XD gave improved evaluation results for the SRS-plan, had improved calibration curve gradients at high doses, and had reduced lateral scanner effect. The dimensions of the two films are identical. The optical density of EBT-XD is lower than EBT3 for the same dose. The effective atomic number for both may be considered water-equivalent in MV radiotherapy. We have validated the use of EBT-XD for high-dose, small-field radiotherapy, for routine QC and a forthcoming multi-centre SRS dosimetry intercomparison. (paper)

  6. Development of dose delivery verification by PET imaging of photonuclear reactions following high energy photon therapy

    International Nuclear Information System (INIS)

    Janek, S; Svensson, R; Jonsson, C; Brahme, A

    2006-01-01

    A method for dose delivery monitoring after high energy photon therapy has been investigated based on positron emission tomography (PET). The technique is based on the activation of body tissues by high energy bremsstrahlung beams, preferably with energies well above 20 MeV, resulting primarily in 11 C and 15 O but also 13 N, all positron-emitting radionuclides produced by photoneutron reactions in the nuclei of 12 C, 16 O and 14 N. A PMMA phantom and animal tissue, a frozen hind leg of a pig, were irradiated to 10 Gy and the induced positron activity distributions were measured off-line in a PET camera a couple of minutes after irradiation. The accelerator used was a Racetrack Microtron at the Karolinska University Hospital using 50 MV scanned photon beams. From photonuclear cross-section data integrated over the 50 MV photon fluence spectrum the predicted PET signal was calculated and compared with experimental measurements. Since measured PET images change with time post irradiation, as a result of the different decay times of the radionuclides, the signals from activated 12 C, 16 O and 14 N within the irradiated volume could be separated from each other. Most information is obtained from the carbon and oxygen radionuclides which are the most abundant elements in soft tissue. The predicted and measured overall positron activities are almost equal (-3%) while the predicted activity originating from nitrogen is overestimated by almost a factor of two, possibly due to experimental noise. Based on the results obtained in this first feasibility study the great value of a combined radiotherapy-PET-CT unit is indicated in order to fully exploit the high activity signal from oxygen immediately after treatment and to avoid patient repositioning. With an RT-PET-CT unit a high signal could be collected even at a dose level of 2 Gy and the acquisition time for the PET could be reduced considerably. Real patient dose delivery verification by means of PET imaging seems to be
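
    The predicted PET signal obtained by integrating photonuclear cross-section data over the photon fluence spectrum follows standard activation arithmetic; a hedged sketch (not necessarily the authors' exact formulation) is

      A_k(t_d) \;\propto\; n_k \left[\int_0^{E_{\max}} \sigma_{(\gamma,n),k}(E)\,\Phi(E)\,\mathrm{d}E\right] \left(1 - e^{-\lambda_k t_{\mathrm{irr}}}\right) e^{-\lambda_k t_d},

    where n_k is the number density of the parent nuclide (12 C, 16 O or 14 N), Φ(E) the 50 MV photon fluence spectrum, σ the photoneutron cross section, λ_k the decay constant of the produced positron emitter, t_irr the irradiation time and t_d the delay before the PET acquisition.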

  7. Verification of high resolution simulation of precipitation and wind in Portugal

    Science.gov (United States)

    Menezes, Isilda; Pereira, Mário; Moreira, Demerval; Carvalheiro, Luís; Bugalho, Lourdes; Corte-Real, João

    2017-04-01

    Demand for energy and freshwater continues to grow as the global population and its demands increase. Precipitation feeds the freshwater ecosystems which provide a wealth of goods and services for society, and sustains the river flow needed by native species and natural ecosystem functions. The adoption of wind and hydro-electric power supplies can sustain energy demands and services without restricting economic growth under accelerated-policy scenarios. However, the international meteorological observation network is not sufficiently dense to directly support high resolution climatic research. In this sense, coupled global and regional atmospheric models constitute the most appropriate physical and numerical tool for weather forecasting and downscaling on high resolution grids, with the capacity to solve problems resulting from the lack of observed data and from measuring errors. Thus, this study aims to calibrate and validate the WRF regional model for the simulation of precipitation and wind fields on a high spatial resolution grid covering Portugal. The simulations were performed in two-way nesting with three grids of increasing resolution (60 km, 20 km and 5 km) and the model performance was assessed for the summer and winter months (January and July), using input variables from two different reanalysis and forecast databases (ERA-Interim and NCEP-FNL) and different forcing schemes. The verification procedure included: (i) the use of several statistical error estimators, correlation-based measures and relative error descriptors; and (ii) an observed dataset composed of time series of hourly precipitation, wind speed and direction provided by the Portuguese meteorological institute for a comprehensive set of weather stations. The main results suggest the good ability of the WRF to: (i) reproduce the spatial patterns of the mean and total observed fields; (ii) with relatively small values of bias and other errors; and (iii) with good temporal correlation. These findings are in good
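
    The error estimators and correlation measures mentioned in the verification procedure are standard; a small sketch of typical ones applied to a paired hourly series follows (the synthetic data are illustrative, not the study's observations).

      import numpy as np

      def verification_scores(obs, sim):
          """Common continuous verification scores for a simulated vs. observed series.

          obs, sim : 1D arrays of paired hourly values (e.g. precipitation or wind speed).
          Returns bias, mean absolute error, root-mean-square error and Pearson correlation.
          """
          obs = np.asarray(obs, float)
          sim = np.asarray(sim, float)
          err = sim - obs
          return {"bias": err.mean(),
                  "mae": np.abs(err).mean(),
                  "rmse": np.sqrt((err**2).mean()),
                  "r": np.corrcoef(obs, sim)[0, 1]}

      # Hypothetical example with synthetic hourly wind speeds (m/s).
      rng = np.random.default_rng(0)
      obs = rng.gamma(2.0, 2.0, size=744)             # one month of hourly observations
      sim = obs + rng.normal(0.3, 1.0, size=744)      # model with a small positive bias
      print(verification_scores(obs, sim))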

  8. System for verification in situ of current transformers in high voltage substations; Sistema para verificacao in situ de transformadores de corrente em substacoes de alta tensao

    Energy Technology Data Exchange (ETDEWEB)

    Mendonca, Pedro Henrique; Costa, Marcelo M. da; Dahlke, Diogo B.; Ikeda, Minoru [LACTEC - Instituto de Tecnologia para o Desenvolvimento, Curitiba, PR (Brazil)], Emails: pedro.henrique@lactec.org.br, arinos@lactec.org.br, diogo@lactec.org.br, minoru@lactec.org.br, Celso.melo@copel.com; Carvalho, Joao Claudio D. de [ELETRONORTE, Belem, PR (Brazil)], E-mail: marcelo.melo@eln.gov.br; Teixeira Junior, Jose Arinos [ELETROSUL, Florianopolis, SC (Brazil)], E-mail: jclaudio@eletrosul.gov.br; Melo, Celso F. [Companhia Paranaense de Energia (COPEL), Curitiba, PR (Brazil)], E-mail: Celso.melo@copel.com

    2009-07-01

    This work presents an alternative proposal for executing the calibration of conventional current transformers in the field, using a verification system composed of an optical current transformer as a reference standard, suitable for installation on extra-high-voltage busbars.

  9. Development and Performance Verification of the GANDALF High-Resolution Transient Recorder System

    CERN Document Server

    Bartknecht, Stefan; Herrmann, Florian; Königsmann, Kay; Lauser, Louis; Schill, Christian; Schopferer, Sebastian; Wollny, Heiner

    2011-01-01

    With present-day detectors in high energy physics one often faces fast analog pulses of a few nanoseconds length which cover large dynamic ranges. In many experiments both amplitude and timing information have to be measured with high accuracy. Additionally, the data rate per readout channel can reach several MHz, which leads to high demands on the separation of pile-up pulses. For an upgrade of the COMPASS experiment at CERN we have designed the GANDALF transient recorder with a resolution of 12 bit @ 1 GS/s and an analog bandwidth of 500 MHz. Signals are digitized with high precision and processed by fast algorithms to extract pulse arrival times and amplitudes in real-time and to generate trigger signals for the experiment. With up to 16 analog channels, deep memories and a high data rate interface, this 6U-VME64x/VXS module is not only a dead-time free digitization unit but also has huge numerical capabilities provided by the implementation of a Virtex5-SXT FPGA. Fast algorithms implemented in the FPGA may b...

  10. Learning a Genetic Measure for Kinship Verification Using Facial Images

    Directory of Open Access Journals (Sweden)

    Lu Kou

    2015-01-01

    Full Text Available Motivated by the key observation that children generally resemble their parents more than other persons with respect to facial appearance, distance metric (similarity) learning has been the dominant choice for state-of-the-art kinship verification via facial images in the wild. Most existing learning-based approaches to kinship verification, however, are focused on learning a genetic similarity measure in a batch learning manner, leading to less scalability for practical applications with ever-growing amounts of data. To address this, we propose a new kinship verification approach by learning a sparse similarity measure in an online fashion. Experimental results on the kinship datasets show that our approach is highly competitive to the state-of-the-art alternatives in terms of verification accuracy, yet it is superior in terms of scalability for practical applications.

  11. Accuracy of hiatal hernia detection with esophageal high-resolution manometry

    NARCIS (Netherlands)

    Weijenborg, P. W.; van Hoeij, F. B.; Smout, A. J. P. M.; Bredenoord, A. J.

    2015-01-01

    The diagnosis of a sliding hiatal hernia is classically made with endoscopy or barium esophagogram. Spatial separation of the lower esophageal sphincter (LES) and diaphragm, the hallmark of hiatal hernia, can also be observed on high-resolution manometry (HRM), but the diagnostic accuracy of this

  12. High-accuracy identification and bioinformatic analysis of in vivo protein phosphorylation sites in yeast

    DEFF Research Database (Denmark)

    Gnad, Florian; de Godoy, Lyris M F; Cox, Jürgen

    2009-01-01

    Protein phosphorylation is a fundamental regulatory mechanism that affects many cell signaling processes. Using high-accuracy MS and stable isotope labeling in cell culture-labeling, we provide a global view of the Saccharomyces cerevisiae phosphoproteome, containing 3620 phosphorylation sites ma...

  13. High accuracy positioning using carrier-phases with the opensource GPSTK software

    OpenAIRE

    Salazar Hernández, Dagoberto José; Hernández Pajares, Manuel; Juan Zornoza, José Miguel; Sanz Subirana, Jaume

    2008-01-01

    The objective of this work is to show how, using a proper GNSS data management strategy combined with the flexibility provided by the open source "GPS Toolkit" (GPSTk), it is possible to easily develop both simple code-based processing strategies and basic high accuracy carrier-phase positioning techniques like Precise Point Positioning (PPP).

  14. Very high-accuracy calibration of radiation pattern and gain of a near-field probe

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav

    2014-01-01

    In this paper, very high-accuracy calibration of the radiation pattern and gain of a near-field probe is described. An open-ended waveguide near-field probe has been used in a recent measurement of the C-band Synthetic Aperture Radar (SAR) Antenna Subsystem for the Sentinel 1 mission of the Europ...

  15. From journal to headline: the accuracy of climate science news in Danish high quality newspapers

    DEFF Research Database (Denmark)

    Vestergård, Gunver Lystbæk

    2011-01-01

    analysis to examine the accuracy of Danish high quality newspapers in quoting scientific publications from 1997 to 2009. Out of 88 articles, 46 contained inaccuracies though the majority was found to be insignificant and random. The study concludes that Danish broadsheet newspapers are ‘moderately...

  16. Technical Report on the Modification of 3-Dimensional Non-contact Human Body Laser Scanner for the Measurement of Anthropometric Dimensions: Verification of its Accuracy and Precision.

    Science.gov (United States)

    Jafari Roodbandi, Akram Sadat; Naderi, Hamid; Hashenmi-Nejad, Naser; Choobineh, Alireza; Baneshi, Mohammad Reza; Feyzi, Vafa

    2017-01-01

    Introduction: Three-dimensional (3D) scanners are widely used in medicine. One of the applications of 3D scanners is the acquisition of anthropometric dimensions for ergonomics and the creation of an anthropometry data bank. The aim of this study was to evaluate the precision and accuracy of a modified 3D scanner fabricated in this study. Methods: In this work, a 3D scan of the human body was obtained using DAVID Laser Scanner software and its calibration background, a linear low-power laser, and one advanced webcam. After the 3D scans were imported to the Geomagic software, 10 anthropometric dimensions of 10 subjects were obtained. The measurements of the 3D scanner were compared to the measurements of the same dimensions by a direct anthropometric method. The precision and accuracy of the measurements of the 3D scanner were then evaluated. The obtained data were analyzed using an independent sample t test with the SPSS software. Results: The minimum and maximum measurement differences from three consecutive scans by the 3D scanner were 0.03 mm and 18 mm, respectively. The differences between the measurements by the direct anthropometry method and the 3D scanner were not statistically significant. Therefore, the accuracy of the 3D scanner is acceptable. Conclusion: Future studies will need to focus on the improvement of the scanning speed and the quality of the scanned image.

  17. Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel

    Science.gov (United States)

    Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie

    2011-05-01

    Mechanical grinding of artificial diamond grinding wheels is a traditional wheel dressing process. The rotation speed and infeed depth of the tool wheel are the main process parameters. Suitable process parameters for high accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through a series of experiments on a super-hard material wheel dressing grinding machine and through analysis of the grinding forces. At the same time, the effects of machine sharpening and sprinkle-granule sharpening were compared. These analyses and experiments provide practical guidance for the accurate crush dressing of artificial diamond grinding wheels.

  18. Verification of High Temperature Free Atom Thermal Scattering in MERCURY Compared to TART

    International Nuclear Information System (INIS)

    Cullen, D E; McKinley, S; Hagmann, C

    2006-01-01

    This is part of a series of reports verifying the accuracy of the relatively new MERCURY [1] Monte Carlo particle transport code by comparing its results to those of the older TART [2] Monte Carlo particle transport code. In the future we hope to extend these comparisons to include deterministic (Sn) codes [3]. Here we verify the accuracy of the free atom thermal scattering model [4] by using it over a very large temperature range. We would like to be able to use these Monte Carlo codes for astrophysical applications, where the temperature of the medium can be extremely high compared to the temperatures we normally encounter in our terrestrial applications [5]. The temperature is so high that it is often defined in eV rather than Kelvin. For a correspondence between the two scales, 293.6 Kelvin (room temperature) corresponds to 0.0253 eV ∼ 1/40 eV. Thus a temperature of 1 eV is about 12,000 Kelvin, and a temperature of 1 keV is about 12 million Kelvin. Here we use a relatively small system measured in cm, but by using ρR scaling [6] our results are equally applicable to systems measured in km or thousands of km or any size that we need for astrophysical applications. The emphasis here is not on modeling any given real system, but rather on verifying the accuracy of the free atom model to represent theoretical results over a large temperature range. There are two primary objectives of this report: (1) Verify agreement between MERCURY and TART results, both using continuous energy cross sections. In particular we want to verify the free atom scattering treatment in MERCURY as used over an extended temperature range; for TART this has already been verified over many years by comparison to many other codes [4, 7]. (2) Demonstrate that this agreement depends on using continuous energy cross sections. To demonstrate this we also present TART using the Multi-Band method [8, 9], which accounts for resonance self-shielding, and Multi-Group method, without self-shielding [9
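
    The temperature correspondence quoted above is simply E = k_B T with standard physical constants (not taken from the report):

      T[\mathrm{K}] = \frac{E[\mathrm{eV}]}{k_B}, \qquad k_B = 8.617\times 10^{-5}\ \mathrm{eV\,K^{-1}},

    so 0.0253 eV corresponds to 293.6 K, 1 eV to about 1.16×10^4 K, and 1 keV to about 1.16×10^7 K, consistent with the figures in the abstract.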

  19. High accuracy interface characterization of three phase material systems in three dimensions

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus

    2010-01-01

    Quantification of interface properties such as two phase boundary area and triple phase boundary length is important in the characterization of many material microstructures, in particular for solid oxide fuel cell electrodes. Three-dimensional images of these microstructures can be obtained...... by tomography schemes such as focused ion beam serial sectioning or micro-computed tomography. We present a high accuracy method of calculating two phase surface areas and triple phase length of triple phase systems from subvoxel accuracy segmentations of constituent phases. The method performs a three phase...... polygonization of the interface boundaries which results in a non-manifold mesh of connected faces. We show how the triple phase boundaries can be extracted as connected curve loops without branches. The accuracy of the method is analyzed by calculations on geometrical primitives...

  20. Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation

    Science.gov (United States)

    Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter

    1996-01-01

    The development of three-dimensional automotive devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics, depends strongly on the ability of such a device: firstly to be positioned with high accuracy, reliability, and automatically, by means of user friendly interface techniques; secondly to be compact; and thirdly to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer, (the accuracy of the end-position of the system is controlled by a hard/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.

  1. SU-F-T-406: Verification of Total Body Irradiation Commissioned MU Lookup Table Accuracy Using Treatment Planning System for Wide Range of Patient Sizes

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D; Chi, P; Tailor, R; Aristophanous, M; Tung, S [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: To verify the accuracy of total body irradiation (TBI) measurement commissioning data using the treatment planning system (TPS) for a wide range of patient separations. Methods: Our institution conducts TBI treatments with an 18MV photon beam at 380cm extended SSD using an AP/PA technique. Currently, the monitor units (MU) per field for patient treatments are determined using a lookup table generated from TMR measurements in a water phantom (75 × 41 × 30.5 cm3). The dose prescribed to an umbilicus midline point at spine level is determined based on patient separation, dose/ field and dose rate/MU. One-dimensional heterogeneous dose calculations from Pinnacle TPS were validated with thermoluminescent dosimeters (TLD) placed in an average adult anthropomorphic phantom and also in-vivo on four patients with large separations. Subsequently, twelve patients with various separations (17–47cm) were retrospectively analyzed. Computed tomography (CT) scans were acquired in the left and right decubitus positions from vertex to knee. A treatment plan for each patient was generated. The ratio of the lookup table MU to the heterogeneous TPS MU was compared. Results: TLD Measurements in the anthropomorphic phantom and large TBI patients agreed with Pinnacle calculated dose within 2.8% and 2%, respectively. The heterogeneous calculation compared to the lookup table agreed within 8.1% (ratio range: 1.014–1.081). A trend of reduced accuracy was observed when patient separation increases. Conclusion: The TPS dose calculation accuracy was confirmed by TLD measurements, showing that Pinnacle can model the extended SSD dose without commissioning a special beam model for the extended SSD geometry. The difference between the lookup table and TPS calculation potentially comes from lack of scatter during commissioning when compared to extreme patient sizes. The observed trend suggests the need for development of a correction factor between the lookup table and TPS dose

  2. Accuracy Verification of Magnetic Resonance Imaging (MRI) Technology for Lower-Limb Prosthetic Research: Utilising Animal Soft Tissue Specimen and Common Socket Casting Materials

    OpenAIRE

    Safari, Mohammad Reza; Rowe, Philip; Buis, Arjan

    2012-01-01

    Lower limb prosthetic socket shape and volume consistency can be quantified using MRI technology. Additionally, MRI images of the residual limb could be used as an input data for CAD-CAM technology and finite element studies. However, the accuracy of MRI when socket casting materials are used has to be defined. A number of six, 46 mm thick, cross-sections of an animal leg were used. Three specimens were wrapped with Plaster of Paris (POP) and the other three with commercially available silico...

  3. Final Report Independent Verification Survey of the High Flux Beam Reactor, Building 802 Fan House Brookhaven National Laboratory Upton, New York

    Energy Technology Data Exchange (ETDEWEB)

    Harpeneau, Evan M. [Oak Ridge Institute for Science and Education, Oak Ridge, TN (United States). Independent Environmental Assessment and Verification Program

    2011-06-24

    On May 9, 2011, ORISE conducted verification survey activities including scans, sampling, and the collection of smears of the remaining soils and off-gas pipe associated with the 802 Fan House within the HFBR (High Flux Beam Reactor) Complex at BNL. ORISE is of the opinion, based on independent scan and sample results obtained during verification activities at the HFBR 802 Fan House, that the FSS (final status survey) unit meets the applicable site cleanup objectives established for as left radiological conditions.

  4. High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air

    Directory of Open Access Journals (Sweden)

    Cees van der Geld

    2010-08-01

    Full Text Available An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
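
    For the velocity and sound-speed part of the two-transducer measurement principle described above, the usual transit-time relations apply; a sketch follows, assuming a single straight acoustic path (extracting relative humidity additionally requires the measured temperature and a humid-air sound-speed model, which is omitted here, and the numbers are illustrative).

      import numpy as np

      def transit_time_velocity(t_down, t_up, path_len):
          """Line-averaged gas velocity and speed of sound from two opposed transit times.

          t_down   : pulse time of flight with the flow (s)
          t_up     : pulse time of flight against the flow (s)
          path_len : acoustic path length between the transducers (m)
          """
          c = 0.5 * path_len * (1.0 / t_down + 1.0 / t_up)   # speed of sound
          v = 0.5 * path_len * (1.0 / t_down - 1.0 / t_up)   # axial gas velocity
          return v, c

      # Hypothetical example: 0.5 m path, ~347 m/s sound speed, ~5 m/s flow.
      t_dn = 0.5 / (347.0 + 5.0)
      t_up = 0.5 / (347.0 - 5.0)
      print(transit_time_velocity(t_dn, t_up, 0.5))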

  5. High accuracy digital aging monitor based on PLL-VCO circuit

    International Nuclear Information System (INIS)

    Zhang Yuejun; Jiang Zhidi; Wang Pengjun; Zhang Xuelong

    2015-01-01

    As the manufacturing process is scaled down to the nanoscale, the aging phenomenon significantly affects the reliability and lifetime of integrated circuits. Consequently, the precise measurement of digital CMOS aging is a key aspect of nanoscale aging tolerant circuit design. This paper proposes a high accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the self-aging effect of the monitoring circuit by exploiting the characteristic of the PLL, whose frequency is unaffected by circuit aging. The PLL-VCO monitor is implemented in TSMC low power 65 nm CMOS technology and occupies an area of 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy at high temperature by 2.4% and at high voltage by 18.7%. (semiconductor integrated circuits)

  6. High accuracy acoustic relative humidity measurement in duct flow with air.

    Science.gov (United States)

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.

  7. A proposal for limited criminal liability in high-accuracy endoscopic sinus surgery.

    Science.gov (United States)

    Voultsos, P; Casini, M; Ricci, G; Tambone, V; Midolo, E; Spagnolo, A G

    2017-02-01

    The aim of the present study is to propose legal reform limiting surgeons' criminal liability in high-accuracy and high-risk surgery such as endoscopic sinus surgery (ESS). The study includes a review of the medical literature, focusing on identifying and examining reasons why ESS carries a very high risk of serious complications related to inaccurate surgical manoeuvers and reviewing British and Italian legal theory and case-law on medical negligence, especially with regard to Italian Law 189/2012 (so called "Balduzzi" Law). It was found that serious complications due to inaccurate surgical manoeuvers may occur in ESS regardless of the skill, experience and prudence/diligence of the surgeon. Subjectivity should be essential to medical negligence, especially regarding high-accuracy surgery. Italian Law 189/2012 represents a good basis for the limitation of criminal liability resulting from inaccurate manoeuvres in high-accuracy surgery such as ESS. It is concluded that ESS surgeons should be relieved of criminal liability in cases of simple/ordinary negligence where guidelines have been observed. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  8. ENVIRONMENTAL TECHNOLOGY VERIFICATION, TEST REPORT OF CONTROL OF BIOAEROSOLS IN HVAC SYSTEMS, COLUMBUS INDUSTRIES HIGH EFFICIENCY MINI PLEAT

    Science.gov (United States)

    The U.S. Environmental Protection Agency (EPA) has created the Environmental Technology Verification (ETV) Program to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the...

  9. Verification of dosimetric commissioning accuracy of intensity modulated radiation therapy and volumetric modulated arc therapy delivery using task Group-119 guidelines

    Directory of Open Access Journals (Sweden)

    Karunakaran Kaviarasu

    2017-01-01

    Full Text Available Aim: The purpose of this study is to verify the accuracy of the commissioning of intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) based on the recommendation of the American Association of Physicists in Medicine Task Group 119 (TG-119). Materials and Methods: TG-119 proposes a set of clinical test cases to verify the accuracy of the IMRT planning and delivery system. For these test cases, we generated two sets of treatment plans, the first plan using 7–9 IMRT fields and a second plan utilizing a two-arc VMAT technique for both 6 MV and 15 MV photon beams. The template plans of TG-119 were optimized and calculated by the Varian Eclipse Treatment Planning System (version 13.5). Dose prescription and planning objectives were set according to the TG-119 goals. The point dose (mean dose to the contoured chamber volume) at the specified positions/locations was measured using a compact (CC-13) ion chamber. The composite planar dose was measured with an IMatriXX Evaluation 2D array with OmniPro IMRT Software (version 1.7b). The per-field relative gamma was measured using an electronic portal imaging device in a way similar to the routine pretreatment patient-specific quality assurance. Results: Our planning results are compared with the TG-119 data. Point dose and fluence comparison data were within the acceptable confidence limit. Conclusion: From the data obtained in this study, we conclude that the commissioning of IMRT and VMAT delivery was found to be within the limits of TG-119.

  10. Verification of Dosimetric Commissioning Accuracy of Intensity Modulated Radiation Therapy and Volumetric Modulated Arc Therapy Delivery using Task Group-119 Guidelines.

    Science.gov (United States)

    Kaviarasu, Karunakaran; Nambi Raj, N Arunai; Hamid, Misba; Giri Babu, A Ananda; Sreenivas, Lingampally; Murthy, Kammari Krishna

    2017-01-01

    The purpose of this study is to verify the accuracy of the commissioning of intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) based on the recommendation of the American Association of Physicists in Medicine Task Group 119 (TG-119). TG-119 proposes a set of clinical test cases to verify the accuracy of the IMRT planning and delivery system. For these test cases, we generated two sets of treatment plans, the first plan using 7-9 IMRT fields and a second plan utilizing a two-arc VMAT technique for both 6 MV and 15 MV photon beams. The template plans of TG-119 were optimized and calculated by the Varian Eclipse Treatment Planning System (version 13.5). Dose prescription and planning objectives were set according to the TG-119 goals. The point dose (mean dose to the contoured chamber volume) at the specified positions/locations was measured using a compact (CC-13) ion chamber. The composite planar dose was measured with an IMatriXX Evaluation 2D array with OmniPro IMRT Software (version 1.7b). The per-field relative gamma was measured using an electronic portal imaging device in a way similar to the routine pretreatment patient-specific quality assurance. Our planning results are compared with the TG-119 data. Point dose and fluence comparison data were within the acceptable confidence limit. From the data obtained in this study, we conclude that the commissioning of IMRT and VMAT delivery was found to be within the limits of TG-119.
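
    TG-119 summarises such point-dose and fluence comparisons as a confidence limit, conventionally the absolute mean deviation plus 1.96 standard deviations; a minimal sketch of that aggregation is shown below (the deviation values are hypothetical).

      import numpy as np

      def tg119_confidence_limit(deviations):
          """TG-119-style confidence limit: |mean| + 1.96 * SD of the observed deviations
          (e.g. fractional point-dose errors, or 100% minus gamma pass rates)."""
          d = np.asarray(deviations, float)
          return abs(d.mean()) + 1.96 * d.std(ddof=1)

      # Hypothetical point-dose deviations (measured/planned - 1) for the test suite:
      print(tg119_confidence_limit([0.006, -0.004, 0.010, 0.002, -0.007]))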

  11. Verification of statistical method CORN for modeling of microfuel in the case of high grain concentration

    Energy Technology Data Exchange (ETDEWEB)

    Chukbar, B. K., E-mail: bchukbar@mail.ru [National Research Center Kurchatov Institute (Russian Federation)

    2015-12-15

    Two methods of modeling a double-heterogeneity fuel are studied: the deterministic positioning and the statistical method CORN of the MCU software package. The effect of distribution of microfuel in a pebble bed on the calculation results is studied. The results of verification of the statistical method CORN for the cases of the microfuel concentration up to 170 cm⁻³ in a pebble bed are presented. The admissibility of homogenization of the microfuel coating with the graphite matrix is studied. The dependence of the reactivity on the relative location of fuel and graphite spheres in a pebble bed is found.

  12. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
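
    A self-contained sketch of gene masking as described, a binary-encoded genetic algorithm wrapped around a classifier; the nearest-centroid fitness, GA settings and synthetic data below are assumptions for illustration, not the authors' choices.

      import numpy as np

      rng = np.random.default_rng(42)

      def mask_fitness(mask, Xtr, ytr, Xva, yva):
          """Validation accuracy of a nearest-centroid classifier on the unmasked genes."""
          if mask.sum() == 0:
              return 0.0
          classes = np.unique(ytr)
          Xt, Xv = Xtr[:, mask == 1], Xva[:, mask == 1]
          centroids = np.array([Xt[ytr == c].mean(axis=0) for c in classes])
          pred = classes[np.argmin(((Xv[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)]
          return float((pred == yva).mean())

      def gene_masking_ga(Xtr, ytr, Xva, yva, pop_size=30, generations=40, p_mut=0.02):
          """Binary-encoded GA searching for a gene mask that maximises classifier accuracy."""
          n_genes = Xtr.shape[1]
          pop = (rng.random((pop_size, n_genes)) < 0.5).astype(int)
          for _ in range(generations):
              fit = np.array([mask_fitness(ind, Xtr, ytr, Xva, yva) for ind in pop])
              # binary tournament selection of parents
              a, b = rng.integers(0, pop_size, (2, pop_size))
              parents = pop[np.where(fit[a] >= fit[b], a, b)]
              # single-point crossover between consecutive parents
              children = parents.copy()
              for i in range(0, pop_size - 1, 2):
                  cut = rng.integers(1, n_genes)
                  children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:],
                                                              parents[i, cut:])
              # bit-flip mutation, then elitism (keep the best individual of the generation)
              flip = rng.random(children.shape) < p_mut
              children[flip] ^= 1
              children[0] = pop[fit.argmax()]
              pop = children
          fit = np.array([mask_fitness(ind, Xtr, ytr, Xva, yva) for ind in pop])
          return pop[fit.argmax()], fit.max()

      # Synthetic toy "microarray": 120 samples, 200 genes, only the first 10 informative.
      X = rng.normal(size=(120, 200))
      y = (X[:, :10].sum(axis=1) > 0).astype(int)
      mask, acc = gene_masking_ga(X[:80], y[:80], X[80:], y[80:])
      print(f"selected {mask.sum()} genes, validation accuracy {acc:.2f}")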

  13. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    Science.gov (United States)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
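
    A hedged sketch of the two ingredients named above, cubic interpolation of the sampled time-domain data and evaluation of the finite Fourier transform at arbitrary frequencies. Trapezoidal quadrature on a refined grid is used here instead of the chirp Zeta-transform, so this is illustrative rather than the paper's algorithm; the signal and frequency grid are made up.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.integrate import trapezoid

      def finite_fourier_transform(t, x, freqs, refine=8):
          """Approximate X(f) = int x(t) exp(-j 2 pi f t) dt over [t0, tN] at arbitrary
          frequencies, using cubic-spline resampling and trapezoidal quadrature."""
          spline = CubicSpline(t, x)
          tf = np.linspace(t[0], t[-1], refine * len(t))   # refined time grid
          xf = spline(tf)
          # One integral per requested frequency (vectorised over the time grid).
          kernel = np.exp(-2j * np.pi * np.outer(freqs, tf))
          return trapezoid(kernel * xf, tf, axis=1)

      # Example: a 1.3 Hz sine sampled at 20 Hz for 5 s, evaluated on a fine frequency grid.
      t = np.arange(0, 5, 0.05)
      x = np.sin(2 * np.pi * 1.3 * t)
      f = np.linspace(0.5, 2.5, 401)                       # arbitrary frequency resolution
      X = finite_fourier_transform(t, x, f)
      print(f[np.argmax(np.abs(X))])                       # peak near 1.3 Hz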

  14. High-Accuracy Spherical Near-Field Measurements for Satellite Antenna Testing

    DEFF Research Database (Denmark)

    Breinbjerg, Olav

    2017-01-01

    The spherical near-field antenna measurement technique is unique in combining several distinct advantages and it generally constitutes the most accurate technique for experimental characterization of radiation from antennas. From the outset in 1970, spherical near-field antenna measurements have matured into a well-established technique that is widely used for testing antennas for many wireless applications. In particular, for high-accuracy applications, such as remote sensing satellite missions in ESA's Earth Observation Programme with uncertainty requirements at the level of 0.05 dB - 0.10 dB, the spherical near-field antenna measurement technique is generally superior. This paper addresses the means to achieving high measurement accuracy; these include the measurement technique per se, its implementation in terms of proper measurement procedures, the use of uncertainty estimates, as well as facility...

  15. A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2011-12-01

    Full Text Available Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses an inverse perspective transformation to process the image information and generates orthophoto fragments. An offline interpolation algorithm is used to process the location information: it processes the dead reckoning and EKF location information and uses the result to transform the fragments into the global coordinate system. Finally, a wavelet transform divides the image into two frequency bands and a weighted median algorithm is used to process them separately. Experimental results show that the map produced with this method has high accuracy.

  16. Verification and quality control of routine hematology analyzers

    NARCIS (Netherlands)

    Vis, J Y; Huisman, A

    2016-01-01

    Verification of hematology analyzers (automated blood cell counters) is mandatory before new hematology analyzers may be used in routine clinical care. The verification process consists of several items which comprise among others: precision, accuracy, comparability, carryover, background and

  17. Identification and delineation of areas flood hazard using high accuracy of DEM data

    Science.gov (United States)

    Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.

    2018-05-01

    Flood incidents that often occur in Karawang regency need to be mitigated. Expectations rest on technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High accuracy DEM data used in the modeling will result in better flood models. Processing high accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by identifying wetland areas using DEM data and Landsat-8 images. High-resolution TerraSAR-X data are used to detect wetlands in the landscape, while land cover is identified from Landsat image data. The Topography Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while the Tasseled Cap Transformation (TCT) method is used for the land cover analysis. TWI modeling yields information about land potentially subject to flooding. Overlaying the TWI map with the land cover map shows that in Karawang regency the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area delineation in this study was 87%.
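
    The TWI step has a standard closed form, TWI = ln(a / tan(beta)); a small sketch is given below, assuming a flow-accumulation grid is already available from a separate flow-routing step (all values are illustrative and none are taken from the study).

      import numpy as np

      def topographic_wetness_index(dem, cell_size, flow_acc):
          """TWI = ln(a / tan(beta)): a is the specific catchment area per unit contour
          length and beta the local slope, both derived from the DEM.

          dem       : 2D elevation array (m)
          cell_size : grid resolution (m)
          flow_acc  : upslope contributing cells (from any flow-routing algorithm)
          """
          dzdy, dzdx = np.gradient(dem, cell_size)
          slope = np.arctan(np.hypot(dzdx, dzdy))
          a = (flow_acc + 1) * cell_size                        # specific catchment area (m)
          return np.log(a / np.tan(np.maximum(slope, 1e-3)))    # clamp nearly flat cells

      # Toy example: a tilted plane with a uniform (hypothetical) flow accumulation.
      dem = np.add.outer(np.linspace(10, 0, 50), np.zeros(50))
      twi = topographic_wetness_index(dem, cell_size=8.0, flow_acc=np.full((50, 50), 20.0))
      print(float(twi.mean()))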

  18. Accuracy of Estimating Highly Eccentric Binary Black Hole Parameters with Gravitational-wave Detections

    Science.gov (United States)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-03-01

    Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries' orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M_tot (10 M_tot).
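
    The Fisher matrix technique named above has a standard form; as a hedged reminder (the high signal-to-noise limit, not the authors' specific implementation), for a waveform model h(θ) and one-sided detector noise spectral density S_n(f),

      \Gamma_{ij} = 4\,\mathrm{Re}\int_0^{\infty} \frac{\partial_{\theta_i}\tilde h(f)\,\partial_{\theta_j}\tilde h^{*}(f)}{S_n(f)}\,\mathrm{d}f, \qquad \Delta\theta_i \simeq \sqrt{\left(\Gamma^{-1}\right)_{ii}},

    with the Fisher matrices of the individual detectors in the network summed before inversion.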

  19. A phantom for verification of dwell position and time of a high dose rate brachytherapy source

    International Nuclear Information System (INIS)

    Madebo, M.; Kron, T.; Pillainayagam, J.; Franich, R.

    2012-01-01

    Accuracy of dwell position and reproducibility of dwell time are critical in high dose rate (HDR) brachytherapy. A phantom was designed to verify dwell position and dwell time reproducibility for an Ir-192 HDR stepping source using Computed Radiography (CR). The central part of the phantom, incorporating thin alternating strips of lead and acrylic, was used to measure dwell positions. The outer part of the phantom features recesses containing different absorber materials (lead, aluminium, acrylic and polystyrene foam), and was used for determining reproducibility of dwell times. Dwell position errors of <1 mm were easily detectable using the phantom. The effect of bending a transfer tube was studied with this phantom and no change of clinical significance was observed when varying the curvature of the transfer tube in typical clinical scenarios. Changes of dwell time as low as 0.1 s, the minimum dwell time of the treatment unit, could be detected by choosing dwell times over the four materials that produce identical exposure at the CR detector.

  20. A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System

    Directory of Open Access Journals (Sweden)

    Guanwu Zhou

    2014-07-01

    Full Text Available Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU) after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system's performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM acquires higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10⁻⁵/°C and 29.5 × 10⁻⁵/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10⁻⁵/°C and 2.1 × 10⁻⁵/°C respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor.
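
    A minimal sketch of an extreme learning machine regressor of the kind named above, applied to synthetic drift data; the network size, regularisation and data model are assumptions for illustration, not values from the paper.

      import numpy as np

      class ELMRegressor:
          """Single-hidden-layer extreme learning machine: random input weights, output
          weights solved in closed form by regularised least squares."""

          def __init__(self, n_hidden=40, seed=0):
              self.n_hidden = n_hidden
              self.rng = np.random.default_rng(seed)

          def fit(self, X, y):
              n_in = X.shape[1]
              self.W = self.rng.normal(size=(n_in, self.n_hidden))   # fixed random weights
              self.b = self.rng.normal(size=self.n_hidden)
              H = np.tanh(X @ self.W + self.b)
              # ridge-regularised normal equations for the output weights
              self.beta = np.linalg.solve(H.T @ H + 1e-6 * np.eye(self.n_hidden), H.T @ y)
              return self

          def predict(self, X):
              return np.tanh(X @ self.W + self.b) @ self.beta

      # Toy compensation example: predict the true pressure from the raw sensor output
      # and temperature, on synthetic data with a temperature-dependent drift.
      rng = np.random.default_rng(1)
      temp = rng.uniform(-40, 85, 500)
      p_true = rng.uniform(0, 10, 500)                       # MPa
      raw = p_true * (1 + 0.002 * temp) + 0.01 * temp        # drifting sensor reading
      X = np.column_stack([raw, temp])
      X = (X - X.mean(0)) / X.std(0)                         # normalise so tanh is not saturated
      model = ELMRegressor().fit(X[:400], p_true[:400])
      err = model.predict(X[400:]) - p_true[400:]
      print(f"RMS compensation error: {np.sqrt((err**2).mean()):.3f} MPa")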

  1. Accuracy verification of magnetic resonance imaging (MRI) technology for lower-limb prosthetic research: utilising animal soft tissue specimen and common socket casting materials.

    Science.gov (United States)

    Safari, Mohammad Reza; Rowe, Philip; Buis, Arjan

    2012-01-01

    Lower limb prosthetic socket shape and volume consistency can be quantified using MRI technology. Additionally, MRI images of the residual limb could be used as an input data for CAD-CAM technology and finite element studies. However, the accuracy of MRI when socket casting materials are used has to be defined. A number of six, 46 mm thick, cross-sections of an animal leg were used. Three specimens were wrapped with Plaster of Paris (POP) and the other three with commercially available silicone interface liner. Data was obtained by utilising MRI technology and then the segmented images compared to corresponding calliper measurement, photographic imaging, and water suspension techniques. The MRI measurement results were strongly correlated with actual diameter, surface area, and volume measurements. The results show that the selected scanning parameters and the semiautomatic segmentation method are adequate enough, considering the limit of clinical meaningful shape and volume fluctuation, for residual limb volume and the cross-sectional surface area measurements.

  2. Accuracy Verification of Magnetic Resonance Imaging (MRI Technology for Lower-Limb Prosthetic Research: Utilising Animal Soft Tissue Specimen and Common Socket Casting Materials

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Safari

    2012-01-01

    Full Text Available Lower limb prosthetic socket shape and volume consistency can be quantified using MRI technology. Additionally, MRI images of the residual limb could be used as an input data for CAD-CAM technology and finite element studies. However, the accuracy of MRI when socket casting materials are used has to be defined. A number of six, 46 mm thick, cross-sections of an animal leg were used. Three specimens were wrapped with Plaster of Paris (POP and the other three with commercially available silicone interface liner. Data was obtained by utilising MRI technology and then the segmented images compared to corresponding calliper measurement, photographic imaging, and water suspension techniques. The MRI measurement results were strongly correlated with actual diameter, surface area, and volume measurements. The results show that the selected scanning parameters and the semiautomatic segmentation method are adequate enough, considering the limit of clinical meaningful shape and volume fluctuation, for residual limb volume and the cross-sectional surface area measurements.

  3. Verification of the precision and accuracy of the TPS dosimetric calculations using the 3.0 version MIRS TECDOC-1583 of the IAEA (2008)

    International Nuclear Information System (INIS)

    Gonzalez Perez, Yelina; Rodriguez Zayas, Michael; Perez Guevara, Adrian; Sola Rodriguez, Yeline; Reyes Gonzalez, Tommy; Sanchez Zamora; Luis; Caballero, Roberto

    2009-01-01

    Radiotherapy is one of the basic therapeutic tools for treating malignant tumors. The treatment of a tumor with ionizing radiation is a continuous process with distinct stages, of which computerized planning is a fundamental component; it is in this phase that the patient's treatment is designed and calculated. Systems for Radiotherapy Treatment Planning (TPS) are the tools used to perform the treatment planning. The Radiotherapy Service of the Hospital Hermanos Ameijeiras acquired MIRS software version 3.0, which provides planning tools for conventional and 3D radiotherapy, handles multiple imaging studies, and calculates dose according to patient and equipment data. Because of the complexity of these calculations and the reliability they must have, the software should be subjected to rigorous acceptance testing. We verified the precision and accuracy of the TPS dosimetric calculations by applying the most recent IAEA acceptance protocol for these systems (2008). After implementation of the test set, it was not possible to verify that the dose calculations are within the allowed tolerances. (Author)

  4. Plutonium characterisation with prompt high energy gamma-rays from (n,gamma) reactions for nuclear warhead dismantlement verification

    Energy Technology Data Exchange (ETDEWEB)

    Postelt, Frederik; Gerald, Kirchner [Carl Friedrich von Weizsaecker-Centre for Science and Peace Research, Hamburg (Germany)

    2015-07-01

    Measurements of neutron induced gammas allow the characterisation of fissile material (i.e. plutonium and uranium), despite self- and additional shielding. Most prompt gamma-rays from radiative neutron capture reactions in fissile material have energies between 3 and 6.5 MeV. Such high energy photons have a high penetrability and therefore minimise shielding and self-absorption effects. They are also isotope specific and therefore well suited to determine the isotopic composition of fissile material. As they are non-destructive, their application in dismantlement verification is desirable. Disadvantages are low detector efficiencies at high gamma energies, as well as a high background of gammas which result from induced fission reactions in the fissile material, and delayed gammas from both (n,f) and (n,gamma) reactions. In this talk, simulations of (n,gamma) measurements and their implications are presented. Their potential for characterising fissile material is assessed and open questions are addressed.

  5. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    Science.gov (United States)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  6. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved with the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and they can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  7. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved with the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and they can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
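
    As a rough illustration of how optical-parameter errors propagate into attitude error (this is not the authors' model), the Python sketch below maps a star centroid to a boresight-frame unit vector with a simple pinhole model; the focal length, principal point, centroid position and error magnitudes are all assumed values.

        import numpy as np

        def star_vector(x, y, f, x0, y0):
            """Pinhole-model unit vector toward a star from its centroid (x, y) [mm],
            given focal length f [mm] and principal point (x0, y0) [mm]."""
            v = np.array([-(x - x0), -(y - y0), f])
            return v / np.linalg.norm(v)

        # Nominal vs. slightly mis-calibrated optical parameters (illustrative numbers)
        f_nom, x0_nom, y0_nom = 50.0, 0.0, 0.0            # mm
        f_err, x0_err, y0_err = 50.01, 0.02, -0.01        # small calibration errors

        centroid = (3.2, -1.7)                            # measured star centroid, mm
        v_true = star_vector(*centroid, f_nom, x0_nom, y0_nom)
        v_meas = star_vector(*centroid, f_err, x0_err, y0_err)

        # Angular error introduced by the parameter errors, in arcseconds
        ang_err = np.degrees(np.arccos(np.clip(v_true @ v_meas, -1.0, 1.0))) * 3600.0
        print(f"star-direction error: {ang_err:.1f} arcsec")

    With these assumed numbers, errors of a few tens of micrometres in the principal point translate into tens of arcseconds of direction error, which is why a precise calibration model is decisive for overall accuracy.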

  8. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    Science.gov (United States)

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, the original interpolation Cascade-Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-factor interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, 12.5% of the additions and 29.2% of the multiplications were eliminated while the computation remains fast. Considering the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is higher. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  9. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    Directory of Open Access Journals (Sweden)

    Peilu Liu

    2017-10-01

    Full Text Available In order to improve the accuracy of ultrasonic phased array focusing time delay, the original interpolation Cascade-Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-factor interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, 12.5% of the additions and 29.2% of the multiplications were eliminated while the computation remains fast. Considering the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is higher. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
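
    A minimal, non-parallel sketch of the underlying idea, assuming a 3-stage CIC interpolator with unity differential delay; it is not the authors' 8× parallel architecture, but it shows how 8× interpolation of a 125 MHz echo yields a 1 ns time-delay grid.

        import numpy as np

        def cic_interpolate(x, R=8, N=3, M=1):
            """Interpolate signal x by factor R with an N-stage CIC filter
            (differential delay M). Returns the upsampled, gain-normalized output."""
            y = np.asarray(x, dtype=float)
            # Comb sections at the low input rate: y[n] = y[n] - y[n-M]
            for _ in range(N):
                y = y - np.concatenate((np.zeros(M), y[:-M]))
            # Zero-stuffing to the high output rate
            up = np.zeros(len(y) * R)
            up[::R] = y
            # Integrator sections at the high output rate
            for _ in range(N):
                up = np.cumsum(up)
            return up / (R * M) ** N * R   # remove the CIC DC gain (R*M)^N / R

        # Example: a coarse echo sampled at 125 MHz, interpolated 8x -> 1 ns resolution
        fs = 125e6
        t = np.arange(64) / fs
        echo = np.exp(-((t - 2.5e-7) / 5e-8) ** 2) * np.sin(2 * np.pi * 5e6 * t)
        fine = cic_interpolate(echo, R=8, N=3)
        print(len(echo), "->", len(fine), "samples; fine time step =", 1 / (8 * fs), "s")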

  10. Nuclear test ban verification

    International Nuclear Information System (INIS)

    Chun, Kin-Yip

    1991-07-01

    This report describes verification and its rationale, the basic tasks of seismic verification, the physical basis for earthquake/explosion source discrimination and explosion yield determination, the technical problems pertaining to seismic monitoring of underground nuclear tests, the basic problem-solving strategy deployed by the forensic seismology research team at the University of Toronto, and the scientific significance of the team's research. The research carried out at the University of Toronto has two components: teleseismic verification using P wave recordings from the Yellowknife Seismic Array (YKA), and regional (close-in) verification using high-frequency Lg and Pn recordings from the Eastern Canada Telemetered Network. Major differences have been found in P wave attenuation among the propagation paths connecting the YKA listening post with seven active nuclear explosion testing areas in the world. Significant revisions have been made to previously published P wave attenuation results, leading to more interpretable nuclear explosion source functions. (11 refs., 12 figs.)

  11. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J [Dept. of Radiation Oncology, Konkuk University Medical Center, Seoul (Korea, Republic of); Chung, J [Dept. of Radiation Oncology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2015-06-15

    Purpose: To verify the doses delivered to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors in terms of beam delivery and dose calculation techniques in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with a tongue cancer were predicted according to the beam delivery method [step-and-shoot (SS) and sliding window (SW)], the intensity level for dose optimization, and the dose calculation algorithm. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), analytical anisotropic algorithm (AAA), and Acuros-XB. A lead shield of 2 mm thickness was designed to minimize the dose to the pacemaker. Dose variations caused by the heterogeneous material properties of the pacemaker and the effectiveness of the lead shield were predicted with Acuros-XB. Dose prediction accuracy and the feasibility of the dose reduction strategy were verified against the skin doses measured right above the pacemaker with MOSFET detectors during the radiation treatment. Results: Acuros-XB underestimated the skin doses and overestimated the lead-shield effect, although its dose disagreement was the lowest, and dose prediction improved with a higher intensity level of dose optimization in IMRT. The dedicated tertiary lead sheet reduced the pacemaker dose by up to 60%. Conclusion: The current SS technique delivered scattered doses below the recommended criteria; nevertheless, use of the lead sheet reduced the scattered doses further. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of scattered doses to patients with implanted medical devices is required to design a proper dose reduction strategy.

  12. SU-E-T-802: Verification of Implanted Cardiac Pacemaker Doses in Intensity-Modulated Radiation Therapy: Dose Prediction Accuracy and Reduction Effect of a Lead Sheet

    International Nuclear Information System (INIS)

    Lee, J; Chung, J

    2015-01-01

    Purpose: To verify the doses delivered to an implanted cardiac pacemaker, predicted doses with and without a dose reduction method were verified using MOSFET detectors in terms of beam delivery and dose calculation techniques in intensity-modulated radiation therapy (IMRT). Methods: The pacemaker doses for a patient with a tongue cancer were predicted according to the beam delivery method [step-and-shoot (SS) and sliding window (SW)], the intensity level for dose optimization, and the dose calculation algorithm. Dosimetric effects on the pacemaker were calculated with three dose engines: pencil-beam convolution (PBC), analytical anisotropic algorithm (AAA), and Acuros-XB. A lead shield of 2 mm thickness was designed to minimize the dose to the pacemaker. Dose variations caused by the heterogeneous material properties of the pacemaker and the effectiveness of the lead shield were predicted with Acuros-XB. Dose prediction accuracy and the feasibility of the dose reduction strategy were verified against the skin doses measured right above the pacemaker with MOSFET detectors during the radiation treatment. Results: Acuros-XB underestimated the skin doses and overestimated the lead-shield effect, although its dose disagreement was the lowest, and dose prediction improved with a higher intensity level of dose optimization in IMRT. The dedicated tertiary lead sheet reduced the pacemaker dose by up to 60%. Conclusion: The current SS technique delivered scattered doses below the recommended criteria; nevertheless, use of the lead sheet reduced the scattered doses further. A thin lead plate can be a useful tertiary shield and did not cause malfunction or electrical damage of the implanted pacemaker in IMRT. More accurate estimation of scattered doses to patients with implanted medical devices is required to design a proper dose reduction strategy.

  13. An EPID-based method for comprehensive verification of gantry, EPID and the MLC carriage positional accuracy in Varian linacs during arc treatments

    International Nuclear Information System (INIS)

    Rowshanfarzad, Pejman; McGarry, Conor K; Barnes, Michael P; Sabet, Mahsheed; Ebert, Martin A

    2014-01-01

    In modern radiotherapy, it is crucial to monitor the performance of all linac components including gantry, collimation system and electronic portal imaging device (EPID) during arc deliveries. In this study, a simple EPID-based measurement method has been introduced in conjunction with an algorithm to investigate the stability of these systems during arc treatments with the aim of ensuring the accuracy of linac mechanical performance. The Varian EPID sag, gantry sag, changes in source-to-detector distance (SDD), EPID and collimator skewness, EPID tilt, and the sag in MLC carriages as a result of linac rotation were separately investigated by acquisition of EPID images of a simple phantom comprised of 5 ball-bearings during arc delivery. A fast and robust software package was developed for automated analysis of image data. Twelve Varian linacs of different models were investigated. The average EPID sag was within 1 mm for all tested linacs. All machines showed less than 1 mm gantry sag. Changes in SDD values were within 1.7 mm except for three linacs of one centre which were within 9 mm. Values of EPID skewness and tilt were negligible in all tested linacs. The maximum sag in MLC leaf bank assemblies was around 1 mm. The EPID sag showed a considerable improvement in TrueBeam linacs. The methodology and software developed in this study provide a simple tool for effective investigation of the behaviour of linac components with gantry rotation. It is reproducible and accurate and can be easily performed as a routine test in clinics
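
    The sag analysis itself reduces to fitting the centroid displacement of each ball-bearing against gantry angle. A minimal sketch with synthetic data (not the authors' software) is shown below; the gravity-driven sag is modelled as A·cosθ + B·sinθ + C and its amplitude is reported.

        import numpy as np

        # Hypothetical measurements: vertical displacement (mm) of one ball-bearing
        # centroid on the EPID as a function of gantry angle (deg) during an arc.
        angles = np.linspace(0, 360, 37)                  # deg
        theta = np.radians(angles)
        rng = np.random.default_rng(0)
        disp = 0.4 * np.cos(theta) + 0.2 * np.sin(theta) + 0.05 * rng.standard_normal(theta.size)

        # Least-squares fit to A*cos(theta) + B*sin(theta) + C (gravity-driven sag model)
        G = np.column_stack((np.cos(theta), np.sin(theta), np.ones_like(theta)))
        (A, B, C), *_ = np.linalg.lstsq(G, disp, rcond=None)
        sag_amplitude = np.hypot(A, B)
        print(f"fitted sag amplitude: {sag_amplitude:.2f} mm "
              f"(peak-to-peak {2 * sag_amplitude:.2f} mm)")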

  14. High-accuracy determination of the neutron flux at n{sub T}OF

    Energy Technology Data Exchange (ETDEWEB)

    Barbagallo, M.; Colonna, N.; Mastromarco, M.; Meaze, M.; Tagliente, G.; Variale, V. [Sezione di Bari, INFN, Bari (Italy); Guerrero, C.; Andriamonje, S.; Boccone, V.; Brugger, M.; Calviani, M.; Cerutti, F.; Chin, M.; Ferrari, A.; Kadi, Y.; Losito, R.; Versaci, R.; Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Tsinganis, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); National Technical University of Athens (NTUA), Athens (Greece); Tarrio, D.; Duran, I.; Leal-Cidoncha, E.; Paradela, C. [Universidade de Santiago de Compostela, Santiago (Spain); Altstadt, S.; Goebel, K.; Langer, C.; Reifarth, R.; Schmidt, S.; Weigand, M. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (Germany); Andrzejewski, J.; Marganiec, J.; Perkowski, J. [Uniwersytet Lodzki, Lodz (Poland); Audouin, L.; Leong, L.S.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Becares, V.; Cano-Ott, D.; Garcia, A.R.; Gonzalez-Romero, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Becvar, F.; Krticka, M.; Kroll, J.; Valenta, S. [Charles University, Prague (Czech Republic); Belloni, F.; Fraval, K.; Gunsing, F.; Lampoudis, C.; Papaevangelou, T. [Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Berthoumieux, E.; Chiaveri, E. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Billowes, J.; Ware, T.; Wright, T. [University of Manchester, Manchester (United Kingdom); Bosnar, D.; Zugec, P. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Calvino, F.; Cortes, G.; Gomez-Hornillos, M.B.; Riego, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Carrapico, C.; Goncalves, I.F.; Sarmento, R.; Vaz, P. [Universidade Tecnica de Lisboa, Instituto Tecnologico e Nuclear, Instituto Superior Tecnico, Lisboa (Portugal); Cortes-Giraldo, M.A.; Praena, J.; Quesada, J.M.; Sabate-Gilarte, M. [Universidad de Sevilla, Sevilla (Spain); Diakaki, M.; Karadimos, D.; Kokkoris, M.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Domingo-Pardo, C.; Giubrone, G.; Tain, J.L. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Kivel, N.; Schumann, D.; Steinegger, P. [Paul Scherrer Institut, Villigen PSI (Switzerland); Dzysiuk, N.; Mastinu, P.F. [Laboratori Nazionali di Legnaro, INFN, Rome (Italy); Eleftheriadis, C.; Manousos, A. [Aristotle University of Thessaloniki, Thessaloniki (Greece); Ganesan, S.; Gurusamy, P.; Saxena, A. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Griesmayer, E.; Jericha, E.; Leeb, H. [Technische Universitaet Wien, Atominstitut, Wien (AT); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Jenkins, D.G.; Vermeulen, M.J. [University of York, Heslington, York (GB); Kaeppeler, F. [Institut fuer Kernphysik, Karlsruhe Institute of Technology, Campus Nord, Karlsruhe (DE); Koehler, P. [Oak Ridge National Laboratory (ORNL), Oak Ridge (US); Lederer, C. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE); University of Vienna, Faculty of Physics, Vienna (AT); Massimi, C.; Mingrone, F.; Vannini, G. [Universita di Bologna (IT); INFN, Sezione di Bologna, Dipartimento di Fisica, Bologna (IT); Mengoni, A.; Ventura, A. 
[Agenzia nazionale per le nuove tecnologie, l' energia e lo sviluppo economico sostenibile (ENEA), Bologna (IT); Milazzo, P.M. [Sezione di Trieste, INFN, Trieste (IT); Mirea, M. [Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Mondalaers, W.; Plompen, A.; Schillebeeckx, P. [Institute for Reference Materials and Measurements, European Commission JRC, Geel (BE); Pavlik, A.; Wallner, A. [University of Vienna, Faculty of Physics, Vienna (AT); Rauscher, T. [University of Basel, Department of Physics and Astronomy, Basel (CH); Roman, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Laboratori Nazionali del Gran Sasso dell' INFN, Assergi (AQ) (IT); Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE)

    2013-12-15

    The neutron flux of the n{sub T}OF facility at CERN was measured, after installation of the new spallation target, with four different systems based on three neutron-converting reactions, which represent accepted cross section standards in different energy regions. A careful comparison and combination of the different measurements allowed us to reach an unprecedented accuracy on the energy dependence of the neutron flux in the very wide range (thermal to 1 GeV) that characterizes the n{sub T}OF neutron beam. This is a pre-requisite for the high accuracy of cross section measurements at n{sub T}OF. An unexpected anomaly in the neutron-induced fission cross section of {sup 235}U is observed in the energy region between 10 and 30 keV, hinting at a possible overestimation of this important cross section, well above currently assigned uncertainties. (orig.)

  15. High Accuracy Attitude Control System Design for Satellite with Flexible Appendages

    Directory of Open Access Journals (Sweden)

    Wenya Zhou

    2014-01-01

    Full Text Available In order to realize high accuracy attitude control of a satellite with flexible appendages, an attitude control system consisting of a controller and a structural filter was designed. When a low-order vibration frequency of the flexible appendages approaches the bandwidth of the attitude control system, the vibration signal enters the control system through the measurement devices and degrades the accuracy or even the stability. To reduce this impact, the structural filter is designed to reject the vibration of the flexible appendages. Considering the potential problem of in-orbit frequency variation of the flexible appendages, a design method for an adaptive notch filter is proposed based on in-orbit identification technology. Finally, simulation results are given to demonstrate the feasibility and effectiveness of the proposed design techniques.
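
    As a simplified illustration of the structural-filter idea (not the paper's design), the sketch below tunes a second-order IIR notch to an identified flexible-mode frequency and applies it to a simulated attitude-rate signal; the sampling rate, mode frequency and quality factor are assumed values.

        import numpy as np
        from scipy import signal

        fs = 100.0          # attitude-sensor sampling rate, Hz (assumed)
        f_flex = 2.3        # flexible-mode frequency identified in orbit, Hz (assumed)

        # Second-order IIR notch tuned to the identified mode; it would be re-designed
        # whenever the in-orbit identification reports a new mode frequency.
        b, a = signal.iirnotch(w0=f_flex, Q=20.0, fs=fs)

        # Simulated rate measurement: slow rigid-body motion plus flexible-mode vibration
        t = np.arange(0, 20, 1 / fs)
        rigid = 0.01 * np.sin(2 * np.pi * 0.05 * t)
        rate = rigid + 0.005 * np.sin(2 * np.pi * f_flex * t)
        rate_filtered = signal.lfilter(b, a, rate)

        # Residual vibration after the notch (steady-state portion only)
        print("residual vibration (rad/s, rms):",
              np.std(rate_filtered[-500:] - rigid[-500:]))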

  16. High-accuracy numerical integration of charged particle motion – with application to ponderomotive force

    International Nuclear Information System (INIS)

    Furukawa, Masaru; Ohkawa, Yushiro; Matsuyama, Akinobu

    2016-01-01

    A high-accuracy numerical integration algorithm for charged particle motion is developed. The algorithm is based on Hamiltonian mechanics and operator decomposition. It is constructed to be time-reversal symmetric, and its order of accuracy can be increased to any order by using a recurrence formula. One of its advantages is that it is an explicit method. An effective way to decompose the time evolution operator is examined; the Poisson tensor is decomposed and non-canonical variables are adopted. The algorithm is extended to the case of time-dependent fields by introducing the extended phase space. Numerical tests showing the performance of the algorithm are presented. One is pure cyclotron motion over a long time period, and the other is charged particle motion in a rapidly oscillating field. (author)
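
    A minimal sketch in the same spirit (not the authors' scheme): a time-reversal-symmetric splitting step for pure cyclotron motion that alternates half drifts with an exact gyro-rotation about the local magnetic field, in normalized units.

        import numpy as np

        q_over_m = 1.0                       # charge-to-mass ratio (normalized units)

        def B_field(x):
            """Static magnetic field; uniform along z here for illustration."""
            return np.array([0.0, 0.0, 1.0])

        def rotate_about(v, b_hat, angle):
            """Rotate vector v about unit axis b_hat by 'angle' (Rodrigues' formula)."""
            return (v * np.cos(angle)
                    + np.cross(b_hat, v) * np.sin(angle)
                    + b_hat * (b_hat @ v) * (1 - np.cos(angle)))

        def step(x, v, dt):
            """One symmetric step: half drift, exact gyro-rotation, half drift."""
            x = x + 0.5 * dt * v
            B = B_field(x)
            Bmag = np.linalg.norm(B)
            v = rotate_about(v, B / Bmag, -q_over_m * Bmag * dt)
            x = x + 0.5 * dt * v
            return x, v

        # Pure cyclotron motion: the speed should be preserved over a long run
        x, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
        dt = 0.1
        for _ in range(20000):
            x, v = step(x, v, dt)
        print("speed drift:", abs(np.linalg.norm(v) - 1.0))   # stays at round-off level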

  17. High-accuracy defect sizing for CRDM penetration adapters using the ultrasonic TOFD technique

    International Nuclear Information System (INIS)

    Atkinson, I.

    1995-01-01

    Ultrasonic time-of-flight diffraction (TOFD) is the preferred technique for critical sizing of through-wall oriented defects in a wide range of components, primarily because it is intrinsically more accurate than amplitude-based techniques. For the same reason, TOFD is the preferred technique for sizing the cracks in control rod drive mechanism (CRDM) penetration adapters, which have been the subject of much recent attention. Once the considerable problem of restricted access for the UT probes has been overcome, this inspection lends itself to very high accuracy defect sizing using TOFD. In qualification trials under industrial conditions, depth sizing to an accuracy of ≤ 0.5 mm has been routinely achieved throughout the full wall thickness (16 mm) of the penetration adapters, using only a single probe pair and without recourse to signal processing. (author)
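
    The depth estimate underlying TOFD follows from simple path geometry. A minimal sketch, assuming a symmetric probe pair, a known longitudinal wave velocity and probe/wedge delays already removed (the values below are illustrative, not from the qualification trials):

        import numpy as np

        def tofd_depth(t_us, probe_sep_mm, c_mm_per_us=5.9):
            """Depth of a diffracting crack tip below the scanning surface, from the
            time of flight t (microseconds) of the tip-diffracted signal.
            probe_sep_mm is the full transmitter-receiver separation 2S; c is the
            longitudinal wave velocity (~5.9 mm/us in steel)."""
            S = probe_sep_mm / 2.0
            half_path = c_mm_per_us * t_us / 2.0
            return np.sqrt(half_path**2 - S**2)

        # Example: 2S = 20 mm, tip-diffracted arrival 4.10 us after the transmit pulse
        print(f"tip depth: {tofd_depth(4.10, 20.0):.2f} mm")

    The intrinsic accuracy advantage comes from the fact that the depth depends only on arrival time and geometry, not on signal amplitude.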

  18. High accuracy of family history of melanoma in Danish melanoma cases

    DEFF Research Database (Denmark)

    Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie

    2015-01-01

    The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. Family history of melanoma is a known major risk factor...... but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old, and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma...

  19. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    Science.gov (United States)

    Frey, Bradley J.; Leviton, Douglas B.

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  20. Image Processing Based Signature Verification Technique to Reduce Fraud in Financial Institutions

    Directory of Open Access Journals (Sweden)

    Hussein Walid

    2016-01-01

    Full Text Available Handwritten signatures are broadly utilized for personal verification in financial institutions, which creates the need for a robust automatic signature verification tool. This tool aims to reduce fraud in all related financial transaction sectors. This paper proposes an online, robust, and automatic signature verification technique using recent advances in image processing and machine learning. Once the image of a handwritten signature for a customer is captured, several pre-processing steps are performed on it, including filtration and detection of the signature edges. Afterwards, a feature extraction process is applied to the image to extract Speeded Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT) features. Finally, a verification process is developed and applied to compare the extracted image features with those stored in the database for the specified customer. Results indicate the high accuracy, simplicity, and rapidity of the developed technique, which are the main criteria for judging a signature verification tool in banking and other financial institutions.
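
    A minimal sketch of the feature-matching stage only, using OpenCV SIFT with Lowe's ratio test (SURF is omitted because it is not available in stock OpenCV builds); the file names and the match-count threshold are hypothetical, and the pre-processing described in the record is not shown.

        import cv2

        def match_score(img_query, img_reference, ratio=0.75):
            """Number of 'good' SIFT matches between a captured signature image and
            the stored reference image, using the ratio test."""
            sift = cv2.SIFT_create()
            _, des1 = sift.detectAndCompute(img_query, None)
            _, des2 = sift.detectAndCompute(img_reference, None)
            if des1 is None or des2 is None:
                return 0
            matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
            good = [m[0] for m in matches
                    if len(m) == 2 and m[0].distance < ratio * m[1].distance]
            return len(good)

        # Hypothetical file names; in practice the reference features would come from
        # the customer's record in the database.
        query = cv2.imread("captured_signature.png", cv2.IMREAD_GRAYSCALE)
        reference = cv2.imread("stored_signature.png", cv2.IMREAD_GRAYSCALE)
        score = match_score(query, reference)
        print("genuine" if score > 30 else "suspect", f"({score} good matches)")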

  1. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    International Nuclear Information System (INIS)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  2. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke, E-mail: ksheng@mednet.ucla.edu [Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California 90024 (United States)

    2015-11-15

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  3. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery.

    Science.gov (United States)

    Yu, Victoria Y; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A; Sheng, Ke

    2015-11-01

    Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was attributed to phantom setup
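
    Conceptually, the deliverability check for each candidate orientation reduces to a clearance test between point clouds, with a safety buffer absorbing the model-versus-machine discrepancy. The sketch below (toy geometry, not the authors' model) illustrates this with a KD-tree nearest-neighbour query; the 3 cm buffer is an assumed value chosen to be of the order of the reported 2.97 cm maximum gantry-to-phantom discrepancy.

        import numpy as np
        from scipy.spatial import cKDTree

        def min_clearance(gantry_pts, obstacle_pts):
            """Minimum distance (cm) between the gantry surface point cloud and the
            couch/patient point cloud at one candidate machine orientation."""
            tree = cKDTree(obstacle_pts)
            d, _ = tree.query(gantry_pts, k=1)
            return d.min()

        def is_deliverable(gantry_pts, obstacle_pts, buffer_cm=3.0):
            """Reject a beam orientation if the modelled clearance is below the
            assumed safety buffer."""
            return min_clearance(gantry_pts, obstacle_pts) > buffer_cm

        # Toy geometry (cm): a patch of 'gantry' points above a patch of 'phantom' points
        rng = np.random.default_rng(1)
        gantry = rng.uniform([-20, -20, 40], [20, 20, 42], size=(500, 3))
        phantom = rng.uniform([-15, -15, 0], [15, 15, 20], size=(500, 3))
        print("clearance:", round(min_clearance(gantry, phantom), 1), "cm;",
              "deliverable" if is_deliverable(gantry, phantom) else "collision risk")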

  4. ECG Sensor Verification System with Mean-Interval Algorithm for Handling Sport Issue

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2016-01-01

    Full Text Available With the development of biometric verification, we propose a new algorithm and a personal mobile sensor card system for ECG verification. The proposed mean-interval approach can identify the user quickly with high accuracy and consumes only a small amount of flash memory in the microprocessor. The new framework of the mobile card system makes ECG verification a feasible application and overcomes the issues of a centralized database. For a fair and comprehensive evaluation, the experimental results were obtained on public MIT-BIH ECG databases and on our circuit system; they confirm that the proposed scheme provides excellent accuracy with low complexity. Moreover, we also propose a multiple-state solution to handle the heart rate changes caused by sport. To our knowledge, this is the first work to address the issue of sport in ECG verification.
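
    The published mean-interval algorithm operates on the ECG waveform itself; the deliberately simplified sketch below only illustrates the surrounding decision flow, assuming R-peaks are already detected and that one template per physiological state (rest, exercise) has been enrolled, echoing the multiple-state idea for handling sport. All values are hypothetical.

        import numpy as np

        def mean_rr(r_peak_times):
            """Mean R-R interval (s) from detected R-peak times."""
            return float(np.mean(np.diff(r_peak_times)))

        def verify(r_peak_times, templates, tol=0.06):
            """Accept if the measured mean R-R interval is within 'tol' seconds of
            any enrolled state template (e.g. 'rest' and 'exercise')."""
            m = mean_rr(r_peak_times)
            return any(abs(m - t) < tol for t in templates.values()), m

        enrolled = {"rest": 0.85, "exercise": 0.52}        # hypothetical templates (s)
        peaks = np.array([0.00, 0.84, 1.70, 2.55, 3.41])   # hypothetical R-peak times (s)
        ok, m = verify(peaks, enrolled)
        print(f"mean R-R = {m:.3f} s ->", "verified" if ok else "rejected")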

  5. Verification and validation in computational fluid dynamics

    Science.gov (United States)

    Oberkampf, William L.; Trucano, Timothy G.

    2002-04-01

    Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different

  6. Recovery Act. Verification of Geothermal Tracer Methods in Highly Constrained Field Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Matthew W. [California State University, Long Beach, CA (United States)

    2014-05-16

    The prediction of geothermal system efficiency is strongly linked to the character of the flow system that connects injector and producer wells. If water flow develops channels or “short circuiting” between injection and extraction wells, thermal sweep is poor and much of the reservoir is left untapped. The purpose of this project was to understand how channelized flow develops in fractured geothermal reservoirs and how it can be measured in the field. We explored two methods of assessing channelization: hydraulic connectivity tests and tracer tests. These methods were tested at a field site using two verification methods: ground penetrating radar (GPR) images of saline tracer and heat transfer measurements using distributed temperature sensing (DTS). The field site for these studies was the Altona Flat Fractured Rock Research Site located in northeastern New York State. Altona Flat Rock is an experimental site considered a geologic analog for some geothermal reservoirs given its low matrix porosity. Because the soil overburden is thin, it provided unique access to saturated bedrock fractures and the ability to image using GPR, which does not effectively penetrate most soils. Five boreholes were drilled in a “five spot” pattern covering 100 m2 and hydraulically isolated in a single bedding plane fracture. This simple system allowed a complete characterization of the fracture. Nine small diameter boreholes were drilled from the surface to just above the fracture to allow the measurement of heat transfer between the fracture and the rock matrix. The focus of the hydraulic investigation was periodic hydraulic testing. In such tests, rather than pumping or injecting into a well at a constant rate, the flow is varied to produce an oscillating pressure signal. This pressure signal is sensed in other wells, and the attenuation and phase lag between the source and receptor is an indication of hydraulic connection. We found that these tests were much more effective than constant

  7. High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry

    Directory of Open Access Journals (Sweden)

    Guy Jean-Pierre Schumann

    2016-01-01

    Full Text Available Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services as well as many environmental process models. At present, on the one hand, globally available DEMs only meet basic requirements and, for many services and modeling studies, are not of high enough spatial resolution and lack vertical accuracy. On the other hand, LiDAR DEMs are of very high spatial resolution and great vertical accuracy, but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km, and they also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be to have a DEM technology that allows larger spatial coverage than LiDAR without compromising resolution and vertical accuracy, while still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single-pass InSAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m for an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass InSAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis of floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion of requirements, advantages and caveats in terms of instrument and data processing.

  8. Accuracy of High-Resolution Ultrasonography in the Detection of Extensor Tendon Lacerations.

    Science.gov (United States)

    Dezfuli, Bobby; Taljanovic, Mihra S; Melville, David M; Krupinski, Elizabeth A; Sheppard, Joseph E

    2016-02-01

    Lacerations to the extensor mechanism are usually diagnosed clinically. Ultrasound (US) has been a growing diagnostic tool for tendon injuries since the 1990s. To date, no publication has established the accuracy and reliability of US in the evaluation of extensor mechanism lacerations in the hand. The purpose of this study is to determine the accuracy of US in detecting extensor tendon injuries in the hand. Sixteen fingers and 4 thumbs in 4 fresh-frozen and thawed cadaveric hands were used. Sixty-eight 0.5-cm transverse skin lacerations were created. Twenty-seven extensor tendons were sharply transected. The remaining skin lacerations were used as sham dissection controls. One US technologist and one fellowship-trained musculoskeletal radiologist performed real-time dynamic US studies in and out of a water bath. A second fellowship-trained musculoskeletal radiologist subsequently reviewed the static US images. Dynamic and static US interpretation accuracy was assessed using dissection as "truth." All 27 extensor tendon lacerations and controls were identified correctly with dynamic imaging as either injury models with a transected extensor tendon or sham controls with intact extensor tendons (sensitivity = 100%, specificity = 100%, positive predictive value = 1.0; all significantly greater than chance). Static imaging had a sensitivity of 85%, specificity of 89%, and accuracy of 88% (all significantly greater than chance). The results of dynamic real-time versus static US imaging were clearly different but did not reach statistical significance. Diagnostic US is a very accurate noninvasive study that can identify extensor mechanism injuries. In clinically suspected cases of acute extensor tendon injury, high-frequency US can aid and/or confirm the diagnosis, with dynamic imaging providing added value compared to static imaging. Ultrasonography, to aid in the diagnosis of extensor mechanism lacerations, can be successfully used in a reliable and

  9. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    Science.gov (United States)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone based impression-taking of prepared teeth followed by plaster casting is well-established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound based micro-scanning system remains a critical parameter, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single element transducer were separated and corresponding 3D surface models were calculated for both the fundamentals and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound based tooth digitization can be an alternative to optical impression-taking.

  10. Technical accuracy of a neuronavigation system measured with a high-precision mechanical micromanipulator.

    Science.gov (United States)

    Kaus, M; Steinmeier, R; Sporer, T; Ganslandt, O; Fahlbusch, R

    1997-12-01

    This study was designed to determine and evaluate the different system-inherent sources of erroneous target localization of a light-emitting diode (LED)-based neuronavigation system (StealthStation, Stealth Technologies, Boulder, CO). The localization accuracy was estimated by applying a high-precision mechanical micromanipulator to move and exactly locate (+/- 0.1 micron) the pointer at multiple positions in the physical three-dimensional space. The localization error was evaluated by calculating the spatial distance between the (known) LED positions and the LED coordinates measured by the neuronavigator. The results are based on a study of approximately 280,000 independent coordinate measurements. The maximum localization error detected was 0.55 +/- 0.29 mm, with the z direction (distance to the camera array) being the most erroneous coordinate. Minimum localization error was found at a distance of 1400 mm from the central camera (optimal measurement position). Additional error due to 1) mechanical vibrations of the camera tripod (+/- 0.15 mm) and the reference frame (+/- 0.08 mm) and 2) extrapolation of the pointer tip position from the LED coordinates of at least +/- 0.12 mm were detected, leading to a total technical error of 0.55 +/- 0.64 mm. Based on this technical accuracy analysis, a set of handling recommendations is proposed, leading to an improved localization accuracy. The localization error could be reduced by 0.3 +/- 0.15 mm by correct camera positioning (1400 mm distance) plus 0.15 mm by vibration-eliminating fixation of the camera. Correct handling of the probe during the operation may improve the accuracy by up to 0.1 mm.

  11. Broadband EIT borehole measurements with high phase accuracy using numerical corrections of electromagnetic coupling effects

    International Nuclear Information System (INIS)

    Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A

    2013-01-01

    Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now
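
    As a first-order illustration of the inductive part of the correction only (the capacitive part requires the full electrical forward model and is omitted here), the sketch below subtracts a mutual-inductance term jωM from a measured transfer impedance; the 100 nH value and the 10-ohm impedance are assumed, and the resulting sub-milliradian phase errors are of the order of the 0.8 mrad accuracy quoted above.

        import numpy as np

        def correct_inductive(Z_measured, freqs_hz, M_henry):
            """Leading-order removal of the inductive coupling between the current and
            potential wire pairs of a long multicore cable: the mutual inductance M
            adds a term j*omega*M to the measured transfer impedance."""
            omega = 2 * np.pi * np.asarray(freqs_hz)
            return np.asarray(Z_measured) - 1j * omega * M_henry

        # Hypothetical numbers: 10-ohm transfer impedance, M = 100 nH for a long cable
        freqs = np.array([1e1, 1e2, 1e3, 1e4])
        Z_true = np.full(freqs.shape, 10.0 + 0j)
        Z_meas = Z_true + 1j * 2 * np.pi * freqs * 100e-9
        print("phase error before (mrad):", np.round(np.angle(Z_meas) * 1e3, 3))
        print("phase error after  (mrad):",
              np.round(np.angle(correct_inductive(Z_meas, freqs, 100e-9)) * 1e3, 3))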

  12. Ultra-high accuracy optical testing: creating diffraction-limitedshort-wavelength optical systems

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, KeithH.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman,Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli,Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa

    2005-08-03

    Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.

  13. Ultra-high accuracy optical testing: creating diffraction-limited short-wavelength optical systems

    International Nuclear Information System (INIS)

    Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, KeithH.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa

    2005-01-01

    Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date

  14. Modality Switching in a Property Verification Task: An ERP Study of What Happens When Candles Flicker after High Heels Click.

    Science.gov (United States)

    Collins, Jennifer; Pecher, Diane; Zeelenberg, René; Coulson, Seana

    2011-01-01

    The perceptual modalities associated with property words, such as flicker or click, have previously been demonstrated to affect subsequent property verification judgments (Pecher et al., 2003). Known as the conceptual modality switch effect, this finding supports the claim that brain systems for perception and action help subserve the representation of concepts. The present study addressed the cognitive and neural substrate of this effect by recording event-related potentials (ERPs) as participants performed a property verification task with visual or auditory properties in key trials. We found that for visual property verifications, modality switching was associated with an increased amplitude N400. For auditory verifications, switching led to a larger late positive complex. Observed ERP effects of modality switching suggest property words access perceptual brain systems. Moreover, the timing and pattern of the effects suggest perceptual systems impact the decision-making stage in the verification of auditory properties, and the semantic stage in the verification of visual properties.

  15. The use of high accuracy NAA for the certification of NIST botanical standard reference materials

    International Nuclear Information System (INIS)

    Becker, D.A.; Greenberg, R.R.; Stone, S.F.

    1992-01-01

    Neutron activation analysis is one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). NAA competes favorably with all other techniques because of its unique capabilities for high accuracy, even at very low concentrations, for many elements. In this paper, instrumental and radiochemical NAA results are described for 25 elements in two new NIST SRMs, SRM 1515 (Apple Leaves) and SRM 1547 (Peach Leaves), and are compared to the certified values for 19 elements in these two new botanical reference materials. (author) 7 refs.; 4 tabs

  16. High-accuracy critical exponents for O(N) hierarchical 3D sigma models

    International Nuclear Information System (INIS)

    Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.

    2006-01-01

    The critical exponent γ and its subleading exponent Δ in the 3D O(N) Dyson's hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ⃗·φ⃗ − 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, with at least 4 significant digits.

  17. High-accuracy mass determination of unstable nuclei with a Penning trap mass spectrometer

    CERN Multimedia

    2002-01-01

    The mass of a nucleus is its most fundamental property. A systematic study of nuclear masses as a function of neutron and proton number allows the observation of collective and single-particle effects in nuclear structure. Accurate mass data are the most basic test of nuclear models and are essential for their improvement. This is especially important for the astrophysical study of nuclear synthesis. In order to achieve the required high accuracy, the mass of ions captured in a Penning trap is determined via their cyclotron frequency $ \

  18. A variational nodal diffusion method of high accuracy; Varijaciona nodalna difuziona metoda visoke tachnosti

    Energy Technology Data Exchange (ETDEWEB)

    Tomasevic, Dj; Altiparmarkov, D [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)

    1988-07-01

    A variational nodal diffusion method with accurate treatment of the transverse leakage shape is developed and presented in this paper. Using a Legendre expansion in the transverse coordinates, higher order quasi-one-dimensional nodal equations are formulated. The numerical solution has been carried out using analytical solutions in alternating directions, assuming a Legendre expansion of the RHS term. The method has been tested against the 2D and 3D IAEA benchmark problems, as well as the 2D CANDU benchmark problem. The results are highly accurate. The first order approximation yields the same order of accuracy as standard nodal methods with quadratic leakage approximation, while the second order reproduces the reference solution. (author)

  19. A new ultra-high-accuracy angle generator: current status and future direction

    Science.gov (United States)

    Guertin, Christian F.; Geckeler, Ralf D.

    2017-09-01

    The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations of full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize the propagation of calibration errors. Our initial feasibility research shows that, upon scaling to a full prototype and including additional calibration techniques, we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and to offer the immense advantage of a highly automatable and customizable product to the commercial market.

  20. Sensor-fusion-based biometric identity verification

    International Nuclear Information System (INIS)

    Carlson, J.J.; Bouchard, A.M.; Osbourn, G.C.; Martinez, R.F.; Bartholomew, J.W.; Jordan, J.B.; Flachs, G.M.; Bao, Z.; Zhu, L.

    1998-02-01

    Future generation automated human biometric identification and verification will require multiple features/sensors together with internal and external information sources to achieve high performance, accuracy, and reliability in uncontrolled environments. The primary objective of the proposed research is to develop a theoretical and practical basis for identifying and verifying people using standoff biometric features that can be obtained with minimal inconvenience during the verification process. The basic problem involves selecting sensors and discovering features that provide sufficient information to reliably verify a person's identity under the uncertainties caused by measurement errors and tactics of uncooperative subjects. A system was developed for discovering hand, face, ear, and voice features and fusing them to verify the identity of people. The system obtains its robustness and reliability by fusing many coarse and easily measured features into a near minimal probability of error decision algorithm

  1. High Accuracy, Miniature Pressure Sensor for Very High Temperatures, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SiWave proposes to develop a compact, low-cost MEMS-based pressure sensor for very high temperatures and low pressures in hypersonic wind tunnels. Most currently...

  2. Verification and validation guidelines for high integrity systems: Appendices A--D, Volume 2

    International Nuclear Information System (INIS)

    Hecht, H.; Hecht, M.; Dinsmore, G.; Hecht, S.; Tang, D.

    1995-03-01

    The following material is furnished as an experimental guide for the use of risk based classification for nuclear plant protection systems. As shown in Sections 2 and 3 of this report, safety classifications for the nuclear field are application based (using the function served as the primary criterion), whereas those in use by the process industry and the military are risk based. There are obvious obstacles to the use of risk based classifications (and the associated integrity levels) for nuclear power plants, yet there are also many potential benefits, including: it considers all capabilities provided for dealing with a specific hazard, thus assigning a lower risk where multiple protection is provided (either at the same or at lower layers); this permits the plant management to perform trade-offs between systems that meet the highest qualification levels or multiple diverse systems at lower qualification levels; it motivates the use (and therefore also the development) of protection systems with demonstrated low failure probability; and it may permit lower cost process industry equipment of an established integrity level to be used in nuclear applications (subject to verification of the integrity level and regulatory approval). The totality of these benefits may reduce the cost of digital protection systems significantly and motivate utilities to much more rapid upgrading of the capabilities than is currently the case. Therefore, the outline of a risk based classification is presented here, to serve as a starting point for further investigation and possible trial application.

  3. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    International Nuclear Information System (INIS)

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) The Comprehensive Test Ban Treaty and (2) The Stockpile Life Extension Program which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve the confidence in the computational results by demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan

  4. A survey on the high reliability software verification and validation technology for instrumentation and control in NPP.

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Kee Choon; Lee, Chang Soo; Dong, In Sook [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-01-01

    This document presents the technical status of the software verification and validation (V and V) efforts to support developing and licensing digital instrumentation and control (I and C) systems in nuclear power plants. We have reviewed codes and standards that serve as consensus criteria among vendor, licensee and licenser. We then describe the software licensing procedures under 10 CFR 50 and 10 CFR 52 of the United States to cope with the licensing barrier. Finally, we survey the technical issues related to developing and licensing high integrity software for digital I and C systems. These technical issues indicate the development direction for our own software V and V methodology. (Author) 13 refs., 2 figs.

  5. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    Science.gov (United States)

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  6. Real-time monitoring and verification of in vivo high dose rate brachytherapy using a pinhole camera

    International Nuclear Information System (INIS)

    Duan, Jun; Macey, Daniel J.; Pareek, Prem N.; Brezovich, Ivan A.

    2001-01-01

    We investigated a pinhole imaging system for independent in vivo monitoring and verification of high dose rate (HDR) brachytherapy treatment. The system consists of a high-resolution pinhole collimator, an x-ray fluoroscope, and a standard radiographic screen-film combination. Autofluoroscopy provides real-time images of the in vivo Ir-192 HDR source for monitoring the source location and movement, whereas autoradiography generates a permanent record of source positions on film. Dual-pinhole autoradiographs render stereo-shifted source images that can be used to reconstruct the source dwell positions in three dimensions. The dynamic range and spatial resolution of the system were studied with a polystyrene phantom using a range of source strengths and dwell times. For the range of source activity used in HDR brachytherapy, a 0.5 mm diameter pinhole produced sharp fluoroscopic images of the source within the dynamic range of the fluoroscope. With a source-to-film distance of 35 cm and a 400 speed screen-film combination, the same pinhole yielded well recognizable images of a 281.2 GBq (7.60 Ci) Ir-192 source for dwell times in the typical clinical range of 2 to 400 s. This 0.5 mm diameter pinhole could clearly resolve source positions separated by lateral displacements as small as 1 mm. Using a simple reconstruction algorithm, dwell positions in a phantom were derived from stereo-shifted dual-pinhole images and compared to the known positions. The agreement was better than 1 mm. A preliminary study of a patient undergoing HDR treatment for cervical cancer suggests that the imaging method is clinically feasible. Based on these studies we believe that the pinhole imaging method is capable of providing independent and reliable real-time monitoring and verification for HDR brachytherapy

  7. Real-Time Verification of a High-Dose-Rate Iridium 192 Source Position Using a Modified C-Arm Fluoroscope

    Energy Technology Data Exchange (ETDEWEB)

    Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp [Department of Radiation Oncology, Nippon Medical School Tamanagayama Hospital, Tama (Japan); Chatani, Masashi [Department of Radiation Oncology, Osaka Rosai Hospital, Sakai (Japan); Otani, Yuki [Department of Radiology, Kaizuka City Hospital, Kaizuka (Japan); Teshima, Teruki [Department of Radiation Oncology, Osaka Medical Center for Cancer and Cardiovascular Diseases, Osaka (Japan); Kumita, Shinichirou [Department of Radiology, Nippon Medical School Hospital, Tokyo (Japan)

    2017-03-15

    Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed with quarter-frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use.

  8. SNP Data Quality Control in a National Beef and Dairy Cattle System and Highly Accurate SNP Based Parentage Verification and Identification

    Directory of Open Access Journals (Sweden)

    Matthew C. McClure

    2018-03-01

    Full Text Available A major use of genetic data is parentage verification and identification, as inaccurate pedigrees negatively affect genetic gain. Since 2012 the international standard for single nucleotide polymorphism (SNP) verification in Bos taurus cattle has been the ISAG SNP panels. While these ISAG panels provide an increased level of parentage accuracy over microsatellite markers (MS), they can validate the wrong parent at ≤1% misconcordance rate levels, indicating that more SNP are needed if a more accurate pedigree is required. With rapidly increasing numbers of cattle being genotyped in Ireland that represent 61 B. taurus breeds from a wide range of farm types: beef/dairy, AI/pedigree/commercial, purebred/crossbred, and large to small herd size, the Irish Cattle Breeding Federation (ICBF) analyzed different SNP densities to determine that at a minimum ≥500 SNP are needed to consistently predict only one set of parents at a ≤1% misconcordance rate. For parentage validation and prediction ICBF uses 800 SNP (ICBF800) selected based on SNP clustering quality, ISAG200 inclusion, call rate (CR), and minor allele frequency (MAF) in the Irish cattle population. Large datasets require sample and SNP quality control (QC). Most publications only deal with SNP QC via CR, MAF, parent-progeny conflicts, and Hardy-Weinberg deviation, but not sample QC. We report here parentage, SNP QC, and genomic sample QC pipelines to deal with the unique challenges of >1 million genotypes from a national herd, such as SNP genotype errors from mis-tagging of animals, lab errors, farm errors, and multiple other issues that can arise. We divide the pipeline into two parts: a Genotype QC and an Animal QC pipeline. The Genotype QC identifies samples with low call rate, missing or mixed genotype classes (no BB genotype or ABTG alleles present), and low genotype frequencies. The Animal QC handles situations where the genotype might not belong to the listed individual by identifying: >1 non

  9. A generalized polynomial chaos based ensemble Kalman filter with high accuracy

    International Nuclear Information System (INIS)

    Li Jia; Xiu Dongbin

    2009-01-01

    As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
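
    As a concrete illustration of the approach summarized above, the following minimal sketch (not the authors' implementation; the surrogate coefficients, ensemble size, observation value and noise level are all assumptions) draws a very large EnKF ensemble from a cheap polynomial-chaos surrogate of a scalar state and then applies a standard perturbed-observation analysis step.

```python
# Hedged sketch: gPC surrogate sampling feeding a classical EnKF update (1-D toy problem).
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

# The forecast state is assumed to depend on a standard-normal random input xi.
# A degree-3 Hermite (gPC) surrogate x(xi) = sum_k c_k He_k(xi) stands in for the
# expensive forward model; the coefficients below are hypothetical.
coeff = np.array([1.0, 0.5, 0.1, 0.02])

def gpc_forecast(xi):
    """Evaluate the gPC surrogate at the sampled germs xi."""
    return hermevander(xi, 3) @ coeff

# Step 1: sample the surrogate with a very large ensemble (cheap, per the abstract).
n_ens = 100_000
xi = rng.standard_normal(n_ens)
x_f = gpc_forecast(xi)                      # forecast ensemble

# Step 2: classical perturbed-observation EnKF analysis with a scalar observation.
obs, obs_var = 1.3, 0.05 ** 2
hx = x_f                                    # identity observation operator
k_gain = np.cov(x_f, hx)[0, 1] / (np.var(hx) + obs_var)
y_pert = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
x_a = x_f + k_gain * (y_pert - hx)          # analysis ensemble

print(f"forecast mean {x_f.mean():.3f}, analysis mean {x_a.mean():.3f}")
```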

  10. A high accuracy algorithm of displacement measurement for a micro-positioning stage

    Directory of Open Access Journals (Sweden)

    Xiang Zhang

    2017-05-01

    Full Text Available A high accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed, based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations is conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability, and that the resolution can theoretically reach 0.01 pixel. In addition, the computation time is reduced by a factor of about 6.7 compared with the classical normalized cross correlation algorithm. To validate the practical performance of the proposed algorithm, a laser interferometer measurement system (LIMS) is built up. The experimental results demonstrate that the algorithm has better adaptability than the LIMS.
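
    The two-stage matching idea described above can be illustrated with a generic sketch: coarse integer-pixel registration by FFT cross-correlation followed by a parabolic fit around the correlation peak for sub-pixel refinement. This is not the paper's algorithm (which uses normalized cross correlation and its own sub-pixel procedure); it only shows the coarse-to-fine idea.

```python
# Hedged sketch: integer-pixel + sub-pixel displacement estimation on 2-D images.
import numpy as np

def displacement(ref, cur):
    """Return (dy, dx), the shift of `cur` relative to `ref` (same-size 2-D arrays)."""
    # Integer-pixel stage: circular cross-correlation computed with FFTs.
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(ref - ref.mean()))
                                * np.fft.fft2(cur - cur.mean())))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    # Sub-pixel stage: 3-point parabolic interpolation through the peak.
    def parabolic(c_m, c_0, c_p):
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

    ny, nx = corr.shape
    dy = py + parabolic(corr[py - 1, px], corr[py, px], corr[(py + 1) % ny, px])
    dx = px + parabolic(corr[py, px - 1], corr[py, px], corr[py, (px + 1) % nx])

    # Map shifts larger than half the frame to negative displacements.
    dy = dy - ny if dy > ny / 2 else dy
    dx = dx - nx if dx > nx / 2 else dx
    return dy, dx

# Quick self-check with a known integer shift.
ref = np.random.default_rng(0).random((64, 64))
cur = np.roll(ref, (3, -5), axis=(0, 1))
print(displacement(ref, cur))     # approximately (3.0, -5.0)
```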

  11. Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.

    Science.gov (United States)

    Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua

    2011-05-15

    High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time consuming to verify these using existing experimental techniques. Therefore, we describe here a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction with accuracy up to 94.0% among different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.
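
    The classifier combination mentioned above can be sketched generically with scikit-learn; the feature matrix and labels below are synthetic placeholders, not the miRD feature set, and the snippet only shows an SVM and a boosting model evaluated on the same data.

```python
# Hedged sketch: SVM and boosting classifiers compared on synthetic hairpin-style features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
# Hypothetical features: e.g. stem length, loop size, GC content, minimum free energy.
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=n) > 0).astype(int)

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
boost = AdaBoostClassifier(n_estimators=200, random_state=0)

print("SVM CV accuracy     :", cross_val_score(svm, X, y, cv=5).mean())
print("Boosting CV accuracy:", cross_val_score(boost, X, y, cv=5).mean())
```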

  12. High Accuracy mass Measurement of the very Short-Lived Halo Nuclide $^{11}$Li

    CERN Multimedia

    Le scornet, G

    2002-01-01

    The archetypal halo nuclide $^{11}$Li has now attracted a wealth of experimental and theoretical attention. The most outstanding property of this nuclide, its extended radius that makes it as big as $^{48}$Ca, is highly dependent on the binding energy of the two neutrons forming the halo. New generation experiments using radioactive beams with elastic proton scattering, knock-out and transfer reactions, together with $\textit{ab initio}$ calculations, require the tightening of the constraint on the binding energy. Good metrology also requires confirmation of the sole existing precision result to guard against a possible systematic deviation (or mistake). We propose a high accuracy mass determination of $^{11}$Li, a particularly challenging task due to its very short half-life of 8.6 ms, but one perfectly suiting the MISTRAL spectrometer, now commissioned at ISOLDE. We request 15 shifts of beam time.

  13. Computer modeling of oil spill trajectories with a high accuracy method

    International Nuclear Information System (INIS)

    Garcia-Martinez, Reinaldo; Flores-Tovar, Henry

    1999-01-01

    This paper proposes a high accuracy numerical method to model oil spill trajectories using a particle-tracking algorithm. The Euler method, used to calculate oil trajectories, can give adequate solutions in most open ocean applications. However, this method may not predict accurate particle trajectories in certain highly non-uniform velocity fields near coastal zones or in river problems. Simple numerical experiments show that the Euler method may also introduce artificial numerical dispersion that could lead to overestimation of spill areas. This article proposes a fourth-order Runge-Kutta method with fourth-order velocity interpolation to calculate oil trajectories that minimise these problems. The algorithm is implemented in the OilTrack model to predict oil trajectories following the 'Nissos Amorgos' oil spill accident that occurred in the Gulf of Venezuela in 1997. Despite lack of adequate field information, model results compare well with observations in the impacted area. (Author)
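
    For readers unfamiliar with the integration scheme being advocated, here is a minimal particle-tracking sketch of a classical fourth-order Runge-Kutta step; the rotational velocity field is a made-up example, not data or code from the OilTrack model.

```python
# Hedged sketch: RK4 advection of one tracer particle in a prescribed velocity field.
import numpy as np

def velocity(pos):
    """Hypothetical solid-body-rotation velocity field u(x, y), used only for illustration."""
    x, y = pos
    return np.array([-y, x])

def rk4_step(pos, dt, vel=velocity):
    k1 = vel(pos)
    k2 = vel(pos + 0.5 * dt * k1)
    k3 = vel(pos + 0.5 * dt * k2)
    k4 = vel(pos + dt * k3)
    return pos + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Track the particle for roughly a quarter revolution; a forward Euler step of the
# same size would spiral outwards (the artificial dispersion mentioned above).
pos = np.array([1.0, 0.0])
for _ in range(157):                 # dt = 0.01, total time ~ pi/2
    pos = rk4_step(pos, 0.01)
print(pos)                           # close to (0, 1) with RK4
```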

  14. Treatment accuracy of hypofractionated spine and other highly conformal IMRT treatments

    International Nuclear Information System (INIS)

    Sutherland, B.; Hanlon, P.; Charles, P.

    2011-01-01

    Full text: Spinal cord metastases pose difficult challenges for radiation treatment due to tight dose constraints and a concave PTV. This project aimed to thoroughly test the treatment accuracy of the Eclipse Treatment Planning System (TPS) for highly modulated IMRT treatments, in particular of the thoracic spine, using an Elekta Synergy Linear Accelerator. The increased understanding obtained through different quality assurance techniques allowed recommendations to be made for treatment site commissioning with improved accuracy at the Princess Alexandra Hospital (PAH). Three thoracic spine IMRT plans at the PAH were used for data collection. Complex phantom models were built using CT data, and fields simulated using Monte Carlo modelling. The simulated dose distributions were compared with the TPS using gamma analysis and DVH comparison. High resolution QA was done for all fields using the MatriXX ion chamber array, the MapCHECK2 diode array (shifted), and the EPID to determine a procedure for commissioning new treatment sites. Basic spine simulations found the TPS overestimated the absorbed dose to bone; however, within the spinal cord there was good agreement. High resolution QA found the average gamma pass rate of the fields to be 99.1% for MatriXX, 96.5% for MapCHECK2 shifted and 97.7% for EPID. Preliminary results indicate agreement between the TPS and delivered dose distributions higher than previously believed for the investigated IMRT plans. The poor resolution of the MatriXX and normalisation issues with MapCHECK2 lead to the probable recommendation of the EPID for future IMRT commissioning due to its high resolution and minimal setup required.
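
    The pass rates quoted above come from gamma analysis. The simplified 1-D sketch below shows only the bare gamma-index definition (Low et al.) with global 3%/3 mm criteria on synthetic profiles; clinical QA software works on 2-D/3-D dose grids with interpolation, so treat this as an illustration of the metric rather than a QA tool.

```python
# Hedged sketch: 1-D global gamma index on synthetic dose profiles.
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Return the gamma value at each reference point."""
    d_norm = dose_tol * ref_dose.max()                 # global dose criterion
    gammas = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        dist2 = ((positions - x_r) / dist_tol_mm) ** 2
        dose2 = ((eval_dose - d_r) / d_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Toy profiles on a 1 mm grid; pass rate = fraction of points with gamma <= 1.
x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50.0) / 20.0) ** 2)
ev = np.exp(-((x - 50.8) / 20.0) ** 2) * 1.01          # small shift and scaling error
g = gamma_1d(ref, ev, x)
print((g <= 1.0).mean() * 100.0, "% pass")
```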

  15. High Accuracy, High Energy He-ERD Analysis of H, D, and T

    International Nuclear Information System (INIS)

    Browning, James F.; Langley, Robert A.; Doyle, Barney L.; Banks, James C.; Wampler, William R.

    1999-01-01

    A new analysis technique using high-energy helium ions for the simultaneous elastic recoil detection of all three hydrogen isotopes in metal hydride systems, extending to depths of several μm, is presented. Analysis shows that it is possible to separate each hydrogen isotope in a heavy matrix such as erbium to depths of 5 μm using incident 11.48 MeV ⁴He²⁺ ions with a detection system composed of a range foil and a ΔE-E telescope detector. Newly measured cross sections for the elastic recoil scattering of ⁴He²⁺ ions from protons and deuterons are presented in the energy range 10 to 11.75 MeV for a laboratory recoil angle of 30°.

  16. PACMAN Project: A New Solution for the High-accuracy Alignment of Accelerator Components

    CERN Document Server

    Mainaud Durand, Helene; Buzio, Marco; Caiazza, Domenico; Catalán Lasheras, Nuria; Cherif, Ahmed; Doytchinov, Iordan; Fuchs, Jean-Frederic; Gaddi, Andrea; Galindo Munoz, Natalia; Gayde, Jean-Christophe; Kamugasa, Solomon; Modena, Michele; Novotny, Peter; Russenschuck, Stephan; Sanz, Claude; Severino, Giordana; Tshilumba, David; Vlachakis, Vasileios; Wendt, Manfred; Zorzetti, Silvia

    2016-01-01

    The beam alignment requirements for the next generation of lepton colliders have become increasingly challenging. As an example, the alignment requirements for the three major collider components of the CLIC linear collider are as follows. Before the first beam circulates, the Beam Position Monitors (BPM), Accelerating Structures (AS) and quadrupoles will have to be aligned up to 10 μm w.r.t. a straight line over 200 m long segments, along the 20 km of linacs. PACMAN is a study on Particle Accelerator Components' Metrology and Alignment to the Nanometre scale. It is an Innovative Doctoral Program, funded by the EU and hosted by CERN, providing high quality training to 10 Early Stage Researchers working towards a PhD thesis. The technical aim of the project is to improve the alignment accuracy of the CLIC components by developing new methods and tools addressing several steps of alignment simultaneously, to gain time and accuracy. The tools and methods developed will be validated on a test bench. This paper pr...

  17. High Accuracy Mass Measurement of the Dripline Nuclides $^{12,14}$Be

    CERN Multimedia

    2002-01-01

    State-of-the-art, three-body nuclear models that describe halo nuclides require the binding energy of the halo neutron(s) as a critical input parameter. In the case of $^{14}$Be, the uncertainty of this quantity is currently far too large (130 keV), inhibiting efforts at detailed theoretical description. A high accuracy, direct mass determination of $^{14}$Be (as well as $^{12}$Be to obtain the two-neutron separation energy) is therefore required. The measurement can be performed with the MISTRAL spectrometer, which is presently the only possible solution due to the required accuracy (10 keV) and short half-life (4.5 ms). Having achieved a 5 keV uncertainty for the mass of $^{11}$Li (8.6 ms), MISTRAL has proved the feasibility of such measurements. Since the current ISOLDE production rate of $^{14}$Be is only about 10/s, the installation of a beam cooler is underway in order to improve MISTRAL transmission. The projected improvement of an order of magnitude (in each transverse direction) will make this measureme...

  18. Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains

    Science.gov (United States)

    Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana

    2015-01-01

    The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide a large-scale spatial representation of ETref, which is essential for regional scale water resources management. Data used in the development of NOAA daily ETref maps are derived from observations over surfaces that are different from short (grass — ETos) or tall (alfalfa — ETrs) reference crops, often in nonagricultural settings, which carries an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy, using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for TXHPET locations and compared against ground measurements on reference grass surfaces. NOAA ETref maps generally overestimated the TXHPET observations (by 1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to NLDAS modeled air temperature and wind speed data, or an adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of NOAA ETref maps.

  19. High Accuracy Beam Current Monitor System for CEBAF'S Experimental Hall A

    International Nuclear Information System (INIS)

    J. Denard; A. Saha; G. Lavessiere

    2001-01-01

    The CEBAF accelerator delivers continuous wave (CW) electron beams to three experimental halls. In Hall A, all experiments require continuous, non-invasive current measurements, and a few experiments require an absolute accuracy of 0.2% in the current range from 1 to 180 μA. A Parametric Current Transformer (PCT), manufactured by Bergoz, has an accurate and stable sensitivity of 4 μA/V, but its offset drifts at the μA level over time preclude its direct use for continuous measurements. Two cavity monitors are calibrated against the PCT with at least 50 μA of beam current. The calibration procedure suppresses the error due to the PCT's offset drifts by turning the beam on and off, which is invasive to the experiment. One of the goals of the system is to minimize the calibration time without compromising the measurement's accuracy. The linearity of the cavity monitors is a critical parameter for transferring the accurate calibration done at high currents over the whole dynamic range. The method for accurately measuring the linearity is described.

  20. Medication adherence assessment: high accuracy of the new Ingestible Sensor System in kidney transplants.

    Science.gov (United States)

    Eisenberger, Ute; Wüthrich, Rudolf P; Bock, Andreas; Ambühl, Patrice; Steiger, Jürg; Intondi, Allison; Kuranoff, Susan; Maier, Thomas; Green, Damian; DiCarlo, Lorenzo; Feutren, Gilles; De Geest, Sabina

    2013-08-15

    This open-label single-arm exploratory study evaluated the accuracy of the Ingestible Sensor System (ISS), a novel technology for directly assessing the ingestion of oral medications and treatment adherence. ISS consists of an ingestible event marker (IEM), a microsensor that becomes activated in gastric fluid, and an adhesive personal monitor (APM) that detects IEM activation. In this study, the IEM was combined to enteric-coated mycophenolate sodium (ECMPS). Twenty stable adult kidney transplants received IEM-ECMPS for a mean of 9.2 weeks totaling 1227 cumulative days. Eight patients prematurely discontinued treatment due to ECMPS gastrointestinal symptoms (n=2), skin intolerance to APM (n=2), and insufficient system usability (n=4). Rash or erythema due to APM was reported in 7 (37%) patients, all during the first month of use. No serious or severe adverse events and no rejection episode were reported. IEM detection accuracy was 100% over 34 directly observed ingestions; Taking Adherence was 99.4% over a total of 2824 prescribed IEM-ECMPS ingestions. ISS could detect accurately the ingestion of two IEM-ECMPS capsules taken at the same time (detection rate of 99.3%, n=2376). ISS is a promising new technology that provides highly reliable measurements of intake and timing of intake of drugs that are combined with the IEM.

  1. Combined Scintigraphy and Tumor Marker Analysis Predicts Unfavorable Histopathology of Neuroblastic Tumors with High Accuracy.

    Directory of Open Access Journals (Sweden)

    Wolfgang Peter Fendler

    Full Text Available Our aim was to improve the prediction of unfavorable histopathology (UH) in neuroblastic tumors through combined imaging and biochemical parameters. 123I-MIBG SPECT and MRI were performed before surgical resection or biopsy in 47 consecutive pediatric patients with neuroblastic tumor. The semi-quantitative tumor-to-liver count-rate ratio (TLCRR), MRI tumor size and margins, urine catecholamine levels, and blood levels of neuron specific enolase (NSE) were recorded. The accuracy of single and combined variables for prediction of UH was tested by ROC analysis with Bonferroni correction. 34 of 47 patients had UH based on the International Neuroblastoma Pathology Classification (INPC). TLCRR and serum NSE both predicted UH with moderate accuracy. The optimal cut-off for TLCRR was 2.0, resulting in 68% sensitivity and 100% specificity (AUC-ROC 0.86, p < 0.001). The optimal cut-off for NSE was 25.8 ng/ml, resulting in 74% sensitivity and 85% specificity (AUC-ROC 0.81, p = 0.001). Combination of the TLCRR/NSE criteria reduced false negative findings from 11/9 to only five, with improved sensitivity and specificity of 85% (AUC-ROC 0.85, p < 0.001). Strong 123I-MIBG uptake and a high serum level of NSE were each predictive of UH. Combined analysis of both parameters improved the prediction of UH in patients with neuroblastic tumor. MRI parameters and urine catecholamine levels did not predict UH.
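
    To make the cut-off arithmetic concrete, the short sketch below computes sensitivity and specificity for a TLCRR threshold of 2.0 on entirely hypothetical data; the numbers are illustrative and unrelated to the study's 47 patients.

```python
# Hedged sketch: sensitivity/specificity at a fixed cut-off, on made-up values.
import numpy as np

# Hypothetical TLCRR values and histology labels (1 = unfavourable), for illustration only.
tlcrr = np.array([0.8, 1.1, 2.4, 3.0, 1.9, 2.8, 0.6, 2.2, 3.5, 1.2])
unfav = np.array([0,   0,   1,   1,   0,   1,   0,   1,   1,   0  ])

cutoff = 2.0
pred = tlcrr >= cutoff                                  # predicted unfavourable
sens = (pred & (unfav == 1)).sum() / (unfav == 1).sum()  # true positives / all positives
spec = (~pred & (unfav == 0)).sum() / (unfav == 0).sum() # true negatives / all negatives
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```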

  2. Enhancing the Accuracy of Advanced High Temperature Mechanical Testing through Thermography

    Directory of Open Access Journals (Sweden)

    Jonathan Jones

    2018-03-01

    Full Text Available This paper describes the advantages and enhanced accuracy that thermography provides to high temperature mechanical testing. The technique is used not only to monitor but also to control test specimen temperatures, where the infra-red approach enables accurate non-invasive control of rapid thermal cycling for non-metallic materials. Isothermal and dynamic waveforms are applied over a 200–800 °C temperature range to pre-oxidised and coated specimens to assess the capability of the technique. This application shows thermography to be accurate to within ±2 °C of thermocouples, a standardised measurement technique. This work demonstrates the superior visibility of test temperatures, previously unobtainable by conventional thermocouples or even more modern pyrometers, that thermography can deliver. As a result, the speed and accuracy of thermal profiling, thermal gradient measurements and cold/hot spot identification using the technique have increased significantly, to the point where temperature can now be controlled by averaging over a specified area. The increased visibility of specimen temperatures has revealed additional unknown effects such as thermocouple shadowing, preferential crack tip heating within an induction coil, and the fundamental response time of individual measurement techniques, which are investigated further.

  3. An output amplitude configurable wideband automatic gain control with high gain step accuracy

    International Nuclear Information System (INIS)

    He Xiaofeng; Ye Tianchun; Mo Taishan; Ma Chengyan

    2012-01-01

    An output amplitude configurable wideband automatic gain control (AGC) with high gain step accuracy for a GNSS receiver is presented. The output amplitude of the AGC is configurable in order to cooperate with baseband chips to achieve interference suppression and to be compatible with ADCs of different full ranges. Moreover, gain-boosting technology is introduced and the circuit is improved to increase the step accuracy. A zero, composed of the source feedback resistance and the source capacitance, is introduced to compensate for the pole. The AGC is fabricated in a 0.18 μm CMOS process. The AGC shows a 62 dB gain control range in 1 dB steps with a gain error of less than 0.2 dB. The AGC provides a 3 dB bandwidth larger than 80 MHz, the overall current consumption is less than 1.8 mA, and the die area is 800 × 300 μm². (semiconductor integrated circuits)

  4. IMOM Field Test Study and Accuracy Verification

    National Research Council Canada - National Science Library

    Levien, Fred

    1998-01-01

    .... It was desired to obtain flight test data for both TAMPS and IMOM in order to compare their ability to accurately predict the effects of Radar Terrain Masking (RTM). It was initially planned to have NPS compare predictive data from both of these systems and then do analysis of how they compared to actual field test data.

  5. Bibliography for Verification and Validation in Computational Simulation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1998-01-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering

  6. Bibliography for Verification and Validation in Computational Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.

    1998-10-01

    A bibliography has been compiled dealing with the verification and validation of computational simulations. The references listed in this bibliography are concentrated in the field of computational fluid dynamics (CFD). However, references from the following fields are also included: operations research, heat transfer, solid dynamics, software quality assurance, software accreditation, military systems, and nuclear reactor safety. This bibliography, containing 221 references, is not meant to be comprehensive. It was compiled during the last ten years in response to the author's interest and research in the methodology for verification and validation. The emphasis in the bibliography is in the following areas: philosophy of science underpinnings, development of terminology and methodology, high accuracy solutions for CFD verification, experimental datasets for CFD validation, and the statistical quantification of model validation. This bibliography should provide a starting point for individual researchers in many fields of computational simulation in science and engineering.

  7. High accuracy of family history of melanoma in Danish melanoma cases.

    Science.gov (United States)

    Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie

    2015-12-01

    The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. Family history of melanoma is a known major risk factor, but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma probands who reported 199 cases of melanoma in relatives, of which 135 cases were in first degree relatives. We confirmed the diagnosis of melanoma in 77% of all relatives, and in 83% of first degree relatives. In 181 probands we validated the negative family history of melanoma in 748 first degree relatives and found only 1 case of melanoma, which was not reported in a 3-case melanoma family. Melanoma patients in Denmark report family history of melanoma in first and second degree relatives with a high level of accuracy, with a true positive predictive value between 77% and 87%. In 99% of probands reporting a negative family history of melanoma in first degree relatives this information is correct. In clinical practice we recommend that melanoma diagnoses in relatives should be verified if possible, but even unverified reported melanoma cases in relatives should be included in the indication for genetic testing and the assessment of melanoma risk in the family.

  8. High accuracy Primary Reference gas Mixtures for high-impact greenhouse gases

    Science.gov (United States)

    Nieuwenkamp, Gerard; Zalewska, Ewelina; Pearce-Hill, Ruth; Brewer, Paul; Resner, Kate; Mace, Tatiana; Tarhan, Tanil; Zellweger, Christophe; Mohn, Joachim

    2017-04-01

    Climate change, due to increased man-made emissions of greenhouse gases, poses one of the greatest risks to society worldwide. High-impact greenhouse gases (CO2, CH4 and N2O) and indirect drivers for global warming (e.g. CO) are measured by the global monitoring stations for greenhouse gases, operated and organized by the World Meteorological Organization (WMO). Reference gases for the calibration of analyzers have to meet a very challenging low level of measurement uncertainty to comply with the Data Quality Objectives (DQOs) set by the WMO. Within the framework of the European Metrology Research Programme (EMRP), a project to improve the metrology for high-impact greenhouse gases was granted (HIGHGAS, June 2014-May 2017). As a result of the HIGHGAS project, primary reference gas mixtures in cylinders for ambient levels of CO2, CH4, N2O and CO in air have been prepared with unprecedentedly low uncertainties, typically 3-10 times lower than previously achieved by the NMIs. To accomplish these low uncertainties in the reference standards, a number of preparation and analysis steps have been studied and improved. The purity analysis of the parent gases had to be performed with lower detection limits than previously achievable. For example, to achieve an uncertainty of 2·10⁻⁹ mol/mol (absolute) on the amount fraction of N2O, the detection limit for the N2O analysis in the parent gases has to be in the sub-nmol/mol domain. Results of an OPO-CRDS analyzer set-up in the 5 µm wavelength domain, with a 200·10⁻¹² mol/mol detection limit for N2O, will be presented. The adsorption effects of greenhouse gas components at cylinder surfaces are critical, and have been studied for different cylinder passivation techniques. Results of a two-year stability study will be presented. The fitness for purpose of the reference materials was studied with respect to possible variations in isotopic composition between the reference material and the sample. Measurement results for a suite of CO2 in air

  9. Accuracy optimization of high-speed AFM measurements using Design of Experiments

    DEFF Research Database (Denmark)

    Tosello, Guido; Marinello, F.; Hansen, Hans Nørgaard

    2010-01-01

    Atomic Force Microscopy (AFM) is being increasingly employed in industrial micro/nano manufacturing applications and integrated into production lines. In order to achieve reliable process and product control at high measuring speed, instrument optimization is needed. Quantitative AFM measurement...... results are influenced by a number of scan setting parameters, defining topography sampling and measurement time: resolution (number of profiles and points per profile), scan range and direction, scanning force and speed. Such parameters influence lateral and vertical accuracy and, eventually......, the estimated dimensions of measured features. The definition of scan settings is based on a comprehensive optimization that targets maximization of information from collected data and minimization of measurement uncertainty and scan time. The Design of Experiments (DOE) technique is proposed and applied....

  10. Recent high-accuracy measurements of the 1S0 neutron-neutron scattering length

    International Nuclear Information System (INIS)

    Howell, C.R.; Chen, Q.; Gonzalez Trotter, D.E.; Salinas, F.; Crowell, A.S.; Roper, C.D.; Tornow, W.; Walter, R.L.; Carman, T.S.; Hussein, A.; Gibbs, W.R.; Gibson, B.F.; Morris, C.; Obst, A.; Sterbenz, S.; Whitton, M.; Mertens, G.; Moore, C.F.; Whiteley, C.R.; Pasyuk, E.; Slaus, I.; Tang, H.; Zhou, Z.; Gloeckle, W.; Witala, H.

    2000-01-01

    This paper reports two recent high-accuracy determinations of the ¹S₀ neutron-neutron scattering length, a_nn. One was done at the Los Alamos National Laboratory using the π⁻-d capture reaction to produce two neutrons with low relative momentum. The neutron-deuteron (nd) breakup reaction was used in the other measurement, which was conducted at the Triangle Universities Nuclear Laboratory. The results from the two determinations were consistent with each other and with previous values obtained using the π⁻-d capture reaction. The value obtained from the nd breakup measurements is a_nn = -18.7 ± 0.1 (statistical) ± 0.6 (systematic) fm, and the value from the π⁻-d capture experiment is a_nn = -18.50 ± 0.05 ± 0.53 fm. The recommended value is a_nn = -18.5 ± 0.3 fm. (author)

  11. High accuracy amplitude and phase measurements based on a double heterodyne architecture

    International Nuclear Information System (INIS)

    Zhao Danyang; Wang Guangwei; Pan Weimin

    2015-01-01

    In the digital low level RF (LLRF) system of a circular (particle) accelerator, the RF field signal is usually down-converted to a fixed intermediate frequency (IF). The ratio of the IF to the sampling frequency determines the processing required, and differs between LLRF systems. It is generally desirable to design a universally compatible architecture for different IFs with no change to the sampling frequency or algorithm. A new RF detection method based on a double heterodyne architecture for a wide IF range has been developed, which achieves the high accuracy requirement of modern LLRF. In this paper, the relation between IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift over 16 h of IF detection are suppressed by amplitude and phase calibrations. (authors)
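
    The amplitude and phase detection discussed above can be illustrated with a generic digital I/Q demodulation of a sampled IF signal; the sampling and intermediate frequencies below are assumptions, and this is not the authors' double heterodyne implementation.

```python
# Hedged sketch: recovering amplitude and phase of an IF signal by digital I/Q demodulation.
import numpy as np

fs = 100e6          # sampling frequency (assumed)
f_if = 25e6         # intermediate frequency (assumed)
n = 1024
t = np.arange(n) / fs

# Synthetic IF signal with known amplitude and phase for the self-check below.
amp_true, phase_true = 0.8, np.deg2rad(30.0)
sig = amp_true * np.cos(2 * np.pi * f_if * t + phase_true)

# Mix with quadrature references and low-pass by averaging over whole IF periods.
i = 2.0 * np.mean(sig * np.cos(2 * np.pi * f_if * t))
q = -2.0 * np.mean(sig * np.sin(2 * np.pi * f_if * t))

print(np.hypot(i, q), np.rad2deg(np.arctan2(q, i)))   # ~0.8 and ~30 degrees
```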

  12. High-accuracy biodistribution analysis of adeno-associated virus variants by double barcode sequencing.

    Science.gov (United States)

    Marsic, Damien; Méndez-Gómez, Héctor R; Zolotukhin, Sergei

    2015-01-01

    Biodistribution analysis is a key step in the evaluation of adeno-associated virus (AAV) capsid variants, whether natural isolates or produced by rational design or directed evolution. Indeed, when screening candidate vectors, accurate knowledge about which tissues are infected and how efficiently is essential. We describe the design, validation, and application of a new vector, pTR-UF50-BC, encoding a bioluminescent protein, a fluorescent protein and a DNA barcode, which can be used to visualize localization of transduction at the organism, organ, tissue, or cellular levels. In addition, by linking capsid variants to different barcoded versions of the vector and amplifying the barcode region from various tissue samples using barcoded primers, biodistribution of viral genomes can be analyzed with high accuracy and efficiency.

  13. Accuracy and high-speed technique for autoprocessing of Young's fringes

    Science.gov (United States)

    Chen, Wenyi; Tan, Yushan

    1991-12-01

    In this paper, an accurate and high-speed method for the automatic processing of Young's fringes is proposed. Groups of 1-D sampled intensity values along three or more different directions are taken from the Young's fringes, and the fringe spacing along each direction is obtained by a 1-D FFT. The two directions with the smaller fringe spacings are selected from all directions. The accurate fringe spacings along these two directions are then obtained using the orthogonal coherent phase detection (OCPD) technique. The actual spacing and angle of the Young's fringes can therefore be calculated. The principle of OCPD is introduced in detail, and the accuracy of the method is evaluated theoretically and experimentally.
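
    A minimal sketch of the first processing stage described above, estimating the fringe spacing along one sampling direction from the dominant 1-D FFT peak, is given below for a synthetic profile; the orthogonal coherent phase detection refinement is not reproduced here.

```python
# Hedged sketch: coarse fringe-spacing estimate from the dominant 1-D FFT peak.
import numpy as np

n, spacing_true = 512, 23.7          # samples and fringe spacing in pixels (assumed)
x = np.arange(n)
profile = 1.0 + np.cos(2 * np.pi * x / spacing_true)    # synthetic fringe profile

spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
k_peak = np.argmax(spectrum[1:]) + 1                    # skip the DC bin
spacing_est = n / k_peak                                # coarse spacing estimate
print(spacing_est)                                      # ~23.3, limited by the FFT bin width
```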

  14. DEVELOPMENT OF COMPLEXITY, ACCURACY, AND FLUENCY IN HIGH SCHOOL STUDENTS’ WRITTEN FOREIGN LANGUAGE PRODUCTION

    Directory of Open Access Journals (Sweden)

    Bouchaib Benzehaf

    2016-11-01

    Full Text Available The present study aims to longitudinally depict the dynamic and interactive development of Complexity, Accuracy, and Fluency (CAF) in multilingual learners’ L2 and L3 writing. The data sources include free writing tasks written in L2 French and L3 English by 45 high school participants over a period of four semesters. CAF dimensions are measured using a variation of Hunt's T-units (1964). Analysis of the quantitative data obtained suggests that CAF measures develop differently for learners' L2 French and L3 English. They increase more persistently in L3 English, and they display the characteristics of a dynamic, non-linear system characterized by ups and downs particularly in L2 French. In light of the results, we suggest more and denser longitudinal data to explore the nature of interactions between these dimensions in foreign language development, particularly at the individual level.

  15. Accuracy of thick-walled hollows during piercing on three-high mill

    International Nuclear Information System (INIS)

    Potapov, I.N.; Romantsev, B.A.; Shamanaev, V.I.; Popov, V.A.; Kharitonov, E.A.

    1975-01-01

    Results of investigations are presented concerning the accuracy of the geometrical dimensions of thick-walled sleeves produced by piercing on the 100-ton MISiS three-high screw rolling mill with three schemes of fixing and centering the rod. The use of a spherical thrust journal for the rod and of a long centering bushing makes it possible to diminish the non-uniformity of the wall thickness of the sleeves by 30-50%. It is established that thick-walled sleeves with accurate geometrical dimensions (non-uniformity of the wall thickness less than 10%) can be produced if the sleeve-mandrel-rod system is highly rigid and the rod has a two- or three-fold stability margin over a length equal to that of the sleeve being pierced. It is expedient to carry out the piercing process with increased feed angles (14-16 deg). Blanks were made from steel 12Kh1MF.

  16. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images
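
    As a point of reference for the regularization methods named above, this sketch discretizes a first-kind blurring integral equation and solves it with plain Tikhonov regularization; the kernel, noise level and regularization parameter are assumptions, and the paper's fast multiscale solvers are not reproduced.

```python
# Hedged sketch: Tikhonov-regularized solution of a discretized first-kind integral equation.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Gaussian (out-of-focus style) blurring kernel K(s, t), discretized by the midpoint rule.
sigma = 0.03
K = h * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

f_true = (np.abs(x - 0.5) < 0.15).astype(float)         # piecewise-constant "image"
g = K @ f_true + 1e-3 * np.random.default_rng(1).standard_normal(n)   # blurred + noisy data

lam = 1e-3                                              # regularization parameter (assumed)
f_tik = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)           # Tikhonov solution
print(np.linalg.norm(f_tik - f_true) / np.linalg.norm(f_true))        # relative error
```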

  17. Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems

    Science.gov (United States)

    Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.

    2015-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm with high accuracy and precision from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the ability for handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.

  18. Innovative Technique for High-Accuracy Remote Monitoring of Surface Water

    Science.gov (United States)

    Gisler, A.; Barton-Grimley, R. A.; Thayer, J. P.; Crowley, G.

    2016-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems and agricultural waterways. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm with high accuracy and precision from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the ability for handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for monitoring water resources on fast timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.

  19. High-accuracy continuous airborne measurements of greenhouse gases (CO2 and CH4) during BARCA

    Science.gov (United States)

    Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.

    2009-12-01

    High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This analyzer was flown without a drying system or any in-flight calibration gases. Water vapor corrections associated with dilution and pressure-broadening effects for CO2 and CH4 were derived from laboratory experiments employing measurements of water vapor by the CRDS analyzer. Before the campaign, the stability of the analyzer was assessed by laboratory tests under simulated flight conditions. During the campaign, a comparison of CO2 measurements between the CRDS analyzer and a nondispersive infrared (NDIR) analyzer on board the same aircraft showed a mean difference of 0.22±0.09 ppm for all flights over the Amazon rain forest. At the end of the campaign, CO2 concentrations of the synthetic calibration gases used by the NDIR analyzer were determined by the CRDS analyzer. After correcting for the isotope and the pressure-broadening effects that resulted from changes of the composition of synthetic vs. ambient air, and applying those concentrations as calibrated values of the calibration gases to reprocess the CO2 measurements made by the NDIR, the mean difference between the CRDS and the NDIR during BARCA was reduced to 0.05±0.09 ppm, with the mean standard deviation of 0.23±0.05 ppm. The results clearly show that the CRDS is sufficiently stable to be used in flight without drying the air or calibrating in flight and the water corrections are fully adequate for high-accuracy continuous airborne measurements of CO2 and CH4.
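
    The water corrections mentioned above are instrument-specific. As a rough sketch of the general functional form only (dilution plus pressure broadening), the code below applies a quadratic correction in the reported water vapour to recover a dry-air CO2 mole fraction; the coefficients are placeholders, not the values derived in the study.

        # Sketch of a dilution/pressure-broadening water correction of the general form
        # used with CRDS analyzers.  The coefficients are placeholders, NOT the values
        # determined in the study; each instrument must be calibrated individually.
        A_H2O = -0.012     # hypothetical linear coefficient (per % H2O)
        B_H2O = -0.0002    # hypothetical quadratic coefficient (per %^2 H2O)

        def co2_dry_mole_fraction(co2_wet_ppm: float, h2o_percent: float) -> float:
            """Convert a 'wet' CO2 reading (ppm) to a dry-air mole fraction."""
            correction = 1.0 + A_H2O * h2o_percent + B_H2O * h2o_percent ** 2
            return co2_wet_ppm / correction

        # Example: 2% water vapour in boundary-layer air
        print(round(co2_dry_mole_fraction(385.0, 2.0), 2))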

  20. High-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects.

    Science.gov (United States)

    Zeng, Zhaoli; Qu, Xueming; Tan, Yidong; Tan, Runtao; Zhang, Shulian

    2015-06-29

    A simple and high-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects is presented. The single high-order feedback effect is realized when a dual-frequency laser beam reflects numerous times in a Fabry-Perot cavity and then goes back to the laser resonator along the same route. In this case, two orthogonally polarized feedback fringes with nanoscale resolution are obtained. This self-mixing interferometer offers higher sensitivity to weak signals than a conventional interferometer. In addition, the two orthogonally polarized fringes are useful for discriminating the moving direction of the measured object. An experiment measuring a 2.5 nm step was conducted, which shows a great potential in nanometrology.

  1. Establishment and verification of dose-response curve of chromosomal aberrations after exposure to very high dose γ-ray

    International Nuclear Information System (INIS)

    Chen Ying; Luo Yisheng; Cao Zhenshan; Liu Xiulin

    2006-01-01

    To accurately estimate the biological dose of victims exposed to high doses, dose-response curves of chromosome aberrations induced by 6-22 Gy ⁶⁰Co γ-rays were established. Human peripheral blood was irradiated in vitro; lymphocytes were then concentrated, cultured for 52 h, 68 h and 72 h, and harvested. The frequencies of dicentrics (multicentrics) and rings were counted and compared between the different culture times. The dose-response curves and equations were established and verified against high-dose exposure accidents. The experiments showed that the culture time should be prolonged appropriately after high-dose exposure, and no significant differences were observed between the 52 h and 72 h cultures. The 6-22 Gy dose-response curve fitted a linear-quadratic model, Y = -2.269 + 0.776D - 7.868×10⁻³D², and proved reliable when verified against accident dose estimations. In this study, the dose-response curve and equation for chromosome dic + r after 6-22 Gy high-dose irradiation were established for the first time, allowing exact dose estimation. (authors)
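
    Once such a curve is available, an observed aberration yield can be converted back into a dose estimate by inverting the fitted quadratic. The sketch below does this with the coefficients quoted above and keeps only the root inside the calibrated 6-22 Gy range; the example yield is illustrative.

        import math

        # Invert Y = -2.269 + 0.776*D - 7.868e-3*D**2 (dic + r per cell) to estimate the
        # dose D in Gy from an observed yield.  Valid only inside the calibrated 6-22 Gy range.
        C0, C1, C2 = -2.269, 0.776, -7.868e-3

        def dose_from_yield(y: float) -> float:
            """Dose (Gy) whose predicted yield equals y, choosing the root in 6-22 Gy."""
            disc = C1 ** 2 - 4.0 * C2 * (C0 - y)    # discriminant of C2*D^2 + C1*D + (C0 - y) = 0
            if disc < 0:
                raise ValueError("yield outside the range of the fitted curve")
            roots = [(-C1 + s * math.sqrt(disc)) / (2.0 * C2) for s in (1.0, -1.0)]
            in_range = [d for d in roots if 6.0 <= d <= 22.0]
            return in_range[0] if in_range else min(roots, key=lambda d: abs(d - 14.0))

        print(f"{dose_from_yield(5.0):.1f} Gy")     # about 10.5 Gy for 5 dic + r per cell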

  2. Procedure generation and verification

    International Nuclear Information System (INIS)

    Sheely, W.F.

    1986-01-01

    The Department of Energy has used Artificial Intelligence or ''AI'' concepts to develop two powerful new computer-based techniques to enhance safety in nuclear applications. The Procedure Generation System and the Procedure Verification System can be adapted to other commercial applications, such as a manufacturing plant. The Procedure Generation System can create a procedure to deal with an off-normal condition. The operator can then take correct actions on the system in minimal time. The Verification System evaluates the logic of the Procedure Generator's conclusions. This evaluation uses logic techniques totally independent of the Procedure Generator. The rapid, accurate generation and verification of corrective procedures can greatly reduce the human error possible in a complex, high-stress situation.

  3. Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method

    Science.gov (United States)

    Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio

    Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are their orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations they undergo during acquisition. In order to exploit the actual potentialities of orthorectified imagery in Geomatics applications, the definition of a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on the Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in different fields such as machine learning, bioinformatics and generally in any other field requiring an evaluation of the performance of a learning algorithm (e.g. in geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features able to overcome the main drawbacks of the commonly used method (Hold-Out Validation — HOV), based on the partitioning of the known ground points in two sets: the first is used in the orientation-orthorectification model (GCPs — Ground Control Points) and the second is used to validate the model itself (CPs — Check Points). In fact, HOV is generally not reliable and is not applicable when a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely used commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
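
    As a generic sketch of the LOOCV procedure described above (not the SISAR implementation itself), the code below leaves each ground point out in turn, re-estimates the orientation model from the remaining points and accumulates the residual at the withheld point; fit_model and predict are placeholder callables standing in for the rigorous orientation model.

        import math

        # Generic leave-one-out cross-validation (LOOCV) for orientation accuracy assessment.
        # `fit_model` orients the image from a list of ground points; `predict` maps an image
        # point to ground coordinates with that model.  Both are placeholders here.
        def loocv_rmse(ground_points, fit_model, predict):
            """ground_points: list of (image_xy, ground_xyz) pairs.  Returns the LOOCV RMSE."""
            sq_errors = []
            for i, (img_xy, true_xyz) in enumerate(ground_points):
                training = ground_points[:i] + ground_points[i + 1:]   # leave one point out
                model = fit_model(training)                            # orient with the rest
                est_xyz = predict(model, img_xy)                       # predict the withheld point
                sq_errors.append(sum((e - t) ** 2 for e, t in zip(est_xyz, true_xyz)))
            return math.sqrt(sum(sq_errors) / len(sq_errors))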

  4. High-Accuracy Measurements of Total Column Water Vapor From the Orbiting Carbon Observatory-2

    Science.gov (United States)

    Nelson, Robert R.; Crisp, David; Ott, Lesley E.; O'Dell, Christopher W.

    2016-01-01

    Accurate knowledge of the distribution of water vapor in Earth's atmosphere is of critical importance to both weather and climate studies. Here we report on measurements of total column water vapor (TCWV) from hyperspectral observations of near-infrared reflected sunlight over land and ocean surfaces from the Orbiting Carbon Observatory-2 (OCO-2). These measurements are an ancillary product of the retrieval algorithm used to measure atmospheric carbon dioxide concentrations, with information coming from three highly resolved spectral bands. Comparisons to high-accuracy validation data, including ground-based GPS and microwave radiometer data, demonstrate that OCO-2 TCWV measurements have maximum root-mean-square deviations of 0.9-1.3 mm. Our results indicate that OCO-2 is the first space-based sensor to accurately and precisely measure the two most important greenhouse gases, water vapor and carbon dioxide, at high spatial resolution [1.3 × 2.3 km²] and that OCO-2 TCWV measurements may be useful in improving numerical weather predictions and reanalysis products.

  5. A new device for liver cancer biomarker detection with high accuracy

    Directory of Open Access Journals (Sweden)

    Shuaipeng Wang

    2015-06-01

    Full Text Available A novel cantilever array-based bio-sensor was batch-fabricated with IC compatible MEMS technology for precise liver cancer bio-marker detection. A micro-cavity was designed in the free end of the cantilever for local antibody immobilization, so that adsorption of the cancer biomarker is localized in the micro-cavity and the adsorption-induced spring-constant (k) variation is dramatically reduced in comparison to that caused by adsorption over the whole lever. The cantilever is piezoelectrically driven into vibration, which is piezoresistively sensed by a Wheatstone bridge. These structural features offer several advantages: high sensitivity, high throughput, high mass detection accuracy, and small volume. In addition, an analytical model has been established to eliminate the effect of adsorption-induced lever stiffness change and has been applied to precise mass detection of the cancer biomarker AFP; the detected AFP antigen mass (7.6 pg/ml) is quite close to the calculated one (5.5 pg/ml), two orders of magnitude better than the value obtained by the fully antibody-immobilized cantilever sensor. These approaches will promote real application of cantilever sensors in the early diagnosis of cancer.
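
    For context, the sketch below shows the simplest resonant mass-sensing relation, in which a small added mass shifts the cantilever resonance according to dm ≈ -2·m_eff·df/f0. It deliberately omits the adsorption-induced stiffness correction that is the paper's main contribution, and all numbers are illustrative rather than actual device parameters.

        # First-order mass-loading approximation for a resonant cantilever sensor.
        # Ignores the stiffness-change correction developed in the paper; values are made up.
        def added_mass_pg(m_eff_ng: float, f0_hz: float, f_loaded_hz: float) -> float:
            """Adsorbed mass in picograms from the resonance shift of a cantilever."""
            df = f_loaded_hz - f0_hz                 # frequency shift (negative for added mass)
            dm_ng = -2.0 * m_eff_ng * df / f0_hz     # dm ≈ -2 * m_eff * df / f0
            return dm_ng * 1e3                       # ng -> pg

        # Example: a 10 ng effective-mass lever at 100 kHz shifting down by 40 Hz -> ~8 pg
        print(round(added_mass_pg(10.0, 100_000.0, 99_960.0), 2))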

  6. Design and Performance Evaluation of Real-time Endovascular Interventional Surgical Robotic System with High Accuracy.

    Science.gov (United States)

    Wang, Kundong; Chen, Bing; Lu, Qingsheng; Li, Hongbing; Liu, Manhua; Shen, Yu; Xu, Zhuoyan

    2018-05-15

    Endovascular interventional surgery (EIS) is performed in a high-radiation environment at the sacrifice of surgeons' health. This paper introduces a novel endovascular interventional surgical robot that aims to reduce radiation to surgeons and physical stress imposed by lead aprons during fluoroscopic X-ray guided catheter intervention. The unique mechanical structure allowed the surgeon to manipulate the axial and radial motion of the catheter and guide wire. Four catheter manipulators (to manipulate the catheter and guide wire), and a control console which consists of four joysticks, several buttons and two twist switches (to control the catheter manipulators) were presented. The entire robotic system was established on a master-slave control structure through CAN (Controller Area Network) bus communication; meanwhile, the slave side of this robotic system showed highly accurate control over velocity and displacement with a PID control method. The robotic system was tested and passed both in vitro and animal experiments. Through functionality evaluation, the manipulators were able to complete interventional surgical motion both independently and cooperatively. The robotic surgery was performed successfully in an adult female pig and demonstrated the feasibility of superior mesenteric and common iliac artery stent implantation. The entire robotic system met the clinical requirements of EIS. The results show that the system has the ability to imitate the movements of surgeons and to accomplish the axial and radial motions with consistency and high-accuracy. Copyright © 2018 John Wiley & Sons, Ltd.
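
    The slave side is described as using PID control of velocity and displacement. The generic incremental PID loop below is a sketch of that idea only; the gains, sampling period and the crude first-order plant model are placeholders, not the tuning of the actual surgical robot.

        # Generic PID position loop; gains, sample time and plant model are illustrative only.
        class PID:
            def __init__(self, kp: float, ki: float, kd: float, dt: float):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def update(self, setpoint: float, measured: float) -> float:
                error = setpoint - measured
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # Example: drive a catheter advance towards a 5.0 mm setpoint
        pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
        position = 0.0
        for _ in range(500):
            command = pid.update(5.0, position)   # command sent to the manipulator motor
            position += command * 0.01            # crude first-order plant, for illustration
        print(round(position, 3))                 # converges close to 5.0 mm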

  7. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.

    Science.gov (United States)

    Qi, Jun; Liu, Guo-Ping

    2017-11-06

    This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between the beacon and the target node is calculated by measuring the time-of-flight (TOF) for the ultrasonic signal, and then the position of the target is computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value with the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance can reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot, when UIPS works on the line-of-sight (LOS) signal.
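
    Two of the core computations described above, converting a time-of-flight into a range and solving for position from several beacon ranges, can be sketched as follows; the speed of sound, beacon layout and target are examples, and the linearized least-squares solver stands in for whatever estimator the real system uses.

        import numpy as np

        SPEED_OF_SOUND = 343.0   # m/s at about 20 degC (temperature compensation omitted)

        def tof_to_distance(tof_s: float) -> float:
            """Range in metres from an ultrasonic time-of-flight in seconds."""
            return SPEED_OF_SOUND * tof_s

        def multilaterate(beacons: np.ndarray, ranges: np.ndarray) -> np.ndarray:
            """Linearized least-squares position from beacon coordinates and ranges."""
            ref, d_ref = beacons[0], ranges[0]
            A = 2.0 * (beacons[1:] - ref)
            b = (np.sum(beacons[1:] ** 2, axis=1) - np.sum(ref ** 2)
                 - ranges[1:] ** 2 + d_ref ** 2)
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        beacons = np.array([[0.0, 0.0, 2.5], [4.0, 0.0, 2.3], [0.0, 4.0, 2.4], [4.0, 4.0, 2.0]])
        target = np.array([1.0, 2.0, 0.5])
        ranges = np.linalg.norm(beacons - target, axis=1)   # ideal TOF-derived ranges
        print(np.round(multilaterate(beacons, ranges), 3))  # recovers the target position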

  8. High-accuracy optical extensometer based on coordinate transform in two-dimensional digital image correlation

    Science.gov (United States)

    Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan

    2018-01-01

    In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the optical camera axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. There are measurement errors in the results measured by 2D DIC, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted to the specimen to track the specimen's motion so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; (3) three-dimensional world coordinates of measuring points on the specimen can be reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. Results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method leads to good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has been applied in tensile experiments to obtain high-accuracy results as well.

  9. A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Jun Qi

    2017-11-01

    Full Text Available This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between the beacon and the target node is calculated by measuring the time-of-flight (TOF) for the ultrasonic signal, and then the position of the target is computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value with the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance can reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with UIPS. A maximum location error of 10.2 mm is achieved in the positioning experiments for a moving robot, when UIPS works on the line-of-sight (LOS) signal.

  10. High accuracy electromagnetic field solvers for cylindrical waveguides and axisymmetric structures using the finite element method

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1993-12-01

    Some two-dimensional finite element electromagnetic field solvers are described and tested. For TE and TM modes in homogeneous cylindrical waveguides and monopole modes in homogeneous axisymmetric structures, the solvers find approximate solutions to a weak formulation of the wave equation. Second-order isoparametric lagrangian triangular elements represent the field. For multipole modes in axisymmetric structures, the solver finds approximate solutions to a weak form of the curl-curl formulation of Maxwell's equations. Second-order triangular edge elements represent the radial (ρ) and axial (z) components of the field, while a second-order lagrangian basis represents the azimuthal (φ) component of the field weighted by the radius ρ. A reduced set of basis functions is employed for elements touching the axis. With this basis the spurious modes of the curl-curl formulation have zero frequency, so spurious modes are easily distinguished from non-static physical modes. Tests on an annular ring, a pillbox and a sphere indicate the solutions converge rapidly as the mesh is refined. Computed eigenvalues with relative errors of less than a few parts per million are obtained. Boundary conditions for symmetric, periodic and symmetric-periodic structures are discussed and included in the field solver. Boundary conditions for structures with inversion symmetry are also discussed. Special corner elements are described and employed to improve the accuracy of cylindrical waveguide and monopole modes with singular fields at sharp corners. The field solver is applied to three problems: (1) cross-field amplifier slow-wave circuits, (2) a detuned disk-loaded waveguide linear accelerator structure and (3) a 90 degrees overmoded waveguide bend. The detuned accelerator structure is a critical application of this high accuracy field solver. To maintain low long-range wakefields, tight design and manufacturing tolerances are required

  11. Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas

    Science.gov (United States)

    Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi

    2016-06-01

    The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications, it can act as an independent information source, or give additional information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characteristics of SAR intensity, there has been much less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, all existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, with no distinction between data resolutions or scene types. But the properties of coherence may differ for different data resolutions and scene types. In this paper, we investigate the coherence statistics for high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. Firstly, several regions are selected manually from the coherence map and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model performs better than the other distributions.
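
    The model-comparison step can be sketched generically: fit each candidate distribution to the coherence samples of one labelled region and rank the fits, for instance by AIC. The samples below are synthetic stand-ins for pixel coherence values, and SciPy's generic maximum-likelihood fits replace whatever estimation procedure the authors used.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        samples = rng.beta(4.0, 2.0, size=5000)      # stand-in for coherence values in (0, 1)

        candidates = {
            "gaussian": stats.norm,
            "rayleigh": stats.rayleigh,
            "weibull":  stats.weibull_min,
            "beta":     stats.beta,
            "nakagami": stats.nakagami,
        }

        for name, dist in candidates.items():
            params = dist.fit(samples)                       # maximum-likelihood fit
            loglik = np.sum(dist.logpdf(samples, *params))
            aic = 2 * len(params) - 2 * loglik               # lower AIC = better model
            print(f"{name:9s} AIC = {aic:12.1f}")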

  12. MODEL ACCURACY COMPARISON FOR HIGH RESOLUTION INSAR COHERENCE STATISTICS OVER URBAN AREAS

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Full Text Available The interferometric coherence map derived from the cross-correlation of two complex registered synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications, it can act as an independent information source, or give additional information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characteristics of SAR intensity, there has been much less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, all existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, with no distinction between data resolutions or scene types. But the properties of coherence may differ for different data resolutions and scene types. In this paper, we investigate the coherence statistics for high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. Firstly, several regions are selected manually from the coherence map and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model performs better than the other distributions.

  13. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    Science.gov (United States)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    The registration of terrestrial laser point clouds and close-range images is a key step in high-precision 3D reconstruction of cultural relics. Given the current demand for high texture resolution in this field, registering point cloud and image data for object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the registration of the two kinds of data is carried out by manually partitioning the point cloud, manually matching point cloud and image data, and manually selecting corresponding two-dimensional points in the image and the point cloud; this process not only greatly reduces working efficiency but also limits the registration accuracy and causes texture seams in the colored point cloud. To solve these problems, this paper takes the whole-object image as intermediate data and uses image matching to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching the central-projection reflectance-intensity image of the point cloud against the optical image automatically identifies corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with weighted iterative selection is then used to register the two kinds of data automatically and with high accuracy. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relics, and has scientific and practical value.
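
    The final registration step estimates a spatial similarity transformation between matched point sets. The paper parameterizes the rotation with a Rodrigues matrix and applies weighted iterative selection; the sketch below instead uses the closed-form SVD (Umeyama) solution to the same similarity-transform problem, as a standard stand-in rather than the authors' algorithm.

        import numpy as np

        def similarity_transform(p: np.ndarray, q: np.ndarray):
            """Estimate (s, R, t) such that q ≈ s * R @ p + t for matched (N, 3) point sets."""
            mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
            pc, qc = p - mu_p, q - mu_q
            U, S, Vt = np.linalg.svd(qc.T @ pc / len(p))      # cross-covariance SVD
            D = np.eye(3)
            if np.linalg.det(U @ Vt) < 0:                     # guard against a reflection
                D[2, 2] = -1.0
            R = U @ D @ Vt
            s = np.trace(np.diag(S) @ D) / pc.var(axis=0).sum()
            t = mu_q - s * R @ mu_p
            return s, R, t

        # Synthetic check: recover a known scale, rotation and translation
        rng = np.random.default_rng(1)
        p = rng.normal(size=(100, 3))
        a = np.deg2rad(30.0)
        R_true = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
        q = 2.0 * p @ R_true.T + np.array([0.5, -1.0, 3.0])
        s, R, t = similarity_transform(p, q)
        print(round(s, 3), np.round(t, 3))                    # scale 2.0 and the translation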

  14. High-resolution CT of nontuberculous mycobacterium infection in adult CF patients: diagnostic accuracy

    International Nuclear Information System (INIS)

    McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D.; DeJong, Pim A.; Loeve, Martine; Tiddens, Harm A.W.M.; McKone, Edward; Gallagher, Charles G.

    2012-01-01

    To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium infection (NTM) in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age, gender and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM (+) and NTM (-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver-operator curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)

  15. High-resolution CT of nontuberculous mycobacterium infection in adult CF patients: diagnostic accuracy

    Energy Technology Data Exchange (ETDEWEB)

    McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D. [St. Vincent's University Hospital, Department of Radiology, Dublin (Ireland); DeJong, Pim A. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Loeve, Martine; Tiddens, Harm A.W.M. [Erasmus MC-Sophia Children's Hospital, Department of Radiology, Department of Pediatric Pulmonology and Allergology, Rotterdam (Netherlands); McKone, Edward; Gallagher, Charles G. [St. Vincent's University Hospital, Department of Respiratory Medicine and National Referral Centre for Adult Cystic Fibrosis, Dublin (Ireland)

    2012-12-15

    To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium infection (NTM) in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age, gender and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM (+) and NTM (-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver-operator curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)

  16. Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.

    Directory of Open Access Journals (Sweden)

    Sophie Marchal

    Full Text Available Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scent presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.

  17. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    Science.gov (United States)

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
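
    The core PSAC idea, replacing a dense sine table by a piecewise-cubic model of one period and then checking the spectral purity of the synthesized output, can be sketched as follows. The sketch interpolates a coarse table with SciPy's generic cubic spline and measures SFDR from an FFT; the paper derives optimal closed-form coefficients and fixed-point bounds instead, so the numbers here are only indicative.

        import numpy as np
        from scipy.interpolate import CubicSpline

        TABLE_SIZE = 64                                   # coarse phase-to-amplitude table
        knots = np.linspace(0.0, 2 * np.pi, TABLE_SIZE + 1)
        table = np.sin(knots)
        table[-1] = table[0]                              # enforce exact periodicity
        psac = CubicSpline(knots, table, bc_type="periodic")

        N = 4096                                          # output samples over one period
        phase = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
        signal = psac(phase)                              # spline-based phase-to-amplitude conversion

        spectrum = np.abs(np.fft.rfft(signal))
        fundamental = spectrum[1]                         # one full period -> bin 1
        spurs = np.delete(spectrum, [0, 1])               # drop DC and the fundamental
        print(f"SFDR ~ {20 * np.log10(fundamental / spurs.max()):.1f} dB")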

  18. Real-Time Kennedy Space Center and Cape Canaveral Air Force Station High-Resolution Model Implementation and Verification

    Science.gov (United States)

    Shafer, Jaclyn A.; Watson, Leela R.

    2015-01-01

    Customer: NASA's Launch Services Program (LSP), Ground Systems Development and Operations (GSDO), and Space Launch System (SLS) programs. NASA's LSP, GSDO, SLS and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). For example, to determine if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 kilometer Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the AMU high-resolution WRF Environmental Modeling System (EMS) model (Watson 2013) in real-time. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The model was set up with a triple-nested grid configuration over KSC/CCAFS based on previous AMU work (Watson 2013). The outer domain (D01) has 12-kilometer grid spacing, the middle domain (D02) has 4-kilometer grid spacing, and the inner domain (D03) has 1.33-kilometer grid spacing. The model runs a 12-hour forecast every hour, D01 and D02 domain outputs are available once an hour and D03 is every 15 minutes during the forecast period. The AMU assessed the WRF-EMS 1

  19. Verification of high-speed solar wind stream forecasts using operational solar wind models

    DEFF Research Database (Denmark)

    Reiss, Martin A.; Temmer, Manuela; Veronig, Astrid M.

    2016-01-01

    High-speed solar wind streams emanating from coronal holes are frequently impinging on the Earth's magnetosphere causing recurrent, medium-level geomagnetic storm activity. Modeling high-speed solar wind streams is thus an essential element of successful space weather forecasting. Here we evaluate high-speed stream forecasts made by the empirical solar wind forecast (ESWF) and the semiempirical Wang-Sheeley-Arge (WSA) model based on the in situ plasma measurements from the Advanced Composition Explorer (ACE) spacecraft for the years 2011 to 2014. While the ESWF makes use of an empirical relation ... and the background solar wind conditions. We found that both solar wind models are capable of predicting the large-scale features of the observed solar wind speed (root-mean-square error, RMSE ≈100 km/s) but tend to either overestimate (ESWF) or underestimate (WSA) the number of high-speed solar wind streams (threat ...
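
    The verification measures named above can be sketched directly: the root-mean-square error of the forecast speed and, for high-speed-stream events above a chosen threshold, the threat score (critical success index), which the truncated text presumably refers to. The arrays below are made-up speeds, not ACE data.

        import numpy as np

        def rmse(forecast: np.ndarray, observed: np.ndarray) -> float:
            return float(np.sqrt(np.mean((forecast - observed) ** 2)))

        def threat_score(forecast: np.ndarray, observed: np.ndarray, threshold: float) -> float:
            """Hits / (hits + misses + false alarms) for events exceeding `threshold`."""
            f_event, o_event = forecast >= threshold, observed >= threshold
            hits = np.sum(f_event & o_event)
            misses = np.sum(~f_event & o_event)
            false_alarms = np.sum(f_event & ~o_event)
            denom = hits + misses + false_alarms
            return float(hits / denom) if denom else float("nan")

        obs = np.array([380.0, 420.0, 610.0, 550.0, 400.0, 650.0])   # km/s, illustrative
        fct = np.array([400.0, 450.0, 480.0, 600.0, 390.0, 640.0])
        print(round(rmse(fct, obs), 1), round(threat_score(fct, obs, 500.0), 2))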

  20. Design and Verification of Digital Architecture of 65K Pixel Readout Chip for High-Energy Physics

    CERN Document Server

    Poikela, Tuomas; Paakkulainen, J

    2010-01-01

    The feasibility to design and implement a front-end ASIC for the upgrade of the VELO detector of LHCb experiment at CERN using IBM's 130 nm standard CMOS process and a standard cell library is studied in this thesis. The proposed architecture is a design to cope with high data rates and continuous data taking. The architecture is designed to operate without any external trigger to record every hit signal the ASIC receives from a sensor chip, and then to transmit the information to the next level of electronics, for example to FPGAs. This thesis focuses on design, implementation and functional verification of the digital electronics of the active pixel area. The area requirements are dictated by the geometry of pixels (55 μm × 55 μm), power requirements (20 W/module) by restricted cooling capabilities of the module consisting of 10 chips and output bandwidth requirements by data rate (< 10 Gbit/s) produced by a particle flux passing through the chip. The design work was carried out using transaction...

  1. A real-time in vivo dosimetric verification method for high-dose rate intracavitary brachytherapy of nasopharyngeal carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Qi Zhenyu; Deng Xiaowu; Cao Xinping; Huang Shaomin; Lerch, Michael; Rosenfeld, Anatoly [State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou 510060 (China) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou 510060 (China); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia)

    2012-11-15

    Purpose: A real-time in vivo dosimetric verification method using metal-oxide-semiconductor field effect transistor (MOSFET) dosimeters has been developed for patient dosimetry in high-dose rate (HDR) intracavitary brachytherapy of nasopharyngeal carcinoma (NPC). Methods: The necessary calibration and correction factors for MOSFET measurements with a ¹⁹²Ir source were determined in a water phantom. With the detector placed inside a custom-made nasopharyngeal applicator, the actual dose delivered to the tumor was measured in vivo and compared to the calculated values using a commercial brachytherapy planning system. Results: Five MOSFETs were independently calibrated with the HDR source, yielding calibration factors of 0.48 ± 0.007 cGy/mV. The maximum sensitivity variation was no more than 7% in the clinically relevant distance range of 1-5 cm from the source. A total of 70 in vivo measurements in 11 NPC patients demonstrated good agreement with the treatment planning. The mean differences between the planned and the actually delivered dose within a single treatment fraction were -0.1% ± 3.8% and -0.1% ± 3.7%, respectively, for right and left side assessments. The maximum dose deviation was less than 8.5%. Conclusions: In vivo measurement using the real-time MOSFET dosimetry system makes it possible to evaluate the actual dose to the tumor received by the patient during a treatment fraction and thus can offer another line of security to detect and prevent large errors.
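
    The basic dose check described above amounts to scaling the MOSFET reading by the measured calibration factor (0.48 cGy/mV) and comparing the result with the planned dose. The sketch below does exactly that; the voltage reading and planned dose are made-up numbers for illustration.

        # In vivo dose check: reading (mV) -> dose via the 0.48 cGy/mV calibration factor,
        # then percentage deviation from the planned dose.  Input numbers are illustrative.
        CAL_FACTOR_CGY_PER_MV = 0.48

        def measured_dose_cgy(voltage_shift_mv: float) -> float:
            return CAL_FACTOR_CGY_PER_MV * voltage_shift_mv

        def percent_deviation(measured_cgy: float, planned_cgy: float) -> float:
            return 100.0 * (measured_cgy - planned_cgy) / planned_cgy

        reading_mv, planned_cgy = 1040.0, 500.0
        dose = measured_dose_cgy(reading_mv)
        print(f"{dose:.1f} cGy, deviation {percent_deviation(dose, planned_cgy):+.1f}%")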

  2. Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets

    International Nuclear Information System (INIS)

    Fathizadeh, M.

    1991-01-01

    The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high gain current loop are utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and a tracking error of ±5×10⁻⁴ were needed. 3 refs., 5 figs

  3. High accuracy line positions of the ν1 fundamental band of ¹⁴N₂¹⁶O

    KAUST Repository

    Alsaif, Bidoor

    2018-03-08

    The ν1 fundamental band of N2O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm:based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240 – 1310 cm−1 range of the ν1 band of N2O, from P(40) to R(31), are for the first time measured with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10−6 cm−1 or 144 kHz). The ν1 transitions presently measured in a Doppler regime validate high accuracy predictions based on sub-Doppler measurements of the ν3 and ν3-ν1 transitions.
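
    As a sketch of how band constants follow from measured line centres, the code below uses the standard second-order expression for the P/R-branch positions of a linear molecule, nu(m) ≈ nu0 + (B' + B'')m + (B' - B'')m², with m = -J for P(J) and m = J + 1 for R(J), and recovers the constants by a polynomial least-squares fit. The synthetic constants are merely of the right order of magnitude for N2O and centrifugal distortion is ignored; they are not the fitted values reported in the paper.

        import numpy as np

        def fit_band(m: np.ndarray, nu: np.ndarray):
            """Return (nu0, B_upper, B_lower) from line positions nu at running index m."""
            c2, c1, c0 = np.polyfit(m, nu, deg=2)    # nu = c2*m**2 + c1*m + c0
            return c0, 0.5 * (c1 + c2), 0.5 * (c1 - c2)

        # Synthetic example with illustrative constants (cm^-1), P(40) to R(31) -> m = -40..32
        nu0_true, bu_true, bl_true = 1284.9, 0.4170, 0.4190
        m = np.arange(-40, 33)
        m = m[m != 0]
        nu = nu0_true + (bu_true + bl_true) * m + (bu_true - bl_true) * m ** 2
        print([round(x, 4) for x in fit_band(m, nu)])   # recovers the three constants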

  4. Coronary CT angiography using prospective ECG triggering. High diagnostic accuracy with low radiation dose

    International Nuclear Information System (INIS)

    Arnoldi, E.; Ramos-Duran, L.; Abro, J.A.; Costello, P.; Zwerner, P.L.; Schoepf, U.J.; Nikolaou, K.; Reiser, M.F.

    2010-01-01

    The purpose of this study was to evaluate the diagnostic performance of coronary CT angiography (coronary CTA) using prospective ECG triggering (PT) for the detection of significant coronary artery stenosis compared to invasive coronary angiography (ICA). A total of 20 patients underwent coronary CTA with PT using a 128-slice CT scanner (Definition™ AS+, Siemens) and ICA. All coronary CTA studies were evaluated for significant coronary artery stenoses (≥50% luminal narrowing) by 2 observers in consensus using the AHA-15-segment model. Findings in CTA were compared to those in ICA. With ICA as the reference standard, coronary CTA using PT had a sensitivity of 88% and 100%, a specificity of 95% and 88%, a positive predictive value of 80% and 92%, and a negative predictive value of 97% and 100% for diagnosing significant coronary artery stenosis on per-segment and per-patient analysis, respectively. Mean effective radiation dose-equivalent of CTA was 2.6±1 mSv. Coronary CTA using PT enables non-invasive diagnosis of significant coronary artery stenosis with high diagnostic accuracy in comparison to ICA and is associated with comparably low radiation exposure. (orig.) [de]

  5. High accuracy line positions of the ν1 fundamental band of ¹⁴N₂¹⁶O

    Science.gov (United States)

    AlSaif, Bidoor; Lamperti, Marco; Gatti, Davide; Laporta, Paolo; Fermann, Martin; Farooq, Aamir; Lyulin, Oleg; Campargue, Alain; Marangoni, Marco

    2018-05-01

    The ν1 fundamental band of N2O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm:based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240-1310 cm-1 range of the ν1 band of N2O, from P(40) to R(31), are for the first time measured with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10-6 cm-1 or 144 kHz). The ν1 transitions presently measured in a Doppler regime validate high accuracy predictions based on sub-Doppler measurements of the ν3 and ν3-ν1 transitions.

  6. On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.

    Science.gov (United States)

    Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis

    2016-07-01

    To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2% with a general TG-43 overestimation except for the spine. These differences are consistent resulting in statistically significant differences of dose volume histogram and radiobiology indices. Absolute differences of these indices are however small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not appear however to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets

    International Nuclear Information System (INIS)

    Fathizadeh, M.

    1991-01-01

    The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high gain current loop are utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and a tracking error of ±5×10⁻⁴ were needed

  8. Quantitative accuracy of serotonergic neurotransmission imaging with high-resolution 123I SPECT

    International Nuclear Information System (INIS)

    Kuikka, J.T.

    2004-01-01

    Aim: Serotonin transporter (SERT) imaging can be used to study the role of regional abnormalities of neurotransmitter release in various mental disorders and to study the mechanism of action of therapeutic drugs or drugs of abuse. We examine the quantitative accuracy and reproducibility that can be achieved with high-resolution SPECT of serotonergic neurotransmission. Method: Binding potential (BP) of a ¹²³I-labeled tracer specific for midbrain SERT was assessed in 20 healthy persons. The effects of scatter, attenuation, partial volume, misregistration and statistical noise were estimated using phantom and human studies. Results: Without any correction, BP was underestimated by 73%. The partial volume error was the major component in this underestimation whereas the most critical error for the reproducibility was misplacement of the region of interest (ROI). Conclusion: Proper ROI registration and the use of a multiple-head gamma camera with transmission-based scatter correction produce more reliable results. However, due to the small dimensions of the midbrain SERT structures and the poor spatial resolution of SPECT, the improvement without partial volume correction is not great enough to restore the BP estimate to its true value. (orig.) [de]

  9. High Accuracy Ground-based near-Earth-asteroid Astrometry using Synthetic Tracking

    Science.gov (United States)

    Zhai, Chengxing; Shao, Michael; Saini, Navtej; Sandhu, Jagmit; Werne, Thomas; Choi, Philip; Ely, Todd A.; Jacobs, Chirstopher S.; Lazio, Joseph; Martin-Mur, Tomas J.; Owen, William M.; Preston, Robert; Turyshev, Slava; Michell, Adam; Nazli, Kutay; Cui, Isaac; Monchama, Rachel

    2018-01-01

    Accurate astrometry is crucial for determining the orbits of near-Earth-asteroids (NEAs). Further, the future of deep space high data rate communications is likely to be optical communications, such as the Deep Space Optical Communications package that is part of the baseline payload for the planned Psyche Discovery mission to the Psyche asteroid. We have recently upgraded our instrument on the Pomona College 1 m telescope, at JPL's Table Mountain Facility, for conducting synthetic tracking by taking many short exposure images. These images can be then combined in post-processing to track both asteroid and reference stars to yield accurate astrometry. Utilizing the precision of the current and future Gaia data releases, the JPL-Pomona College effort is now demonstrating precision astrometry on NEAs, which is likely to be of considerable value for cataloging NEAs. Further, treating NEAs as proxies of future spacecraft that carry optical communication lasers, our results serve as a measure of the astrometric accuracy that could be achieved for future plane-of-sky optical navigation.

  10. Optimal design of a high accuracy photoelectric auto-collimator based on position sensitive detector

    Science.gov (United States)

    Yan, Pei-pei; Yang, Yong-qing; She, Wen-ji; Liu, Kai; Jiang, Kai; Duan, Jing; Shan, Qiusha

    2018-02-01

    A high-accuracy photoelectric autocollimator based on a position sensitive detector (PSD) was designed, integrating a light source, an optical lens group, the PSD sensor, and the associated hardware and software processing system. A telephoto objective configuration was chosen, which effectively reduces the length, weight and volume of the optical system, and simulation-based design and analysis of the autocollimator optical system were carried out. The technical indicators of the autocollimator presented in this paper are: measuring resolution less than 0.05″, a field of view of 2ω = 0.4° × 0.4°, a measuring range of ±5′, a full-range measurement error of less than 0.2″, and a measuring distance of 10 m, which are applicable to precise small-angle measurement environments. Aberration analysis indicates that the MTF is close to the diffraction limit and that the spot in the spot diagram is much smaller than the Airy disk. Through optimization of the opto-mechanical structure, the total length of the telephoto lens is only 450 mm, making the autocollimator noticeably more compact while the image quality is maintained.

  11. Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing

    International Nuclear Information System (INIS)

    Bailey, David

    2005-01-01

    In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard. If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the

  12. Verification of the high temperature phase by the electron pair measurement at RHIC

    International Nuclear Information System (INIS)

    Akiba, Yasuyuki

    2013-01-01

    In high-energy nuclear collisions at the RHIC accelerator, high-density partonic matter is created. If this matter is the quark gluon plasma (QGP), the high-temperature phase of QCD, thermal photons are expected to be radiated from it. Direct photon production in gold + gold collisions at RHIC has been measured using the 'virtual photon method'. In gold + gold collisions, many more photons are produced than in p + p collisions. The excess direct photon production approximately agrees with the theoretical prediction of thermal photon production from a QGP with an initial temperature of 300 to 600 MeV. In this explanatory text, the direct photon measurements by the PHENIX experiment at RHIC are described, starting from the discovery of high-density matter at RHIC. The photon measurements that give direct evidence of the high-temperature state and the virtual photon measurement method are reported briefly. The measurements of direct photons and the estimation of the initial temperature at RHIC are described in detail with illustrations. Finally, some recent results are added and the ALICE experiments at the LHC are mentioned. (S. Funahashi)

  13. Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

    KAUST Repository

    Dongarra, Jack

    2013-09-18

    The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. For example
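
    The structure described above, a panel factorization followed by a trailing-submatrix update, can be illustrated with a plain serial NumPy sketch of blocked LU with partial pivoting. This is textbook code for orientation only, not the tiled, recursively formulated, QUARK-scheduled implementation the paper proposes.

        import numpy as np

        def lu_blocked(A: np.ndarray, nb: int = 64):
            """Blocked LU with partial pivoting; returns (LU, piv) with L and U packed in LU."""
            A = A.astype(float)
            n = A.shape[0]
            piv = np.arange(n)
            for k in range(0, n, nb):
                end = min(k + nb, n)
                for j in range(k, end):                      # panel factorization, column by column
                    p = j + np.argmax(np.abs(A[j:, j]))      # partial pivoting
                    if p != j:
                        A[[j, p], :] = A[[p, j], :]
                        piv[[j, p]] = piv[[p, j]]
                    A[j + 1:, j] /= A[j, j]
                    A[j + 1:, j + 1:end] -= np.outer(A[j + 1:, j], A[j, j + 1:end])
                if end < n:                                  # trailing-submatrix update
                    L_panel = np.tril(A[k:end, k:end], -1) + np.eye(end - k)
                    A[k:end, end:] = np.linalg.solve(L_panel, A[k:end, end:])   # U block row
                    A[end:, end:] -= A[end:, k:end] @ A[k:end, end:]            # GEMM-style update
            return A, piv

        rng = np.random.default_rng(0)
        M = rng.normal(size=(300, 300))
        LU, piv = lu_blocked(M)
        L = np.tril(LU, -1) + np.eye(300)
        U = np.triu(LU)
        print(np.allclose(L @ U, M[piv]))                    # True: P*A = L*U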

  14. Reduced Set of Virulence Genes Allows High Accuracy Prediction of Bacterial Pathogenicity in Humans

    Science.gov (United States)

    Iraola, Gregorio; Vazquez, Gustavo; Spangenberg, Lucía; Naya, Hugo

    2012-01-01

    Although there have been great advances in understanding bacterial pathogenesis, there is still a lack of integrative information about what makes a bacterium a human pathogen. The advent of high-throughput sequencing technologies has dramatically increased the amount of completed bacterial genomes, for both known human pathogenic and non-pathogenic strains; this information is now available to investigate genetic features that determine pathogenic phenotypes in bacteria. In this work we determined presence/absence patterns of different virulence-related genes among more than finished bacterial genomes from both human pathogenic and non-pathogenic strains, belonging to different taxonomic groups (i.e: Actinobacteria, Gammaproteobacteria, Firmicutes, etc.). An accuracy of 95% using a cross-fold validation scheme with in-fold feature selection is obtained when classifying human pathogens and non-pathogens. A reduced subset of highly informative genes () is presented and applied to an external validation set. The statistical model was implemented in the BacFier v1.0 software (freely available at ), that displays not only the prediction (pathogen/non-pathogen) and an associated probability for pathogenicity, but also the presence/absence vector for the analyzed genes, so it is possible to decipher the subset of virulence genes responsible for the classification on the analyzed genome. Furthermore, we discuss the biological relevance for bacterial pathogenesis of the core set of genes, corresponding to eight functional categories, all with evident and documented association with the phenotypes of interest. Also, we analyze which functional categories of virulence genes were more distinctive for pathogenicity in each taxonomic group, which seems to be a completely new kind of information and could lead to important evolutionary conclusions. PMID:22916122
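
    A cross-validation scheme with in-fold feature selection, as used above, can be sketched generically. The code below is an illustration with a made-up presence/absence matrix and a logistic-regression classifier; it is not the BacFier model or gene set, and the matrix sizes, the number of selected genes and the classifier choice are placeholders.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(42)
    n_genomes, n_genes = 200, 400                        # placeholder sizes
    X = rng.integers(0, 2, size=(n_genomes, n_genes))    # 0/1 virulence-gene presence/absence
    y = rng.integers(0, 2, size=n_genomes)               # pathogen vs non-pathogen labels

    # Feature selection sits inside the pipeline, so it is re-fit on every
    # training fold ("in-fold"), which keeps the cross-validated accuracy unbiased.
    model = Pipeline([
        ("select", SelectKBest(chi2, k=40)),             # reduced informative gene subset
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))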

  15. Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

    KAUST Repository

    Dongarra, Jack; Faverge, Mathieu; Ltaief, Hatem; Luszczek, Piotr R.

    2013-01-01

    The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. For example

  16. High-throughput verification of transcriptional starting sites by Deep-RACE

    DEFF Research Database (Denmark)

    Olivarius, Signe; Plessy, Charles; Carninci, Piero

    2009-01-01

    We present a high-throughput method for investigating the transcriptional starting sites of genes of interest, which we named Deep-RACE (Deep–rapid amplification of cDNA ends). Taking advantage of the latest sequencing technology, it allows the parallel analysis of multiple genes and is free...

  17. Factors Determining the Inter-observer Variability and Diagnostic Accuracy of High-resolution Manometry for Esophageal Motility Disorders.

    Science.gov (United States)

    Kim, Ji Hyun; Kim, Sung Eun; Cho, Yu Kyung; Lim, Chul-Hyun; Park, Moo In; Hwang, Jin Won; Jang, Jae-Sik; Oh, Minkyung

    2018-01-30

    Although high-resolution manometry (HRM) has the advantage of visual intuitiveness, its diagnostic validity remains under debate. The aim of this study was to evaluate the diagnostic accuracy of HRM for esophageal motility disorders. Six staff members and 8 trainees were recruited for the study. In total, 40 patients enrolled in manometry studies at 3 institutes were selected. Captured images of 10 representative swallows and a single swallow in analyzing mode in both high-resolution pressure topography (HRPT) and conventional line tracing formats were provided with calculated metrics. Assessments of esophageal motility disorders showed fair agreement for HRPT and moderate agreement for conventional line tracing (κ = 0.40 and 0.58, respectively). With the HRPT format, the k value was higher in category A (esophagogastric junction [EGJ] relaxation abnormality) than in categories B (major body peristalsis abnormalities with intact EGJ relaxation) and C (minor body peristalsis abnormalities or normal body peristalsis with intact EGJ relaxation). The overall exact diagnostic accuracy for the HRPT format was 58.8% and rater's position was an independent factor for exact diagnostic accuracy. The diagnostic accuracy for major disorders was 63.4% with the HRPT format. The frequency of major discrepancies was higher for category B disorders than for category A disorders (38.4% vs 15.4%; P < 0.001). The interpreter's experience significantly affected the exact diagnostic accuracy of HRM for esophageal motility disorders. The diagnostic accuracy for major disorders was higher for achalasia than distal esophageal spasm and jackhammer esophagus.
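
    For reference, the kappa statistics quoted above compare observed inter-rater agreement with the agreement expected by chance. The short sketch below computes Cohen's kappa from a made-up two-rater table; the counts are illustrative only and are not taken from the study.

    import numpy as np

    def cohens_kappa(table):
        """Cohen's kappa from a square inter-rater contingency table."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        po = np.trace(table) / n                           # observed agreement
        pe = (table.sum(0) * table.sum(1)).sum() / n ** 2  # chance agreement
        return (po - pe) / (1.0 - pe)

    # Hypothetical 2 x 2 table: rows are rater 1, columns are rater 2.
    print(round(cohens_kappa([[20, 5], [4, 11]]), 2))      # 0.53, i.e. moderate agreement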

  18. DIRECT GEOREFERENCING : A NEW STANDARD IN PHOTOGRAMMETRY FOR HIGH ACCURACY MAPPING

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2012-07-01

    Full Text Available Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. Theoretically, this method does not require ground control points (GCPs) or aerial triangulation (AT) to process aerial photography into ground coordinates. Compared with the older method, it has three main advantages at the same accuracy: faster data processing, a simpler workflow and a less expensive project. Direct georeferencing uses two devices, GPS and IMU: the GPS records the camera coordinates (X, Y, Z) and the IMU records the camera orientation (omega, phi, kappa). Both sets of parameters are merged into the exterior orientation (EO) parameters. These parameters are required for the next steps of a photogrammetric project, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of this method was tested on a topographic map project in Medan, Indonesia. A large-format digital camera, the Ultracam X from Vexcel, was used, while the GPS/IMU was an IGI AeroControl. Nineteen independent check points (ICPs) were used to determine the accuracy. Horizontal accuracy is 0.356 meters and vertical accuracy is 0.483 meters. Data with this accuracy can be used for a 1:2,500 map scale project.
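
    Accuracy figures of this kind are normally the root-mean-square error of the residuals at the independent check points. A minimal sketch of that computation, using invented residuals rather than the project data, is shown below.

    import numpy as np

    # Hypothetical measured-minus-reference differences (metres) at 19 check points.
    rng = np.random.default_rng(1)
    de = rng.normal(0.0, 0.25, 19)   # Easting residuals
    dn = rng.normal(0.0, 0.25, 19)   # Northing residuals
    dh = rng.normal(0.0, 0.45, 19)   # height residuals

    rmse_horizontal = np.sqrt(np.mean(de**2 + dn**2))   # planimetric RMSE
    rmse_vertical = np.sqrt(np.mean(dh**2))             # vertical RMSE
    print(f"horizontal RMSE: {rmse_horizontal:.3f} m, vertical RMSE: {rmse_vertical:.3f} m")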

  19. Type A verification report for the high flux beam reactor stack and grounds, Brookhaven National Laboratory, Upton, New York

    International Nuclear Information System (INIS)

    Harpenau, Evan M.

    2012-01-01

    The U.S. Department of Energy (DOE) Order 458.1 requires independent verification (IV) of DOE cleanup projects (DOE 2011). The Oak Ridge Institute for Science and Education (ORISE) has been designated as the responsible organization for IV of the High Flux Beam Reactor (HFBR) Stack and Grounds area at Brookhaven National Laboratory (BNL) in Upton, New York. The IV evaluation may consist of an in-process inspection with document and data reviews (Type A Verification) or a confirmatory survey of the site (Type B Verification). DOE and ORISE determined that a Type A verification of the documents and data for the HFBR Stack and Grounds: Survey Units (SU) 6, 7, and 8 was appropriate based on the initial survey unit classification, the walkover surveys, and the final analytical results provided by the Brookhaven Science Associates (BSA). The HFBR Stack and Grounds surveys began in June 2011 and were completed in September 2011. Survey activities by BSA included gamma walkover scans and sampling of the as-left soils in accordance with the BSA Work Procedure (BNL 2010a). The Field Sampling Plan - Stack and Remaining HFBR Outside Areas (FSP) stated that gamma walkover surveys would be conducted with a bare sodium iodide (NaI) detector, and a collimated detector would be used to check areas with elevated count rates to locate the source of the high readings (BNL 2010b). BSA used the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM) principles for determining the classifications of each survey unit. Therefore, SUs 6 and 7 were identified as Class 1 and SU 8 was deemed Class 2 (BNL 2010b). Gamma walkover surveys of SUs 6, 7, and 8 were completed using a 2 1/2 2 NaI detector coupled to a data-logger with a global positioning system (GPS). The 100% scan surveys conducted prior to the final status survey (FSS) sampling identified two general soil areas and two isolated soil locations with elevated radioactivity. The general areas of elevated activity

  20. Towards Building Reliable, High-Accuracy Solar Irradiance Database For Arid Climates

    Science.gov (United States)

    Munawwar, S.; Ghedira, H.

    2012-12-01

    Middle East's growing interest in renewable energy has led to increased activity in solar technology development with the recent commissioning of several utility-scale solar power projects and many other commercial installations across the Arabian Peninsula. The region, lying in a virtually rainless sunny belt with a typical daily average solar radiation exceeding 6 kWh/m2, is also one of the most promising candidates for solar energy deployment. However, it is not the availability of resource, but its characterization and reasonably accurate assessment that determines the application potential. Solar irradiance, magnitude and variability inclusive, is the key input in assessing the economic feasibility of a solar system. The accuracy of such data is of critical importance for realistic on-site performance estimates. This contribution aims to identify the key stages in developing a robust solar database for desert climate by focusing on the challenges that an arid environment presents to parameterization of solar irradiance attenuating factors. Adjustments are proposed based on the currently available resource assessment tools to produce high quality data for assessing bankability. Establishing and maintaining ground solar irradiance measurements is an expensive affair and fairly limited in time (recently operational) and space (fewer sites) in the Gulf region. Developers within solar technology industry, therefore, rely on solar radiation models and satellite-derived data for prompt resource assessment needs. It is imperative that such estimation tools are as accurate as possible. While purely empirical models have been widely researched and validated in the Arabian Peninsula's solar modeling history, they are known to be intrinsically site-specific. A primal step to modeling is an in-depth understanding of the region's climate, identifying the key players attenuating radiation and their appropriate characterization to determine solar irradiance. Physical approach

  1. High-G Verification of Lithium-Polymer (Li-Po) Pouch Cells

    Science.gov (United States)

    2016-05-19

    Commercial markets continue their increased demand for smaller high-power density batteries. Such market factors enable projects to take advantage of the ... low cost and available power sources to meet the project's power needs. However, the market factors may also lead to product line cancellations ...

  2. Multi-level nonlinear modeling verification scheme of RC high-rise wall buildings

    OpenAIRE

    Alwaeli, W.; Mwafy, A.; Pilakoutas, K.; Guadagnini, M.

    2017-01-01

    Earthquake-resistant reinforced concrete (RC) high-rise wall buildings are designed and detailed to respond well beyond the elastic range under the expected earthquake ground motions. However, despite their considerable section depth, in terms of analysis, RC walls are still often treated as linear elements, ignoring the effect of deformation compatibility. Due to the limited number of available comprehensive experimental studies on RC structural wall systems subjected to cycling loading, few...

  3. Verification of the calculation program for brachytherapy planning system of high dose rate (PLATO)

    International Nuclear Information System (INIS)

    Almansa, J.; Alaman, C.; Perez-Alija, J.; Herrero, C.; Real, R. del; Ososrio, J. L.

    2011-01-01

    In our centre, high dose rate brachytherapy treatments have been performed since 2007. The procedures include gynecological intracavitary and interstitial treatments. The treatments are delivered with an Ir-192 source with an activity between 5 and 10 Ci, so that small variations in treatment time can cause harm to the patient. In addition, the Royal Decree 1566/1998 on Quality Criteria in Radiotherapy establishes the need to verify the monitor units or treatment time in radiotherapy and brachytherapy. All of this justifies the existence of a redundant system for brachytherapy dose calculation that can reveal any abnormality that may be present.
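
    A redundant check of this kind is often a simple independent estimate intended to catch gross errors rather than a full recomputation of the treatment plan. The sketch below is purely illustrative and is not the PLATO algorithm or a clinical tool: it estimates the dose at a reference distance from a single dwell position with a point-source, inverse-square approximation, using an assumed dose-rate constant and air-kerma strength and deliberately ignoring the radial dose and anisotropy functions.

    # Illustrative secondary check for one HDR Ir-192 dwell position.
    # NOT for clinical use: ignores g(r), anisotropy, source decay, scatter, etc.
    LAMBDA = 1.11        # assumed dose-rate constant, cGy / (h * U)
    SK = 40000.0         # assumed air-kerma strength, U, roughly a 10 Ci source

    def dose_point_source(dwell_time_s, r_cm):
        """Approximate dose (cGy) at r_cm delivered by one dwell of dwell_time_s seconds."""
        dose_rate_per_h = SK * LAMBDA / r_cm ** 2     # inverse-square only
        return dose_rate_per_h * dwell_time_s / 3600.0

    # Flag a discrepancy larger than 5 % between planned and independently estimated dose.
    planned_dose = 150.0                              # cGy at 1 cm from the TPS (hypothetical)
    estimate = dose_point_source(dwell_time_s=12.0, r_cm=1.0)
    if abs(estimate - planned_dose) / planned_dose > 0.05:
        print(f"warning: estimate {estimate:.1f} cGy vs planned {planned_dose:.1f} cGy")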

  4. Active neutron and gamma-ray imaging of highly enriched uranium for treaty verification.

    Science.gov (United States)

    Hamel, Michael C; Polack, J Kyle; Ruch, Marc L; Marcath, Matthew J; Clarke, Shaun D; Pozzi, Sara A

    2017-08-11

    The detection and characterization of highly enriched uranium (HEU) presents a large challenge in the non-proliferation field. HEU has a low neutron emission rate and most gamma rays are low energy and easily shielded. To address this challenge, an instrument known as the dual-particle imager (DPI) was used with a portable deuterium-tritium (DT) neutron generator to detect neutrons and gamma rays from induced fission in HEU. We evaluated system response using a 13.7-kg HEU sphere in several configurations with no moderation, high-density polyethylene (HDPE) moderation, and tungsten moderation. A hollow tungsten sphere was interrogated to evaluate the response to a possible hoax item. First, localization capabilities were demonstrated by reconstructing neutron and gamma-ray images. Once localized, additional properties such as fast neutron energy spectra and time-dependent neutron count rates were attributed to the items. For the interrogated configurations containing HEU, the reconstructed neutron spectra resembled Watt spectra, which gave confidence that the interrogated items were undergoing induced fission. The time-dependent neutron count rate was also compared for each configuration and shown to be dependent on the neutron multiplication of the item. This result showed that the DPI is a viable tool for localizing and confirming fissile mass and multiplication.
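
    One way to make the statement that a reconstructed spectrum "resembles a Watt spectrum" quantitative is to fit the Watt form to the measured energies. The sketch below does so for synthetic data: the functional form is the standard Watt expression, but the sample, the binning and the starting parameters are arbitrary and are not taken from the DPI measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def watt(E, c, a, b):
        """Watt fission-spectrum shape: c * exp(-E/a) * sinh(sqrt(b*E)), E in MeV."""
        return c * np.exp(-E / a) * np.sinh(np.sqrt(b * E))

    # Synthetic stand-in for a reconstructed fast-neutron energy sample.
    rng = np.random.default_rng(7)
    energies = rng.gamma(shape=1.8, scale=1.1, size=20000)
    counts, edges = np.histogram(energies, bins=40, range=(0.5, 8.0))
    centers = 0.5 * (edges[:-1] + edges[1:])

    popt, _ = curve_fit(watt, centers, counts, p0=(counts.max(), 1.0, 2.0))
    print("fitted Watt parameters: a = %.2f MeV, b = %.2f MeV^-1" % (popt[1], popt[2]))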

  5. The research of digital circuit system for high accuracy CCD of portable Raman spectrometer

    Science.gov (United States)

    Yin, Yu; Cui, Yongsheng; Zhang, Xiuda; Yan, Huimin

    2013-08-01

    Raman spectroscopy is widely used because it can identify many types of molecular structures and materials. The portable Raman spectrometer has become a major direction of spectrometer development because of its convenient handheld operation and real-time detection, advantages that traditional Raman spectrometers, with their heavy weight and bulky size, cannot offer. However, there is still a gap in measurement sensitivity between portable and traditional devices. A portable Raman spectrometer using Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) can enhance the Raman signal by several orders of magnitude, providing both measurement sensitivity and mobility. This paper proposes the design and implementation of the driver and digital circuit for a high accuracy CCD sensor, the core part of a portable spectrometer. The main target of the design is to reduce the dark current generation rate and to increase signal sensitivity during long integration times and in weak-signal environments. We use a back-thinned CCD image sensor from Hamamatsu Corporation with high sensitivity, low noise and a large dynamic range. In order to maximize the CCD sensor's performance while minimizing the overall size of the device, we carefully designed a peripheral circuit for the CCD sensor. The design mainly comprises the multi-voltage circuit, the timing generation circuit, the driving circuit and the A/D conversion parts. As the most important part, the power supply section provides 12 independent voltages, generated from a reference power supply IC and set to the specified values by amplifiers configured as low-pass filters, which yields highly stable and accurate voltages with low noise. Furthermore, to make the design easy to debug, a CPLD is used to generate the timing signals. The A/D converter chip consists of a correlated

  6. In-depth, high-accuracy proteomics of sea urchin tooth organic matrix

    Directory of Open Access Journals (Sweden)

    Mann Matthias

    2008-12-01

    Full Text Available Abstract Background The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence rendered possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of tooth matrix were however largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly

  7. Automated, high accuracy classification of Parkinsonian disorders: a pattern recognition approach.

    Directory of Open Access Journals (Sweden)

    Andre F Marquand

    Full Text Available Progressive supranuclear palsy (PSP), multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD) can be clinically indistinguishable, especially in the early stages, despite distinct patterns of molecular pathology. Structural neuroimaging holds promise for providing objective biomarkers for discriminating these diseases at the single subject level but all studies to date have reported incomplete separation of disease groups. In this study, we employed multi-class pattern recognition to assess the value of anatomical patterns derived from a widely available structural neuroimaging sequence for automated classification of these disorders. To achieve this, 17 patients with PSP, 14 with IPD and 19 with MSA were scanned using structural MRI along with 19 healthy controls (HCs). An advanced probabilistic pattern recognition approach was employed to evaluate the diagnostic value of several pre-defined anatomical patterns for discriminating the disorders, including: (i) a subcortical motor network; (ii) each of its component regions and (iii) the whole brain. All disease groups could be discriminated simultaneously with high accuracy using the subcortical motor network. The region providing the most accurate predictions overall was the midbrain/brainstem, which discriminated all disease groups from one another and from HCs. The subcortical network also produced more accurate predictions than the whole brain and all of its constituent regions. PSP was accurately predicted from the midbrain/brainstem, cerebellum and all basal ganglia compartments; MSA from the midbrain/brainstem and cerebellum and IPD from the midbrain/brainstem only. This study demonstrates that automated analysis of structural MRI can accurately predict diagnosis in individual patients with Parkinsonian disorders, and identifies distinct patterns of regional atrophy particularly useful for this process.
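
    A multi-class, probabilistic pattern-recognition pipeline of the general kind described above can be sketched in a few lines. The code below is a generic scaffold with synthetic region-of-interest features and a probabilistic linear SVM; it is not the specific probabilistic method, atlas or data used in the study, and the feature dimension is invented (only the group sizes follow the abstract).

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import StratifiedKFold, cross_val_predict
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(3)
    X = rng.standard_normal((69, 120))               # 69 subjects x 120 ROI features (synthetic)
    y = np.repeat([0, 1, 2, 3], [17, 14, 19, 19])    # PSP, IPD, MSA, HC group sizes

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    pred = cross_val_predict(clf, X, y, cv=cv)       # out-of-fold predicted classes
    print(confusion_matrix(y, pred))                 # rows: true group, columns: predicted group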

  8. High accuracy of arterial spin labeling perfusion imaging in differentiation of pilomyxoid from pilocytic astrocytoma

    Energy Technology Data Exchange (ETDEWEB)

    Nabavizadeh, S.A.; Assadsangabi, R.; Hajmomenian, M.; Vossough, A. [Perelman School of Medicine of the University of Pennsylvania, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA (United States); Santi, M. [Perelman School of Medicine of the University of Pennsylvania, Department of Pathology, Children's Hospital of Philadelphia, Philadelphia, PA (United States)

    2015-05-01

    Pilomyxoid astrocytoma (PMA) is a relatively new tumor entity which has been added to the 2007 WHO Classification of tumors of the central nervous system. The goal of this study is to utilize arterial spin labeling (ASL) perfusion imaging to differentiate PMA from pilocytic astrocytoma (PA). Pulsed ASL and conventional MRI sequences of patients with PMA and PA in the past 5 years were retrospectively evaluated. Patients with history of radiation or treatment with anti-angiogenic drugs were excluded. A total of 24 patients (9 PMA, 15 PA) were included. There were statistically significant differences between PMA and PA in mean tumor/gray matter (GM) cerebral blood flow (CBF) ratios (1.3 vs 0.4, p < 0.001) and maximum tumor/GM CBF ratio (2.3 vs 1, p < 0.001). Area under the receiver operating characteristic (ROC) curves for differentiation of PMA from PA was 0.91 using mean tumor CBF, 0.95 using mean tumor/GM CBF ratios, and 0.89 using maximum tumor/GM CBF. Using a threshold value of 0.91, the mean tumor/GM CBF ratio was able to diagnose PMA with 77 % sensitivity, 100 % specificity, and a threshold value of 0.7, provided 88 % sensitivity and 86 % specificity. There was no statistically significant difference between the two tumors in enhancement pattern (p = 0.33), internal architecture (p = 0.15), or apparent diffusion coefficient (ADC) values (p = 0.07). ASL imaging has high accuracy in differentiating PMA from PA. The result of this study may have important applications in prognostication and treatment planning especially in patients with less accessible tumors such as hypothalamic-chiasmatic gliomas. (orig.)
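
    The operating points quoted above come from a standard ROC analysis of the CBF ratio. Below is a generic sketch of how such sensitivity/specificity pairs are read off a ROC curve; the ratio values and labels are invented (only the group sizes, 9 PMA and 15 PA, follow the abstract).

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Hypothetical mean tumor/GM CBF ratios; label 1 = PMA, 0 = PA.
    ratios = np.array([1.5, 1.2, 1.0, 1.4, 0.9, 1.1, 1.3, 0.95, 1.25,          # 9 PMA
                       0.4, 0.5, 0.3, 0.6, 0.45, 0.7, 0.35, 0.55, 0.5,
                       0.65, 0.4, 0.3, 0.6, 0.5, 0.45])                        # 15 PA
    labels = np.array([1] * 9 + [0] * 15)

    print("AUC = %.2f" % roc_auc_score(labels, ratios))
    fpr, tpr, thresholds = roc_curve(labels, ratios)
    for thr, se, fp in zip(thresholds[1:], tpr[1:], fpr[1:]):
        print(f"threshold {thr:.2f}: sensitivity {se:.2f}, specificity {1 - fp:.2f}")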

  9. Functional knowledge transfer for high-accuracy prediction of under-studied biological processes.

    Directory of Open Access Journals (Sweden)

    Christopher Y Park

    Full Text Available A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g. machine learning that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g. BLAST have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate as determined by genomic data and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-art machine learning algorithms leveraging functional knowledge transfer (FKT dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with state-of-the-art SVM classifier to predict novel genes to 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics

  10. Development and Performance Verification of Fiber Optic Temperature Sensors in High Temperature Engine Environments

    Science.gov (United States)

    Adamovsky, Grigory; Mackey, Jeffrey R.; Kren, Lawrence A.; Floyd, Bertram M.; Elam, Kristie A.; Martinez, Martel

    2014-01-01

    A High Temperature Fiber Optic Sensor (HTFOS) has been developed at NASA Glenn Research Center for aircraft engine applications. After fabrication and preliminary in-house performance evaluation, the HTFOS was tested in an engine environment at NASA Armstrong Flight Research Center. The engine tests enabled the performance of the HTFOS in real engine environments to be evaluated along with the ability of the sensor to respond to changes in the engine's operating condition. Data were collected prior, during, and after each test in order to observe the change in temperature from ambient to each of the various test point levels. An adequate amount of data was collected and analyzed to satisfy the research team that HTFOS operates properly while the engine was running. Temperature measurements made by HTFOS while the engine was running agreed with those anticipated.

  11. Verification of the high density after contrast enhancement in the 2nd week in cerebroischemic lesion

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, T; Kanno, T; Sano, H; Katada, Kazuhiro; Futimoto, K [Fujita Gakuen Univ., Toyoake, Aichi (Japan). School of Medicine

    1978-12-01

    To determine the indication, it is necessary to make clear the relation among the Stage (time and course), the Strength, the Pathogenesis, and the Effects of the operation in these diseases (SSPE relation). In this report, we focused on the High Density of CT after the contrast enhancement in the cases of ischemic lesions (the High Density was named ''Ribbon H. D.''). Seventeen cases of Ribbon H. D. in fresh infarctions were verified concerning the time of the appearance of the H. D., the features of its location and nature, and the histological findings. The results were as follows: The Ribbon H. D. appeared in the early stage of infarctions, and had its peak density at the end of the 2nd week after the onset. The Ribbon H. D. was mostly located along the cortical line, showing a ribbon-like band. The Ribbon H. D. did not appear in the sharply demarcated coagulation necrosis in the early stage or in the defined Low Density (L. D.) in the late stage of infarctions. Although the Ribbon H. D. shows the extravasation of contrast media, it does not necessarily show the existence of the hemorrhagic infarction. Some part of the Ribbon H. D. changes to a well-defined L. D. and the rest of the part becomes relative isodensity in the late stage. This change corresponds to the change in the incomplete necrosis which is afterwards divided into a resolution with a cystic cavity and the glial replacement in the late stage. In conclusion, it is possible to understand that the Ribbon H. D. corresponds to the lesion of an incomplete necrosis, with neovascularization, in the early stage of infarctions. Therefore, in addition to the present indication of a by-pass operation (TIA, RIND), this incomplete necrosis (Ribbon H. D.), its surrounding area and just before the appearance of the Ribbon H. D. might be another indication of the operation.

  12. High accuracy subwavelength distance measurements: A variable-angle standing-wave total-internal-reflection optical microscope

    International Nuclear Information System (INIS)

    Haynie, A.; Min, T.-J.; Luan, L.; Mu, W.; Ketterson, J. B.

    2009-01-01

    We describe an extension of the total-internal-reflection microscopy technique that permits direct in-plane distance measurements with high accuracy (<10 nm) over a wide range of separations. This high position accuracy arises from the creation of a standing evanescent wave and the ability to sweep the nodal positions (intensity minima of the standing wave) in a controlled manner via both the incident angle and the relative phase of the incoming laser beams. Some control over the vertical resolution is available through the ability to scan the incoming angle and with it the evanescent penetration depth.
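
    The in-plane length scale in such a measurement is the fringe period of the standing evanescent wave: for counter-propagating beams internally incident at angle theta in a medium of refractive index n1, the period is lambda / (2 n1 sin theta). The snippet below evaluates this relation for assumed, illustrative parameters that are not taken from the paper.

    import numpy as np

    def fringe_period(wavelength_nm, n1, theta_deg):
        """In-plane period of a standing evanescent wave formed by counter-propagating
        beams totally internally reflected at theta_deg in a medium of index n1."""
        return wavelength_nm / (2.0 * n1 * np.sin(np.radians(theta_deg)))

    # Illustrative numbers: 532 nm light in glass (n1 = 1.52), angles above the
    # glass/water critical angle (about 61 degrees).
    for theta in (62, 68, 75):
        print(f"theta = {theta} deg -> fringe period = {fringe_period(532, 1.52, theta):.0f} nm")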

  13. Verification of high voltage rf capacitive sheath models with particle-in-cell simulations

    Science.gov (United States)

    Wang, Ying; Lieberman, Michael; Verboncoeur, John

    2009-10-01

    Collisionless and collisional high voltage rf capacitive sheath models were developed in the late 1980's [1]. Given the external parameters of a single-frequency capacitively coupled discharge, plasma parameters including sheath width, electron and ion temperature, plasma density, power, and ion bombarding energy can be estimated. One-dimensional electrostatic PIC codes XPDP1 [2] and OOPD1 [3] are used to investigate plasma behaviors within rf sheaths and bulk plasma. Electron-neutral collisions only are considered for collisionless sheaths, while ion-neutral collisions are taken into account for collisional sheaths. The collisionless sheath model is verified very well by PIC simulations for the rf current-driven and voltage-driven cases. Results will be reported for collisional sheaths also. [1] M. A. Lieberman, IEEE Trans. Plasma Sci. 16 (1988) 638; 17 (1989) 338 [2] J. P. Verboncoeur, M. V. Alves, V. Vahedi, and C. K. Birdsall, J. Comp. Phys. 104 (1993) 321 [3] J. P. Verboncoeur, A. B. Langdon and N. T. Gladd, Comp. Phys. Comm. 87 (1995) 199

  14. Verification of High Resolution Soil Moisture and Latent Heat in Germany

    Science.gov (United States)

    Samaniego, L. E.; Warrach-Sagi, K.; Zink, M.; Wulfmeyer, V.

    2012-12-01

    Improving our understanding of soil-land-surface-atmosphere feedbacks is fundamental to make reliable predictions of water and energy fluxes on land systems influenced by anthropogenic activities. Estimating, for instance, which would be the likely consequences of changing climatic regimes on water availability and crop yield, requires of high resolution soil moisture. Modeling it at large-scales, however, is difficult and uncertain because of the interplay between state variables and fluxes and the significant parameter uncertainty of the predicting models. At larger scales, the sub-grid variability of the variables involved and the nonlinearity of the processes complicate the modeling exercise even further because parametrization schemes might be scale dependent. Two contrasting modeling paradigms (WRF/Noah-MP and mHM) were employed to quantify the effects of model and data complexity on soil moisture and latent heat over Germany. WRF/Noah-MP was forced ERA-interim on the boundaries of the rotated CORDEX-Grid (www.meteo.unican.es/wiki/cordexwrf) with a spatial resolution of 0.11o covering Europe during the period from 1989 to 2009. Land cover and soil texture were represented in WRF/Noah-MP with 1×1~km MODIS images and a single horizon, coarse resolution European-wide soil map with 16 soil texture classes, respectively. To ease comparison, the process-based hydrological model mHM was forced with daily precipitation and temperature fields generated by WRF during the same period. The spatial resolution of mHM was fixed at 4×4~km. The multiscale parameter regionalization technique (MPR, Samaniego et al. 2010) was embedded in mHM to be able to estimate effective model parameters using hyper-resolution input data (100×100~km) obtained from Corine land cover and detailed soil texture fields for various horizons comprising 72 soil texture classes for Germany, among other physiographical variables. mHM global parameters, in contrast with those of Noah-MP, were

  15. Verification of applicability of high bundle flush for an actual steam generator

    International Nuclear Information System (INIS)

    Karui, Masayuki; Nakamura, Takashi; Yamada, Masataka

    2003-01-01

    Sludge in Steam Generator (SG) should be removed as completely as possible because it can lead both to a corrosive environment and thermal degradation. Conventionally, water jet cleaning or lancing is carried out to remove sludge deposited on the upper Tube Support Plates (TSPs) and the Top of the Tubesheet (TTS). These cleaning methods are usually effective but they require much time and the cleaning equipment is complex. On the other hand, a High Bundle Flush (HBF) operation can remove soft sludge from the upper part of the SG secondary side more quickly and using a more simple device. A HBF is intended to wash off soft sludge that has accumulated on the upper TSPs. This sludge is flushed down to the TTS by the action of a large volume of water (6 tons/minute) introduced at the SG steam separators. Sludge Lancing performed following the HBF removes the sludge that has been flushed down to the TTS. Nuclear Engineering Ltd. carried out development of the HBF cleaning system for Japan's SGs. We also conducted theoretical safety analyses of the HBF and mock-up testing. These analyses and testing confirmed that the HBF had no adverse effects on any of the SG secondary side components. A HBF was carried out for the ''B'' SG at Ohi unit 1 of Kansai Electric Power Co, Inc. (KEPCO). Sludge removed from the ''B'' SG where the HBF was performed was three or four times the amount removed from the other SGs where only Sludge Lancing was performed. Furthermore, the HBF required only 4 days. The performance of the HBF system at Ohi unit 1 confirmed that a HBF could easily and safely removes much of sludge from the SG, even in a SG that has a small sludge inventory. In the future, application of a HBF in other SGs with larger sludge inventories is expected to remove much greater volumes of sludge. (author)

  16. High-precision abundances of elements in Kepler LEGACY stars. Verification of trends with stellar age

    Science.gov (United States)

    Nissen, P. E.; Silva Aguirre, V.; Christensen-Dalsgaard, J.; Collet, R.; Grundahl, F.; Slumstrup, D.

    2017-12-01

    Context. A previous study of solar twin stars has revealed the existence of correlations between some abundance ratios and stellar age providing new knowledge about nucleosynthesis and Galactic chemical evolution. Aims: High-precision abundances of elements are determined for stars with asteroseismic ages in order to test the solar twin relations. Methods: HARPS-N spectra with signal-to-noise ratios S/N ≳ 250 and MARCS model atmospheres were used to derive abundances of C, O, Na, Mg, Al, Si, Ca, Ti, Cr, Fe, Ni, Zn, and Y in ten stars from the Kepler LEGACY sample (including the binary pair 16 Cyg A and B) selected to have metallicities in the range - 0.15 LTE iron abundances derived from Fe I and Fe II lines. Available non-LTE corrections were also applied when deriving abundances of the other elements. Results: The abundances of the Kepler stars support the [X/Fe]-age relations previously found for solar twins. [Mg/Fe], [Al/Fe], and [Zn/Fe] decrease by 0.1 dex over the lifetime of the Galactic thin disk due to delayed contribution of iron from Type Ia supernovae relative to prompt production of Mg, Al, and Zn in Type II supernovae. [Y/Mg] and [Y/Al], on the other hand, increase by 0.3 dex, which can be explained by an increasing contribution of s-process elements from low-mass AGB stars as time goes on. The trends of [C/Fe] and [O/Fe] are more complicated due to variations of the ratio between refractory and volatile elements among stars of similar age. Two stars with about the same age as the Sun show very different trends of [X/H] as a function of elemental condensation temperature Tc and for 16 Cyg, the two components have an abundance difference, which increases with Tc. These anomalies may be connected to planet-star interactions. Based on spectra obtained with HARPS-N@TNG under programme A33TAC_1.Tables 1 and 2 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc

  17. Integration of PET-CT and cone-beam CT for image-guided radiotherapy with high image quality and registration accuracy

    Science.gov (United States)

    Wu, T.-H.; Liang, C.-H.; Wu, J.-K.; Lien, C.-Y.; Yang, B.-H.; Huang, Y.-H.; Lee, J. J. S.

    2009-07-01

    Hybrid positron emission tomography-computed tomography (PET-CT) system enhances better differentiation of tissue uptake of 18F-fluorodeoxyglucose (18F-FDG) and provides much more diagnostic value in the non-small-cell lung cancer and nasopharyngeal carcinoma (NPC). In PET-CT, high quality CT images not only offer diagnostic value on anatomic delineation of the tissues but also shorten the acquisition time for attenuation correction (AC) compared with PET-alone imaging. The linear accelerators equipped with the X-ray cone-beam computed tomography (CBCT) imaging system for image-guided radiotherapy (IGRT) provides excellent verification on position setup error. The purposes of our study were to optimize the CT acquisition protocols of PET-CT and to integrate the PET-CT and CBCT for IGRT. The CT imaging parameters were modified in PET-CT for increasing the image quality in order to enhance the diagnostic value on tumour delineation. Reproducibility and registration accuracy via bone co-registration algorithm between the PET-CT and CBCT were evaluated by using a head phantom to simulate a head and neck treatment condition. Dose measurement in computed tomography dose index (CTDI) was also estimated. Optimization of the CT acquisition protocols of PET-CT was feasible in this study. Co-registration accuracy between CBCT and PET-CT on axial and helical modes was in the range of 1.06 to 2.08 and 0.99 to 2.05 mm, respectively. In our result, it revealed that the accuracy of the co-registration with CBCT on helical mode was more accurate than that on axial mode. Radiation doses in CTDI were 4.76 to 18.5 mGy and 4.83 to 18.79 mGy on axial and helical modes, respectively. Registration between PET-CT and CBCT is a state-of-the-art registration technology which could provide much information on diagnosis and accurate tumour contouring on radiotherapy while implementing radiotherapy procedures. This novelty technology of PET-CT and cone-beam CT integration for IGRT may have a
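
    Rigid co-registration accuracy of the kind reported above is commonly summarized as the residual distance between corresponding points after a least-squares rigid fit. The sketch below applies the standard SVD-based (Kabsch) rigid alignment to made-up fiducial coordinates; it is a generic illustration, not the bone co-registration algorithm of the systems described here.

    import numpy as np

    def rigid_fit(P, Q):
        """Least-squares rotation R and translation t mapping points P onto Q."""
        Pc, Qc = P - P.mean(0), Q - Q.mean(0)
        U, _, Vt = np.linalg.svd(Pc.T @ Qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
        R = Vt.T @ D @ U.T
        return R, Q.mean(0) - R @ P.mean(0)

    # Hypothetical fiducial coordinates (mm) in the CBCT and PET-CT frames.
    rng = np.random.default_rng(5)
    cbct = rng.uniform(-80, 80, size=(6, 3))
    true_R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(true_R) < 0:
        true_R[:, 0] *= -1                            # keep it a proper rotation
    petct = cbct @ true_R.T + np.array([3.0, -2.0, 1.5]) + rng.normal(0, 0.8, (6, 3))

    R, t = rigid_fit(cbct, petct)
    residual = petct - (cbct @ R.T + t)
    print("mean registration error: %.2f mm" % np.linalg.norm(residual, axis=1).mean())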

  18. Integration of PET-CT and cone-beam CT for image-guided radiotherapy with high image quality and registration accuracy

    International Nuclear Information System (INIS)

    Wu, T-H; Liang, C-H; Wu, J-K; Lien, C-Y; Yang, B-H; Lee, J J S; Huang, Y-H

    2009-01-01

    Hybrid positron emission tomography-computed tomography (PET-CT) system enhances better differentiation of tissue uptake of 18F-fluorodeoxyglucose (18F-FDG) and provides much more diagnostic value in the non-small-cell lung cancer and nasopharyngeal carcinoma (NPC). In PET-CT, high quality CT images not only offer diagnostic value on anatomic delineation of the tissues but also shorten the acquisition time for attenuation correction (AC) compared with PET-alone imaging. The linear accelerators equipped with the X-ray cone-beam computed tomography (CBCT) imaging system for image-guided radiotherapy (IGRT) provides excellent verification on position setup error. The purposes of our study were to optimize the CT acquisition protocols of PET-CT and to integrate the PET-CT and CBCT for IGRT. The CT imaging parameters were modified in PET-CT for increasing the image quality in order to enhance the diagnostic value on tumour delineation. Reproducibility and registration accuracy via bone co-registration algorithm between the PET-CT and CBCT were evaluated by using a head phantom to simulate a head and neck treatment condition. Dose measurement in computed tomography dose index (CTDI) was also estimated. Optimization of the CT acquisition protocols of PET-CT was feasible in this study. Co-registration accuracy between CBCT and PET-CT on axial and helical modes was in the range of 1.06 to 2.08 and 0.99 to 2.05 mm, respectively. In our result, it revealed that the accuracy of the co-registration with CBCT on helical mode was more accurate than that on axial mode. Radiation doses in CTDI were 4.76 to 18.5 mGy and 4.83 to 18.79 mGy on axial and helical modes, respectively. Registration between PET-CT and CBCT is a state-of-the-art registration technology which could provide much information on diagnosis and accurate tumour contouring on radiotherapy while implementing radiotherapy procedures. This novelty technology of PET-CT and cone-beam CT integration for IGRT may have a

  19. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    Science.gov (United States)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    The Solar Dynamics Observatory (SDO) is a NASA spacecraft designed to study the Sun. It was launched on February 11, 2010 into a geosynchronous orbit, and uses a suite of attitude sensors and actuators to finely point the spacecraft at the Sun. SDO has three science instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). SDO uses two High Gain Antennas (HGAs) to send science data to a dedicated ground station in White Sands, New Mexico. In order to meet the science data capture budget, the HGAs must be able to transmit data to the ground for a very large percentage of the time. Each HGA is a dual-axis antenna driven by stepper motors. Both antennas transmit data at all times, but only a single antenna is required in order to meet the transmission rate requirement. For portions of the year, one antenna or the other has an unobstructed view of the White Sands ground station. During other periods, however, the view from both antennas to the Earth is blocked for different portions of the day. During these times of blockage, the two HGAs take turns pointing to White Sands, with the other antenna pointing out to space. The HGAs handover White Sands transmission responsibilities to the unblocked antenna. There are two handover seasons per year, each lasting about 72 days, where the antennas hand off control every twelve hours. The non-tracking antenna slews back to the ground station by following a ground commanded trajectory and arrives approximately 5 minutes before the formerly tracking antenna slews away to point out into space. The SDO Attitude Control System (ACS) runs at 5 Hz, and the HGA Gimbal Control Electronics (GCE) run at 200 Hz. There are 40 opportunities for the gimbals to step each ACS cycle, with a hardware limitation of no more than one step every three GCE cycles. The ACS calculates the desired gimbal motion for tracking the ground station or for slewing
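
    The stepping budget implied by those rates is easy to check: 200 Hz / 5 Hz gives the 40 step opportunities per ACS cycle quoted above, and the one-step-per-three-GCE-cycles limit caps the sustained rate at roughly 13 steps per ACS cycle (about 66.7 steps per second). A short check:

    acs_hz, gce_hz = 5, 200
    opportunities = gce_hz // acs_hz                  # 40 GCE cycles per ACS cycle
    sustained_limit = gce_hz / 3 / acs_hz             # about 13.3 steps per ACS cycle
    print(opportunities, round(sustained_limit, 1))   # 40 13.3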

  20. Analysis of the plasmodium falciparum proteome by high-accuracy mass spectrometry

    DEFF Research Database (Denmark)

    Lasonder, Edwin; Ishihama, Yasushi; Andersen, Jens S

    2002-01-01

    -accuracy (average deviation less than 0.02 Da at 1,000 Da) mass spectrometric proteome analysis of selected stages of the human malaria parasite Plasmodium falciparum. The analysis revealed 1,289 proteins of which 714 proteins were identified in asexual blood stages, 931 in gametocytes and 645 in gametes. The last...

  1. High-accuracy interferometric measurements of flatness and parallelism of a step gauge

    CSIR Research Space (South Africa)

    Kruger, OA

    2001-01-01

    Full Text Available The most commonly used method in the calibration of step gauges is the coordinate measuring machine (CMM), equipped with a laser interferometer for the highest accuracy. This paper describes a modification to a length-bar measuring machine...

  2. Design exploration and verification platform, based on high-level modeling and FPGA prototyping, for fast and flexible digital communication in physics experiments

    International Nuclear Information System (INIS)

    Magazzù, G; Borgese, G; Costantino, N; Fanucci, L; Saponara, S; Incandela, J

    2013-01-01

    In many research fields as high energy physics (HEP), astrophysics, nuclear medicine or space engineering with harsh operating conditions, the use of fast and flexible digital communication protocols is becoming more and more important. The possibility to have a smart and tested top-down design flow for the design of a new protocol for control/readout of front-end electronics is very useful. To this aim, and to reduce development time, costs and risks, this paper describes an innovative design/verification flow applied as example case study to a new communication protocol called FF-LYNX. After the description of the main FF-LYNX features, the paper presents: the definition of a parametric SystemC-based Integrated Simulation Environment (ISE) for high-level protocol definition and validation; the set up of figure of merits to drive the design space exploration; the use of ISE for early analysis of the achievable performances when adopting the new communication protocol and its interfaces for a new (or upgraded) physics experiment; the design of VHDL IP cores for the TX and RX protocol interfaces; their implementation on a FPGA-based emulator for functional verification and finally the modification of the FPGA-based emulator for testing the ASIC chipset which implements the rad-tolerant protocol interfaces. For every step, significant results will be shown to underline the usefulness of this design and verification approach that can be applied to any new digital protocol development for smart detectors in physics experiments.

  3. Design exploration and verification platform, based on high-level modeling and FPGA prototyping, for fast and flexible digital communication in physics experiments

    Science.gov (United States)

    Magazzù, G.; Borgese, G.; Costantino, N.; Fanucci, L.; Incandela, J.; Saponara, S.

    2013-02-01

    In many research fields as high energy physics (HEP), astrophysics, nuclear medicine or space engineering with harsh operating conditions, the use of fast and flexible digital communication protocols is becoming more and more important. The possibility to have a smart and tested top-down design flow for the design of a new protocol for control/readout of front-end electronics is very useful. To this aim, and to reduce development time, costs and risks, this paper describes an innovative design/verification flow applied as example case study to a new communication protocol called FF-LYNX. After the description of the main FF-LYNX features, the paper presents: the definition of a parametric SystemC-based Integrated Simulation Environment (ISE) for high-level protocol definition and validation; the set up of figure of merits to drive the design space exploration; the use of ISE for early analysis of the achievable performances when adopting the new communication protocol and its interfaces for a new (or upgraded) physics experiment; the design of VHDL IP cores for the TX and RX protocol interfaces; their implementation on a FPGA-based emulator for functional verification and finally the modification of the FPGA-based emulator for testing the ASIC chipset which implements the rad-tolerant protocol interfaces. For every step, significant results will be shown to underline the usefulness of this design and verification approach that can be applied to any new digital protocol development for smart detectors in physics experiments.

  4. [Accuracy of placenta accreta prenatal diagnosis by ultrasound and MRI in a high-risk population].

    Science.gov (United States)

    Daney de Marcillac, F; Molière, S; Pinton, A; Weingertner, A-S; Fritz, G; Viville, B; Roedlich, M-N; Gaudineau, A; Sananes, N; Favre, R; Nisand, I; Langer, B

    2016-02-01

    Main objective was to compare accuracy of ultrasonography and MRI for antenatal diagnosis of placenta accreta. Secondary objectives were to specify the most common sonographic and MRI signs associated with diagnosis of placenta accreta. This retrospective study used data collected from all potential cases of placenta accreta (patients with an anterior placenta praevia with history of scarred uterus) admitted from 01/2010 to 12/2014 in a level III maternity unit in Strasbourg, France. High-risk patients underwent antenatal ultrasonography and MRI. Sonographic signs registered were: abnormal placental lacunae, increased vascularity on color Doppler, absence of the retroplacental clear space, interrupted bladder line. MRI signs registered were: abnormal uterine bulging, intraplacental bands of low signal intensity on T2-weighted images, increased vascularity, heterogeneous signal of the placenta on T2-weighted images, interrupted bladder line, protrusion of the placenta into the cervix. Diagnosis of placenta accreta was confirmed histologically after hysterectomy or clinically in case of successful conservative treatment. Twenty-two potential cases of placenta accreta were referred to our center and underwent both ultrasonography and MRI. All cases of placenta accreta had a placenta praevia associated with history of scarred uterus. Sensitivity and specificity were 0.92 and 0.67, respectively, for ultrasonography, and 0.84 and 0.78 for MRI, without significant difference (p>0.05). The most relevant signs associated with diagnosis of placenta accreta in ultrasonography were increased vascularity on color Doppler (sensitivity 0.85/specificity 0.78), abnormal placental lacunae (sensitivity 0.92/specificity 0.55) and loss of retroplacental clear space (sensitivity 0.76/specificity 1.0). The most relevant signs in MRI were: abnormal uterine bulging (sensitivity 0.92/specificity 0.89), dark intraplacental bands on T2-weighted images (sensitivity 0.83/specificity 0.80) or

  5. Swarm Verification

    Science.gov (United States)

    Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex

    2008-01-01

    Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "search the needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide and conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.
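
    The idea — many independently diversified searches over the same state space, rather than one carefully partitioned search — can be shown with a toy sketch. Below, several worker processes run differently seeded randomized depth-first searches over an artificial transition system and report if any of them reaches the designated error state within its step budget. This is only a schematic illustration of the swarm idea, not the SPIN-based tooling the paper describes; the transition functions, state count and budgets are arbitrary.

    import random
    from concurrent.futures import ProcessPoolExecutor, as_completed

    N_STATES = 100_000
    ERROR_STATE = 99_991            # the "needle" the swarm is looking for

    def successors(s):
        """Toy transition relation over integer states."""
        return [(3 * s + 1) % N_STATES, (7 * s + 5) % N_STATES, (s * s + 11) % N_STATES]

    def worker(seed, max_steps=50_000):
        """Randomized DFS from state 0; the search order differs per seed."""
        rng = random.Random(seed)
        stack, seen, steps = [0], {0}, 0
        while stack and steps < max_steps:
            s = stack.pop()
            steps += 1
            if s == ERROR_STATE:
                return seed, steps                 # counterexample found
            succ = successors(s)
            rng.shuffle(succ)                      # diversification between swarm members
            for t in succ:
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return None

    if __name__ == "__main__":
        found = False
        with ProcessPoolExecutor(max_workers=8) as pool:
            futures = [pool.submit(worker, seed) for seed in range(8)]
            for fut in as_completed(futures):
                result = fut.result()
                if result is not None and not found:
                    found = True
                    print("worker %d reached the error state after %d expansions" % result)
        if not found:
            print("no counterexample found within the per-worker step budgets")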

  6. Improved verification methods for safeguards verifications at enrichment plants

    International Nuclear Information System (INIS)

    Lebrun, A.; Kane, S. C.; Bourva, L.; Poirier, S.; Loghin, N. E.; Langlands, D.

    2009-01-01

    The International Atomic Energy Agency (IAEA) has initiated a coordinated research and development programme to improve its verification methods and equipment applicable to enrichment plants. The programme entails several individual projects to meet the objectives of the IAEA Safeguards Model Approach for Gas Centrifuge Enrichment Plants updated in 2006. Upgrades of verification methods to confirm the absence of HEU (highly enriched uranium) production have been initiated and, in particular, the Cascade Header Enrichment Monitor (CHEM) has been redesigned to reduce its weight and incorporate an electrically cooled germanium detector. Such detectors are also introduced to improve the attended verification of UF 6 cylinders for the verification of the material balance. Data sharing of authenticated operator weighing systems such as accountancy scales and process load cells is also investigated as a cost efficient and an effective safeguards measure combined with unannounced inspections, surveillance and non-destructive assay (NDA) measurement. (authors)

  7. Improved verification methods for safeguards verifications at enrichment plants

    Energy Technology Data Exchange (ETDEWEB)

    Lebrun, A.; Kane, S. C.; Bourva, L.; Poirier, S.; Loghin, N. E.; Langlands, D. [Department of Safeguards, International Atomic Energy Agency, Wagramer Strasse 5, A1400 Vienna (Austria)

    2009-07-01

    The International Atomic Energy Agency (IAEA) has initiated a coordinated research and development programme to improve its verification methods and equipment applicable to enrichment plants. The programme entails several individual projects to meet the objectives of the IAEA Safeguards Model Approach for Gas Centrifuge Enrichment Plants updated in 2006. Upgrades of verification methods to confirm the absence of HEU (highly enriched uranium) production have been initiated and, in particular, the Cascade Header Enrichment Monitor (CHEM) has been redesigned to reduce its weight and incorporate an electrically cooled germanium detector. Such detectors are also introduced to improve the attended verification of UF{sub 6} cylinders for the verification of the material balance. Data sharing of authenticated operator weighing systems such as accountancy scales and process load cells is also investigated as a cost efficient and an effective safeguards measure combined with unannounced inspections, surveillance and non-destructive assay (NDA) measurement. (authors)

  8. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    International Nuclear Information System (INIS)

    Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo

    2014-01-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. The accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage, for CRM involvement prediction and for N staging. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement
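
    Each of the per-stage figures above derives from a 2 x 2 table of MRI stage against histopathology. The helper below turns such a table into accuracy, sensitivity, specificity, PPV and NPV; the cell counts used in the example are not reported in the abstract but are chosen to be consistent with the T3 row quoted above (37 histological T3 and 36 non-T3 among 73 patients).

    def binary_metrics(tp, fp, fn, tn):
        """Accuracy, sensitivity, specificity, PPV and NPV from a 2 x 2 table."""
        total = tp + fp + fn + tn
        return {
            "accuracy": (tp + tn) / total,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Counts consistent with the reported T3 results (illustrative reconstruction).
    for name, value in binary_metrics(tp=35, fp=5, fn=2, tn=31).items():
        print(f"{name}: {value:.1%}")   # 90.4%, 94.6%, 86.1%, 87.5%, 93.9%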

  9. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)

    2014-07-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. The accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for CRM involvement prediction, for N staging and for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3%, respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.

  10. Ontology Matching with Semantic Verification.

    Science.gov (United States)

    Jean-Mary, Yves R; Shironoshita, E Patrick; Kabuka, Mansur R

    2009-09-01

    ASMOV (Automated Semantic Matching of Ontologies with Verification) is a novel algorithm that uses lexical and structural characteristics of two ontologies to iteratively calculate a similarity measure between them, derives an alignment, and then verifies it to ensure that it does not contain semantic inconsistencies. In this paper, we describe the ASMOV algorithm, and then present experimental results that measure its accuracy using the OAEI 2008 tests, and that evaluate its use with two different thesauri: WordNet, and the Unified Medical Language System (UMLS). These results show the increased accuracy obtained by combining lexical, structural and extensional matchers with semantic verification, and demonstrate the advantage of using a domain-specific thesaurus for the alignment of specialized ontologies.

  11. SFOL Pulse: A High Accuracy DME Pulse for Alternative Aircraft Position and Navigation

    Directory of Open Access Journals (Sweden)

    Euiho Kim

    2017-09-01

    Full Text Available In the Federal Aviation Administration’s (FAA) performance-based navigation strategy announced in 2016, the FAA stated that it would retain and expand the Distance Measuring Equipment (DME) infrastructure to ensure resilient aircraft navigation capability in the event of a Global Navigation Satellite System (GNSS) outage. However, the main drawback of the DME as a GNSS backup system is that it requires a significant expansion of the current DME ground infrastructure due to its poor distance measuring accuracy of over 100 m. The paper introduces a method to improve DME distance measuring accuracy by using a new DME pulse shape. The proposed pulse shape was developed by using Genetic Algorithms and is less susceptible to multipath effects, so that the ranging error is reduced by 36.0–77.3% when compared to the Gaussian and Smoothed Concave Polygon DME pulses, depending on the noise environment.

  12. Automatic J–A Model Parameter Tuning Algorithm for High Accuracy Inrush Current Simulation

    Directory of Open Access Journals (Sweden)

    Xishan Wen

    2017-04-01

    Full Text Available Inrush current simulation plays an important role in many tasks of the power system, such as power transformer protection. However, the accuracy of the inrush current simulation can hardly be ensured. In this paper, a Jiles–Atherton (J–A theory based model is proposed to simulate the inrush current of power transformers. The characteristics of the inrush current curve are analyzed and results show that the entire inrush current curve can be well featured by the crest value of the first two cycles. With comprehensive consideration of both of the features of the inrush current curve and the J–A parameters, an automatic J–A parameter estimation algorithm is proposed. The proposed algorithm can obtain more reasonable J–A parameters, which improve the accuracy of simulation. Experimental results have verified the efficiency of the proposed algorithm.

  13. [Method for evaluating the positional accuracy of a six-degrees-of-freedom radiotherapy couch using high definition digital cameras].

    Science.gov (United States)

    Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori

    2011-01-01

    In this study, we proposed and evaluated a positional accuracy assessment method with two high-resolution digital cameras for add-on six-degrees-of-freedom radiotherapy (6D) couches. Two high resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle that was fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation of each axis. The coordinates of the needle in the pictures were obtained using manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.

  14. Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications

    OpenAIRE

    Van-Tang PHAM; Dinh-Chinh NGUYEN; Quang-Huy TRAN; Duc-Trinh CHU; Duc-Tan TRAN

    2015-01-01

    Using magnetic compass sensors in angle measurement has a wide range of applications, such as positioning, robotics, landslide monitoring, etc. However, one of the most significant phenomena affecting the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stability schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor to adapt to the variation of the temperature. The se...

  15. A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation

    Directory of Open Access Journals (Sweden)

    Jinsong Hu

    2013-01-01

    Full Text Available We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has a theoretical accuracy of O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.
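    A theoretical rate such as O(τ² + h⁴) is usually checked numerically by comparing errors on two grids and estimating the observed order. A small Python sketch (the error values are assumed, with τ scaled as h² so that both error terms shrink at the fourth-order rate):

        import math

        def observed_order(err_coarse, err_fine, refinement=2.0):
            """Richardson-style estimate: p = log(e_coarse / e_fine) / log(refinement)."""
            return math.log(err_coarse / err_fine) / math.log(refinement)

        # Hypothetical maximum-norm errors when h is halved (and tau = h**2)
        print(observed_order(3.2e-5, 2.0e-6))   # -> 4.0, consistent with O(h**4)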

  16. New perspectives for high accuracy SLR with second generation geodesic satellites

    Science.gov (United States)

    Lund, Glenn

    1993-01-01

    This paper reports on the accuracy limitations imposed by geodesic satellite signatures, and on the potential for achieving millimetric performances by means of alternative satellite concepts and an optimized 2-color system tradeoff. Long distance laser ranging, when performed between a ground (emitter/receiver) station and a distant geodesic satellite, is now reputed to enable short arc trajectory determinations to be achieved with an accuracy of 1 to 2 centimeters. This state-of-the-art accuracy is limited principally by the uncertainties inherent to single-color atmospheric path length correction. Motivated by the study of phenomena such as postglacial rebound, and the detailed analysis of small-scale volcanic and strain deformations, the drive towards millimetric accuracies will inevitably be felt. With the advent of short pulse (less than 50 ps) dual wavelength ranging, combined with adequate detection equipment (such as a fast-scanning streak camera or ultra-fast solid-state detectors) the atmospheric uncertainty could potentially be reduced to the level of a few millimeters, thus, exposing other less significant error contributions, of which by far the most significant will then be the morphology of the retroreflector satellites themselves. Existing geodesic satellites are simply dense spheres, several 10's of cm in diameter, encrusted with a large number (426 in the case of LAGEOS) of small cube-corner reflectors. A single incident pulse, thus, results in a significant number of randomly phased, quasi-simultaneous return pulses. These combine coherently at the receiver to produce a convolved interference waveform which cannot, on a shot to shot basis, be accurately and unambiguously correlated to the satellite center of mass. This paper proposes alternative geodesic satellite concepts, based on the use of a very small number of cube-corner retroreflectors, in which the above difficulties are eliminated while ensuring, for a given emitted pulse, the return

  17. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
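    The multiplication-by-digital-convolution idea mentioned above can be illustrated in ordinary software: encode each operand as a vector of digits, convolve the two vectors, and propagate carries. A short Python sketch (base 10 is used here purely for readability; it is not meant to mirror the optical architecture itself):

        import numpy as np

        def digits(n, base=10):
            out = []
            while n:
                out.append(n % base)
                n //= base
            return out or [0]                            # least-significant digit first

        def conv_multiply(a, b, base=10):
            raw = np.convolve(digits(a, base), digits(b, base))
            value, carry = 0, 0
            for power, d in enumerate(raw):
                s = int(d) + carry
                value += (s % base) * base ** power      # keep one digit per position
                carry = s // base                        # push the rest to the next digit
            return value + carry * base ** len(raw)

        assert conv_multiply(1234, 5678) == 1234 * 5678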

  18. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Science.gov (United States)

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

    Human activity recognition (HAR) from the temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient in some applications, especially for healthcare services. In order to improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  19. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Directory of Open Access Journals (Sweden)

    Xiangbin Zhu

    Full Text Available Human activity recognition (HAR) from the temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient in some applications, especially for healthcare services. In order to improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.

  20. High accuracy microwave frequency measurement based on single-drive dual-parallel Mach-Zehnder modulator

    DEFF Research Database (Denmark)

    Zhao, Ying; Pang, Xiaodan; Deng, Lei

    2011-01-01

    A novel approach for broadband microwave frequency measurement by employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, a conventional frequency-to-power mapping technique is developed by performing a... 10⁻³ relative error. This high-accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications....

  1. THE EFFECT OF MODERATE AND HIGH-INTENSITY FATIGUE ON GROUNDSTROKE ACCURACY IN EXPERT AND NON-EXPERT TENNIS PLAYERS

    Directory of Open Access Journals (Sweden)

    Mark Lyons

    2013-06-01

    Full Text Available Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player's achievement motivation characteristics. Thirteen expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue-by-expertise or fatigue-by-gender interactions were found. Fatigue effects were also equivalent regardless of player's achievement goal indicators. Future research is required to explore the effects of fatigue on

  2. High accuracy prediction of beta-turns and their types using propensities and multiple alignments.

    Science.gov (United States)

    Fuchs, Patrick F J; Alix, Alain J P

    2005-06-01

    We have developed a method that predicts both the presence and the type of beta-turns, using a straightforward approach based on propensities and multiple alignments. The propensities were calculated classically, but the way to use them for prediction was completely new: starting from a tetrapeptide sequence on which one wants to evaluate the presence of a beta-turn, the propensity for a given residue is modified by taking into account all the residues present in the multiple alignment at this position. The evaluation of a score is then done by weighting these propensities by the use of position-specific scoring matrices generated by PSI-BLAST. The introduction of secondary structure information predicted by PSIPRED or SSPRO2, as well as taking into account the flanking residues around the tetrapeptide, improved the accuracy greatly. The latter, evaluated on a database of 426 reference proteins (previously used in other studies) by sevenfold cross-validation, gave very good results with a Matthews Correlation Coefficient (MCC) of 0.42 and an overall prediction accuracy of 74.8%; this places our method among the best ones. A jackknife test was also done, which gave results within the same range. This shows that it is possible to reach neural-network accuracy with considerably less computational cost and complexity. Furthermore, propensities remain excellent descriptors of amino acid tendencies to belong to beta-turns, which can be useful for peptide or protein engineering and design. For beta-turn type prediction, we reached the best accuracy ever published in terms of MCC (except for the irregular type IV) in the range of 0.25-0.30 for types I, II, and I' and 0.13-0.15 for types VIII, II', and IV. To our knowledge, our method is the only one available on the Web that predicts types I' and II'. The accuracy evaluated on two larger databases of 547 and 823 proteins was not improved significantly. All of this was implemented into a Web server called COUDES (French acronym
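    As a simplified stand-in for the scheme described above, the sketch below scores a four-residue window by combining per-residue turn propensities with the residue frequencies of the corresponding alignment columns; the propensity values and the example profile are illustrative, not the COUDES parameters.

        import math

        TURN_PROPENSITY = {"G": 1.56, "P": 1.52, "N": 1.46, "D": 1.46,
                           "S": 1.43, "A": 0.66, "L": 0.59, "V": 0.50}

        def window_score(profile):
            """profile: one dict per tetrapeptide position, mapping residue -> frequency
            observed in the multiple-alignment column at that position."""
            score = 0.0
            for column in profile:
                expected = sum(freq * TURN_PROPENSITY.get(res, 1.0)
                               for res, freq in column.items())
                score += math.log(expected)      # log keeps the score additive
            return score

        example_profile = [{"G": 0.7, "A": 0.3}, {"P": 0.6, "S": 0.4},
                           {"N": 0.5, "D": 0.5}, {"G": 0.8, "V": 0.2}]
        print(window_score(example_profile))     # higher score -> more turn-like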

  3. Fault-specific verification (FSV) - An alternative VV&T strategy for high reliability nuclear software systems

    International Nuclear Information System (INIS)

    Miller, L.A.

    1994-01-01

    The author puts forth an argument that digital instrumentation and control systems can be safely applied in the nuclear industry, but it will require changes to the way software for such systems is developed and tested. He argues for a fault-specific verification procedure to be applied to software development. This plan includes enumerating and classifying all software faults at all levels of the product development, over the whole development process. While collecting this data, develop and validate different methods for software verification, validation and testing, and apply them against all the detected faults. Force all of this development toward an automated product for doing this testing. Continue to develop, expand, test, and share these testing methods across a wide array of software products

  4. MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices

    OpenAIRE

    Chen, Sheng; Liu, Yang; Gao, Xiang; Han, Zhen

    2018-01-01

    In this paper, we proposed a class of extremely efficient CNN models, MobileFaceNets, which use less than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices. We first make a simple analysis on the weakness of common mobile networks for face verification. The weakness has been well overcome by our specifically designed MobileFaceNets. Under the same experimental conditions, our MobileFaceNets achieve significantly sup...

  5. Accuracy of applicator tip reconstruction in MRI-guided interstitial 192Ir-high-dose-rate brachytherapy of liver tumors

    International Nuclear Information System (INIS)

    Wybranski, Christian; Eberhardt, Benjamin; Fischbach, Katharina; Fischbach, Frank; Walke, Mathias; Hass, Peter; Röhl, Friedrich-Wilhelm; Kosiek, Ortrud; Kaiser, Mandy; Pech, Maciej; Lüdemann, Lutz; Ricke, Jens

    2015-01-01

    Background and purpose: To evaluate the reconstruction accuracy of brachytherapy (BT) applicator tips in vitro and in vivo in MRI-guided 192Ir-high-dose-rate (HDR)-BT of inoperable liver tumors. Materials and methods: Reconstruction accuracy of plastic BT applicators, visualized by nitinol inserts, was assessed in MRI phantom measurements and in MRI 192Ir-HDR-BT treatment planning datasets of 45 patients employing CT co-registration and vector decomposition. Conspicuity, short-term dislocation, and reconstruction errors were assessed in the clinical data. The clinical effect of applicator reconstruction accuracy was determined in follow-up MRI data. Results: Applicator reconstruction accuracy was 1.6 ± 0.5 mm in the phantom measurements. In the clinical MRI datasets applicator conspicuity was rated good/optimal in ⩾72% of cases. 16/129 applicators showed deviation between MRI and CT acquisition that was not time dependent (p > 0.1). Reconstruction accuracy was 5.5 ± 2.8 mm, and the average image co-registration error was 3.1 ± 0.9 mm. Vector decomposition revealed no preferred direction of reconstruction errors. In the follow-up data the deviation of the planned dose distribution and the irradiation effect was 6.9 ± 3.3 mm, matching the mean co-registration error (6.5 ± 2.5 mm; p > 0.1). Conclusion: Applicator reconstruction accuracy in vitro conforms to the AAPM TG 56 standard. Nitinol inserts are feasible for applicator visualization and yield good conspicuity in MRI treatment planning data. No preferred direction of reconstruction errors was found in vivo

  6. Horizontal Positional Accuracy of Google Earth’s High-Resolution Imagery Archive

    Directory of Open Access Journals (Sweden)

    David Potere

    2008-12-01

    Full Text Available Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth’s landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate than the 48 control points based on aerial photography (41.3 meters RMSE; t-test p-value < 0.01). The accuracy of control points in more-developed countries is 24.1 meters RMSE, which is significantly more accurate than the control points in developing countries (44.4 meters RMSE; t-test p-value < 0.01). These findings indicate that Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world’s peri-urban areas.
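    The headline statistic here is a root-mean-squared error over point offsets; a minimal Python sketch (the coordinates below are invented, not the study's control points):

        import numpy as np

        def horizontal_rmse(reference_xy, test_xy):
            """RMSE of the horizontal offsets between matched control points (metres)."""
            d = np.asarray(test_xy, float) - np.asarray(reference_xy, float)
            return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

        ref  = [(0.0, 0.0), (100.0, 50.0), (250.0, 80.0)]
        test = [(12.0, 9.0), (130.0, 27.0), (260.0, 95.0)]
        print(f"RMSE = {horizontal_rmse(ref, test):.1f} m")   # ~25.7 m for these points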

  7. High accuracy mapping with cartographic assessment for a fixed-wing remotely piloted aircraft system

    Science.gov (United States)

    Alves Júnior, Leomar Rufino; Ferreira, Manuel Eduardo; Côrtes, João Batista Ramos; de Castro Jorge, Lúcio André

    2018-01-01

    The lack of updated maps on large scale representations has encouraged the use of remotely piloted aircraft systems (RPAS) to generate maps for a wide range of professionals. However, some questions arise: do the orthomosaics generated by these systems have the cartographic precision required to use them? Which problems can be identified in stitching orthophotos to generate orthomosaics? To answer these questions, an aerophotogrammetric survey was conducted in an environmental conservation unit in the city of Goiânia. The flight plan was set up using the E-motion software, provided by Sensefly, a Swiss manufacturer of the RPAS Swinglet CAM used in this work. The camera installed in the RPAS was the Canon IXUS 220 HS, a 12.1-megapixel complementary metal oxide semiconductor (CMOS) camera with a 1/2.3″ sensor (4000 × 3000 pixels) and horizontal and vertical pixel sizes of 1.54 μm. Using the orthophotos, four orthomosaics were generated in the Pix4D mapper software. The first orthomosaic was generated without using the control points. The other three mosaics were generated using 4, 8, and 16 premarked ground control points. To check the precision and accuracy of the orthomosaics, 46 premarked targets were uniformly distributed in the block. The three-dimensional (3-D) coordinates of the premarked targets were read on the orthomosaic and compared with the coordinates obtained by the geodetic survey real-time kinematic positioning method using the global navigation satellite system receiver signals. The cartographic accuracy standard was evaluated by discrepancies between these coordinates. The bias was analyzed by the Student's t test and the accuracy by the chi-square probability considering the orthomosaic on a scale of 1 ∶ 250, in which 90% of the points tested must have a planimetric error of control points the scale was 10-fold smaller (1 ∶ 3000).

  8. High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller

    Science.gov (United States)

    Li, Yaoling; Wu, Zhong

    2018-03-01

    The problem of resolver-to-digital conversion (RDC) is transformed into the problem of angle tracking control, and a phase locked loop (PLL) method based on PID controller is proposed in this paper. This controller comprises a typical PI controller plus an incomplete differential which can avoid the amplification of higher-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional ones, the proposed PLL method makes the converter a system of type III and thus the conversion accuracy can be improved. Experimental results demonstrate the effectiveness of the proposed method.
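    A minimal discrete-time sketch of such an angle-tracking loop is shown below: the phase-detector error sin(θ − estimate) feeds a PI term plus a low-pass-filtered ("incomplete") derivative, and the controller output is integrated to update the estimate. Gains, sample rate and the test input are illustrative assumptions, not the values of the paper.

        import math

        def track(theta_samples, dt=1e-4, kp=300.0, ki=4.0e4, kd=0.05, tau_f=5e-4):
            est, integ, d_state, prev_e = 0.0, 0.0, 0.0, 0.0
            alpha = dt / (tau_f + dt)                   # first-order low-pass coefficient
            for theta in theta_samples:
                e = math.sin(theta - est)               # phase-detector output
                integ += ki * e * dt
                d_raw = (e - prev_e) / dt
                d_state += alpha * (d_raw - d_state)    # "incomplete" (filtered) derivative
                prev_e = e
                est += (kp * e + integ + kd * d_state) * dt
                yield est

        dt = 1e-4
        angles = [2 * math.pi * 1.0 * n * dt for n in range(5000)]   # 1 Hz rotation, 0.5 s
        estimate = list(track(angles, dt))[-1]
        print(f"final tracking error: {angles[-1] - estimate:.6f} rad")   # ~0 once locked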

  9. KLEIN: Coulomb functions for real lambda and positive energy to high accuracy

    International Nuclear Information System (INIS)

    Barnett, A.R.

    1981-01-01

    KLEIN computes relativistic Schroedinger (Klein-Gordon) equation solutions, i.e. Coulomb functions for real λ > −1, F_λ(η,x), G_λ(η,x), F′_λ(η,x) and G′_λ(η,x), for real κ > 0 and real η in the range −10⁴ to 10⁴. Hence it is also suitable for Bessel and spherical Bessel functions. Accuracies are in the range 10⁻¹⁴-10⁻¹⁶ in the oscillating region, and approximately 10⁻³⁰ on an extended-precision compiler. The program is suitable for generating Klein-Gordon wavefunctions for matching in pion and kaon physics. (orig.)
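    The η = 0 limit mentioned above (where the Coulomb functions reduce to Riccati-Bessel form, F_λ(0,x) = x·j_λ(x)) gives a convenient independent cross-check. The Python sketch below assumes mpmath's coulombf and SciPy's spherical Bessel routine are available; it is not part of the KLEIN code.

        import mpmath
        from scipy.special import spherical_jn

        for l in (0, 1, 2):
            for x in (0.5, 5.0, 20.0):
                f = float(mpmath.coulombf(l, 0, x))      # F_l(eta=0, x)
                ref = x * spherical_jn(l, x)             # Riccati-Bessel x * j_l(x)
                assert abs(f - ref) < 1e-10, (l, x, f, ref)
        print("F_l(0, x) = x * j_l(x) confirmed for l = 0, 1, 2")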

  10. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    Science.gov (United States)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

    In order to improve depth extraction accuracy, a method using moving array lenslet technique (MALT) in pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously for N times in a pitch to get N sets of elemental images. Computational integral imaging reconstruction method for MALT is taken to obtain the slice images of the 3D scene, and the sum modulus (SMD) blur metric is taken on these slice images to achieve the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
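    One common form of the sum-modulus-difference (SMD) focus measure, applied to the reconstructed slice images to pick the sharpest slice (and hence the depth), is sketched below in Python; the exact variant used in the paper may differ, and the test images are synthetic.

        import numpy as np

        def smd(image):
            """Sum of absolute differences between neighbouring pixels (sharpness proxy)."""
            img = np.asarray(image, dtype=float)
            return float(np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum())

        def estimate_depth(slices, depths):
            scores = [smd(s) for s in slices]
            return depths[int(np.argmax(scores))]     # depth of the sharpest slice

        sharp = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)   # checkerboard
        blurred = np.full((64, 64), 0.5)
        print(estimate_depth([blurred, sharp], depths=[10.0, 12.5]))   # -> 12.5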

  11. Cause and Cure - Deterioration in Accuracy of CFD Simulations With Use of High-Aspect-Ratio Triangular Tetrahedral Grids

    Science.gov (United States)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also to cause numerical instability. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of gradient computation involving a triangular element hinges on the value of its shape factor Γ ≡ sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂ and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect-ratio triangular grid is given. Also a brief discussion on the extension of the current mathematical framework to the
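    The shape factor is easy to evaluate directly from vertex coordinates, which also illustrates the point that a high-aspect-ratio triangle need not have a small Γ; a short Python sketch with illustrative triangles:

        import math

        def shape_factor(p1, p2, p3):
            """Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3) from the triangle's vertices."""
            def angle(a, b, c):                          # interior angle at vertex a
                v1 = (b[0] - a[0], b[1] - a[1])
                v2 = (c[0] - a[0], c[1] - a[1])
                cosang = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
                return math.acos(max(-1.0, min(1.0, cosang)))
            angles = (angle(p1, p2, p3), angle(p2, p3, p1), angle(p3, p1, p2))
            return sum(math.sin(t) ** 2 for t in angles)

        print(shape_factor((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))   # equilateral: 2.25
        print(shape_factor((0, 0), (100, 0), (50, 0.5)))               # highly obtuse: << 1
        print(shape_factor((0, 0), (100, 0), (0, 0.5)))                # high aspect ratio, right angle: ~1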

  12. Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Gangchen Hua

    2017-01-01

    Full Text Available In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrect visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, camera’s intrinsic parameters and distortion coefficients are adequate for this method. 3D measuring is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our proposed method results can effectively correct the typical false-positive localization of previous methods, thus gaining better recall ratios and better precision.

  13. The use of high accuracy NAA for the certification of NIST Standard Reference Materials

    International Nuclear Information System (INIS)

    Becker, D.A.; Greenberg, R.R.; Stone, S.

    1991-01-01

    Neutron activation analysis (NAA) is only one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). We compete daily against all of the other available analytical techniques in terms of accuracy, precision, and the cost required to obtain that requisite accuracy and precision. Over the years, the authors have found that NAA can and does compete favorably with these other techniques because of its unique capabilities for redundancy and quality assurance. Good examples are the two new NIST leaf SRMs, Apple Leaves (SRM 1515) and Peach Leaves (SRM 1547). INAA was used to measure the homogeneity of 12 elements in 15 samples of each material at the 100 mg sample size. In addition, instrumental and radiochemical NAA combined for 27 elemental determinations, out of a total of 54 elemental determinations made on each material with all NIST techniques combined. This paper describes the NIST NAA procedures used in these analyses, the quality assurance techniques employed, and the analytical results for the 24 elements determined by NAA in these new botanical SRMs. The NAA results are also compared to the final certified values for these SRMs

  14. High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena".

    Science.gov (United States)

    Guidi, Gabriele; Beraldin, J Angelo; Atzeni, Carlo

    2004-03-01

    Three-dimensional digital modeling of Heritage works of art through optical scanners has been demonstrated in recent years with results of exceptional interest. However, the routine application of three-dimensional (3-D) modeling to Heritage conservation still requires the systematic investigation of a number of technical problems. In this paper, the acquisition process of the 3-D digital model of the Maddalena by Donatello, a wooden statue representing one of the major masterpieces of the Italian Renaissance which was swept away by the Florence flood of 1966 and subsequently restored, is described. The paper reports all the steps of the acquisition procedure, from the project planning to the solution of the various problems due to range-camera calibration and to non-optically-cooperative material. Since the scientific focus is centered on the overall dimensional accuracy of the 3-D model, a methodology for its quality control is described. Such control has demonstrated how, in some situations, ICP-based alignment can lead to incorrect results. To circumvent this difficulty we propose an alignment technique based on the fusion of ICP with close-range digital photogrammetry and a non-invasive procedure in order to generate a final accurate model. In the end, detailed results are presented, demonstrating the improvement of the final model and how the proposed sensor fusion ensures a pre-specified level of accuracy.

  15. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
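    Since the perimeter is defined as that of the convex hull of the mid-length cross-section, the final step can be reproduced with a standard hull routine once the 2-D profile points have been recovered by laser triangulation; a Python sketch with a synthetic elliptical profile (SciPy assumed available):

        import numpy as np
        from scipy.spatial import ConvexHull

        def hull_perimeter(points):
            """Perimeter of the convex hull of a set of 2-D cross-section points."""
            hull = ConvexHull(np.asarray(points, dtype=float))
            v = hull.points[hull.vertices]               # hull vertices in order
            return float(np.sum(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1)))

        theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
        profile = np.c_[30.0 * np.cos(theta), 25.0 * np.sin(theta)]    # synthetic section (mm)
        print(f"perimeter ≈ {hull_perimeter(profile):.1f} mm")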

  16. Material integrity verification radar

    International Nuclear Information System (INIS)

    Koppenjan, S.K.

    1999-01-01

    The International Atomic Energy Agency (IAEA) has the need for verification of 'as-built' spent-fuel dry storage containers and other concrete structures. The IAEA has tasked the Special Technologies Laboratory (STL) to fabricate, test, and deploy a stepped-frequency Material Integrity Verification Radar (MIVR) system to nondestructively verify the internal construction of these containers. The MIVR system is based on previously deployed high-frequency, ground penetrating radar (GPR) systems that have been developed by STL for the U.S. Department of Energy (DOE). Whereas GPR technology utilizes microwave radio frequency energy to create subsurface images, MIVR is a variation for which the medium is concrete instead of soil. The purpose is to nondestructively verify the placement of concrete-reinforcing materials, pipes, inner liners, and other attributes of the internal construction. The MIVR system underwent an initial field test on CANDU reactor spent fuel storage canisters at Atomic Energy of Canada Limited (AECL), Chalk River Laboratories, Ontario, Canada, in October 1995. A second field test at the Embalse Nuclear Power Plant in Embalse, Argentina, was completed in May 1996. The DOE GPR also was demonstrated at the site. Data collection and analysis were performed for the Argentine National Board of Nuclear Regulation (ENREN). IAEA and the Brazilian-Argentine Agency for the Control and Accounting of Nuclear Material (ABACC) personnel were present as observers during the test. Reinforcing materials were evident in the color two-dimensional images produced by the MIVR system. A continuous pattern of reinforcing bars was evident and accurate estimates of the spacing, depth, and size were made. The potential uses for safeguard applications were jointly discussed. The MIVR system, as successfully demonstrated in the two field tests, can be used as a design verification tool for IAEA safeguards. A deployment of MIVR for Design Information Questionnaire (DIQ

  17. Interobserver Variability and Accuracy of High-Definition Endoscopic Diagnosis for Gastric Intestinal Metaplasia among Experienced and Inexperienced Endoscopists

    Science.gov (United States)

    Hyun, Yil Sik; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo

    2013-01-01

    Accurate diagnosis of gastric intestinal metaplasia is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing gastric intestinal metaplasia (IM). The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases, imaged with HD endoscopy, were sent to five experienced and five inexperienced endoscopists for a diagnostic assessment of gastric IM by visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement among the experienced endoscopists was "poor" (κ = 0.38) and it was also "poor" (κ = 0.33) among the inexperienced endoscopists. The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable in the diagnosis of IM, all areas suspicious for gastric IM should be considered for biopsy. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy of gastric IM. PMID:23678267

  18. Interobserver variability and accuracy of high-definition endoscopic diagnosis for gastric intestinal metaplasia among experienced and inexperienced endoscopists.

    Science.gov (United States)

    Hyun, Yil Sik; Han, Dong Soo; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo

    2013-05-01

    Accurate diagnosis of gastric intestinal metaplasia is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing gastric intestinal metaplasia (IM). The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases, imaged with HD endoscopy, were sent to five experienced and five inexperienced endoscopists for a diagnostic assessment of gastric IM by visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement among the experienced endoscopists was "poor" (κ = 0.38) and it was also "poor" (κ = 0.33) among the inexperienced endoscopists. The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable in the diagnosis of IM, all areas suspicious for gastric IM should be considered for biopsy. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy of gastric IM.
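    The agreement statistic quoted above is Cohen's kappa; for a single pair of raters it can be computed as below (the 50 ratings are fabricated to illustrate the calculation, and scikit-learn is assumed to be available):

        from sklearn.metrics import cohen_kappa_score

        # Two raters' IM-positive (1) / IM-negative (0) calls on the same 50 cases
        rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0] * 5
        rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0] * 5
        print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")   # 0.60 for these data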

  19. Geometric and dosimetric verification of step-and-shoot modulated fields with a new fast and high resolution beam imaging system

    International Nuclear Information System (INIS)

    Bindoni, Luca

    2005-01-01

    A technique for geometric and dosimetric pretreatment verification of step-and-shoot intensity modulated radiotherapy treatments (IMRT) using a beam imaging system (BIS) made up of a charge-coupled device (CCD) digital camera optically coupled with a metal-plate/phosphor screen is described. Some physical properties of BIS were investigated in order to demonstrate its capability to perform measurements with a high spatial resolution and a high sampling rate. High-speed imaging, with a minimum charge integration time on the CCD of 120 ms, can be performed. The study of the signal-to-noise ratio as a function of sampling time is presented. In-plane and cross-line pixel size was measured to be 0.368±0.004 mm/pixel, which agrees to within 0.5% with the manufacturer's value of 0.366 mm. Spatial linearity results are very good and there are no detectable image distortions over the whole 30x30 cm² detector area. A software routine was written to automatically extract positions of the collimator leaves from the images of the field shaped by the multileaf collimator (MLC) and also to compare them with the coordinates from the treatment planning system (TPS), thus directly testing both the MLC positioning and the treatment parameter transfer from TPS to the linear accelerator in a fast and precise way. The dosimetric capabilities (characteristics) of the imaging device for photon beams with energies of 6 and 15 MV were studied. Additional plexiglass buildup layers, depending on x-ray energy, were needed to reach maximum efficiency. The energy dependence of the BIS response versus dose and dose rate was found to be linear over a wide range. Relative output factors of BIS as a function of field size, compared with values measured with an ionization chamber, were in good accord for smaller field sizes ≤10x10 cm² but showed differences up to 4% for all the energies at the respective buildup depth for bigger fields. Square field profiles at water-equivalent buildup depths, extracted

  20. Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications

    Directory of Open Access Journals (Sweden)

    Van-Tang PHAM

    2015-12-01

    Full Text Available Using magnetic compass sensors in angle measurement has a wide range of applications, such as positioning, robotics, landslide monitoring, etc. However, one of the most significant phenomena affecting the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stability schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor to adapt to the variation of the temperature. The second scheme increases the temperature working range and improves the steady-state error performance of the sensor. In this scheme, we try to keep the temperature of the sensor stable at a certain value (e.g., 25 °C) by using a PID (proportional-integral-derivative) controller and a heating/cooling generator. Many experimental scenarios have been implemented to confirm the effectiveness of these solutions.

  1. Hyperbolic Method for Dispersive PDEs: Same High-Order of Accuracy for Solution, Gradient, and Hessian

    Science.gov (United States)

    Mazaheri, Alireza; Ricchiuto, Mario; Nishikawa, Hiroaki

    2016-01-01

    In this paper, we introduce a new hyperbolic first-order system for general dispersive partial differential equations (PDEs). We then extend the proposed system to general advection-diffusion-dispersion PDEs. We apply the fourth-order RD scheme of Ref. 1 to the proposed hyperbolic system, and solve time-dependent dispersive equations, including the classical two-soliton KdV and a dispersive shock case. We demonstrate that the predicted results, including the gradient and Hessian (second derivative), are in a very good agreement with the exact solutions. We then show that the RD scheme applied to the proposed system accurately captures dispersive shocks without numerical oscillations. We also verify that the solution, gradient and Hessian are predicted with equal order of accuracy.
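    As a generic illustration of the hyperbolic-reformulation idea (not the specific system of the paper), the linear dispersive model u_t + u_xxx = 0 can be written as a first-order system by introducing gradient variables p ≈ u_x, q ≈ p_x and a relaxation pseudo-time T_r:

        \begin{aligned}
          u_t + q_x &= 0,\\
          T_r\, p_t &= u_x - p,\\
          T_r\, q_t &= p_x - q .
        \end{aligned}

    In the relaxed (pseudo-steady) limit p → u_x and q → u_xx, the original dispersive equation is recovered, while the augmented system itself is first order, so p (the gradient) and q (the second derivative) are computed to the same order of accuracy as u.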

  2. High-accuracy energy formulas for the attractive two-site Bose-Hubbard model

    Science.gov (United States)

    Ermakov, Igor; Byrnes, Tim; Bogoliubov, Nikolay

    2018-02-01

    The attractive two-site Bose-Hubbard model is studied within the framework of the analytical solution obtained by the application of the quantum inverse scattering method. The structure of the ground and excited states is analyzed in terms of solutions of Bethe equations, and an approximate solution for the Bethe roots is given. This yields approximate formulas for the ground-state energy and for the first excited-state energy. The obtained formulas work with remarkable precision for a wide range of parameters of the model, and are confirmed numerically. An expansion of the Bethe state vectors into a Fock space is also provided for evaluation of expectation values, although this does not have accuracy similar to that of the energies.

  3. Accuracy and repeatability positioning of high-performance lathe for non-circular turning

    Directory of Open Access Journals (Sweden)

    Majda Paweł

    2017-11-01

    Full Text Available This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe’s numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, including before and after-correction of the positioning error of the Xs axis.

  4. Accuracy and repeatability positioning of high-performance lathe for non-circular turning

    Science.gov (United States)

    Majda, Paweł; Powałka, Bartosz

    2017-11-01

    This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe's numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, including before and after-correction of the positioning error of the Xs axis.

  5. A method of high accuracy clock synchronization by frequency following with VCXO

    International Nuclear Information System (INIS)

    Ma Yichao; Wu Jie; Zhang Jie; Song Hongzhi; Kong Yang

    2011-01-01

    In this paper, the principle of the IEEE 1588 synchronization protocol is analyzed, and the factors that affect the accuracy of synchronization are summarized. Through the hardware timer in a microcontroller, we obtain the exact time when a packet is sent or received, so the synchronization of the distributed clocks can reach 1 μs in this way. Another method to improve the precision of the synchronization is to replace the traditional fixed-frequency crystal of the slave device, which needs to follow the master clock, with an adjustable VCXO. This makes it possible to fine-tune the frequency of the distributed clocks and reduce the clock drift, which is of great benefit for clock synchronization. A test measurement shows that the synchronization of distributed clocks can be better than 10 ns using this method, which is more accurate than the method realized in software. (authors)
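    The offset and delay arithmetic underlying IEEE 1588 uses the four timestamps of a sync/delay-request exchange; the Python sketch below shows the standard formulas with invented nanosecond values (a VCXO-based slave would then steer its oscillator so that the measured offset is driven toward zero rather than stepping its clock).

        def ptp_offset_and_delay(t1, t2, t3, t4):
            """t1: master send, t2: slave receive, t3: slave send, t4: master receive."""
            offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
            delay  = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay (assumed symmetric)
            return offset, delay

        offset, delay = ptp_offset_and_delay(t1=1_000_000, t2=1_000_730,
                                             t3=1_050_000, t4=1_050_270)
        print(f"offset = {offset:.0f} ns, delay = {delay:.0f} ns")   # 230 ns, 500 ns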

  6. STTR Phase I: Low-Cost, High-Accuracy, Whole-Building Carbon Dioxide Monitoring for Demand Control Ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Hallstrom, Jason; Ni, Zheng Richard

    2018-05-15

    This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. An early exploration of a bridge (or “gateway”) to direct digital control services was also carried out. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor’s accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5%, and acquisition accuracy within 1.5% across three orders of magnitude variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft – meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.

  7. High-accuracy CFD prediction methods for fluid and structure temperature fluctuations at T-junction for thermal fatigue evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-07-15

    Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected as a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of proposed numerical methods of simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used for large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near wall resolution (NWR) by allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of power spectrum density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are

  8. SU-E-T-375: Evaluation of a MapCHECK2(tm) Planar 2-D Diode Array for High-Dose-Rate Brachytherapy Treatment Delivery Verifications

    Energy Technology Data Exchange (ETDEWEB)

    Macey, N; Siebert, M; Shvydka, D; Parsai, E [University of Toledo Medical Center, Toledo, OH (United States)

    2015-06-15

    Purpose: Despite improvements in HDR brachytherapy delivery systems, verification of source position is still typically based on the length of the wire reeled out relative to the parked position. Yet, the majority of errors leading to medical events in HDR treatments continue to be classified as missed targets or wrong treatment sites. We investigate the feasibility of using dose maps acquired with a two-dimensional diode array to independently verify the source locations, dwell times, and dose during an HDR treatment. Methods: Custom correction factors were integrated into frame-by-frame raw counts recorded for a Varian VariSource™ HDR afterloader Ir-192 source located at various distances in air and in solid water from a MapCHECK2™ diode array. The resultant corrected counts were analyzed to determine the dwell position locations and doses delivered. The local maxima of polynomial equations fitted to the extracted dwell dose profiles provided the X and Y coordinates, while the distance to the source was determined from evaluation of the full width at half maximum (FWHM). To verify the approach, the experiment was repeated as the source was moved through dwell positions at various distances along an inclined plane, mimicking a vaginal cylinder treatment. Results: Dose map analysis was utilized to provide the coordinates of the source and dose delivered over each dwell position. The accuracy in determining source dwell positions was found to be within ±1.0 mm of the preset values, and doses were within ±3% of those calculated by the BrachyVision™ treatment planning system for all measured distances. Conclusion: Frame-by-frame data furnished by a 2-D diode array can be used to verify the dwell positions and doses delivered by the HDR source over the course of treatment. Our studies have verified that measurements provided by the MapCHECK2™ can be used as a routine QA tool for HDR treatment delivery verification.
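
    The localization idea described above, a polynomial fit for the in-plane coordinate and an FWHM-based estimate of the source-to-plane distance, can be sketched as follows. The diode spacing, the idealised inverse-square dose profile and the fitting window are illustrative assumptions, not the MapCHECK2™ correction factors or the BrachyVision™ calculation.

```python
# Sketch only: locate an HDR source above a 1-D row of diodes from its dose profile.
# Lateral position from a parabolic fit around the hottest diode; distance from the
# FWHM of an idealised inverse-square profile, dose(x) ~ 1/(h^2 + (x - x0)^2), which
# falls to half its maximum at |x - x0| = h, so FWHM = 2 h.
import numpy as np

x = np.arange(-30.0, 30.0 + 1e-9, 2.0)        # diode positions [mm] (assumed spacing)
h_true, x0_true = 12.0, 3.0                   # source height and lateral offset [mm]
dose = 1.0 / (h_true**2 + (x - x0_true)**2)   # synthetic corrected-count profile

# Lateral position: vertex of a parabola fitted in a window around the hottest diode.
i = int(np.argmax(dose))
win = slice(max(i - 3, 0), min(i + 4, x.size))
a, b, c = np.polyfit(x[win], dose[win], 2)
x0_est = -b / (2.0 * a)

# Distance: FWHM of the profile from linear interpolation of the half-maximum crossings.
half = dose.max() / 2.0
left = np.interp(half, dose[: i + 1], x[: i + 1])        # rising edge
right = np.interp(half, dose[i:][::-1], x[i:][::-1])     # falling edge
h_est = (right - left) / 2.0

print(f"lateral position = {x0_est:.1f} mm, source-to-plane distance = {h_est:.1f} mm")
```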

  9. High-dose intensity-modulated radiotherapy for prostate cancer using daily fiducial marker-based position verification: acute and late toxicity in 331 patients

    International Nuclear Information System (INIS)

    Lips, Irene M; Dehnad, Homan; Gils, Carla H van; Boeken Kruger, Arto E; Heide, Uulke A van der; Vulpen, Marco van

    2008-01-01

    We evaluated the acute and late toxicity after high-dose intensity-modulated radiotherapy (IMRT) with fiducial marker-based position verification for prostate cancer. Between 2001 and 2004, 331 patients with prostate cancer received 76 Gy in 35 fractions using IMRT combined with fiducial marker-based position verification. The symptoms before treatment (pre-treatment) and weekly during treatment (acute toxicity) were scored using the Common Toxicity Criteria (CTC). The goal was to score late toxicity according to the Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer (RTOG/EORTC) scale with a follow-up time of at least three years. Twenty-two percent of the patients experienced pre-treatment grade ≥ 2 genitourinary (GU) complaints and 2% experienced grade 2 gastrointestinal (GI) complaints. Acute grade 2 GU and GI toxicity occurred in 47% and 30%, respectively. Only 3% of the patients developed acute grade 3 GU and no grade ≥ 3 GI toxicity occurred. After a mean follow-up time of 47 months with a minimum of 31 months for all patients, the incidence of late grade 2 GU and GI toxicity was 21% and 9%, respectively. Grade ≥ 3 GU and GI toxicity rates were 4% and 1%, respectively, including one patient with a rectal fistula and one patient with a severe hemorrhagic cystitis (both grade 4). In conclusion, high-dose intensity-modulated radiotherapy with fiducial marker-based position verification is well tolerated. The low grade ≥ 3 toxicity allows further dose escalation if the same dose constraints for the organs at risk will be used

  10. High-dose intensity-modulated radiotherapy for prostate cancer using daily fiducial marker-based position verification: acute and late toxicity in 331 patients

    Directory of Open Access Journals (Sweden)

    Boeken Kruger Arto E

    2008-05-01

    Full Text Available Abstract We evaluated the acute and late toxicity after high-dose intensity-modulated radiotherapy (IMRT) with fiducial marker-based position verification for prostate cancer. Between 2001 and 2004, 331 patients with prostate cancer received 76 Gy in 35 fractions using IMRT combined with fiducial marker-based position verification. The symptoms before treatment (pre-treatment) and weekly during treatment (acute toxicity) were scored using the Common Toxicity Criteria (CTC). The goal was to score late toxicity according to the Radiation Therapy Oncology Group/European Organization for Research and Treatment of Cancer (RTOG/EORTC) scale with a follow-up time of at least three years. Twenty-two percent of the patients experienced pre-treatment grade ≥ 2 genitourinary (GU) complaints and 2% experienced grade 2 gastrointestinal (GI) complaints. Acute grade 2 GU and GI toxicity occurred in 47% and 30%, respectively. Only 3% of the patients developed acute grade 3 GU and no grade ≥ 3 GI toxicity occurred. After a mean follow-up time of 47 months with a minimum of 31 months for all patients, the incidence of late grade 2 GU and GI toxicity was 21% and 9%, respectively. Grade ≥ 3 GU and GI toxicity rates were 4% and 1%, respectively, including one patient with a rectal fistula and one patient with a severe hemorrhagic cystitis (both grade 4). In conclusion, high-dose intensity-modulated radiotherapy with fiducial marker-based position verification is well tolerated. The low grade ≥ 3 toxicity allows further dose escalation if the same dose constraints for the organs at risk will be used.

  11. Radiometric inter-sensor cross-calibration uncertainty using a traceable high accuracy reference hyperspectral imager

    Science.gov (United States)

    Gorroño, Javier; Banks, Andrew C.; Fox, Nigel P.; Underwood, Craig

    2017-08-01

    Optical earth observation (EO) satellite sensors generally suffer from drifts and biases relative to their pre-launch calibration, caused by launch and/or time in the space environment. This places a severe limitation on the fundamental reliability and accuracy that can be assigned to satellite-derived information, and is particularly critical for long-time-base studies of climate change and for enabling interoperability and Analysis Ready Data. The proposed TRUTHS (Traceable Radiometry Underpinning Terrestrial and Helio-Studies) mission is explicitly designed to address this issue by re-calibrating itself directly to a primary standard of the international system of units (SI) in orbit and then extending this SI-traceability to other sensors through in-flight cross-calibration using a selection of Committee on Earth Observation Satellites (CEOS) recommended test sites. Where the characteristics of the sensor under test allow, this will result in a significant improvement in accuracy. This paper describes a set of tools, algorithms and methodologies that have been developed and used in order to estimate the radiometric uncertainty achievable for an indicative target sensor through in-flight cross-calibration using a well-calibrated hyperspectral SI-traceable reference sensor with observational characteristics such as TRUTHS. In this study, the Sentinel-2 Multi-Spectral Imager (MSI) and the Landsat-8 Operational Land Imager (OLI) are evaluated as examples; however, the analysis is readily translatable to larger-footprint sensors such as the Sentinel-3 Ocean and Land Colour Instrument (OLCI) and the Visible Infrared Imaging Radiometer Suite (VIIRS). This study considers the criticality of the instrumental and observational characteristics on pixel-level reflectance factors, within a defined spatial region of interest (ROI) within the target site. It quantifies the main uncertainty contributors in the spectral, spatial, and temporal domains. The resultant tool

  12. Quantitative analysis of patient-specific dosimetric IMRT verification

    International Nuclear Information System (INIS)

    Budgell, G J; Perrin, B A; Mott, J H L; Fairfoul, J; Mackay, R I

    2005-01-01

    Patient-specific dosimetric verification methods for IMRT treatments are variable, time-consuming and frequently qualitative, preventing evidence-based reduction in the amount of verification performed. This paper addresses some of these issues by applying a quantitative analysis parameter to the dosimetric verification procedure. Film measurements in different planes were acquired for a series of ten IMRT prostate patients, analysed using the quantitative parameter, and compared to determine the most suitable verification plane. Film and ion chamber verification results for 61 patients were analysed to determine long-term accuracy, reproducibility and stability of the planning and delivery system. The reproducibility of the measurement and analysis system was also studied. The results show that verification results are strongly dependent on the plane chosen, with the coronal plane particularly insensitive to delivery error. Unexpectedly, no correlation could be found between the levels of error in different verification planes. Longer term verification results showed consistent patterns which suggest that the amount of patient-specific verification can be safely reduced, provided proper caution is exercised: an evidence-based model for such reduction is proposed. It is concluded that dose/distance to agreement (e.g., 3%/3 mm) should be used as a criterion of acceptability. Quantitative parameters calculated for a given criterion of acceptability should be adopted in conjunction with displays that show where discrepancies occur. Planning and delivery systems which cannot meet the required standards of accuracy, reproducibility and stability to reduce verification will not be accepted by the radiotherapy community
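
    A quantitative dose difference/distance-to-agreement check of the kind advocated above can be sketched as a plain global gamma-index search with 3%/3 mm criteria. This is a generic illustration, not the authors' analysis parameter; the grid spacing and the synthetic dose planes are assumptions.

```python
# Sketch only: global gamma-index pass rate for two 2-D dose planes under a
# dose-difference / distance-to-agreement criterion (e.g. 3%/3 mm).
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dose_crit=0.03, dta_mm=3.0, threshold=0.1):
    """Fraction of reference points (above a low-dose threshold) with gamma <= 1."""
    norm = ref.max()
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    search = int(np.ceil(2 * dta_mm / spacing_mm))          # limit the spatial search
    passed, evaluated = 0, 0
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < threshold * norm:
                continue
            y0, y1 = max(iy - search, 0), min(iy + search + 1, ny)
            x0, x1 = max(ix - search, 0), min(ix + search + 1, nx)
            dist2 = ((yy[y0:y1, x0:x1] - iy) ** 2 + (xx[y0:y1, x0:x1] - ix) ** 2) * spacing_mm**2
            ddose2 = ((meas[y0:y1, x0:x1] - ref[iy, ix]) / (dose_crit * norm)) ** 2
            gamma2 = np.min(dist2 / dta_mm**2 + ddose2)
            evaluated += 1
            passed += gamma2 <= 1.0
    return passed / evaluated

# Illustrative use: a smooth reference plane and a slightly shifted, rescaled "measurement".
y, x = np.mgrid[0:60, 0:60] * 1.0
ref = np.exp(-((x - 30) ** 2 + (y - 30) ** 2) / 400.0)
meas = 1.02 * np.exp(-((x - 31) ** 2 + (y - 30) ** 2) / 400.0)
print(f"pass rate: {100 * gamma_pass_rate(ref, meas, spacing_mm=1.0):.1f}%")
```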

  13. Verification of a CT scanner using a miniature step gauge

    DEFF Research Database (Denmark)

    Cantatore, Angela; Andreasen, J.L.; Carmignato, S.

    2011-01-01

    The work deals with performance verification of a CT scanner using a 42 mm miniature replica step gauge developed for optical scanner verification. Error quantification and optimization of the CT system set-up in terms of resolution and measurement accuracy are fundamental for the use of CT scanning...

  14. Palmprint Based Verification System Using SURF Features

    Science.gov (United States)

    Srinivas, Badrinath G.; Gupta, Phalguni

    This paper describes the design and development of a prototype robust biometric verification system. The system uses features extracted from the human hand using the Speeded Up Robust Features (SURF) operator. The hand image used for feature extraction is acquired with a low-cost scanner. The extracted palmprint region is robust to hand translation and rotation on the scanner. The system is tested on the IITK database of 200 images and the PolyU database of 7751 images and is found to be robust with respect to translation and rotation. It has an FAR of 0.02%, an FRR of 0.01% and an accuracy of 99.98%, and can be a suitable system for civilian applications and high-security environments.

  15. The high accuracy data processing system of laser interferometry signals based on MSP430

    Science.gov (United States)

    Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong

    2009-07-01

    Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually carry three errors: zero-offset error, unequal-amplitude error and quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved on the basis of the Heydemann error compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is proposed that effectively decreases the calculation time by exploiting the fact that only one data item changes in each fitting operation. A real-time, dynamic compensation circuit is then designed. With an MSP430 microcontroller as the core of the hardware system, the two input signals carrying the three errors are digitized by an AD7862. After processing with the improved algorithm, two ideal, error-free signals are output by an AD7225. At the same time, the two original signals are converted into square waves and fed to the direction-discrimination circuit, whose output pulses are counted by the microcontroller timer. From the pulse count and the software subdivision, the final result is shown on an LED display. The algorithm and the circuit are used to test a laser interferometer with 8-fold optical path difference, and a measurement accuracy of 12-14 nm is achieved.
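
    A simplified, moment-based variant of the compensation described above can be sketched as follows: offsets are estimated from the signal means, amplitudes from the RMS values, and the quadrature phase error from the cross-moment of the normalised signals. It illustrates the correction model only and is not the paper's least-squares fitting algorithm or its MSP430 firmware.

```python
# Simplified sketch of quadrature-signal correction in the spirit of the Heydemann
# compensation: offsets from the means, amplitudes from the RMS, and the quadrature
# phase error from the cross-moment of the normalised signals.
import numpy as np

def correct_quadrature(u1, u2):
    """Return the interferometric phase from two imperfect quadrature signals."""
    p, q = u1.mean(), u2.mean()                    # zero-offset errors
    a = np.sqrt(2.0) * (u1 - p).std()              # amplitude of channel 1
    b = np.sqrt(2.0) * (u2 - q).std()              # amplitude of channel 2
    xn = (u1 - p) / a                              # ~ cos(theta)
    yn = (u2 - q) / b                              # ~ sin(theta + alpha)
    alpha = np.arcsin(np.clip(2.0 * np.mean(xn * yn), -1.0, 1.0))   # phase-shift error
    sin_theta = (yn - xn * np.sin(alpha)) / np.cos(alpha)
    return np.unwrap(np.arctan2(sin_theta, xn))

# Illustrative use: signals with offset, unequal-amplitude and quadrature errors.
theta = np.linspace(0.0, 12.0 * np.pi, 5000)       # several interference fringes
u1 = 0.12 + 1.00 * np.cos(theta)
u2 = -0.05 + 0.85 * np.sin(theta + np.deg2rad(4.0))
phase = correct_quadrature(u1, u2)
print(f"max phase error after correction: {np.max(np.abs(phase - theta)):.4f} rad")
```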

  16. A new phase-shift microscope designed for high accuracy stitching interferometry

    International Nuclear Information System (INIS)

    Thomasset, Muriel; Idir, Mourad; Polack, François; Bray, Michael; Servant, Jean-Jacques

    2013-01-01

    Characterizing nanofocusing X-ray mirrors for the forthcoming nano-imaging beamlines of synchrotron light sources motivates the development of new instruments with improved performance. The sensitivity and accuracy goal is now set well below the nm level and, at the same time, the spatial frequency range of the measurement should be pushed toward 50 mm⁻¹. The SOLEIL synchrotron facility has therefore undertaken to equip itself with an interference microscope suitable for stitching interferometry at this performance level. In order to keep control of the whole metrology chain, it was decided to build a custom instrument in partnership with two small optics companies, EOTECH and MBO. The new instrument is a Michelson micro-interferometer equipped with a custom-designed telecentric objective. It achieves the large depth of focus needed for performing reliable calibrations and measurements. The concept was validated with a pre-development set-up, delivered in July 2010, which showed a static repeatability below 1 nm PV despite a non-thermally stabilized environment. The final instrument was delivered early this year and was installed inside SOLEIL's controlled-environment facility, where thorough characterization tests are under way. The latest test results and first stitching measurements are presented

  17. Experimental study of very low permeability rocks using a high accuracy permeameter

    International Nuclear Information System (INIS)

    Larive, Elodie

    2002-01-01

    The measurement of fluid flow through 'tight' rocks is important to provide a better understanding of physical processes involved in several industrial and natural problems. These include deep nuclear waste repositories, management of aquifers, gas, petroleum or geothermal reservoirs, or earthquake prevention. The major part of this work consisted of the design, construction and use of an elaborate experimental apparatus allowing laboratory permeability (fluid flow) measurements of very low permeability rocks, on samples at a centimetric scale, to constrain their hydraulic behaviour at realistic in-situ conditions. The high-accuracy permeameter allows the use of several measurement methods: the steady-state flow method, the transient pulse method, and the sinusoidal pore pressure oscillation method. Measurements were made with the pore pressure oscillation method, using different waveform periods, at several pore and confining pressure conditions, on different materials. The permeabilities of one natural standard, Westerly granite, and of one artificial standard, a micro-porous cement, were measured, and the results obtained agreed with previous measurements made on these materials, showing the reliability of the permeameter. A study of a Yorkshire sandstone shows a relationship between rock microstructure, permeability anisotropy and thermal cracking. The concepts of microstructure, porosity and permeability and the specifications of laboratory permeability measurements are presented, the permeameter is described, and the permeability results obtained on the investigated materials are then reported [fr]

  18. Demonstrating High-Accuracy Orbital Access Using Open-Source Tools

    Science.gov (United States)

    Gilbertson, Christian; Welch, Bryan

    2017-01-01

    Orbit propagation is fundamental to almost every space-based analysis. Currently, many system analysts use commercial software to predict the future positions of orbiting satellites. This is one of many capabilities that can be replicated, with great accuracy, without using expensive, proprietary software. NASA's SCaN (Space Communication and Navigation) Center for Engineering, Networks, Integration, and Communications (SCENIC) project plans to provide its analysis capabilities using a combination of internal and open-source software, allowing for a much greater measure of customization and flexibility, while reducing recurring software license costs. MATLAB and the open-source Orbit Determination Toolbox created by Goddard Space Flight Center (GSFC) were utilized to develop tools with the capability to propagate orbits, perform line-of-sight (LOS) availability analyses, and visualize the results. The developed programs are modular and can be applied for mission planning and viability analysis in a variety of Solar System applications. The tools can perform two-body and N-body orbit propagation, find inter-satellite and satellite-to-ground-station LOS access (accounting for intermediate oblate spheroid body blocking, geometric restrictions of the antenna field-of-view (FOV), and relativistic corrections), and create animations of planetary movement, satellite orbits, and LOS accesses. The code is the basis for SCENIC's broad analysis capabilities, including dynamic link analysis, dilution-of-precision navigation analysis, and orbital availability calculations.
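
    The two-body propagation capability mentioned above amounts to integrating r'' = -mu * r / |r|^3; a minimal Cowell-style sketch with SciPy is shown below. The initial state, tolerances and constants are illustrative assumptions, not code from SCENIC or the Orbit Determination Toolbox.

```python
# Sketch only: Cowell's method, i.e. direct numerical integration of the two-body
# equation of motion r'' = -mu * r / |r|^3, for an assumed LEO initial state.
import numpy as np
from scipy.integrate import solve_ivp

MU_EARTH = 398600.4418          # km^3 / s^2

def two_body(t, state):
    r = state[:3]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate((state[3:], a))

# Roughly a 500 km circular orbit (assumed initial conditions).
r0 = np.array([6878.0, 0.0, 0.0])                       # km
v0 = np.array([0.0, np.sqrt(MU_EARTH / 6878.0), 0.0])   # km/s
t_span = (0.0, 2 * 5677.0)                              # about two orbital periods [s]

sol = solve_ivp(two_body, t_span, np.concatenate((r0, v0)), rtol=1e-9, atol=1e-9)

r_end = sol.y[:3, -1]
print(f"radius after ~2 orbits: {np.linalg.norm(r_end):.1f} km (started at 6878.0 km)")
```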

  19. A study for high accuracy measurement of residual stress by deep hole drilling technique

    Science.gov (United States)

    Kitano, Houichi; Okano, Shigetaka; Mochizuki, Masahito

    2012-08-01

    The deep hole drilling technique (DHD) received much attention in recent years as a method for measuring through-thickness residual stresses. However, some accuracy problems occur when residual stress evaluation is performed by the DHD technique. One of the reasons is that the traditional DHD evaluation formula applies to the plane stress condition. The second is that the effects of the plastic deformation produced in the drilling process and the deformation produced in the trepanning process are ignored. In this study, a modified evaluation formula, which is applied to the plane strain condition, is proposed. In addition, a new procedure is proposed which can consider the effects of the deformation produced in the DHD process by investigating the effects in detail by finite element (FE) analysis. Then, the evaluation results obtained by the new procedure are compared with that obtained by traditional DHD procedure by FE analysis. As a result, the new procedure evaluates the residual stress fields better than the traditional DHD procedure when the measuring object is thick enough that the stress condition can be assumed as the plane strain condition as in the model used in this study.

  20. On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro

    2013-01-01

    This work deals with the critical aspects related to cost reduction of a Tomo PIV setup and to the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems composed of three (or more) low speed single frame cameras, which can be up to ten times cheaper than double shutter cameras with the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, differently from tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution is camera orientation dependent. For this reason, the proposed solution promises more accurate results, without the bias effect of the coherent ghost particles motion. Guidelines for the implementation and the application of the present method are proposed. The performances are assessed with a parametric study on synthetic experiments. The proposed low cost system produces a much lower modulation with respect to an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the case of the standard implementation of tomographic PIV. (paper)

  1. Software verification for nuclear industry

    International Nuclear Information System (INIS)

    Wilburn, N.P.

    1985-08-01

    The reasons why verification of software products is necessary throughout the software life cycle are considered. Concepts of verification, software verification planning, and some verification methodologies for products generated throughout the software life cycle are then discussed.

  2. Verification and disarmament

    Energy Technology Data Exchange (ETDEWEB)

    Blix, H. [IAEA, Vienna (Austria)

    1998-07-01

    The main features are described of the IAEA safeguards verification system that non-nuclear weapon states parties of the NPT are obliged to accept. Verification activities/problems in Iraq and North Korea are discussed.

  3. Verification and disarmament

    International Nuclear Information System (INIS)

    Blix, H.

    1998-01-01

    The main features are described of the IAEA safeguards verification system that non-nuclear weapon states parties of the NPT are obliged to accept. Verification activities/problems in Iraq and North Korea are discussed

  4. ISPA - a high accuracy X-ray and gamma camera Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    For Single Photon Emission Computed Tomography (SPECT), ISPA offers ten times better resolution than Anger cameras, high-efficiency single-gamma counting, and noise reduction by sensitivity to gamma energy.

  5. Treatment accuracy of fractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Kumar, Shaleen; Burke, Kevin; Nalder, Colin; Jarrett, Paula; Mubata, Cephas; A'Hern, Roger; Humphreys, Mandy; Bidmead, Margaret; Brada, Michael

    2005-01-01

    Background and purpose: To assess the geometric accuracy of the delivery of fractionated stereotactic radiotherapy (FSRT) for brain tumours using the Gill-Thomas-Cosman (GTC) relocatable frame. Accuracy of treatment delivery was measured via portal images acquired with an amorphous silicon-based electronic portal imager (EPI). Results were used to assess the existing verification process and to review the current margins used for the expansion of clinical target volume (CTV) to planning target volume (PTV). Patients and methods: Patients were immobilized in a GTC frame. Target volume definition was performed on localization CT and MRI scans and a CTV to PTV margin of 5 mm (based on initial experience) was introduced in 3D. A Brown-Roberts-Wells (BRW) fiducial system was used for stereotactic coordinate definition. The existing verification process consisted of an intercomparison of the coordinates of the isocentres and anatomy between the localization and verification CT scans. Treatment was delivered with 6 MV photons using four fixed non-coplanar conformal fields using a multi-leaf collimator. Portal imaging verification consisted of the acquisition of orthogonal images centred through the treatment isocentre. Digitally reconstructed radiographs (DRRs) created from the CT localization scans were used as reference images. Semi-automated matching software was used to quantify set-up deviations (displacements and rotations) between reference and portal images. Results: One hundred and twenty-six anterior and 123 lateral portal images were available for analysis of set-up deviations. For displacements, the total errors in the cranial/caudal direction were shown to have the largest SDs of 1.2 mm, while systematic and random errors reached SDs of 1.0 and 0.7 mm, respectively, in the cranial/caudal direction. The corresponding data for rotational errors (the largest deviation was found in the sagittal plane) were 0.7 deg. SD (total error), 0.5 deg. (systematic) and 0

  6. High-Fidelity Solar Power Income Modeling for Solar-Electric UAVs: Development and Flight Test Based Verification

    OpenAIRE

    Oettershagen, Philipp

    2017-01-01

    Solar power models are a crucial element of solar-powered UAV design and performance analysis. During the conceptual design phase, their accuracy directly relates to the accuracy of the predicted performance metrics and thus the final design characteristics of the solar-powered UAV. Likewise, during the operations phase of a solar-powered UAV accurate solar power income models are required to predict and assess the solar power system performance. However, the existing literature on solar-powe...

  7. WebPrInSeS: automated full-length clone sequence identification and verification using high-throughput sequencing data.

    Science.gov (United States)

    Massouras, Andreas; Decouttere, Frederik; Hens, Korneel; Deplancke, Bart

    2010-07-01

    High-throughput sequencing (HTS) is revolutionizing our ability to obtain cheap, fast and reliable sequence information. Many experimental approaches are expected to benefit from the incorporation of such sequencing features in their pipeline. Consequently, software tools that facilitate such an incorporation should be of great interest. In this context, we developed WebPrInSeS, a web server tool allowing automated full-length clone sequence identification and verification using HTS data. WebPrInSeS encompasses two separate software applications. The first is WebPrInSeS-C which performs automated sequence verification of user-defined open-reading frame (ORF) clone libraries. The second is WebPrInSeS-E, which identifies positive hits in cDNA or ORF-based library screening experiments such as yeast one- or two-hybrid assays. Both tools perform de novo assembly using HTS data from any of the three major sequencing platforms. Thus, WebPrInSeS provides a highly integrated, cost-effective and efficient way to sequence-verify or identify clones of interest. WebPrInSeS is available at http://webprinses.epfl.ch/ and is open to all users.

  8. Gated viewing and high-accuracy three-dimensional laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    , a high PRF of 32 kHz, and a high-speed camera with gate times down to 200 ps and delay steps down to 100 ps. The electronics and the software also allow for gated viewing with automatic gain control versus range, whereby foreground backscatter can be suppressed. We describe our technique for the rapid...

  9. Assessment of Automated Measurement and Verification (M&V) Methods

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Touzani, Samir [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Custodio, Claudine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sohn, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fernandes, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jump, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-07-01

    This report documents the application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings.
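
    Baseline-model accuracy in M&V work is commonly summarised with statistics such as CV(RMSE) and NMBE (as in ASHRAE Guideline 14); the sketch below computes both for a simple change-point style regression on synthetic daily data. It is a generic illustration, not the methodology assessed in the report.

```python
# Sketch only: CV(RMSE) and NMBE of a baseline energy model fitted to synthetic
# daily consumption data with a cooling-degree style temperature predictor.
import numpy as np

rng = np.random.default_rng(0)
temp = rng.uniform(0.0, 30.0, 365)                      # daily outdoor temperature [C]
energy = 500.0 + 12.0 * np.maximum(temp - 15.0, 0.0) + rng.normal(0.0, 25.0, 365)  # kWh/day

# Baseline model: ordinary least squares on the change-point predictor.
X = np.column_stack([np.ones_like(temp), np.maximum(temp - 15.0, 0.0)])
beta, *_ = np.linalg.lstsq(X, energy, rcond=None)
pred = X @ beta

n, p = energy.size, X.shape[1]
resid = energy - pred
cv_rmse = np.sqrt(np.sum(resid**2) / (n - p)) / energy.mean() * 100.0
nmbe = np.sum(resid) / ((n - p) * energy.mean()) * 100.0
print(f"CV(RMSE) = {cv_rmse:.1f} %, NMBE = {nmbe:.2f} %")
```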

  10. High-accuracy measurement of ship velocities by DGPS; DGPS ni yoru sensoku keisoku no koseidoka ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, S; Koterayama, W [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    1996-04-10

    The differential global positioning system (DGPS) can eliminate most of the errors in ship velocity measurement by GPS positioning alone. Through two rounds of marine observations towing an observation robot in summer 1995, the authors attempted high-accuracy measurement of ship velocities by DGPS, and also carried out positioning by GPS alone and measurement using the bottom track of an ADCP (acoustic Doppler current profiler). In this paper, the results obtained by these measurement methods are examined through comparison among them, and the accuracy of the measured ship velocities is considered. In the DGPS measurement, both the translocation method and the interferometric positioning method were used. The ADCP mounted on the observation robot allowed measurement of the velocity of the current meter itself by its bottom track in sea areas shallower than 350 m. As a result of these marine observations, it was confirmed that an accuracy equivalent to that of direct measurement by the bottom track can be obtained by DGPS. 3 refs., 5 figs., 1 tab.
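
    Ship speed and course can be derived from successive DGPS fixes by converting the latitude/longitude increment into local north/east displacements; a minimal sketch is given below. The positions, sampling interval and Earth radius are illustrative assumptions, not the processing used in these observations.

```python
# Sketch only: speed and course from two DGPS fixes using a local flat-earth
# approximation of the latitude/longitude increment.
import math

R_EARTH = 6371000.0          # mean Earth radius [m] (assumed)

def velocity_from_fixes(lat1, lon1, lat2, lon2, dt):
    """Speed [m/s] and course [deg] between two fixes (degrees), dt seconds apart."""
    lat_mid = math.radians((lat1 + lat2) / 2.0)
    dnorth = math.radians(lat2 - lat1) * R_EARTH
    deast = math.radians(lon2 - lon1) * R_EARTH * math.cos(lat_mid)
    speed = math.hypot(dnorth, deast) / dt
    course = math.degrees(math.atan2(deast, dnorth)) % 360.0
    return speed, course

# One-second fixes during a tow (assumed numbers).
speed, course = velocity_from_fixes(33.500000, 130.400000, 33.500020, 130.400015, 1.0)
print(f"speed = {speed:.2f} m/s, course = {course:.1f} deg")
```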

  11. HTGR analytical methods and design verification

    International Nuclear Information System (INIS)

    Neylan, A.J.; Northup, T.E.

    1982-05-01

    Analytical methods for the high-temperature gas-cooled reactor (HTGR) include development, update, verification, documentation, and maintenance of all computer codes for HTGR design and analysis. This paper presents selected nuclear, structural mechanics, seismic, and systems analytical methods related to the HTGR core. This paper also reviews design verification tests in the reactor core, reactor internals, steam generator, and thermal barrier

  12. High construal level can help negotiators to reach integrative agreements: The role of information exchange and judgement accuracy.

    Science.gov (United States)

    Wening, Stefanie; Keith, Nina; Abele, Andrea E

    2016-06-01

    In negotiations, a focus on interests (why negotiators want something) is key to integrative agreements. Yet, many negotiators spontaneously focus on positions (what they want), with suboptimal outcomes. Our research applies construal-level theory to negotiations and proposes that a high construal level instigates a focus on interests during negotiations which, in turn, positively affects outcomes. In particular, we tested the notion that the effect of construal level on outcomes was mediated by information exchange and judgement accuracy. Finally, we expected the mere mode of presentation of task material to affect construal levels and manipulated construal levels using concrete versus abstract negotiation tasks. In two experiments, participants negotiated in dyads in either a high- or low-construal-level condition. In Study 1, high-construal-level dyads outperformed dyads in the low-construal-level condition; this main effect was mediated by information exchange. Study 2 replicated both the main and mediation effects using judgement accuracy as mediator and additionally yielded a positive effect of a high construal level on a second, more complex negotiation task. These results not only provide empirical evidence for the theoretically proposed link between construal levels and negotiation outcomes but also shed light on the processes underlying this effect. © 2015 The British Psychological Society.

  13. Neutrino mass from cosmology: impact of high-accuracy measurement of the Hubble constant

    Energy Technology Data Exchange (ETDEWEB)

    Sekiguchi, Toyokazu [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Ichikawa, Kazuhide [Department of Micro Engineering, Kyoto University, Kyoto 606-8501 (Japan); Takahashi, Tomo [Department of Physics, Saga University, Saga 840-8502 (Japan); Greenhill, Lincoln, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: kazuhide@me.kyoto-u.ac.jp, E-mail: tomot@cc.saga-u.ac.jp, E-mail: greenhill@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2010-03-01

    Non-zero neutrino mass would affect the evolution of the Universe in observable ways, and a strong constraint on the mass can be achieved using combinations of cosmological data sets. We focus on the power spectrum of cosmic microwave background (CMB) anisotropies, the Hubble constant H_0, and the length scale for baryon acoustic oscillations (BAO) to investigate the constraint on the neutrino mass, m_ν. We analyze data from multiple existing CMB studies (WMAP5, ACBAR, CBI, BOOMERANG, and QUAD), recent measurement of H_0 (SHOES), with about two times lower uncertainty (5%) than previous estimates, and recent treatments of BAO from the Sloan Digital Sky Survey (SDSS). We obtained an upper limit of m_ν < 0.2 eV (95% C.L.), for a flat ΛCDM model. This is a 40% reduction in the limit derived from previous H_0 estimates and one-third lower than can be achieved with extant CMB and BAO data. We also analyze the impact of smaller uncertainty on measurements of H_0 as may be anticipated in the near term, in combination with CMB data from the Planck mission, and BAO data from the SDSS/BOSS program. We demonstrate the possibility of a 5σ detection for a fiducial neutrino mass of 0.1 eV or a 95% upper limit of 0.04 eV for a fiducial of m_ν = 0 eV. These constraints are about 50% better than those achieved without external constraint. We further investigate the impact on modeling where the dark-energy equation of state is constant but not necessarily -1, or where a non-flat universe is allowed. In these cases, the next-generation accuracies of Planck, BOSS, and a 1% measurement of H_0 would all be required to obtain the limit m_ν < 0.05–0.06 eV (95% C.L.) for the fiducial of m_ν = 0 eV. The independence of systematics argues for pursuit of both BAO and H_0 measurements.

  14. Challenges in high accuracy surface replication for micro optics and micro fluidics manufacture

    DEFF Research Database (Denmark)

    Tosello, Guido; Hansen, Hans Nørgaard; Calaon, Matteo

    2014-01-01

    Patterning the surface of polymer components with microstructured geometries is employed in optical and microfluidic applications. Mass fabrication of polymer micro structured products is enabled by replication technologies such as injection moulding. Micro structured tools are also produced ... by replication technologies such as nickel electroplating. All replication steps are enabled by a high precision master and high reproduction fidelity to ensure that the functionalities associated with the design are transferred to the final component. Engineered surface micro structures can be either...

  15. A content analysis of the quantity and accuracy of dietary supplement information found in magazines with high adolescent readership.

    Science.gov (United States)

    Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth

    2009-02-01

    The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.

  16. Interethnic differences in the accuracy of anthropometric indicators of obesity in screening for high risk of coronary heart disease

    Science.gov (United States)

    Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE

    2009-01-01

    Background Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States and their accuracy to screen for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective To compare the accuracy and to define ethnic and gender-specific optimal cut points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used in screening for high risk of CHD in the Latin-American and the US populations. Methods We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operator characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). Optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion WHR is the most accurate anthropometric indicator to screen for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
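
    The ROC-based selection of cut points described above can be sketched as follows; a simple cost-weighted criterion stands in for the study's misclassification-cost term, and the synthetic waist-to-hip ratios, class sizes and cost weight are assumptions.

```python
# Sketch only: AUC and a cost-weighted "optimal" cut point for an anthropometric
# indicator (here WHR) used to screen for high CHD risk, on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n_low, n_high = 1800, 200                                     # low / high CHD-risk subjects
whr = np.concatenate([rng.normal(0.90, 0.06, n_low),          # low-risk group
                      rng.normal(0.98, 0.06, n_high)])        # high-risk group
high_risk = np.concatenate([np.zeros(n_low), np.ones(n_high)])

auc = roc_auc_score(high_risk, whr)
fpr, tpr, thresholds = roc_curve(high_risk, whr)

# Cost-weighted cut point: penalise false negatives w times more than false positives.
w = 2.0
prevalence = high_risk.mean()
cost = w * prevalence * (1 - tpr) + (1 - prevalence) * fpr
best = thresholds[np.argmin(cost)]
print(f"AUC = {auc:.2f}, optimal WHR cut point = {best:.2f}")
```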

  17. High accuracy and precision micro injection moulding of thermoplastic elastomers micro ring production

    DEFF Research Database (Denmark)

    Calaon, Matteo; Tosello, Guido; Elsborg, René

    2016-01-01

    The mass-replication nature of the process calls for fast monitoring of process parameters and product geometrical characteristics. In this direction, the present study addresses the possibility of developing a micro manufacturing platform for micro assembly injection moulding with real-time process/product monitoring and metrology. The study represents a new concept, yet to be developed, with great potential for high-precision mass-manufacturing of highly functional 3D multi-material (i.e. including metal/soft polymer) micro components. The activities related to the HINMICO project objectives prove the importance...

  18. High Accuracy Three-dimensional Simulation of Micro Injection Moulded Parts

    DEFF Research Database (Denmark)

    Tosello, Guido; Costa, F. S.; Hansen, Hans Nørgaard

    2011-01-01

    Micro injection moulding (μIM) is the key replication technology for high precision manufacturing of polymer micro products. Data analysis and simulations on micro-moulding experiments have been conducted during the present validation study. Detailed information about the μIM process was gathered...

  19. Application of virtual distances methodology to laser tracker verification with an indexed metrology platform

    International Nuclear Information System (INIS)

    Acero, R; Pueo, M; Santolaria, J; Aguilar, J J; Brau, A

    2015-01-01

    High-range measuring equipment like laser trackers need large-dimension calibrated reference artifacts in their calibration and verification procedures. In this paper, a new verification procedure for portable coordinate measuring instruments based on the generation and evaluation of virtual distances with an indexed metrology platform is developed. This methodology enables the definition of an unlimited number of reference distances without materializing them in a physical gauge to be used as a reference. The generation of the virtual points and the reference lengths derived from them is linked to the concept of the indexed metrology platform and the knowledge of the relative position and orientation of its upper and lower platforms with high accuracy. It is the measuring instrument, together with the indexed metrology platform, that remains still, while the virtual mesh is rotated around them. As a first step, the virtual distances technique is applied to a laser tracker in this work. The experimental verification procedure of the laser tracker with virtual distances is simulated and further compared with the conventional verification procedure of the laser tracker with the indexed metrology platform. The results obtained in terms of volumetric performance of the laser tracker proved the suitability of the virtual distances methodology in calibration and verification procedures for portable coordinate measuring instruments, broadening and expanding the possibilities for the definition of reference distances in these procedures. (paper)

  20. Application of virtual distances methodology to laser tracker verification with an indexed metrology platform

    Science.gov (United States)

    Acero, R.; Santolaria, J.; Pueo, M.; Aguilar, J. J.; Brau, A.

    2015-11-01

    High-range measuring equipment like laser trackers need large-dimension calibrated reference artifacts in their calibration and verification procedures. In this paper, a new verification procedure for portable coordinate measuring instruments based on the generation and evaluation of virtual distances with an indexed metrology platform is developed. This methodology enables the definition of an unlimited number of reference distances without materializing them in a physical gauge to be used as a reference. The generation of the virtual points and the reference lengths derived from them is linked to the concept of the indexed metrology platform and the knowledge of the relative position and orientation of its upper and lower platforms with high accuracy. It is the measuring instrument, together with the indexed metrology platform, that remains still, while the virtual mesh is rotated around them. As a first step, the virtual distances technique is applied to a laser tracker in this work. The experimental verification procedure of the laser tracker with virtual distances is simulated and further compared with the conventional verification procedure of the laser tracker with the indexed metrology platform. The results obtained in terms of volumetric performance of the laser tracker proved the suitability of the virtual distances methodology in calibration and verification procedures for portable coordinate measuring instruments, broadening and expanding the possibilities for the definition of reference distances in these procedures.

  1. Toward a 3D transrectal ultrasound system for verification of needle placement during high-dose-rate interstitial gynecologic brachytherapy.

    Science.gov (United States)

    Rodgers, Jessica Robin; Surry, Kathleen; Leung, Eric; D'Souza, David; Fenster, Aaron

    2017-05-01

    Treatment for gynecologic cancers, such as cervical, recurrent endometrial, and vaginal malignancies, commonly includes external-beam radiation and brachytherapy. In high-dose-rate (HDR) interstitial gynecologic brachytherapy, radiation treatment is delivered via hollow needles that are typically inserted through a template on the perineum with a cylinder placed in the vagina for stability. Despite the need for precise needle placement to minimize complications and provide optimal treatment, there is no standard intra-operative image-guidance for this procedure. While some image-guidance techniques have been proposed, including magnetic resonance (MR) imaging, X-ray computed tomography (CT), and two-dimensional (2D) transrectal ultrasound (TRUS), these techniques have not been widely adopted. In order to provide intra-operative needle visualization and localization during interstitial brachytherapy, we have developed a three-dimensional (3D) TRUS system. This study describes the 3D TRUS system and reports on the system validation and results from a proof-of-concept patient study. To obtain a 3D TRUS image, the system rotates a conventional 2D endocavity transducer through 170 degrees in 12 s, reconstructing the 2D frames into a 3D image in real-time. The geometry of the reconstruction was validated using two geometric phantoms to ensure the accuracy of the linear measurements in each of the image coordinate directions and the volumetric accuracy of the system. An agar phantom including vaginal and rectal canals, as well as a model uterus and tumor, was designed and used to test the visualization and localization of the interstitial needles under idealized conditions by comparing the needles' positions between the 3D TRUS scan and a registered MR image. Five patients undergoing HDR interstitial gynecologic brachytherapy were imaged using the 3D TRUS system following the insertion of all needles. This image was manually, rigidly registered to the clinical

  2. Museum genomics: low-cost and high-accuracy genetic data from historical specimens.

    Science.gov (United States)

    Rowe, Kevin C; Singhal, Sonal; Macmanes, Matthew D; Ayroles, Julien F; Morelli, Toni Lyn; Rubidge, Emily M; Bi, Ke; Moritz, Craig C

    2011-11-01

    Natural history collections are unparalleled repositories of geographical and temporal variation in faunal conditions. Molecular studies offer an opportunity to uncover much of this variation; however, genetic studies of historical museum specimens typically rely on extracting highly degraded and chemically modified DNA samples from skins, skulls or other dried samples. Despite this limitation, obtaining short fragments of DNA sequences using traditional PCR amplification of DNA has been the primary method for genetic study of historical specimens. Few laboratories have succeeded in obtaining genome-scale sequences from historical specimens and then only with considerable effort and cost. Here, we describe a low-cost approach using high-throughput next-generation sequencing to obtain reliable genome-scale sequence data from a traditionally preserved mammal skin and skull using a simple extraction protocol. We show that single-nucleotide polymorphisms (SNPs) from the genome sequences obtained independently from the skin and from the skull are highly repeatable compared to a reference genome. © 2011 Blackwell Publishing Ltd.

  3. Algorithm of dynamic regulation of a system of duct, for a high accuracy climatic system

    Science.gov (United States)

    Arbatskiy, A. A.; Afonina, G. N.; Glazov, V. S.

    2017-11-01

    Currently, most climatic systems operate only in a stationary, as-designed mode. At the same time, many modern industrial sites require constant or periodic changes in the technological process, so that for about 80% of the time the site does not need ventilation at the design flow rate while high-precision climatic parameters must still be maintained. When a climatic system serves several rooms in parallel and is not constantly in use, balancing the duct system becomes a problem. For this problem, an algorithm for quantity (flow) regulation with minimal changes to the system was created. Dynamic duct system: a parallel air-balance control system with high-precision climatic parameters was developed. The algorithm maintains a constant pressure in the main duct under varying air flow, so the terminal devices have only one regulation parameter - the flap opening area. The precision of regulation increases, and the climatic system maintains temperature and humidity with high precision (0.5 °C for temperature, 5% for relative humidity). Result: the research was carried out in the CFD package PHOENICS. Air velocity and pressure in the duct were obtained for different operating modes, equations for the air-valve positions were derived for different room climate parameters, and the energy-saving potential of the dynamic duct system was calculated for different types of rooms.
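
    The control idea, holding a constant static pressure in the main duct while the terminal flaps change their opening area so that flap area remains the only local regulation parameter, can be illustrated with a toy discrete-time PI loop, as sketched below. The first-order duct/fan model, the gains and all numbers are invented for illustration and are not taken from the paper.

```python
# Toy sketch: a PI-controlled fan holds the main-duct static pressure while the
# terminal flaps change opening area. The duct/fan model and all numbers are
# invented for illustration; they are not taken from the paper.
import numpy as np

setpoint = 250.0                 # duct static pressure setpoint [Pa]
dt, t_end = 0.1, 120.0           # time step and horizon [s]
kp, ki = 0.005, 0.02             # PI gains (assumed)

pressure, integral = 50.0, 0.0   # start-up: duct well below the setpoint
history = []
for k in range(int(t_end / dt)):
    t = k * dt
    open_area = 0.6 if t < 60.0 else 0.9          # flaps open further at t = 60 s
    error = setpoint - pressure
    integral += error * dt
    fan_cmd = float(np.clip(kp * error + ki * integral, 0.0, 1.0))
    # Very rough duct model: pressure rises with fan command and is relieved
    # through the open flap area (orifice-like square-root outflow).
    pressure += dt * (1200.0 * fan_cmd - 60.0 * open_area * np.sqrt(max(pressure, 0.0)))
    history.append((t, pressure, fan_cmd))

print(f"final pressure = {history[-1][1]:.1f} Pa (setpoint {setpoint:.0f} Pa)")
```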

  4. High-accuracy alignment based on atmospherical dispersion - technological approaches and solutions for the dual-wavelength transmitter

    International Nuclear Information System (INIS)

    Burkhard, Boeckem

    1999-01-01

    In the course of the progressive development of sophisticated geodetic systems utilizing electromagnetic waves in the visible or near-IR range, a more detailed knowledge of the propagation medium and, at the same time, solutions to atmospherically induced limitations become important. An alignment system based on atmospheric dispersion, called a dispersometer, is a metrological solution to the atmospherically induced limitations in optical alignment and high-accuracy direction observations. In the dispersometer we use the dual-wavelength method for dispersive air to obtain refraction-compensated angle measurements, the detrimental impact of atmospheric turbulence notwithstanding. The principle of the dual-wavelength method utilizes atmospheric dispersion, i.e. the wavelength dependence of the refractive index. The difference angle between two light beams of different wavelengths, called the dispersion angle Δβ, is to first approximation proportional to the refraction angle: β_IR ≈ ν·(β_blue − β_IR) = ν·Δβ. This equation implies that the dispersion angle has to be measured at least 42 times more accurately than the desired accuracy of the refraction angle, for the wavelengths used in the present dispersometer. This required accuracy constitutes one major difficulty for the instrumental performance in applying the dispersion effect. Moreover, the dual-wavelength method can only be used successfully in an optimized transmitter-receiver combination. Beyond the above-mentioned resolution requirement for the detector, major difficulties in the instrumental realization arise from the availability of a suitable dual-wavelength laser light source, laser light modulation with a very high extinction ratio, and coaxial emission of mono-mode radiation at both wavelengths. Therefore, this paper focuses on the solutions for the dual-wavelength transmitter, introducing a new hardware approach and a complete re-design of the concept proposed in [1] for the dual
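
    A tiny numerical illustration of the dual-wavelength relation β_IR ≈ ν·Δβ is given below: the refraction angle of the IR beam is recovered from the measured dispersion angle and subtracted from the observed direction. The dispersion coefficient and the observed directions are assumed values, not data from the dispersometer.

```python
# Tiny numerical illustration (assumed values, not dispersometer data): recover the
# IR refraction angle from the measured dispersion angle and correct the observed
# direction, using beta_IR ~= nu * (beta_blue - beta_IR) = nu * delta_beta.
nu = 42.0            # dispersion coefficient for the blue/IR wavelength pair (assumed)

d_blue = 10.033      # observed direction with the blue beam [arcsec, arbitrary zero]
d_ir = 9.800         # observed direction with the IR beam [arcsec]

dispersion_angle = d_blue - d_ir          # delta_beta, the measured quantity
refraction_ir = nu * dispersion_angle     # beta_IR, refraction of the IR beam
d_corrected = d_ir - refraction_ir        # refraction-compensated direction

print(f"dispersion angle = {dispersion_angle:.3f} arcsec")
print(f"IR refraction    = {refraction_ir:.2f} arcsec")
print(f"corrected        = {d_corrected:.2f} arcsec")
```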

  5. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.

  6. Modelling and Control of Stepper Motors for High Accuracy Positioning Systems Used in Radioactive Environments

    CERN Document Server

    Picatoste Ruilope, Ricardo; Masi, Alessandro

    Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators in the Large Hadron Collider, the largest particle accelerator at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and also prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring the prevention of step loss in the motors and maintenance to be foreseen in case of mechanical degradation. In order to make the above possible, an approach is proposed for the application of an Extended Kalman Filter to a sensorless stepper motor drive, when the motor is separated from its drive by long cables. When the long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatl...

  7. The study of optimization on process parameters of high-accuracy computerized numerical control polishing

    Science.gov (United States)

    Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu

    2017-09-01

    Spherical lenses give rise to spherical aberration and reduced optical performance. Consequently, practical optical systems apply a combination of spherical lenses for aberration correction, which increases the volume of the optical system. In modern optical systems, aspherical lenses have been widely used because of their high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process because of their varying curvature. Sub-aperture computer numerical control (CNC) polishing has therefore been adopted for aspherical surface fabrication in recent years. The CNC polishing process, however, is normally accompanied by mid-spatial-frequency (MSF) error, and the MSF surface texture decreases the optical performance of high-precision optical systems, especially for short-wavelength applications. Based on a bonnet-polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, which include feed rate, head speed, track spacing and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by those polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths when a higher total removal depth is required, can efficiently reduce the MSF error. To verify the polishing parameters, a correction polishing process was divided into several polishing runs with different path directions. Compared to a one-shot polishing run, the multi-direction path polishing plan produced better surface quality on the optics.
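
    The PSD-based judgement of MSF content mentioned above can be sketched by estimating the 1-D power spectral density of a measured surface height profile and integrating it over a mid-spatial-frequency band to obtain a band-limited RMS. The sampling interval, the assumed 0.1-10 mm^-1 band and the synthetic profile are illustrative only.

```python
# Sketch only: band-limited RMS in an assumed MSF band (0.1-10 mm^-1) from the Welch
# PSD of a synthetic 1-D surface height profile (heights in nm, positions in mm).
import numpy as np
from scipy import signal

dx = 0.01                                    # sample spacing [mm]
x = np.arange(0.0, 50.0, dx)                 # 50 mm scan line
# Synthetic profile: low-order form error + a 2 mm period MSF ripple + roughness.
height = (50.0 * np.sin(2 * np.pi * x / 50.0)
          + 5.0 * np.sin(2 * np.pi * x / 2.0)
          + np.random.normal(0.0, 1.0, x.size))

freq, psd = signal.welch(height, fs=1.0 / dx, nperseg=2048)   # PSD in nm^2 * mm
band = (freq >= 0.1) & (freq <= 10.0)                         # assumed MSF band [mm^-1]
df = freq[1] - freq[0]
msf_rms = np.sqrt(np.sum(psd[band]) * df)                     # band-limited RMS [nm]
print(f"band-limited MSF RMS ~ {msf_rms:.1f} nm")
```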

  8. High accuracy injection circuit for the calibration of a large pixel sensor matrix

    International Nuclear Information System (INIS)

    Quartieri, E.; Comotti, D.; Manghisoni, M.

    2013-01-01

    Semiconductor pixel detectors, for particle tracking and vertexing in high energy physics experiments as well as for X-ray imaging, in particular for synchrotron light sources and XFELs, require a large area sensor matrix. This work will discuss the design and the characterization of a high-linearity, low dispersion injection circuit to be used for pixel-level calibration of detector readout electronics in a large pixel sensor matrix. The circuit provides a useful tool for the characterization of the readout electronics of the pixel cell unit for both monolithic active pixel sensors and hybrid pixel detectors. In the latter case, the circuit allows for precise analogue test of the readout channel already at the chip level, when no sensor is connected. Moreover, it provides a simple means for calibration of readout electronics once the detector has been connected to the chip. Two injection techniques can be provided by the circuit: one for a charge sensitive amplification and the other for a transresistance readout channel. The aim of the paper is to describe the architecture and the design guidelines of the calibration circuit, which has been implemented in a 130 nm CMOS technology. Moreover, experimental results of the proposed injection circuit will be presented in terms of linearity and dispersion

  9. Accuracy of an automated system for tuberculosis detection on chest radiographs in high-risk screening.

    Science.gov (United States)

    Melendez, J; Hogeweg, L; Sánchez, C I; Philipsen, R H H M; Aldridge, R W; Hayward, A C; Abubakar, I; van Ginneken, B; Story, A

    2018-05-01

    Tuberculosis (TB) screening programmes can be optimised by reducing the number of chest radiographs (CXRs) requiring interpretation by human experts. To evaluate the performance of computerised detection software in triaging CXRs in a high-throughput digital mobile TB screening programme. A retrospective evaluation of the software was performed on a database of 38 961 postero-anterior CXRs from unique individuals seen between 2005 and 2010, 87 of whom were diagnosed with TB. The software generated a TB likelihood score for each CXR. This score was compared with a reference standard for notified active pulmonary TB using receiver operating characteristic (ROC) curve and localisation ROC (LROC) curve analyses. On ROC curve analysis, software specificity was 55.71% (95%CI 55.21-56.20) and negative predictive value was 99.98% (95%CI 99.95-99.99), at a sensitivity of 95%. The area under the ROC curve was 0.90 (95%CI 0.86-0.93). Results of the LROC curve analysis were similar. The software could identify more than half of the normal images in a TB screening setting while maintaining high sensitivity, and may therefore be used for triage.

  10. A three axis turntable's online initial state measurement method based on the high-accuracy laser gyro SINS

    Science.gov (United States)

    Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu

    2016-10-01

    As an indispensable piece of equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of an INS, we need to accurately measure the initial state of the turntable. However, the traditional measuring method needs a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test process is complex and inefficient. Therefore, it is relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by the laser gyro strapdown inertial navigation system (SINS) after fine alignment, we can use it as the attitude reference for the initial state measurement of the three-axis turntable. Based on the principle that a fixed rotation vector increment is not affected by the measuring point, we use the laser gyro SINS and the encoder of the turntable to provide the attitudes of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable has been achieved.

  11. High-accuracy measurement and compensation of grating line-density error in a tiled-grating compressor

    Science.gov (United States)

    Zhao, Dan; Wang, Xiao; Mu, Jie; Li, Zhilin; Zuo, Yanlei; Zhou, Song; Zhou, Kainan; Zeng, Xiaoming; Su, Jingqin; Zhu, Qihua

    2017-02-01

    Grating tiling technology is one of the most effective means of increasing the aperture of gratings. The line-density error (LDE) between sub-gratings degrades the performance of the tiled gratings, so high-accuracy measurement and compensation of the LDE are important for improving the output pulse characteristics of the tiled-grating compressor. In this paper, the influence of the LDE on the output pulses of the tiled-grating compressor is quantitatively analyzed by means of numerical simulation, and the output beam drift and output pulse broadening resulting from the LDE are presented. Based on the numerical results, we propose a compensation method that reduces the degradation of the tiled-grating compressor by applying an angular tilt error and a longitudinal piston error at the same time. Moreover, a monitoring system is set up to measure the LDE between sub-gratings accurately, and the dispersion variation due to the LDE is also demonstrated based on spatial-spectral interference. In this way, we can realize high-accuracy measurement and compensation of the LDE, and this provides an efficient way to guide the adjustment of the tiled gratings.

  12. High-precision prostate cancer irradiation by clinical application of an offline patient setup verification procedure, using portal imaging

    NARCIS (Netherlands)

    Bel, A.; Vos, P. H.; Rodrigus, P. T.; Creutzberg, C. L.; Visser, A. G.; Stroom, J. C.; Lebesque, J. V.

    1996-01-01

    PURPOSE: To investigate in three institutions, The Netherlands Cancer Institute (Antoni van Leeuwenhoek Huis [AvL]), Dr. Daniel den Hoed Cancer Center (DDHC), and Dr. Bernard Verbeeten Institute (BVI), how much the patient setup accuracy for irradiation of prostate cancer can be improved by an

  13. High-precision prostate cancer irradiation by clinical application of an offline patient setup verification procedure, using portal imaging

    NARCIS (Netherlands)

    A. Bel (Arjan); P.H. Vos (Pieter); P. Rodrigus (Patrick); C.L. Creutzberg (Carien); A.G. Visser (Andries); J.Ch. Stroom (Joep); J.V. Lebesque (Joos)

    1996-01-01

    textabstractPurpose: To investigate in three institutions, The Netherlands Cancer Institute (Antoni van Leeuwenhoek Huis [AvL]), Dr. Daniel den Hoed Cancer Center (DDHC), and Dr. Bernard Verbeeten Institute (BVI), how much the patient setup accuracy for irradiation of prostate cancer can be improved

  14. High accuracy velocity control method for the french moving-coil watt balance

    International Nuclear Information System (INIS)

    Topcu, Suat; Chassagne, Luc; Haddad, Darine; Alayli, Yasser; Juncar, Patrick

    2004-01-01

    We describe a novel method of velocity control dedicated to the French moving-coil watt balance. In this project, a coil has to move in a magnetic field at a velocity of 2 mm s^-1 with a relative uncertainty of 10^-9 over 60 mm. Our method is based on the use of a heterodyne Michelson interferometer, a two-level translation stage, and a homemade high-frequency phase-shifting electronic circuit. To quantify the stability of the velocity, the output of the interferometer is sent into a frequency counter and the Doppler frequency shift is recorded. The Allan standard deviation has been used to calculate the stability, and a σ_y(τ) of about 2.2x10^-9 over 400 s has been obtained
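    To make the stability metric concrete, the sketch below computes a non-overlapping Allan deviation from evenly sampled fractional velocity (or frequency) data; the data here are synthetic white noise, not measurements from the watt balance.

      # Sketch: non-overlapping Allan deviation of fractional velocity samples.
      import numpy as np

      def allan_deviation(y, m):
          """Allan deviation at averaging factor m (tau = m * tau0) for samples y."""
          n = len(y) // m
          if n < 2:
              raise ValueError("not enough data for this averaging factor")
          y_avg = y[:n * m].reshape(n, m).mean(axis=1)     # block averages
          return np.sqrt(0.5 * np.mean(np.diff(y_avg) ** 2))

      rng = np.random.default_rng(0)
      y = 1e-9 * rng.standard_normal(100_000)              # synthetic 1 Hz samples
      for m in (1, 10, 100, 1000):
          print(f"tau = {m:5d} s   sigma_y(tau) = {allan_deviation(y, m):.2e}")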

  15. Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals

    Science.gov (United States)

    Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei

    2018-01-01

    Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using the designed recursive scheme. Compared with conventional methods such as the discrete Fourier transform (DFT) and the fast Fourier transform, this scheme requires no multiplications and only half the number of additions, by combining the DFT with the Rife algorithm and Fourier coefficient interpolation. Experimentally, when the sampling frequency is 10 MHz, the real-time frequency measurements of intermediate frequency, narrowband signals have a measurement error of about ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and reduced calculation time.
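    The sketch below illustrates the underlying idea of refining a coarse DFT peak with Rife two-point interpolation; the recursive twiddle-factor generation and the sigma-delta front end described above are not reproduced, and the test-tone parameters are arbitrary assumptions.

      # Sketch: DFT-peak frequency estimate refined by Rife two-point interpolation.
      import numpy as np

      def rife_frequency(x, fs):
          """Coarse DFT peak plus Rife interpolation (rectangular window)."""
          N = len(x)
          X = np.abs(np.fft.rfft(x))
          k = int(np.argmax(X[1:-1])) + 1         # coarse peak bin (skip DC/Nyquist)
          if X[k + 1] >= X[k - 1]:                # interpolate toward larger neighbour
              delta = X[k + 1] / (X[k] + X[k + 1])
          else:
              delta = -X[k - 1] / (X[k] + X[k - 1])
          return (k + delta) * fs / N

      fs, f_true = 10e6, 1_234_567.8              # 10 MHz sampling, assumed IF tone
      t = np.arange(4096) / fs
      x = np.sin(2 * np.pi * f_true * t) + 0.01 * np.random.randn(t.size)
      print(f"estimated {rife_frequency(x, fs):.1f} Hz  (true {f_true} Hz)")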

  16. A high-accuracy image registration algorithm using phase-only correlation for dental radiographs

    International Nuclear Information System (INIS)

    Ito, Koichi; Nikaido, Akira; Aoki, Takafumi; Kosuge, Eiko; Kawamata, Ryota; Kashima, Isamu

    2008-01-01

    Dental radiographs have been used for the accurate assessment and treatment of dental diseases. Nonlinear deformation may be observed between two dental radiographs, even if they are taken of the same oral region of the same subject. For an accurate diagnosis, complete geometric registration between radiographs is required. This paper presents an efficient dental radiograph registration algorithm using the Phase-Only Correlation (POC) function. The use of the phase components of the 2D (two-dimensional) discrete Fourier transforms of dental radiograph images makes it possible to achieve highly robust image registration and recognition. Experimental evaluation using a dental radiograph database indicates that the proposed algorithm exhibits efficient recognition performance even for distorted radiographs. (author)
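    A minimal sketch of the phase-only correlation idea is given below for estimating a purely translational offset between two images; the band-limited, sub-pixel POC refinements used for dental radiographs are not reproduced, and the test images are synthetic.

      # Sketch: Phase-Only Correlation (POC) for estimating a translational shift.
      import numpy as np

      def poc_shift(f, g):
          """Return (dy, dx) such that g is approximately np.roll(f, (dy, dx))."""
          F, G = np.fft.fft2(f), np.fft.fft2(g)
          cross = np.conj(F) * G
          r = cross / (np.abs(cross) + 1e-12)          # keep phase information only
          poc = np.real(np.fft.ifft2(r))               # correlation surface
          dy, dx = np.unravel_index(np.argmax(poc), poc.shape)
          if dy > f.shape[0] // 2: dy -= f.shape[0]    # map to signed shifts
          if dx > f.shape[1] // 2: dx -= f.shape[1]
          return dy, dx

      img = np.random.rand(128, 128)
      shifted = np.roll(img, (5, -3), axis=(0, 1))
      print(poc_shift(img, shifted))                   # expect (5, -3)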

  17. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption

    Science.gov (United States)

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (~100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ~0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.

  18. Accuracy of W' Recovery Kinetics in High Performance Cyclists - Modelling Intermittent Work Capacity.

    Science.gov (United States)

    Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I

    2017-10-16

    With knowledge of an individual's critical power (CP) and W', the SKIBA 2 model provides a framework with which to track W' balance during intermittent high-intensity work bouts. There are concerns that the time constant controlling the recovery rate of W' (τW') may require refinement to enable effective use in an elite population. Four elite endurance cyclists completed an array of intermittent exercise protocols to volitional exhaustion. Each protocol lasted approximately 3.5-6 minutes and featured a range of recovery intensities, set in relation to each athlete's CP (DCP). Using the framework of the SKIBA 2 model, the τW' values were modified for each protocol to achieve an accurate W' at volitional exhaustion. Modified τW' values were compared to the equivalent SKIBA 2 τW' values to assess the difference in recovery rates for this population. Plotting modified τW' values against DCP showed the adjusted relationship between work rate and recovery rate. Comparing modified τW' values against the SKIBA 2 τW' values showed a negative bias of 112±46 s (mean±95% CL), suggesting athletes recovered W' faster than predicted by SKIBA 2 (p=0.0001). The modified τW' to DCP relationship was best described by a power function: τW' = 2287.2·DCP^(-0.688) (R² = 0.433). The current SKIBA 2 model is not appropriate for use in elite cyclists as it under-predicts the recovery rate of W'. The modified τW' equation presented will require validation, but appears more appropriate for high-performance athletes. Individual τW' relationships may be necessary in order to maximise the model's validity.
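    The sketch below shows how a W' balance could be tracked with the modified power-law time constant quoted above (τW' = 2287.2·DCP^(-0.688)); the discrete depletion/recovery scheme is a simplification of the SKIBA framework, and the athlete values and power profile are assumptions.

      # Sketch: W' balance with exponential recovery below critical power (CP).
      import numpy as np

      def tau_wprime(d_cp):
          """Recovery time constant [s] from DCP = CP - recovery power [W] (modified fit)."""
          return 2287.2 * max(d_cp, 1.0) ** -0.688

      def wprime_balance(power, cp, wprime, dt=1.0):
          """Deplete W' above CP, recover exponentially toward W' below CP."""
          bal, trace = wprime, []
          for p in power:
              if p > cp:
                  bal -= (p - cp) * dt                                  # expenditure [J]
              else:
                  bal += (wprime - bal) * (1 - np.exp(-dt / tau_wprime(cp - p)))
              trace.append(bal)
          return np.array(trace)

      cp, wprime = 400.0, 20_000.0                     # assumed athlete values
      power = np.array([500.0] * 120 + [300.0] * 180 + [500.0] * 120)  # work/recovery/work
      print(f"minimum W' balance: {wprime_balance(power, cp, wprime).min():.0f} J")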

  19. Accuracy of Administrative Codes for Distinguishing Positive Pressure Ventilation from High-Flow Nasal Cannula.

    Science.gov (United States)

    Good, Ryan J; Leroue, Matthew K; Czaja, Angela S

    2018-06-07

    Noninvasive positive pressure ventilation (NIPPV) is increasingly used in critically ill pediatric patients, despite limited data on safety and efficacy. Administrative data may be a good resource for observational studies. Therefore, we sought to assess the performance of the International Classification of Diseases, Ninth Revision procedure code for NIPPV. Patients admitted to the PICU requiring NIPPV or heated high-flow nasal cannula (HHFNC) over the 11-month study period were identified from the Virtual PICU System database. The gold standard was manual review of the electronic health record to verify the use of NIPPV or HHFNC among the cohort. The presence or absence of a NIPPV procedure code was determined by using administrative data. Test characteristics with 95% confidence intervals (CIs) were generated, comparing administrative data with the gold standard. Among the cohort ( n = 562), the majority were younger than 5 years, and the most common primary diagnosis was bronchiolitis. Most (82%) required NIPPV, whereas 18% required only HHFNC. The NIPPV code had a sensitivity of 91.1% (95% CI: 88.2%-93.6%) and a specificity of 57.6% (95% CI: 47.2%-67.5%), with a positive likelihood ratio of 2.15 (95% CI: 1.70-2.71) and negative likelihood ratio of 0.15 (95% CI: 0.11-0.22). Among our critically ill pediatric cohort, NIPPV procedure codes had high sensitivity but only moderate specificity. On the basis of our study results, there is a risk of misclassification, specifically failure to identify children who require NIPPV, when using administrative data to study the use of NIPPV in this population. Copyright © 2018 by the American Academy of Pediatrics.
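    As a quick arithmetic check (a sketch, not part of the study), the reported likelihood ratios follow directly from the stated sensitivity and specificity of the NIPPV procedure code.

      # Sketch: likelihood ratios from the reported sensitivity and specificity.
      sens, spec = 0.911, 0.576

      lr_pos = sens / (1 - spec)      # positive likelihood ratio
      lr_neg = (1 - sens) / spec      # negative likelihood ratio
      print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # ~2.15 and ~0.15, as reported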

  20. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    Science.gov (United States)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common pre-requisite on today's mobile platforms. The methods of calibration, nevertheless, are often relatively poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high quality external calibration for a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here, uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are calibrated in relation to the INS system. Therefore, the transformation from camera to laser contains the cumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement will be explored to collect more useful calibration data. This results in a better intersensor calibration allowing better coloring of the clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.

  1. Analysis of high accuracy, quantitative proteomics data in the MaxQB database.

    Science.gov (United States)

    Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias

    2012-03-01

    MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database

  2. Determination of the QCD Λ-parameter and the accuracy of perturbation theory at high energies

    International Nuclear Information System (INIS)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer; Humboldt-Universitaet, Berlin

    2016-04-01

    We discuss the determination of the strong coupling α_MS(m_Z) or equivalently the QCD Λ-parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme, s, and at some energy scale μ. The higher the scale μ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to α_s=0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a three percent error in the Λ-parameter, while data around α_s∼0.2 is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.

  3. Determination of the QCD Λ-parameter and the accuracy of perturbation theory at high energies

    Energy Technology Data Exchange (ETDEWEB)

    Dalla Brida, Mattia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Fritzsch, Patrick [Univ. Autonoma de Madrid (Spain). Inst. de Fisica Teorica UAM/CSIC; Korzec, Tomasz [Wuppertal Univ. (Germany). Dept. of Physics; Ramos, Alberto [CERN - European Organization for Nuclear Research, Geneva (Switzerland). Theory Div.; Sint, Stefan [Trinity College Dublin (Ireland). School of Mathematics; Sommer, Rainer [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Collaboration: ALPHA Collaboration

    2016-04-15

    We discuss the determination of the strong coupling α_MS(m_Z) or equivalently the QCD Λ-parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme, s, and at some energy scale μ. The higher the scale μ the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to α_s=0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a three percent error in the Λ-parameter, while data around α_s∼0.2 is clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.

  4. High-accuracy X-ray detector calibration based on cryogenic radiometry

    Science.gov (United States)

    Krumrey, M.; Cibik, L.; Müller, P.

    2010-06-01

    Cryogenic electrical substitution radiometers (ESRs) are absolute thermal detectors, based on the equivalence of electrical power and radiant power. Their core piece is a cavity absorber, which is typically made of copper to achieve a short response time. At higher photon energies, the use of copper prevents the operation of ESRs due to increasing transmittance. A new absorber design for hard X-rays has been developed at the laboratory of the Physikalisch-Technische Bundesanstalt (PTB) at the electron storage ring BESSY II. The Monte Carlo simulation code Geant4 was applied to optimize its absorptance for photon energies of up to 60 keV. The measurement of the radiant power of monochromatized synchrotron radiation was achieved with relative standard uncertainties of less than 0.2 %, covering the entire photon energy range of three beamlines from 50 eV to 60 keV. Monochromatized synchrotron radiation of high spectral purity is used to calibrate silicon photodiodes against the ESR for photon energies up to 60 keV with relative standard uncertainties below 0.3 %. For some silicon photodiodes, the photocurrent is not linear with the incident radiant power.

  5. High-accuracy X-ray detector calibration based on cryogenic radiometry

    International Nuclear Information System (INIS)

    Krumrey, M.; Cibik, L.; Mueller, P.

    2010-01-01

    Cryogenic electrical substitution radiometers (ESRs) are absolute thermal detectors, based on the equivalence of electrical power and radiant power. Their core piece is a cavity absorber, which is typically made of copper to achieve a short response time. At higher photon energies, the use of copper prevents the operation of ESRs due to increasing transmittance. A new absorber design for hard X-rays has been developed at the laboratory of the Physikalisch-Technische Bundesanstalt (PTB) at the electron storage ring BESSY II. The Monte Carlo simulation code Geant4 was applied to optimize its absorptance for photon energies of up to 60 keV. The measurement of the radiant power of monochromatized synchrotron radiation was achieved with relative standard uncertainties of less than 0.2 %, covering the entire photon energy range of three beamlines from 50 eV to 60 keV. Monochromatized synchrotron radiation of high spectral purity is used to calibrate silicon photodiodes against the ESR for photon energies up to 60 keV with relative standard uncertainties below 0.3 %. For some silicon photodiodes, the photocurrent is not linear with the incident radiant power.

  6. High-accuracy local positioning network for the alignment of the Mu2e experiment.

    Energy Technology Data Exchange (ETDEWEB)

    Hejdukova, Jana B. [Czech Technical Univ., Prague (Czech Republic)

    2017-06-01

    This Diploma thesis describes the establishment of a high-precision local positioning network and accelerator alignment for the Mu2e physics experiment. The process of establishing the new network consists of a few steps: design of the network, pre-analysis, installation work, measurement of the network and adjustment. Adjustments were performed using two approaches: a geodetic approach that takes the Earth's curvature into account, and a metrological approach based on a pure 3D Cartesian system. The two approaches are compared and evaluated in the results against the expected differences. The effect of the Earth's curvature was found to be significant for this kind of network and should not be neglected. The measurements were obtained with an Absolute Tracker AT401, the leveling instrument Leica DNA03 and the gyrotheodolite DMT Gyromat 2000. The coordinates of the points of the reference network were determined by the Least Squares Method, and the overall view is attached as Annexes.

  7. Diagnostic accuracy for X-ray chest in interstitial lung disease as confirmed by high resolution computed tomography (HRCT) chest

    International Nuclear Information System (INIS)

    Afzal, F.; Raza, S.; Shafique, M.

    2017-01-01

    Objective: To determine the diagnostic accuracy of chest x-ray in interstitial lung disease as confirmed by high resolution computed tomography (HRCT) of the chest. Study Design: A cross-sectional validation study. Place and Duration of Study: Department of Diagnostic Radiology, Combined Military Hospital Rawalpindi, from Oct 2013 to Apr 2014. Material and Method: A total of 137 patients with clinical suspicion of interstitial lung disease (ILD), aged 20-50 years and of both genders, were included in the study. Patients with a history of previous histopathological diagnosis, patients already on treatment and pregnant females were excluded. All the patients had a chest x-ray and then HRCT. The x-ray and HRCT findings were recorded as presence or absence of ILD. Results: Mean age was 40.21 ± 4.29 years. Out of 137 patients, 79 (57.66 percent) were males and 58 (42.34 percent) were females, with a male to female ratio of 1.36:1. Chest x-ray detected ILD in 80 (58.39 percent) patients, out of which 72 (true positive) had ILD and 8 (false positive) had no ILD on HRCT. Overall sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of chest x-ray in diagnosing ILD were 80.0 percent, 82.98 percent, 90.0 percent, 68.42 percent and 81.02 percent respectively. Conclusion: This study concluded that chest x-ray is a simple, non-invasive, economical and readily available alternative to HRCT, with an acceptable diagnostic accuracy of 81 percent in the diagnosis of ILD. (author)
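    The sketch below reconstructs the underlying 2x2 table and re-derives the reported test characteristics; the false-negative (18) and true-negative (39) counts are inferred here from the stated sensitivity and specificity rather than quoted from the paper.

      # Sketch: re-deriving the reported metrics from the (partly inferred) 2x2 table.
      tp, fp = 72, 8                  # chest x-ray positive: 80 patients
      fn, tn = 18, 39                 # inferred from reported sensitivity/specificity

      sensitivity = tp / (tp + fn)                    # 0.800
      specificity = tn / (tn + fp)                    # ~0.830
      ppv = tp / (tp + fp)                            # 0.900
      npv = tn / (tn + fn)                            # ~0.684
      accuracy = (tp + tn) / (tp + tn + fp + fn)      # ~0.810

      print(f"sens={sensitivity:.2%} spec={specificity:.2%} "
            f"PPV={ppv:.2%} NPV={npv:.2%} accuracy={accuracy:.2%}")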

  8. Social power and recognition of emotional prosody: High power is associated with lower recognition accuracy than low power.

    Science.gov (United States)

    Uskul, Ayse K; Paulmann, Silke; Weick, Mario

    2016-02-01

    Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently. (c) 2016 APA, all rights reserved.

  9. High-accuracy detection of early Parkinson's Disease using multiple characteristics of finger movement while typing.

    Directory of Open Access Journals (Sweden)

    Warwick R Adams

    Full Text Available Parkinson's Disease (PD) is a progressive neurodegenerative movement disease affecting over 6 million people worldwide. Loss of dopamine-producing neurons results in a range of both motor and non-motor symptoms; however, there is currently no definitive test for PD by non-specialist clinicians, especially in the early disease stages where the symptoms may be subtle and poorly characterised. This results in a high misdiagnosis rate (up to 25% by non-specialists), and people can have the disease for many years before diagnosis. There is a need for a more accurate, objective means of early detection, ideally one which can be used by individuals in their home setting. In this investigation, keystroke timing information from 103 subjects (comprising 32 with mild PD severity and the remainder non-PD controls) was captured as they typed on a computer keyboard over an extended period, and showed that PD affects various characteristics of hand and finger movement and that these can be detected. A novel methodology was used to classify the subjects' disease status, by utilising a combination of many keystroke features which were analysed by an ensemble of machine learning classification models. When applied to two separate participant groups, this approach was able to successfully discriminate between early-PD subjects and controls with 96% sensitivity, 97% specificity and an AUC of 0.98. The technique does not require any specialised equipment or medical supervision, and does not rely on the experience and skill of the practitioner. Regarding more general application, it currently does not incorporate a second cardinal disease symptom, so may not differentiate PD from similar movement-related disorders.
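    For illustration only, the sketch below shows one way an ensemble of standard classifiers could be cross-validated on keystroke-timing features; the feature set, data and scikit-learn model choices are placeholders and not the authors' pipeline.

      # Illustrative sketch: ensemble classification of keystroke-timing features.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X = rng.normal(size=(103, 12))     # placeholder hold-time/latency statistics
      y = rng.integers(0, 2, size=103)   # placeholder labels: 1 = early PD, 0 = control

      ensemble = VotingClassifier(
          estimators=[("lr", LogisticRegression(max_iter=1000)),
                      ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("svm", SVC(probability=True))],
          voting="soft",
      )
      auc = cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc")
      print(f"cross-validated AUC on placeholder data: {auc.mean():.2f}")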

  10. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy

    Science.gov (United States)

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-01

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams using PET/CT to image the induced positron activity. The measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. A total dose of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue mapped on to the associated dose distribution of the treatment plan. The results showed that spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions but less so in blood and oxygen rich soft tissues. For the preoperative rectal cancer patient however, a 2 ± (0.5) cm misalignment was observed in the cranial-caudal direction of the patient between the induced activity distribution and treatment plan, indicating a beam patient setup error. No misalignment of this kind was seen in the prostate cancer patient. However, due to a fast patient setup error in the PET/CT scanner a slight mis-position of the patient in the PET/CT was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the induced positron emitters by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat to allow portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  11. A diabetes dashboard and physician efficiency and accuracy in accessing data needed for high-quality diabetes care.

    Science.gov (United States)

    Koopman, Richelle J; Kochendorfer, Karl M; Moore, Joi L; Mehr, David R; Wakefield, Douglas S; Yadamsuren, Borchuluun; Coberly, Jared S; Kruse, Robin L; Wakefield, Bonnie J; Belden, Jeffery L

    2011-01-01

    We compared use of a new diabetes dashboard screen with use of a conventional approach of viewing multiple electronic health record (EHR) screens to find data needed for ambulatory diabetes care. We performed a usability study, including a quantitative time study and qualitative analysis of information-seeking behaviors. While being recorded with Morae Recorder software and "think-aloud" interview methods, 10 primary care physicians first searched their EHR for 10 diabetes data elements using a conventional approach for a simulated patient, and then using a new diabetes dashboard for another. We measured time, number of mouse clicks, and accuracy. Two coders analyzed think-aloud and interview data using grounded theory methodology. The mean time needed to find all data elements was 5.5 minutes using the conventional approach vs 1.3 minutes using the diabetes dashboard; the dashboard also required fewer mouse clicks and yielded fewer errors. Using a diabetes dashboard improves both the efficiency and accuracy of acquiring data needed for high-quality diabetes care. Usability analysis tools can provide important insights into the value of optimizing physician use of health information technologies.

  12. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    Science.gov (United States)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    Inertial navigation systems are the core components of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on the three axes, thereby improving system accuracy. However, the errors caused by misalignment angles and scale factor errors cannot be eliminated through dual-axis rotation modulation, and the discrete calibration method cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS. A procedure for self-calibration of the dual-axis rotation-modulating RLG-INS has been designed. The results of the self-calibration simulation experiment prove that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  13. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparison of events between the cameras and the LLS. Each RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.

  14. Assessment of high precision, high accuracy Inductively Coupled Plasma-Optical Emission Spectroscopy to obtain concentration uncertainties less than 0.2% with variable matrix concentrations

    International Nuclear Information System (INIS)

    Rabb, Savelas A.; Olesik, John W.

    2008-01-01

    The ability to obtain high-precision, high-accuracy measurements in samples with complex matrices using High Performance Inductively Coupled Plasma-Optical Emission Spectroscopy (HP-ICP-OES) was investigated. The Common Analyte Internal Standard (CAIS) procedure was incorporated into the HP-ICP-OES method to correct for matrix-induced changes in emission intensity ratios. Matrix matching and standard addition approaches to minimize matrix-induced errors when using HP-ICP-OES were also assessed. The HP-ICP-OES method was tested with synthetic solutions in a variety of matrices, alloy standard reference materials and geological reference materials

  15. Verification of Vitrified High-Activity Waste Stored in a CASTOR HAW 20/28 CG Cask by Simulated Baseline Comparison

    International Nuclear Information System (INIS)

    Shephard, A.; Arenas-Carrasco, J.; Dratschmidt, H.; De-Baere, P.; Af Ekenstam, G.; Lebrun, A.

    2010-01-01

    The verification process for the vitrification of high-activity waste (HAW) focuses on maintaining the continuity of knowledge of special nuclear material (SNM) as it traverses a vitrification facility. However, the inaccessible nature of a vitrification facility presents an obstacle to the deployment of conventional safeguards, even though the process area of a vitrification facility is effectively a hot cell. The employment of remotely operated NDA hardware/DA sample equipment inside the process area would be problematic at best, and the alternative of continuous monitoring would draw heavily on the critical resource of inspector time. In response to the aforementioned constraints, the IAEA and Euratom opted to develop a new method which focuses on the verification of SNM after the vitrified HAW has been sealed in storage casks. The new method verifies the presence of the vitrified HAW through the comparison of total neutron count rates collected at points around a cask with those predicted by Monte Carlo simulation. The model includes a dual N50 neutron slab detector (custom design by Euratom) and a CASTOR HAW 20/28 CG storage cask configured with the operator-declared contents. By comparison of the simulated neutron emission pattern and field measurements, the displacement of Pu and U is evident from a detectable neutron signal defect. Because the spontaneous fission of 244Cm is the dominant neutron source in vitrified HAW, the 244Cm/Pu and 244Cm/U mass ratios must be known in order to relate the neutron signal outside the cask to the amounts of Pu and U stored inside. These mass ratios can be determined from HAW samples collected by the inspectorates from the accountability tanks and analyzed by DA. The absence of separation of SNM from the HAW is verified by other measures. To ensure the validity of the simulation, sources of uncertainty were systematically addressed and quantified. This new verification method effectively removes the need for NDA equipment

  16. Simplifying and expanding analytical capabilities for various classes of doping agents by means of direct urine injection high performance liquid chromatography high resolution/high accuracy mass spectrometry.

    Science.gov (United States)

    Görgens, Christian; Guddat, Sven; Thomas, Andreas; Wachsmuth, Philipp; Orlovius, Anne-Katrin; Sigmund, Gerd; Thevis, Mario; Schänzer, Wilhelm

    2016-11-30

    So far, in sports drug testing, compounds of different classes are processed and measured using different screening procedures. The constantly increasing number of samples in doping analysis, as well as the large number of substances with doping-related pharmacological effects, require the development of even more powerful assays than those already employed in sports drug testing, indispensably with reduced sample preparation procedures. The analysis of native urine samples after direct injection provides a promising analytical approach, which possesses broad applicability to many different compounds and their metabolites without a time-consuming sample preparation. In this study, a novel multi-target approach based on liquid chromatography and high resolution/high accuracy mass spectrometry is presented to screen for more than 200 analytes of various classes of doping agents far below the required detection limits in sports drug testing. Here, classic groups of drugs such as diuretics, stimulants, β2-agonists, narcotics and anabolic androgenic steroids as well as various newer target compounds like hypoxia-inducible factor (HIF) stabilizers, selective androgen receptor modulators (SARMs), selective estrogen receptor modulators (SERMs), plasma volume expanders and other doping-related compounds listed in the 2016 WADA prohibited list were implemented. As a main achievement, growth hormone releasing peptides, which chemically belong to the group of small peptides, could be implemented. The assay was characterized in terms of linearity (0.99), limit of detection (0.1-25 ng/mL; 3'OH-stanozolol glucuronide: 50 pg/mL; dextran/HES: 10 μg/mL) and matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Physics Verification Overview

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-12

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  18. The linear interplay of intrinsic and extrinsic noises ensures a high accuracy of cell fate selection in budding yeast

    Science.gov (United States)

    Li, Yongkai; Yi, Ming; Zou, Xiufen

    2014-01-01

    To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292

  19. A direct indication of plasma potential diagnostic with fast time response and high accuracy based on a differential emissive probe

    International Nuclear Information System (INIS)

    Yao, W.E.; Hershkowitz, N.; Intrator, T.

    1985-01-01

    The floating potential of the emissive probe has been used to directly measure the plasma potential. The authors have recently presented another method for directly indicating the plasma potential with a differential emissive probe. In this paper they describe the effects of probe size, plasma density and plasma potential fluctuation on plasma potential measurements and give methods for reducing errors. A control system with fast time response (≅ 20 μs) and high accuracy (on the order of the probe temperature T_w/e) for maintaining a differential emissive probe at plasma potential has been developed. It can be operated in pulsed discharge plasma to measure plasma potential dynamic characteristics. A solid state optical coupler is employed to improve circuit performance. This system was tested experimentally by measuring the plasma potential in an argon plasma device and on the Phaedrus tandem mirror

  20. A high-accuracy extraction of the isoscalar πN scattering length from pionic deuterium data

    International Nuclear Information System (INIS)

    Phillips, Daniel R.; Baru, Vadim; Hanhart, Christoph; Nogga, Andreas; Hoferichter, Martin; Kubis, Bastian

    2010-01-01

    We present a high-accuracy calculation of the π⁻d scattering length using chiral perturbation theory up to order (M_π/m_p)^(7/2). For the first time isospin-violating corrections are included consistently. The resulting value of a_(π⁻d) has a theoretical uncertainty of a few percent. We use it, together with data on pionic deuterium and pionic hydrogen atoms, to extract the isoscalar and isovector pion-nucleon scattering lengths from a combined analysis, and obtain a⁺ = (7.9±3.2)·10^-3 M_π^-1 and a⁻ = (86.3±1.0)·10^-3 M_π^-1.

  1. A direct indication of plasma potential diagnostic with fast time response and high accuracy based on a differential emissive probe

    International Nuclear Information System (INIS)

    Yao, W.E.; Hershkowitz, N.; Intrator, T.

    1985-01-01

    The floating potential of the emissive probe has been used to directly measure the plasma potential. The authors have recently presented another method for directly indicating the plasma potential with a differential emissive probe. In this paper they describe the effects of probe size, plasma density and plasma potential fluctuation on plasma potential measurements and give methods for reducing errors. A control system with fast time response (≅ 20 μs) and high accuracy (the order of the probe temperature T_w/e) for maintaining a differential emissive probe at plasma potential has been developed. It can be operated in pulsed discharge plasma to measure plasma potential dynamic characteristics. A solid state optical coupler is employed to improve circuit performance. This system was tested experimentally by measuring the plasma potential in an argon plasma device and on the Phaedrus tandem mirror

  2. High accuracy determination of trace elements in NIST standard reference materials by isotope dilution ICP-MS

    International Nuclear Information System (INIS)

    Paulsen, P.J.; Beary, E.S.

    1996-01-01

    At NIST (National Institute of Standards and Technology), ICP-MS ID (inductively coupled plasma mass spectrometry with isotope dilution) has been used to certify a wide range of elements in a variety of materials with high accuracy. Both the chemical preparation and instrumental procedures are simpler than with other ID mass spectrometric techniques. The ICP-MS has picogram/ml detection limits for most elements using fixed operating parameters. Chemical separations are required only to remove an interference (from molecular ions as well as isobaric atoms) or to pre-concentrate the analyte. For example, chemical separations were required for the analysis of SRM 2711, Montana II Soil, but not for boron in peach leaves, SRM 1547. (3 refs., 3 tabs., 2 figs.)

  3. Inspector measurement verification activities

    International Nuclear Information System (INIS)

    George, R.S.; Crouch, R.

    The most difficult and complex activity facing a safeguards inspector involves the verification of measurements and the performance of the measurement system. Remeasurement is the key to measurement verification activities. Remeasurements using the facility's measurement system provide the bulk of the data needed for determining the performance of the measurement system. Remeasurements by reference laboratories are also important for evaluation of the measurement system and determination of systematic errors. The use of these measurement verification activities in conjunction with accepted inventory verification practices provides a better basis for accepting or rejecting an inventory. (U.S.)

  4. Model Verification and Validation Concepts for a Probabilistic Fracture Assessment Model to Predict Cracking of Knife Edge Seals in the Space Shuttle Main Engine High Pressure Oxidizer

    Science.gov (United States)

    Pai, Shantaram S.; Riha, David S.

    2013-01-01

    Physics-based models are routinely used to predict the performance of engineered systems to make decisions such as when to retire system components, how to extend the life of an aging system, or if a new design will be safe or available. Model verification and validation (V&V) is a process to establish credibility in model predictions. Ideally, carefully controlled validation experiments will be designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model to predict cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units for different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combined a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounted for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk rather than absolute risk between the usage scenarios. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or that were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS(Registered Trademark) for the probabilistic analysis, and NASGRO(Registered Trademark) for the fracture

  5. PROJECT-SPECIFIC TYPE A VERIFICATION FOR THE HIGH FLUX BEAM REACTOR UNDERGROUND UTILITIES REMOVAL PHASE 3 TRENCH 5, BROOKHAVEN NATIONAL LABORATORY UPTON, NEW YORK

    International Nuclear Information System (INIS)

    Weaver, P.C.

    2010-01-01

    The Oak Ridge Institute for Science and Education (ORISE) has reviewed the project documentation and data for the High Flux Beam Reactor (HFBR) Underground Utilities removal Phase 3, Trench 5, at Brookhaven National Laboratory (BNL) in Upton, New York. The Brookhaven Survey Group (BSG) has completed removal and performed a Final Status Survey (FSS) of the concrete duct in Trench 5 from Building 801 to the Stack. Sample results have been submitted as required to demonstrate that the cleanup goal of ≤ 15 mrem/yr above background to a resident in 50 years has been met. Four rounds of sampling, from pre-excavation to FSS, were performed as specified in the Field Sampling Plan (FSP) (BNL 2010a). It is the policy of the U.S. Department of Energy (DOE) to perform independent verifications of decontamination and decommissioning activities conducted at DOE facilities. ORISE has been designated as the organization responsible for this task for the HFBR Underground Utilities. ORISE, together with DOE, determined that a Type A verification of Trench 5 was appropriate based on recent verification results from Trenches 2, 3, and 4, and the minimal potential for residual radioactivity in the area. The removal of underground utilities is being performed in three stages to decommission the HFBR facility and support structures. Phase 3 of this project included the removal of at least 200 feet of 36-inch to 42-inch pipe from the west side to the south side of Building 801, and the 14-inch diameter Acid Waste Line that spanned from 801 to the Stack within Trench 5. Based on the pre-excavation sample results of the soil overburden, the potential for contamination of the soil surrounding the pipe is minimal (BNL 2010a). ORISE reviewed the BNL FSP and identified comments for consideration (ORISE 2010). BNL prepared a revised FSP that resolved each ORISE comment adequately (BNL 2010a). ORISE referred to the revised HFBR Underground Utilities FSP FSS data to conduct the Type A verification

  6. Snow cover volumes dynamic monitoring during melting season using high topographic accuracy approach for a Lebanese high plateau witness sinkhole

    Science.gov (United States)

    Abou Chakra, Charbel; Somma, Janine; Elali, Taha; Drapeau, Laurent

    2017-04-01

    Climate change and its negative impact on water resources are well described. For countries like Lebanon, undergoing a major population rise and already facing decreasing precipitation, effective water resources management is crucial. Continuous and systematic monitoring of water resources over long periods of time is therefore an important activity for investigating drought risk scenarios for the Lebanese territory. Snow cover on the Lebanese mountains is the most important water resources reserve. Consequently, systematic observation of snow cover dynamics plays a major role in supporting hydrologic research with accurate data on snow cover volumes over the melting season. For the last 20 years few studies have been conducted on the Lebanese snow cover. They focused on estimating the snow cover surface using remote sensing and terrestrial measurements, without obtaining accurate maps for the sampled locations. Indeed, estimations of both snow cover area and volume are difficult due to the very high variability of snow accumulation and the topographic heterogeneity of the slopes of the Lebanese mountain chains. Therefore, measuring the snow cover relief in its three-dimensional aspect and computing its Digital Elevation Model is essential to estimate snow cover volume. Despite the need to cover the whole Lebanese territory, we favored an experimental terrestrial topographic site approach due to the cost of high resolution satellite imagery, its limited accessibility and its acquisition restrictions. It is also most challenging to model snow cover at the national scale. We therefore selected a representative witness sinkhole located at Ouyoun el Siman to undertake systematic and continuous observations based on a topographic approach using a total station. After four years of continuous observations, we established the relation between the snowmelt rate, the date of total melting and the discharges of neighboring springs. Consequently, we are able to forecast, early in the season, dates of total snowmelt and springs low
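    As a simple illustration of the volume estimate the DEMs are used for, the sketch below differences a snow-surface DEM against a snow-free ground DEM on a common grid; the grid spacing and elevation data are synthetic placeholders, not survey results.

      # Sketch: snow volume from DEM differencing on a common grid.
      import numpy as np

      cell_area = 0.5 * 0.5                            # assumed grid cell size [m^2]
      ground = np.zeros((200, 200))                    # snow-free reference DEM [m]
      snow_surface = ground + np.clip(np.random.normal(1.5, 0.4, ground.shape), 0, None)

      depth = np.clip(snow_surface - ground, 0, None)  # clip negative depths (noise)
      volume = depth.sum() * cell_area                 # snow volume [m^3]
      print(f"estimated snow cover volume: {volume:.0f} m^3")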

  7. High-accuracy vibration sensor based on a Fabry-Perot interferometer with active phase-tracking technology.

    Science.gov (United States)

    Xia, Wei; Li, Chuncheng; Hao, Hui; Wang, Yiping; Ni, Xiaoqi; Guo, Dongmei; Wang, Ming

    2018-02-01

    A novel position-sensitive Fabry-Perot interferometer was constructed with direct phase modulation by a built-in electro-optic modulator. Pure sinusoidal phase modulation of the light was produced, and the first harmonic of the interference signal was extracted to dynamically hold the interferometer phase at the most sensitive point of the interferogram. The minute vibration of the object was therefore encoded in the variation of the interference signal and could be retrieved directly from the output voltage of a photodetector. The operating principle and the signal processing method for active feedback control of the interference phase are described in detail. The developed vibration sensor was calibrated against a high-precision piezo-electric transducer and tested with a nano-positioning stage at a vibration magnitude of 60 nm and a frequency of 300 Hz. The active phase-tracking method provides high immunity against environmental disturbances. Experimental results show that the proposed interferometer can effectively reconstruct tiny vibration waveforms with subnanometer resolution, paving the way for high-accuracy vibration sensing, especially for micro-electro-mechanical systems/nano-electro-mechanical systems and ultrasonic devices.
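    The phase-tracking step described above is essentially lock-in detection of the first harmonic: mixing the detector signal with the modulation tone isolates a term proportional to sin(φ), which serves as the feedback error signal. The short numerical sketch below illustrates that idea only; the modulation depth, frequencies and amplitudes are assumed values, not the authors' parameters.

```python
import numpy as np
from scipy.special import j1   # first-order Bessel function

# simulated detector output with sinusoidal phase modulation (all values assumed)
fs, f_mod = 1.0e6, 10.0e3                      # sample rate and modulation frequency (Hz)
t = np.arange(0, 5e-3, 1 / fs)                 # integer number of modulation periods
m, phi = 1.0, 0.05                             # modulation depth and phase offset (rad)
A, B = 1.0, 0.5                                # dc level and fringe amplitude
signal = A + B * np.cos(phi + m * np.sin(2 * np.pi * f_mod * t))

# first-harmonic (lock-in) demodulation:
# cos(phi + m sin wt) = cos(phi)[J0(m) + ...] - 2 J1(m) sin(phi) sin(wt) + ...
# so mixing with sin(wt) and averaging leaves a term proportional to sin(phi)
error = np.mean(signal * np.sin(2 * np.pi * f_mod * t))   # ~ -B * J1(m) * sin(phi)
phi_est = np.arcsin(-error / (B * j1(m)))
print(phi_est)   # ~0.05 rad; in the sensor this error signal drives the feedback loop
```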

  8. High-Accuracy Tidal Flat Digital Elevation Model Construction Using TanDEM-X Science Phase Data

    Science.gov (United States)

    Lee, Seung-Kuk; Ryu, Joo-Hyung

    2017-01-01

    This study explored the feasibility of using TanDEM-X (TDX) interferometric observations of tidal flats for digital elevation model (DEM) construction. Our goal was to generate high-precision DEMs in tidal flat areas, because accurate intertidal zone data are essential for monitoring coastal environments and erosion processes. To monitor dynamic coastal changes caused by waves, currents, and tides, very accurate DEMs with high spatial resolution are required. The bi- and monostatic modes of the TDX interferometer employed during the TDX science phase provided a great opportunity for highly accurate intertidal DEM construction using radar interferometry with no time lag (bistatic mode) or an approximately 10-s temporal baseline (monostatic mode) between the master and slave synthetic aperture radar image acquisitions. In this study, DEM construction in tidal flat areas was first optimized based on the TDX system parameters used in various TDX modes. We successfully generated intertidal zone DEMs with 57-m spatial resolutions and interferometric height accuracies better than 0.15 m for three representative tidal flats on the west coast of the Korean Peninsula. Finally, we validated these TDX DEMs against real-time kinematic-GPS measurements acquired in two tidal flat areas; the correlation coefficient was 0.97 with a root mean square error of 0.20 m.
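    The two validation figures quoted at the end (correlation coefficient and RMSE against RTK-GPS check points) are straightforward to reproduce once DEM heights have been extracted at the GPS locations. A minimal sketch, with fabricated check-point heights, follows.

```python
import numpy as np

def validate_dem(dem_heights, gps_heights):
    """Correlation coefficient and RMSE (m) between DEM and RTK-GPS heights
    sampled at the same check points."""
    dem = np.asarray(dem_heights, dtype=float)
    gps = np.asarray(gps_heights, dtype=float)
    r = np.corrcoef(dem, gps)[0, 1]
    rmse = float(np.sqrt(np.mean((dem - gps) ** 2)))
    return r, rmse

# illustrative use with fabricated check-point heights (metres)
r, rmse = validate_dem([0.42, 0.10, -0.35, 0.75, 1.10],
                       [0.55, 0.05, -0.20, 0.80, 1.02])
print(f"r = {r:.2f}, RMSE = {rmse:.2f} m")
```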

  9. Verification of safety critical software

    International Nuclear Information System (INIS)

    Son, Ki Chang; Chun, Chong Son; Lee, Byeong Joo; Lee, Soon Sung; Lee, Byung Chai

    1996-01-01

    To assure the quality of safety critical software, software should be developed in accordance with software development procedures, and rigorous software verification and validation should be performed. Software verification is the formal act of reviewing, testing or checking, and documenting whether software components comply with the specified requirements for a particular stage of the development phase [1]. A new software verification methodology was developed and applied to the Shutdown System No. 1 and 2 (SDS1,2) for the Wolsung 2, 3 and 4 nuclear power plants by the Korea Atomic Energy Research Institute (KAERI) and Atomic Energy of Canada Limited (AECL) in order to satisfy the new regulatory requirements of the Atomic Energy Control Board (AECB). The software verification methodology applied to SDS1 for the Wolsung 2, 3 and 4 project is described in this paper. Some errors were found by this methodology during the software development for SDS1 and were corrected by the software designer. Outputs from the Wolsung 2, 3 and 4 project have demonstrated that the use of this methodology results in a high quality, cost-effective product. 15 refs., 6 figs. (author)

  10. Heavy water physical verification in power plants

    International Nuclear Information System (INIS)

    Morsy, S.; Schuricht, V.; Beetle, T.; Szabo, E.

    1986-01-01

    This paper is a report on the Agency's experience in verifying heavy water inventories in power plants. The safeguards objectives and goals for such activities are defined in the paper. The heavy water is stratified according to the flow within the power plant, including upgraders. A safeguards scheme based on a combination of records auditing, comparison of records and reports, and physical verification has been developed. This scheme has elevated the status of heavy water safeguards to a level comparable to nuclear material safeguards in bulk facilities. It leads to attribute and variable verification of the heavy water inventory in the different system components and in the store. The verification methods include volume and weight determination, sampling and analysis, non-destructive assay (NDA), and criticality checks. The analysis of the different measurement methods and their limits of accuracy is discussed in the paper

  11. THE IMPACT OF MODERATE AND HIGH INTENSITY TOTAL BODY FATIGUE ON PASSING ACCURACY IN EXPERT AND NOVICE BASKETBALL PLAYERS

    Directory of Open Access Journals (Sweden)

    Mark Lyons

    2006-06-01

    Despite the acknowledged importance of fatigue on performance in sport, ecologically sound studies investigating fatigue and its effects on sport-specific skills are surprisingly rare. The aim of this study was to investigate the effect of moderate and high intensity total body fatigue on passing accuracy in expert and novice basketball players. Ten novice basketball players (age: 23.30 ± 1.05 yrs) and ten expert basketball players (age: 22.50 ± 0.41 yrs) volunteered to participate in the study. Both groups performed the modified AAHPERD Basketball Passing Test under three different testing conditions: rest, moderate intensity and high intensity total body fatigue. Fatigue intensity was established using a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant (F2,36 = 5.252, p = 0.01) level of fatigue by level of skill interaction. On examination of the mean scores it is clear that following high intensity total body fatigue there is a significant detriment in the passing performance of both novice and expert basketball players when compared to their resting scores. Fundamentally, however, the detrimental impact of fatigue on passing performance is not as steep in the expert players as in the novice players. The results suggest that expert or skilled players are better able to cope with both moderate and high intensity fatigue conditions and maintain a higher level of performance when compared to novice players. The findings of this research therefore suggest the need for trainers and conditioning coaches in basketball to include moderate, but particularly high intensity, exercise in their skills sessions. This specific training may enable players at all levels of the game to better cope with the demands of the game on court and maintain a higher standard of play

  12. Verification Account Management System (VAMS)

    Data.gov (United States)

    Social Security Administration — The Verification Account Management System (VAMS) is the centralized location for maintaining SSA's verification and data exchange accounts. VAMS account management...

  13. Hierarchical Representation Learning for Kinship Verification.

    Science.gov (United States)

    Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul

    2017-01-01

    Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that facilitate kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender and age and the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.
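    The soft-biometric fusion mentioned at the end rests on multiplying likelihood ratios from the two matchers. The sketch below shows one generic way such product-of-likelihood-ratio fusion can be set up; the Gaussian score models, threshold and numbers are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio(score, genuine, impostor):
    """LR of a match score under Gaussian genuine/impostor score models (mu, sigma)."""
    return norm.pdf(score, *genuine) / norm.pdf(score, *impostor)

# hypothetical score models, e.g. estimated on a training set
FACE_MODELS = ((0.80, 0.10), (0.30, 0.15))   # (genuine, impostor) for the face matcher
KIN_MODELS = ((0.70, 0.12), (0.40, 0.15))    # (genuine, impostor) for the kinship score

def fused_decision(face_score, kin_score, threshold=1.0):
    """Accept if the product of the two likelihood ratios exceeds the threshold."""
    lr = (likelihood_ratio(face_score, *FACE_MODELS) *
          likelihood_ratio(kin_score, *KIN_MODELS))
    return lr > threshold, lr

print(fused_decision(0.75, 0.65))   # strong evidence -> accepted
print(fused_decision(0.35, 0.42))   # weak evidence -> rejected
```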

  14. High-accuracy determination of the neutron flux in the new experimental area nTOF-EAR2 at CERN

    Energy Technology Data Exchange (ETDEWEB)

    Sabate-Gilarte, M. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Barbagallo, M.; Colonna, N.; Damone, L.; Belloni, F.; Mastromarco, M.; Tagliente, G.; Variale, V. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari (Italy); Gunsing, F.; Berthoumieux, E.; Diakaki, M.; Papaevangelou, T.; Dupont, E. [Universite Paris-Saclay, CEA Irfu, Gif-sur-Yvette (France); Zugec, P.; Bosnar, D. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Vlachoudis, V.; Aberle, O.; Brugger, M.; Calviani, M.; Cardella, R.; Cerutti, F.; Chiaveri, E.; Ferrari, A.; Kadi, Y.; Losito, R.; Macina, D.; Montesano, S.; Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Chen, Y.H.; Audouin, L.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Stamatopoulos, A.; Kokkoris, M.; Tsinganis, A.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Lerendegui-Marco, J.; Cortes-Giraldo, M.A.; Guerrero, C.; Quesada, J.M. [Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Villacorta, A. [University of Salamanca, Salamanca (Spain); Cosentino, L.; Finocchiaro, P.; Piscopo, M. [INFN, Laboratori Nazionali del Sud, Catania (Italy); Musumarra, A. [INFN, Laboratori Nazionali del Sud, Catania (Italy); Universita di Catania, Dipartimento di Fisica, Catania (Italy); Andrzejewski, J.; Gawlik, A.; Marganiec, J.; Perkowski, J. [University of Lodz, Lodz (Poland); Becares, V.; Balibrea, J.; Cano-Ott, D.; Garcia, A.R.; Gonzalez, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Bacak, M.; Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Technische Universitaet Wien, Wien (Austria); Baccomi, R.; Milazzo, P.M. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (Italy); Barros, S.; Ferreira, P.; Goncalves, I.F.; Vaz, P. [Instituto Superior Tecnico, Lisbon (Portugal); Becvar, F.; Krticka, M.; Valenta, S. [Charles University, Prague (Czech Republic); Beinrucker, C.; Goebel, K.; Heftrich, T.; Reifarth, R.; Schmidt, S.; Weigand, M.; Wolf, C. [Goethe University Frankfurt, Frankfurt (Germany); Billowes, J.; Frost, R.J.W.; Ryan, J.A.; Smith, A.G.; Warren, S.; Wright, T. [University of Manchester, Manchester (United Kingdom); Caamano, M.; Deo, K.; Duran, I.; Fernandez-Dominguez, B.; Leal-Cidoncha, E.; Paradela, C.; Robles, M.S. [University of Santiago de Compostela, Santiago de Compostela (Spain); Calvino, F.; Casanovas, A.; Riego-Perez, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Castelluccio, D.M.; Lo Meo, S. [Agenzia Nazionale per le Nuove Tecnologie (ENEA), Bologna (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (Italy); Cortes, G.; Mengoni, A. [Agenzia Nazionale per le Nuove Tecnologie (ENEA), Bologna (Italy); Domingo-Pardo, C.; Tain, J.L. [Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Heinitz, S.; Kivel, N.; Maugeri, E.A.; Schumann, D. [Paul Scherrer Institut (PSI), Villingen (Switzerland); Furman, V.; Sedyshev, P. [Joint Institute for Nuclear Research (JINR), Dubna (Russian Federation); Gheorghe, I.; Glodariu, T.; Mirea, M.; Oprea, A. 
[Horia Hulubei National Institute of Physics and Nuclear Engineering, Magurele (Romania); Goverdovski, A.; Ketlerov, V.; Khryachkov, V. [Institute of Physics and Power Engineering (IPPE), Obninsk (Russian Federation); Griesmayer, E.; Jericha, E.; Kavrigin, P.; Leeb, H. [Technische Universitaet Wien, Wien (Austria); Harada, H.; Kimura, A. [Japan Atomic Energy Agency (JAEA), Tokai-mura (Japan); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Heyse, J.; Schillebeeckx, P. [European Commission, Joint Research Centre, Geel (BE); Jenkins, D.G. [University of York, York (GB); Kaeppeler, F. [Karlsruhe Institute of Technology, Karlsruhe (DE); Katabuchi, T. [Tokyo Institute of Technology, Tokyo (JP); Lederer, C.; Lonsdale, S.J.; Woods, P.J. [University of Edinburgh, School of Physics and Astronomy, Edinburgh (GB); Licata, M.; Massimi, C.; Vannini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Universita di Bologna, Dipartimento di Fisica e Astronomia, Bologna (IT); Mastinu, P. [Istituto Nazionale di Fisica Nucleare, Sezione di Legnaro, Legnaro (IT); Matteucci, F. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (IT); Universita di Trieste, Dipartimento di Astronomia, Trieste (IT); Mingrone, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Nolte, R. [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (DE); Palomo-Pinto, F.R. [Universidad de Sevilla, Dept. Ingenieria Electronica, Escuela Tecnica Superior de Ingenieros, Sevilla (ES); Patronis, N. [University of Ioannina, Ioannina (GR); Pavlik, A. [University of Vienna, Faculty of Physics, Vienna (AT); Porras, J.I. [University of Granada, Granada (ES); Praena, J. [Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (ES); University of Granada, Granada (ES); Rajeev, K.; Rout, P.C.; Saxena, A.; Suryanarayana, S.V. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Rauscher, T. [University of Hertfordshire, Centre for Astrophysics Research, Hatfield (GB); University of Basel, Department of Physics, Basel (CH); Tarifeno-Saldivia, A. [Universitat Politecnica de Catalunya, Barcelona (ES); Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (ES); Ventura, A. [Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Wallner, A. [Australian National University, Canberra (AU)

    2017-10-15

    A new high flux experimental area has recently become operational at the nTOF facility at CERN. This new measuring station, nTOF-EAR2, is placed at the end of a vertical beam line at a distance of approximately 20 m from the spallation target. The characterization of the neutron beam, in terms of flux, spatial profile and resolution function, is of crucial importance for the feasibility study and data analysis of all measurements to be performed in the new area. In this paper, the measurement of the neutron flux, performed with different solid-state and gaseous detection systems, and using three neutron-converting reactions considered standard in different energy regions is reported. The results of the various measurements have been combined, yielding an evaluated neutron energy distribution in a wide energy range, from 2 meV to 100 MeV, with an accuracy ranging from 2%, at low energy, to 6% in the high-energy region. In addition, an absolute normalization of the nTOF-EAR2 neutron flux has been obtained by means of an activation measurement performed with {sup 197}Au foils in the beam. (orig.)

  15. Verification and the safeguards legacy

    International Nuclear Information System (INIS)

    Perricos, Demetrius

    2001-01-01

    A number of inspection or monitoring systems throughout the world over the last decades have been structured drawing upon the IAEA experience of setting up and operating its safeguards system. The first global verification system was born with the creation of the IAEA safeguards system, about 35 years ago. With the conclusion of the NPT in 1968, inspections were to be performed under safeguards agreements concluded directly between the IAEA and non-nuclear weapon states parties to the Treaty. The IAEA developed the safeguards system within the limitations reflected in the Blue Book (INFCIRC 153), such as limitations of routine access by the inspectors to 'strategic points', including 'key measurement points', and the focusing of verification on declared nuclear material in declared installations. The system was based on nuclear material accountancy and was expected to detect a diversion of nuclear material with a high probability and within a given time, and therefore to determine that there had been no diversion of nuclear material from peaceful purposes. The most vital element of any verification system is the inspector. Technology can assist but cannot replace the inspector in the field. Their experience, knowledge, intuition and initiative are invaluable factors contributing to the success of any inspection regime. The IAEA inspectors are, however, not part of an international police force that will intervene to prevent a violation taking place. To be credible they should be technically qualified, with substantial experience in industry or in research and development before they are recruited. An extensive training program has to make sure that the inspectors retain their professional capabilities and acquire new skills. Over the years, the inspectors, and through them the safeguards verification system, gained experience in: organization and management of large teams; examination of records and evaluation of material balances

  16. Development of High-Reflectivity Optical Coatings for the Vacuum Ultraviolet and Verification on a Sounding Rocket Flight

    Data.gov (United States)

    National Aeronautics and Space Administration — We desire to develop new thin film coatings of fluorides to utilize the high intrinsic reflectivity of aluminum. Highly controllable thickness of fluorides can be...

  17. Verification and quality control of routine hematology analyzers.

    Science.gov (United States)

    Vis, J Y; Huisman, A

    2016-05-01

    Verification of hematology analyzers (automated blood cell counters) is mandatory before new hematology analyzers may be used in routine clinical care. The verification process consists of several items, which comprise among others: precision, accuracy, comparability, carryover, background and linearity throughout the expected range of results. Yet, which standard should be met or which verification limit should be used is at the discretion of the laboratory specialist. This paper offers practical guidance on verification and quality control of automated hematology analyzers and provides an expert opinion on the performance standard that should be met by the contemporary generation of hematology analyzers. Therefore (i) the state-of-the-art performance of hematology analyzers for complete blood count parameters is summarized, (ii) considerations, challenges, and pitfalls concerning the development of a verification plan are discussed, (iii) guidance is given regarding the establishment of reference intervals, and (iv) different methods for quality control of hematology analyzers are reviewed. © 2016 John Wiley & Sons Ltd.
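    Two of the verification items listed above, precision and carryover, reduce to simple statistics over replicate runs. The sketch below illustrates them with made-up counts; the carryover scheme (three high samples followed by three low samples) is one commonly used protocol and is shown only as an example, not as the authors' recommendation.

```python
import statistics

def cv_percent(replicates):
    """Within-run precision expressed as the coefficient of variation (%)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def carryover_percent(high_runs, low_runs):
    """Carryover from three high samples followed by three low samples:
    (L1 - L3) / (H3 - L3) * 100."""
    h3 = high_runs[2]
    l1, l3 = low_runs[0], low_runs[2]
    return 100.0 * (l1 - l3) / (h3 - l3)

# illustrative WBC results (x10^9/L); values are invented
print(round(cv_percent([6.2, 6.3, 6.1, 6.2, 6.4]), 2))                       # ~1.8 %
print(round(carryover_percent([85.1, 84.7, 85.0], [0.42, 0.40, 0.40]), 3))   # ~0.024 %
```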

  18. On-line high-performance liquid chromatography-ultraviolet-nuclear magnetic resonance method of the markers of nerve agents for verification of the Chemical Weapons Convention.

    Science.gov (United States)

    Mazumder, Avik; Gupta, Hemendra K; Garg, Prabhat; Jain, Rajeev; Dubey, Devendra K

    2009-07-03

    This paper details an on-flow liquid chromatography-ultraviolet-nuclear magnetic resonance (LC-UV-NMR) method for the retrospective detection and identification of alkyl alkylphosphonic acids (AAPAs) and alkylphosphonic acids (APAs), the markers of toxic nerve agents, for verification of the Chemical Weapons Convention (CWC). Initially, the LC-UV-NMR parameters were optimized for benzyl derivatives of the APAs and AAPAs. The optimized parameters include a C(18) stationary phase, a methanol:water 78:22 (v/v) mobile phase, UV detection at 268 nm and the (1)H NMR acquisition conditions. The protocol described herein allowed the detection of analytes through acquisition of high quality NMR spectra from aqueous solutions of the APAs and AAPAs containing high concentrations of interfering background chemicals, which were removed by the preceding sample preparation. The reported standard deviation for the quantification relates to the UV detector, which showed relative standard deviations (RSDs) for quantification within ±1.1%, while the lower limit of detection was up to 16 μg (absolute) for the NMR detector. Finally, the developed LC-UV-NMR method was applied to identify the APAs and AAPAs in real water samples, following solid phase extraction and derivatization. The method is fast (total experiment time approximately 2 h), sensitive, rugged and efficient.

  19. High Accuracy Potential Energy Surface, Dipole Moment Surface, Rovibrational Energies and Line List Calculations for ^{14}NH_3

    Science.gov (United States)

    Coles, Phillip; Yurchenko, Sergei N.; Polyansky, Oleg; Kyuberis, Aleksandra; Ovsyannikov, Roman I.; Zobov, Nikolay Fedorovich; Tennyson, Jonathan

    2017-06-01

    We present a new spectroscopic potential energy surface (PES) for ^{14}NH_3, produced by refining a high accuracy ab initio PES to experimental energy levels taken predominantly from MARVEL. The PES reproduces 1722 matched J = 0-8 experimental energies with a root-mean-square error of 0.035 cm^{-1} below 6000 cm^{-1} and 0.059 cm^{-1} below 7200 cm^{-1}. In conjunction with a new dipole moment surface (DMS), calculated using multi-reference configuration interaction (MRCI) with aug-cc-pVQZ (H) and aug-cc-pWCVQZ (N) basis sets, an infrared (IR) line list has been computed which is suitable for use up to 2000 K. The line list is used to assign experimental lines in the 7500-10,500 cm^{-1} region and previously unassigned lines in HITRAN in the 6000-7000 cm^{-1} region. Oleg L. Polyansky, Roman I. Ovsyannikov, Aleksandra A. Kyuberis, Lorenzo Lodi, Jonathan Tennyson, Andrey Yachmenev, Sergei N. Yurchenko, Nikolai F. Zobov, J. Mol. Spec., 327 (2016) 21-30. Afaf R. Al Derzi, Tibor Furtenbacher, Jonathan Tennyson, Sergei N. Yurchenko, Attila G. Császár, J. Quant. Spectrosc. Rad. Trans., 161 (2015) 117-130.

  20. Emergence of realism: Enhanced visual artistry and high accuracy of visual numerosity representation after left prefrontal damage.

    Science.gov (United States)

    Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro

    2014-05-01

    Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that this enhanced tendency toward realism was associated with the accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke in which the patient showed enhancement of the artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was in fact promoted across the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex, including the precuneus and intraparietal sulcus. Our data provide new insight into the mechanisms underlying changes in artistic style due to focal prefrontal lesions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. High-accuracy measurements of snow Bidirectional Reflectance Distribution Function at visible and NIR wavelengths – comparison with modelling results

    Directory of Open Access Journals (Sweden)

    M. Dumont

    2010-03-01

    High-accuracy measurements of the snow Bidirectional Reflectance Distribution Function (BRDF) were performed for four natural snow samples with a spectrogonio-radiometer in the 500–2600 nm wavelength range. These measurements are one of the first sets of direct snow BRDF values over a wide range of lighting and viewing geometries. They were compared to BRDF calculated with two optical models. Variations of the snow anisotropy factor with lighting geometry, wavelength and snow physical properties were investigated. Results show that at wavelengths with a small penetration depth, scattering mainly occurs in the very top layers and the anisotropy factor is controlled by the phase function. Under these conditions, a forward scattering peak or a double scattering peak is observed. In contrast, at shorter wavelengths the penetration of the radiation is much deeper and the number of scattering events increases. The anisotropy factor is thus nearly constant and decreases at grazing observation angles. The whole dataset is available on demand from the corresponding author.

  2. FMCT verification: Case studies

    International Nuclear Information System (INIS)

    Hui Zhang

    2001-01-01

    How to manage the trade-off between the need for transparency and the concern about the disclosure of sensitive information would be a key issue during the negotiations of an FMCT verification provision. This paper explores the general concerns about FMCT verification and demonstrates what verification measures might be applied to reprocessing and enrichment plants. A primary goal of an FMCT will be to have the five declared nuclear weapon states and the three that operate unsafeguarded nuclear facilities become parties. One focus in negotiating the FMCT will be verification. Appropriate verification measures should be applied in each case. Most importantly, FMCT verification would focus, in the first instance, on these states' fissile material production facilities. After the FMCT enters into force, all these facilities should be declared. Some would continue operating to produce civil nuclear power or to produce fissile material for non-explosive military uses. The verification measures necessary for these operating facilities would be essentially IAEA safeguards, as currently applied to non-nuclear weapon states under the NPT. However, some production facilities would be declared and shut down. Thus, one important task of FMCT verification will be to confirm the status of these closed facilities. As case studies, this paper focuses on the verification of those shutdown facilities. The FMCT verification system for former military facilities would have to differ in some ways from traditional IAEA safeguards. For example, there could be concerns about the potential loss of sensitive information at these facilities or at collocated facilities. In some cases, safeguards measures such as environmental sampling might be seen as too intrusive. Thus, effective but less intrusive verification measures may be needed. Some sensitive nuclear facilities would be subject for the first time to international inspections, which could raise concerns

  3. Determining the Accuracy of Paleomagnetic Remanence and High-Resolution Chronostratigraphy for Sedimentary Rocks using Rock Magnetics

    Science.gov (United States)

    Kodama, K. P.

    2017-12-01

    The talk will consider two broad topics in rock magnetism and paleomagnetism: the accuracy of paleomagnetic remanence and the use of rock magnetics to measure geologic time in sedimentary sequences. The accuracy of the inclination recorded by sedimentary rocks is crucial to paleogeographic reconstructions. Laboratory compaction experiments show that inclination shallows on the order of 10˚-15˚. Corrections to the inclination can be made using the effects of compaction on the directional distribution of secular variation recorded by sediments or the anisotropy of the magnetic grains carrying the ancient remanence. A summary of all the compaction correction studies as of 2012 shows that 85% of the sedimentary rocks studied have experienced some amount of inclination shallowing. Future work should also consider the effect of grain-scale strain on paleomagnetic remanence. High resolution chronostratigraphy can be assigned to a sedimentary sequence using rock magnetics to detect astronomically-forced climate cycles. The strengths of the technique are relatively quick, non-destructive measurements, the objective identification of the cycles compared to facies interpretations, and the sensitivity of rock magnetics to subtle changes in sedimentary source. An example of this technique comes from using rock magnetics to identify astronomically-forced climate cycles in three globally distributed occurrences of the Shuram carbon isotope excursion. The Shuram excursion may record the oxidation of the world ocean in the Ediacaran, just before the Cambrian explosion of metazoans. Using rock magnetic cyclostratigraphy, the excursion is shown to have the same duration (8-9 Myr) in southern California, south China and south Australia. Magnetostratigraphy of the rocks carrying the excursion in California and Australia shows a reversed to normal geomagnetic field polarity transition at the excursion's nadir, thus supporting the synchroneity of the excursion globally. Both results point to a

  4. Linear Discriminant Analysis achieves high classification accuracy for the BOLD fMRI response to naturalistic movie stimuli.

    Directory of Open Access Journals (Sweden)

    Hendrik eMandelkow

    2016-03-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbour (NN), Gaussian Naïve Bayes (GNB), and (regularised) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularised by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
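    The best-performing classifier above, LDA regularised by PCA, is easy to prototype as a two-stage pipeline. The sketch below runs it on synthetic data standing in for the volume-by-voxel matrix; the dimensions, component count and random data are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# synthetic stand-in for fMRI data: n_volumes x n_voxels, one label per volume
rng = np.random.default_rng(0)
n_classes, n_per_class, n_voxels = 10, 15, 500
X = np.vstack([rng.normal(loc=k, scale=5.0, size=(n_per_class, n_voxels))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# PCA-regularised LDA: project onto leading components, then fit LDA on the scores
clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```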

  5. Advanced verification topics

    CERN Document Server

    Bhattacharya, Bishnupriya; Hall, Gary; Heaton, Nick; Kashai, Yaron; Khan, Neyaz; Kirshenbaum, Zeev; Shneydor, Efrat

    2011-01-01

    The Accellera Universal Verification Methodology (UVM) standard is architected to scale, but verification is growing in more than just the digital design dimension. It is growing in the SoC dimension to include low-power and mixed-signal, and in the system integration dimension to include multi-language support and acceleration. These items and others all contribute to the quality of the SoC, so the Metric-Driven Verification (MDV) methodology is needed to unify it all into a coherent verification plan. This book is for verification engineers and managers who are familiar with the UVM and the benefits it brings to digital verification but who also need to tackle specialized tasks. It is also written for the SoC project manager who is tasked with building an efficient worldwide team. While the task continues to become more complex, Advanced Verification Topics describes methodologies outside of the Accellera UVM standard, but that build on it, to provide a way for SoC teams to stay productive and profitable.

  6. Diagnostic accuracy of cone-beam computed tomography scans with high- and low-resolution modes for the detection of root perforations.

    Science.gov (United States)

    Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza

    2018-03-01

    This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems in high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. For strip perforations, the accuracies of the low- and high-resolution modes were 75% and 83% for the NewTom 3G and 67% and 69% for the Cranex 3D. For root perforations, the accuracies of the low- and high-resolution modes were 79% and 83% for the NewTom 3G and 56% and 73% for the Cranex 3D. The accuracy of the 2 CBCT systems differed for the detection of strip and root perforations, but the difference between the systems was not statistically significant. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.

  7. Xpert MTB/RIF testing in a low tuberculosis incidence, high-resource setting: limitations in accuracy and clinical impact.

    Science.gov (United States)

    Sohn, Hojoon; Aero, Abebech D; Menzies, Dick; Behr, Marcel; Schwartzman, Kevin; Alvarez, Gonzalo G; Dan, Andrei; McIntosh, Fiona; Pai, Madhukar; Denkinger, Claudia M

    2014-04-01

    Xpert MTB/RIF, the first automated molecular test for tuberculosis, is transforming the diagnostic landscape in low-income countries. However, little information is available on its performance in low-incidence, high-resource countries. We evaluated the accuracy of Xpert in a university hospital tuberculosis clinic in Montreal, Canada, for the detection of pulmonary tuberculosis on induced sputum samples, using mycobacterial cultures as the reference standard. We also assessed the potential reduction in time to diagnosis and treatment initiation. We enrolled 502 consecutive patients who presented for evaluation of possible active tuberculosis (most with abnormal chest radiographs, only 18% symptomatic). Twenty-five subjects were identified to have active tuberculosis by culture. Xpert had a sensitivity of 46% (95% confidence interval [CI], 26%-67%) and specificity of 100% (95% CI, 99%-100%) for detection of Mycobacterium tuberculosis. Sensitivity was 86% (95% CI, 42%-100%) in the 7 subjects with smear-positive results, and 28% (95% CI, 10%-56%) in the remaining subjects with smear-negative, culture-positive results; in this latter group, positive Xpert results were obtained a median 12 days before culture results. Subjects with positive cultures but negative Xpert results had minimal disease: 11 of 13 had no symptoms on presentation, and mean time to positive liquid culture results was 28 days (95% CI, 25-47 days) compared with 14 days (95% CI, 8-21 days) in Xpert/culture-positive cases. Our findings suggest limited potential impact of Xpert testing in high-resource, low-incidence ambulatory settings due to lower sensitivity in the context of less extensive disease, and limited potential to expedite diagnosis beyond what is achieved with the existing, well-performing diagnostic algorithm.
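    The sensitivity and specificity figures quoted above, with their confidence intervals, follow directly from the 2x2 table of Xpert results against culture. A small sketch using Wilson score intervals is below; the counts are arbitrary placeholders, not the study's data.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return centre - half, centre + half

def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity with Wilson 95% CIs from 2x2 counts."""
    sensitivity = tp / (tp + fn), wilson_ci(tp, tp + fn)
    specificity = tn / (tn + fp), wilson_ci(tn, tn + fp)
    return sensitivity, specificity

# hypothetical counts (true/false positives and negatives), not the study's data
print(diagnostic_accuracy(tp=30, fn=10, tn=200, fp=5))
```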

  8. A simple differential steady-state method to measure the thermal conductivity of solid bulk materials with high accuracy.

    Science.gov (United States)

    Kraemer, D; Chen, G

    2014-02-01

    Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
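    At its core the method is still Fourier's law, with the parasitic heater loss determined in the calibration step and subtracted from the supplied electrical power. The sketch below shows that arithmetic only; the sample dimensions and powers are invented, and the single calibration term is a simplified reading of the procedure described.

```python
def thermal_conductivity(q_heater, q_parasitic, thickness, area, delta_t):
    """Thermal conductivity from Fourier's law, k = Q_sample * L / (A * dT).

    q_heater    : electrical power supplied to the heater (W)
    q_parasitic : heater loss from the calibration step, i.e. power needed to hold
                  the heater temperature with no temperature drop across the sample (W)
    thickness   : sample thickness L (m)
    area        : sample cross-sectional area A (m^2)
    delta_t     : temperature difference across the sample (K)
    """
    q_sample = q_heater - q_parasitic        # heat actually flowing through the sample
    return q_sample * thickness / (area * delta_t)

# illustrative numbers only (roughly Bi2Te3-like): ~1.5 W m^-1 K^-1
print(thermal_conductivity(q_heater=0.080, q_parasitic=0.020,
                           thickness=2e-3, area=16e-6, delta_t=5.0))
```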

  9. Verification of SIGACE code for generating ACE format cross-section files with continuous energy at high temperature

    International Nuclear Information System (INIS)

    Li Zhifeng; Yu Tao; Xie Jinsen; Qin Mian

    2012-01-01

    Based on the recently released ENDF/B-VII.1 library, high temperature neutron cross-section files are generated with the SIGACE code from low temperature ACE format files. To verify the ACE files processed by SIGACE, benchmark calculations are performed in this paper. The calculated results for selected ICT, standard CANDU assembly, LWR Doppler coefficient and SEFOR benchmarks conform well with the reference values, which indicates that high temperature ACE files processed by SIGACE can be used in related neutronics calculations. (authors)

  10. Development of a FBR fuel bundle-duct interaction analysis code-BAMBOO. Analysis model and verification by Phenix high burn-up fuel subassemblies

    International Nuclear Information System (INIS)

    Uwaba, Tomoyuki; Ito, Masahiro; Ukai, Shigeharu

    2005-01-01

    The bundle-duct interaction analysis code BAMBOO has been developed for the purpose of predicting the deformation of a wire-wrapped fuel pin bundle of a fast breeder reactor (FBR). The BAMBOO code calculates helical bowing and oval-distortion of all the fuel pins in a fuel subassembly. We developed deformation models in order to precisely analyze the irradiation-induced deformation with the code: a model to analyze fuel pin self-bowing induced by a circumferential gradient of void swelling as well as thermal expansion, and a model to analyze dispersion of the orderly arrangement of a fuel pin bundle. We performed deformation analyses of high burn-up fuel subassemblies in the Phenix reactor and compared the calculated results with the post irradiation examination (PIE) data of these subassemblies for the verification of these models. From the comparison we confirmed that the calculated values of the oval-distortion and bowing agreed reasonably well with the PIE results when these models were used in the analysis. (author)

  11. Standard Verification System (SVS)

    Data.gov (United States)

    Social Security Administration — SVS is a mainframe program that accesses the NUMIDENT to perform SSN verifications. This program is called by SSA Internal applications to verify SSNs. There is also...

  12. Formal Verification -26 ...

    Indian Academy of Sciences (India)

    by testing of the components and successful testing leads to the software being ... Formal verification is based on formal methods which are mathematically based ..... scenario under which a similar error could occur. There are various other ...

  13. SSN Verification Service

    Data.gov (United States)

    Social Security Administration — The SSN Verification Service is used by Java applications to execute the GUVERF02 service using the WebSphere/CICS Interface. It accepts several input data fields...

  14. Environmental technology verification methods

    CSIR Research Space (South Africa)

    Szewczuk, S

    2016-03-01

    Environmental Technology Verification (ETV) is a tool that has been developed in the United States of America, Europe and many other countries around the world to help innovative environmental technologies reach the market. Claims about...

  15. Verification of RADTRAN

    International Nuclear Information System (INIS)

    Kanipe, F.L.; Neuhauser, K.S.

    1995-01-01

    This document presents details of the verification process of the RADTRAN computer code which was established for the calculation of risk estimates for radioactive materials transportation by highway, rail, air, and waterborne modes

  16. High-throughput microsatellite genotyping in ecology: improved accuracy, efficiency, standardization and success with low-quantity and degraded DNA.

    Science.gov (United States)

    De Barba, M; Miquel, C; Lobréaux, S; Quenette, P Y; Swenson, J E; Taberlet, P

    2017-05-01

    Microsatellite markers have played a major role in ecological, evolutionary and conservation research during the past 20 years. However, technical constraints related to the use of capillary electrophoresis and a recent technological revolution that has impacted other marker types have brought into question the continued use of microsatellites for certain applications. We present a study for improving microsatellite genotyping in ecology using high-throughput sequencing (HTS). This approach entails selection of short markers suitable for HTS, sequencing PCR-amplified microsatellites on an Illumina platform and bioinformatic treatment of the sequence data to obtain multilocus genotypes. It takes advantage of the fact that HTS gives direct access to microsatellite sequences, allowing unambiguous allele identification and enabling automation of the genotyping process through bioinformatics. In addition, the massive parallel sequencing abilities expand the information content of single experimental runs far beyond capillary electrophoresis. We illustrated the method by genotyping brown bear samples amplified with a multiplex PCR of 13 new microsatellite markers and a sex marker. HTS of microsatellites provided accurate individual identification and parentage assignment and resulted in a significant improvement in the genotyping success (84%) of degraded faecal DNA and a cost reduction compared to capillary electrophoresis. The HTS approach holds vast potential for improving the success, accuracy, efficiency and standardization of microsatellite genotyping in ecological and conservation applications, especially those that rely on profiling of low-quantity/quality DNA and on the construction of genetic databases. We discuss and give perspectives for the implementation of the method in the light of the challenges encountered in wildlife studies. © 2016 John Wiley & Sons Ltd.
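    The bioinformatic step of calling allele lengths directly from reads can be caricatured in a few lines: find the longest run of the repeat motif in each read and keep the best-supported lengths. The sketch below is a deliberately simplified toy with fabricated reads; real pipelines handle stutter, sequencing error and flanking sequence far more carefully.

```python
import re
from collections import Counter

def call_alleles(reads, motif="CA", min_reads=5):
    """Toy allele caller: longest uninterrupted motif run per read, then keep the
    (up to two) repeat lengths supported by at least `min_reads` reads."""
    pattern = re.compile(f"(?:{motif})+")
    support = Counter()
    for read in reads:
        runs = pattern.findall(read)
        if runs:
            support[len(max(runs, key=len)) // len(motif)] += 1
    return [length for length, n in support.most_common() if n >= min_reads][:2]

# fabricated reads for a heterozygous (CA)8 / (CA)11 locus
reads = ["TT" + "CA" * 8 + "GGA"] * 20 + ["TT" + "CA" * 11 + "GGA"] * 15
print(call_alleles(reads))   # [8, 11]
```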

  17. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how

  18. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Directory of Open Access Journals (Sweden)

    Abhishek Mitra

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively

  19. High-accuracy single-pass InSAR DEM for large-scale flood hazard applications

    Science.gov (United States)

    Schumann, G.; Faherty, D.; Moller, D.

    2017-12-01

    In this study, we used a unique opportunity of the GLISTIN-A (a NASA airborne mission designed to characterize the cryosphere) track to Greenland to acquire a high-resolution InSAR DEM of a large area in the Red River of the North Basin (north of Grand Forks, ND, USA), which is a very flood-vulnerable valley, particularly in springtime due to soil moisture content near saturation and/or, typically for this region, snowmelt. Having an InSAR DEM that meets flood inundation modeling and mapping requirements comparable to LiDAR would demonstrate great application potential of new radar technology for national agencies with an operational flood forecasting mandate and also for local and state governments active in flood event prediction, disaster response and mitigation. Specifically, we derived a bare-earth DEM in SAR geometry by first removing the inherent far-range bias related to airborne operation, which at the more typical large-scale DEM resolution of 30 m has a sensor accuracy of plus or minus 2.5 cm. Subsequently, an intelligent classifier based on informed relationships between InSAR height, intensity and correlation was used to distinguish between bare earth, roads or embankments, buildings and tall vegetation in order to facilitate the creation of a bare-earth DEM that would meet the requirements for accurate floodplain inundation mapping. Using state-of-the-art LiDAR terrain data, we demonstrate that capability by achieving a root mean squared error of approximately 25 cm and further illustrate its applicability to flood modeling.
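    The classifier is described only at a high level, but the underlying idea of combining height, intensity and interferometric correlation can be sketched as a simple rule-based mask. The thresholds and variable names below are purely illustrative assumptions, not the classifier actually used.

```python
import numpy as np

def bare_earth_mask(height, intensity, coherence,
                    h_max=2.0, i_max=0.5, c_min=0.6):
    """Very simplified split of InSAR pixels into bare earth vs. everything else.

    height    : residual height above a smoothed terrain surface (m)
    intensity : normalised backscatter intensity
    coherence : interferometric correlation
    Thresholds are illustrative only.
    """
    tall = height > h_max               # buildings and tall vegetation stand above terrain
    bright = intensity > i_max          # built structures tend to be radar-bright
    decorrelated = coherence < c_min    # vegetation tends to decorrelate
    return ~(tall | bright | decorrelated)

# toy 2x2 scene
h = np.array([[0.1, 5.0], [0.3, 0.2]])
i = np.array([[0.2, 0.8], [0.1, 0.2]])
c = np.array([[0.9, 0.7], [0.4, 0.95]])
print(bare_earth_mask(h, i, c))   # only low, dark, well-correlated pixels remain True
```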

  20. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tome, Wolfgang A. [Department of Human Oncology, University of Wisconsin-Madison, WI, 53792 (United States); Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC 3002 (Australia) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Human Oncology, University of Wisconsin-Madison, WI 53792 (United States); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia) and Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur (Malaysia); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia); Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Einstein Institute of Oncophysics, Albert Einstein College of Medicine of Yeshiva University, Bronx, New York 10461 (United States) and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW 2522 (Australia)

    2012-08-15

    Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling' involved the tripling of the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis receiving up to 21 Gy.

  1. Improvements in dose calculation accuracy for small off-axis targets in high dose per fraction tomotherapy

    International Nuclear Information System (INIS)

    Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding; Rosenfeld, Anatoly B.; Tomé, Wolfgang A.

    2012-01-01

    Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed “Super Sampling”, involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and from 0% to 30% for a 5 mm target as the off-axis target distance increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system, which triples the angular sampling of the gantry angles used during optimization and dose calculation, removes the underdosing for targets as small as 5 mm in diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.

  2. Study on the seismic verification test program on the experimental multi-purpose high-temperature gas cooled reactor core

    International Nuclear Information System (INIS)

    Taketani, K.; Aochi, T.; Yasuno, T.; Ikushima, T.; Shiraki, K.; Honma, T.; Kawamura, N.

    1978-01-01

    The paper describes a program of experimental research necessary for the qualitative and quantitative determination of the vibration characteristics and aseismic safety of the reactor core structure in the multipurpose high-temperature gas-cooled experimental reactor (VHTR Experimental Reactor) of the Japan Atomic Energy Research Institute.

  3. The NRC measurement verification program

    International Nuclear Information System (INIS)

    Pham, T.N.; Ong, L.D.Y.

    1995-01-01

    A perspective is presented on the US Nuclear Regulatory Commission (NRC) approach for effectively monitoring the measurement methods and directly testing the capability and performance of licensee measurement systems. A main objective in material control and accounting (MC and A) inspection activities is to assure the accuracy and precision of the accounting system and the absence of potential process anomalies through overall accountability. The primary means of verification remains the NRC random sampling during routine safeguards inspections. This involves the independent testing of licensee measurement performance with statistical sampling plans for physical inventories, item control, and auditing. A prospective cost-effective alternative overcheck is also discussed in terms of an externally coordinated sample exchange or ''round robin'' program among participating fuel cycle facilities in order to verify the quality of measurement systems, i.e., to assure that analytical measurement results are free of bias

  4. Multilateral disarmament verification

    International Nuclear Information System (INIS)

    Persbo, A.

    2013-01-01

    Non-governmental organisations, such as VERTIC (Verification Research, Training and Information Centre), can play an important role in the promotion of multilateral verification. Parties involved in negotiating nuclear arms accords are for the most part keen that such agreements include suitable and robust provisions for monitoring and verification. Progress in multilateral arms control verification is generally slow, but from time to time 'windows of opportunity' - that is, moments where ideas, technical feasibility and political interests are aligned at both the domestic and international levels - may occur, and we have to be ready; the preparatory work is therefore very important. In the context of nuclear disarmament, verification (whether bilateral or multilateral) entails an array of challenges, hurdles and potential pitfalls relating to national security, health, safety and even non-proliferation, so preparatory work is complex and time-consuming. A UK-Norway Initiative was established in order to investigate the role that a non-nuclear-weapon state such as Norway could potentially play in the field of nuclear arms control verification. (A.C.)

  5. Simulation of high SNR photodetector with L-C coupling and transimpedance amplifier circuit and its verification

    Science.gov (United States)

    Wang, Shaofeng; Xiang, Xiao; Zhou, Conghua; Zhai, Yiwei; Quan, Runai; Wang, Mengmeng; Hou, Feiyan; Zhang, Shougang; Dong, Ruifang; Liu, Tao

    2017-01-01

    In this paper, a model for simulating the optical response and noise performance of photodetectors with L-C coupling and a transimpedance amplification circuit is presented. To verify the simulation, two kinds of photodetectors, based on the same printed-circuit-board (PCB) design and PIN photodiode but different operational amplifiers, were developed and experimentally investigated. Comparisons between the numerical simulation results and the experimentally obtained data show excellent agreement, demonstrating that the model provides a highly efficient guide for the development of a high signal-to-noise-ratio photodetector. Furthermore, the parasitic capacitances on the developed PCB, which are difficult to measure directly but have a non-negligible influence on the photodetectors' performance, are estimated.
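
    As a rough companion to the description above, the sketch below evaluates the small-signal transimpedance of a photodiode front end using a single-pole op-amp model; all component values (Rf, Cf, the input-node capacitance Cin, the gain-bandwidth product) are illustrative assumptions, and the L-C coupling network of the actual detectors is not modelled.

      import numpy as np

      # Assumed, illustrative circuit values (not those of the detectors in the paper).
      Rf = 10e3          # feedback resistor, ohm
      Cf = 0.5e-12       # feedback capacitance, F
      Cin = 10e-12       # photodiode junction + PCB parasitic capacitance at the input node, F
      A0 = 1e5           # op-amp DC open-loop gain
      GBW = 1.6e9        # op-amp gain-bandwidth product, Hz

      f = np.logspace(3, 9, 2000)                    # 1 kHz .. 1 GHz
      w = 2j * np.pi * f

      Zf = Rf / (1 + w * Rf * Cf)                    # feedback impedance (Rf parallel Cf)
      Zin = 1 / (w * Cin)                            # input-node shunt impedance
      A = A0 / (1 + 1j * f / (GBW / A0))             # single-pole op-amp open-loop gain
      beta = Zin / (Zin + Zf)                        # feedback factor
      Zt = Zf * (A * beta) / (1 + A * beta)          # closed-loop transimpedance, V/A

      f3dB = f[np.argmax(np.abs(Zt) < np.abs(Zt[0]) / np.sqrt(2))]
      print(f"DC transimpedance ~ {abs(Zt[0]) / 1e3:.1f} kOhm, -3 dB bandwidth ~ {f3dB / 1e6:.0f} MHz")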

  6. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2011-09-01

    Full Text Available We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC), comprising 22 FTIR stations. This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 years or so before TCCON operations began.

    MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro-windows (2613.70–2615.40 cm−1, 2835.50–2835.80 cm−1, 2921.00–2921.60 cm−1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Center for Environmental Prediction (NCEP), interpolated to the time of measurement.
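
    A minimal sketch of the quality selection and XCH4 computation described above is given below; the diagnostic values and column amounts are synthetic placeholders, not NDACC retrieval output, and only the two thresholds (χ2 < 1, noise-to-information ratio < 0.15%) are taken from the text.

      import numpy as np

      # Synthetic per-spectrum retrieval diagnostics and column amounts (placeholders,
      # not NDACC output); one entry per measured solar spectrum.
      rng = np.random.default_rng(1)
      n = 200
      chi2 = rng.normal(0.8, 0.25, n)                    # goodness of fit
      noise_to_ic = rng.normal(0.10, 0.05, n)            # RMS spectral noise / information content, %
      ch4_column = rng.normal(3.7e19, 2e17, n)           # retrieved CH4 column, molecules cm-2
      dry_air_column = rng.normal(2.1e25, 5e22, n)       # dry-air column from NCEP p/T/q, molecules cm-2

      # Quality selection thresholds quoted in the abstract.
      keep = (chi2 < 1.0) & (noise_to_ic < 0.15)

      xch4_ppb = 1e9 * ch4_column[keep] / dry_air_column[keep]
      print(f"{keep.sum()} of {n} retrievals pass quality selection; "
            f"mean XCH4 = {xch4_ppb.mean():.1f} ppb, "
            f"relative scatter = {100 * xch4_ppb.std() / xch4_ppb.mean():.2f} %")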

    MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias up to ≈5%. Therefore interference

  7. Testing and Performance Verification of a High Bypass Ratio Turbofan Rotor in an Internal Flow Component Test Facility

    Science.gov (United States)

    VanZante, Dale E.; Podboy, Gary G.; Miller, Christopher J.; Thorp, Scott A.

    2009-01-01

    A 1/5 scale model rotor representative of a current technology, high bypass ratio, turbofan engine was installed and tested in the W8 single-stage, high-speed, compressor test facility at NASA Glenn Research Center (GRC). The same fan rotor was tested previously in the GRC 9x15 Low Speed Wind Tunnel as a fan module consisting of the rotor and outlet guide vanes mounted in a flight-like nacelle. The W8 test verified that the aerodynamic performance and detailed flow field of the rotor as installed in W8 were representative of the wind tunnel fan module installation. Modifications to W8 were necessary to ensure that this internal flow facility would have a flow field at the test package that is representative of flow conditions in the wind tunnel installation. Inlet flow conditioning was designed and installed in W8 to lower the fan face turbulence intensity to less than 1.0 percent in order to better match the wind tunnel operating environment. Also, inlet bleed was added to thin the casing boundary layer to be more representative of a flight nacelle boundary layer. On the 100 percent speed operating line the fan pressure rise and mass flow rate agreed with the wind tunnel data to within 1 percent. Detailed hot film surveys of the inlet flow, inlet boundary layer and fan exit flow were compared to results from the wind tunnel. The effect of inlet casing boundary layer thickness on fan performance was quantified. Challenges and lessons learned from testing this high flow, low static pressure rise fan in an internal flow facility are discussed.

  8. Train-Network Interactions and Stability Evaluation in High-Speed Railways--Part II: Influential Factors and Verifications

    DEFF Research Database (Denmark)

    Hu, Haitao; Tao, Haidong; Wang, Xiongfei

    2018-01-01

    Low-frequency oscillation (LFO), harmonic resonance and resonance instability phenomena that occur in high-speed railways (HSRs) result from the interactions between multiple electric trains and the traction network. A train-network interaction system and a unified impedance-based model......, catenary lines and autotransformers (ATs); 3) different numbers and positions of trains and railway lines will also be considered and discussed. In order to validate the theoretical results, time-domain simulations and an experimental system have been used. Finally, the differences and the relations...
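
    The impedance-based viewpoint mentioned above can be illustrated with a minimal, generic sketch: the source (traction network) and load (train converter) impedances are combined into a minor-loop gain whose proximity to the critical point flags resonance or low-frequency oscillation risk. All impedance models and parameter values below are assumptions for illustration, not the paper's unified model.

      import numpy as np

      f = np.logspace(0, 4, 4000)                    # 1 Hz .. 10 kHz
      s = 2j * np.pi * f

      # Traction network seen from the pantograph: feeder R-L with a catenary shunt C
      # (assumed lumped values).
      R, L, C = 0.2, 2e-3, 1e-6
      Z_series = R + s * L
      Z_source = Z_series / (1 + s * C * Z_series)

      # Train converter input impedance: constant-power (negative-resistance) behaviour
      # at low frequency plus an input filter, as a one-pole simplification.
      P, V, wc = 5e6, 25e3, 2 * np.pi * 20
      Z_train = -(V ** 2 / P) / (1 + s / wc) + 1.0 + s * 5e-3

      ratio = Z_source / Z_train                     # minor-loop gain of the interconnection

      # Stability screen: how close does the Nyquist locus of Zs/Zt get to -1?
      dist = np.abs(ratio + 1.0)
      k = int(np.argmin(dist))
      print(f"closest approach to -1: {dist[k]:.2f} at {f[k]:.0f} Hz "
            f"(small values flag resonance instability or LFO risk)")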

  9. Concepts for inventory verification in critical facilities

    International Nuclear Information System (INIS)

    Cobb, D.D.; Sapir, J.L.; Kern, E.A.; Dietz, R.J.

    1978-12-01

    Materials measurement and inventory verification concepts for safeguarding large critical facilities are presented. Inspection strategies and methods for applying international safeguards to such facilities are proposed. The conceptual approach to routine inventory verification includes frequent visits to the facility by one inspector, and the use of seals and nondestructive assay (NDA) measurements to verify the portion of the inventory maintained in vault storage. Periodic verification of the reactor inventory is accomplished by sampling and NDA measurement of in-core fuel elements combined with measurements of integral reactivity and related reactor parameters that are sensitive to the total fissile inventory. A combination of statistical sampling and NDA verification with measurements of reactor parameters is more effective than either technique used by itself. Special procedures for assessment and verification for abnormal safeguards conditions are also considered. When the inspection strategies and inventory verification methods are combined with strict containment and surveillance methods, they provide a high degree of assurance that any clandestine attempt to divert a significant quantity of fissile material from a critical facility inventory will be detected. Field testing of specific hardware systems and procedures to determine their sensitivity, reliability, and operational acceptability is recommended. 50 figures, 21 tables
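
    As an illustration of how the statistical sampling component above can be sized (not the actual inspection plan), the sketch below uses the standard hypergeometric argument: if d of N inventory items would have to be falsified to conceal a diversion, the probability that a random sample of n verified items catches at least one is 1 - C(N-d, n)/C(N, n). The population size and falsified-item count are hypothetical.

      from math import comb

      def detection_probability(N, n, d):
          # P(a random sample of n items from N contains at least one of d falsified items).
          if d <= 0:
              return 0.0
          if n > N - d:          # sample too large to avoid every falsified item
              return 1.0
          return 1.0 - comb(N - d, n) / comb(N, n)

      def required_sample_size(N, d, confidence=0.95):
          # Smallest n achieving the requested detection probability.
          for n in range(1, N + 1):
              if detection_probability(N, n, d) >= confidence:
                  return n
          return N

      # Illustrative numbers only: a vault inventory of 300 items, where concealing a
      # diversion would require falsifying at least 10 of them.
      N, d = 300, 10
      n = required_sample_size(N, d)
      print(f"verify {n} of {N} items for a {detection_probability(N, n, d):.1%} detection probability")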

  10. PROJECT-SPECIFIC TYPE A VERIFICATION FOR THE HIGH FLUX BEAM REACTOR UNDERGROUND UTILITIES REMOVAL PHASE 3 TRENCH 1, BROOKHAVEN NATIONAL LABORATORY UPTON, NEW YORK

    International Nuclear Information System (INIS)

    Harpenau, E.M.

    2010-01-01

    The Oak Ridge Institute for Science and Education (ORISE) has reviewed the project documentation and data for the High Flux Beam Reactor (HFBR) Underground Utilities removal Phase 3, Trench 1 at Brookhaven National Laboratory (BNL) in Upton, New York. The Brookhaven Survey Group (BSG) has completed removal and performed Final Status Survey (FSS) of the 42-inch duct and 14-inch line in Trench 1 from Building 801 to the Stack. Sample results have been submitted as required to demonstrate that the cleanup goal of ≤15 mrem/yr above background to a resident in 50 years has been met. Four rounds of sampling, from pre-excavation to FSS, were performed as specified in the Field Sampling Plan (FSP) (BNL 2010a). It is the policy of the U.S. Department of Energy (DOE) to perform independent verifications of decontamination and decommissioning activities conducted at DOE facilities. ORISE has been designated as the organization responsible for this task for the HFBR Underground Utilities. ORISE, together with DOE, determined that a Type A verification of Trench 1 was appropriate based on recent verification results from Trenches 2, 3, 4, and 5, and the minimal potential for residual radioactivity in the area. The removal of underground utilities has been performed in three stages to decommission the HFBR facility and support structures. Phase 3 of this project included the removal of at least 200 feet of 36-inch to 42-inch duct from the west side to the south side of Building 801, and the 14-inch diameter Acid Waste Line that spanned from 801 to the Stack within Trench 1. Based on the pre-excavation sample results of the soil overburden, the potential for contamination of the soil surrounding the pipe is minimal (BNL 2010a). ORISE reviewed the gamma spectroscopy results for 14 FSS soil samples, four core samples, and one duplicate sample collected from Trench 1. Sample results for the radionuclides of concern were below the established cleanup goals. However, in sample PH-3

  11. High-accuracy and high-sensitivity spectroscopic measurement of dinitrogen pentoxide (N2O5) in an atmospheric simulation chamber using a quantum cascade laser.

    Science.gov (United States)

    Yi, Hongming; Wu, Tao; Lauraguais, Amélie; Semenov, Vladimir; Coeur, Cecile; Cassez, Andy; Fertein, Eric; Gao, Xiaoming; Chen, Weidong

    2017-12-04

    A spectroscopic instrument based on a mid-infrared external cavity quantum cascade laser (EC-QCL) was developed for high-accuracy measurements of dinitrogen pentoxide (N2O5) at the ppbv-level. A specific concentration retrieval algorithm was developed to remove, from the broadband absorption spectrum of N2O5, both etalon fringes resulting from the EC-QCL intrinsic structure and spectral interference lines of H2O vapour absorption, which led to a significant improvement in measurement accuracy and detection sensitivity (by a factor of 10), compared to using a traditional algorithm for gas concentration retrieval. The developed EC-QCL-based N2O5 sensing platform was evaluated by real-time tracking N2O5 concentration in its most important nocturnal tropospheric chemical reaction of NO3 + NO2 ↔ N2O5 in an atmospheric simulation chamber. Based on an optical absorption path-length of Leff = 70 m, a minimum detection limit of 15 ppbv was achieved with a 25 s integration time and it was down to 3 ppbv in 400 s. The equilibrium rate constant Keq involved in the above chemical reaction was determined with direct concentration measurements using the developed EC-QCL sensing platform, which was in good agreement with the theoretical value deduced from a referenced empirical formula under well controlled experimental conditions. The present work demonstrates the potential and the unique advantage of the use of a modern external cavity quantum cascade laser for applications in direct quantitative measurement of broadband absorption of key molecular species involved in chemical kinetic and climate-change related tropospheric chemistry.
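
    A minimal sketch of how the equilibrium constant could be formed from measured concentrations is shown below, together with a white-noise averaging check of the two quoted detection limits; the mixing ratios are invented placeholders (chosen only to be of a plausible order), not the chamber data.

      import numpy as np

      def ppbv_to_nd(x_ppbv, T_K=298.0, p_Pa=101325.0):
          # Convert a mixing ratio in ppbv to a number density in molecule cm^-3.
          k_B = 1.380649e-23                         # J K^-1
          n_air = p_Pa / (k_B * T_K) * 1e-6          # air number density, cm^-3
          return x_ppbv * 1e-9 * n_air

      # Hypothetical chamber mixing ratios at equilibrium (placeholders, not the paper's data).
      n2o5 = ppbv_to_nd(27.0)
      no3 = ppbv_to_nd(1.2)
      no2 = ppbv_to_nd(30.0)

      K_eq = n2o5 / (no3 * no2)                      # cm^3 molecule^-1
      print(f"K_eq ~ {K_eq:.2e} cm^3 molecule^-1")

      # White-noise averaging check of the quoted detection limits: 15 ppbv at 25 s
      # should scale roughly as 1/sqrt(t), i.e. ~3.8 ppbv at 400 s, close to the reported 3 ppbv.
      print(f"expected detection limit at 400 s: {15.0 / np.sqrt(400.0 / 25.0):.1f} ppbv")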

  12. Outlines and verifications of the codes used in the safety analysis of High Temperature Engineering Test Reactor (HTTR)

    International Nuclear Information System (INIS)

    Shiina, Yasuaki; Kunitomi, Kazuhiko; Maruyama, Soh; Fujita, Shigeki; Nakagawa, Shigeaki; Iyoku, Tatsuo; Shindoh, Masami; Sudo, Yukio; Hirano, Masashi.

    1990-03-01

    This paper presents a brief description of the computer codes used in the safety analysis of the High Temperature Engineering Test Reactor. The codes are: 1. BLOOST-J2, 2. THYDE-HTGR, 3. TAC-NC, 4. RATSAM6, 5. COMPARE-MOD1, 6. GRACE, 7. OXIDE-3F, 8. FLOWNET/TRUMP. Of the codes described above, 1, 3, 4, 5, 6 and 7 were developed for the multi-hole type gas-cooled reactor and improved for the HTTR, while 2 originated from the THYDE codes, which were developed to treat the transient thermo-hydraulics during a LOCA in an LWR. Each code adopts models and properties that yield conservative analytical results. The adequacy of each code was verified by comparison with experimental results and/or analytical results obtained from other, already proven codes. (author)

  13. A verification of the high density after contrast enhancement in the 2nd week in cerebroischemic lesion

    International Nuclear Information System (INIS)

    Shibata, Taichiro; Kanno, Tetsuo; Sano, Hirotoshi; Katada, Kazuhiro; Fujimoto, Kazuo

    1978-01-01

    To determine the indication for operation, it is necessary to clarify the relation among the Stage (time and course), the Strength, the Pathogenesis, and the Effects of the operation in these diseases (SSPE relation). In this report, we focused on the high density seen on CT after contrast enhancement in cases of ischemic lesions (this high density was named ''Ribbon H. D.''). Seventeen cases of Ribbon H. D. in fresh infarctions were verified with respect to the time of appearance of the H. D., the features of its location and nature, and the histological findings. The results were as follows: The Ribbon H. D. appeared in the early stage of infarctions and had its peak density at the end of the 2nd week after onset. The Ribbon H. D. was mostly located along the cortical line, showing a ribbon-like band. The Ribbon H. D. did not appear in sharply demarcated coagulation necrosis in the early stage or in the defined Low Density (L. D.) in the late stage of infarctions. Although the Ribbon H. D. indicates extravasation of contrast media, it does not necessarily indicate the existence of hemorrhagic infarction. Part of the Ribbon H. D. changes to a well-defined L. D., and the rest becomes relatively isodense in the late stage. This change corresponds to the evolution of the incomplete necrosis, which is later divided into resolution with a cystic cavity and glial replacement in the late stage. In conclusion, the Ribbon H. D. can be understood to correspond to a lesion of incomplete necrosis, with neovascularization, in the early stage of infarctions. Therefore, in addition to the present indications for a by-pass operation (TIA, RIND), this incomplete necrosis (Ribbon H. D.), its surrounding area, and the period just before the appearance of the Ribbon H. D. might be further indications for the operation. (author)

  14. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    Science.gov (United States)

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...
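
    For reference, a sketch of the ASCE-EWRI standardized reference ET calculation and a one-at-a-time perturbation of its climate inputs is given below; the constants Cn = 900 and Cd = 0.34 correspond to the daily short (grass) reference, and the weather values and error magnitudes are illustrative assumptions, not the study's data.

      import numpy as np

      def asce_ref_et(T_C, u2, Rn, G, RH_mean, P_kPa=101.3, Cn=900.0, Cd=0.34):
          # Daily ASCE-EWRI standardized reference ET (mm/day), short (grass) reference.
          es = 0.6108 * np.exp(17.27 * T_C / (T_C + 237.3))      # saturation vapour pressure, kPa
          ea = es * RH_mean / 100.0                              # actual vapour pressure, kPa
          delta = 4098.0 * es / (T_C + 237.3) ** 2               # slope of saturation curve, kPa/degC
          gamma = 0.000665 * P_kPa                               # psychrometric constant, kPa/degC
          num = 0.408 * delta * (Rn - G) + gamma * (Cn / (T_C + 273.0)) * u2 * (es - ea)
          return num / (delta + gamma * (1.0 + Cd * u2))

      # Illustrative hot, windy Texas High Plains summer day (assumed values).
      base = dict(T_C=30.0, u2=5.0, Rn=15.0, G=0.0, RH_mean=40.0)
      et0 = asce_ref_et(**base)
      print(f"baseline reference ET = {et0:.2f} mm/day")

      # One-at-a-time sensitivity to plausible measurement errors in each input.
      for var, err in [("T_C", 1.0), ("u2", 0.5), ("Rn", 1.0), ("RH_mean", 5.0)]:
          perturbed = dict(base, **{var: base[var] + err})
          change = 100.0 * (asce_ref_et(**perturbed) / et0 - 1.0)
          print(f"+{err} error in {var}: ET changes by {change:+.1f} %")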

  15. Transformation Model with Constraints for High-Accuracy of 2D-3D Building Registration in Aerial Imagery

    Directory of Open Access Journals (Sweden)

    Guoqing Zhou

    2016-06-01

    Full Text Available This paper proposes a novel rigorous transformation model for 2D-3D registration to address the difficult problem of obtaining a sufficient number of well-distributed ground control points (GCPs) in urban areas with tall buildings. The proposed model applies two types of geometric constraints, co-planarity and perpendicularity, to the conventional photogrammetric collinearity model. Both types of geometric information are directly obtained from geometric building structures, with which the geometric constraints are automatically created and combined into the conventional transformation model. A test field located in downtown Denver, Colorado, is used to evaluate the accuracy and reliability of the proposed method. The comparison analysis of the accuracy achieved by the proposed method and the conventional method is conducted. Experimental results demonstrated that: (1) the theoretical accuracy of the solved registration parameters can reach 0.47 pixels, whereas the other methods reach only 1.23 and 1.09 pixels; (2) the RMS values of 2D-3D registration achieved by the proposed model are only two pixels along the x and y directions, much smaller than the RMS values of the conventional model, which are approximately 10 pixels along the x and y directions. These results demonstrate that the proposed method is able to significantly improve the accuracy of 2D-3D registration with much fewer GCPs in urban areas with tall buildings.
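
    The sketch below is a generic illustration of the ingredients named above: the conventional collinearity projection plus co-planarity and perpendicularity residuals that could be stacked into a constrained least-squares adjustment. The rotation convention, focal length and all coordinates are hypothetical; this is not the authors' implementation.

      import numpy as np

      def rotation(omega, phi, kappa):
          # Rotation matrix from object space to camera space (omega-phi-kappa convention).
          co, so = np.cos(omega), np.sin(omega)
          cp, sp = np.cos(phi), np.sin(phi)
          ck, sk = np.cos(kappa), np.sin(kappa)
          Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
          return Rz @ Ry @ Rx

      def collinearity(P, Xs, angles, f=0.1):
          # Project an object point P (m) into image coordinates (m) for a camera at Xs.
          u = rotation(*angles).T @ (P - Xs)
          return -f * u[:2] / u[2]

      def coplanarity_residuals(points):
          # Signed distances of facade points from their best-fit plane (should be ~0).
          c = points.mean(axis=0)
          normal = np.linalg.svd(points - c)[2][-1]
          return (points - c) @ normal

      def perpendicularity_residual(e1, e2):
          # Deviation from orthogonality of two building edge directions (should be ~0).
          return float(np.dot(e1 / np.linalg.norm(e1), e2 / np.linalg.norm(e2)))

      # Toy usage with hypothetical coordinates (not the paper's data).
      print(collinearity(np.array([100.0, 50.0, 30.0]),
                         Xs=np.array([0.0, 0.0, 500.0]), angles=(0.01, -0.02, 0.3)))
      facade = np.array([[0, 0, 0], [10, 0, 0], [10, 0, 20], [0, 0, 20.02]], float)
      print(coplanarity_residuals(facade))
      print(perpendicularity_residual(np.array([1.0, 0, 0]), np.array([0.01, 1.0, 0])))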

  16. A safeguards verification technique for solution homogeneity and volume measurements in process tanks

    International Nuclear Information System (INIS)

    Suda, S.; Franssen, F.

    1987-01-01

    A safeguards verification technique is being developed for determining whether process-liquid homogeneity has been achieved in process tanks and for authenticating volume-measurement algorithms involving temperature corrections. It is proposed that, in new designs for bulk-handling plants employing automated process lines, bubbler probes and thermocouples be installed at several heights in key accountability tanks. High-accuracy measurements of density using an electromanometer can now be made which match or even exceed analytical-laboratory accuracies. Together with regional determination of tank temperatures, these measurements provide density, liquid-column weight and temperature gradients over the fill range of the tank that can be used to ascertain when the tank solution has reached equilibrium. Temperature-correction algorithms can be authenticated by comparing the volumes obtained from the several bubbler-probe liquid-height measurements, each based on different amounts of liquid above and below the probe. The verification technique is based on the automated electromanometer system developed by Brookhaven National Laboratory (BNL). The IAEA has recently approved the purchase of a stainless-steel tank equipped with multiple bubbler and thermocouple probes for installation in its Bulk Calibration Laboratory at IAEA Headquarters, Vienna. The verification technique is scheduled for preliminary trials in late 1987
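
    A minimal sketch of the underlying hydrostatics is given below: densities between adjacent bubbler probes follow from the differential pressures, agreement among the segment densities indicates homogeneity, and consistent surface heights inferred from each probe help authenticate the level algorithm. The probe heights and pressures are invented illustrative numbers, not plant or laboratory data.

      import numpy as np

      g = 9.80665                                   # m s^-2

      # Invented electromanometer readings (gauge pressure vs. tank head space, Pa)
      # from bubbler probes installed at known heights above the tank bottom (m).
      probe_height = np.array([0.10, 0.60, 1.10, 1.60])
      probe_pressure = np.array([26650.0, 20230.0, 13810.0, 7390.0])

      # Solution density in each segment between adjacent probes.
      rho = -np.diff(probe_pressure) / (g * np.diff(probe_height))
      print(f"segment densities (kg/m^3): {np.round(rho, 1)}")
      print(f"max density spread: {rho.max() - rho.min():.2f} kg/m^3 (homogeneity check)")

      # Liquid surface height inferred independently from each probe; consistent
      # values help authenticate the level/volume algorithm.
      surface = probe_height + probe_pressure / (rho.mean() * g)
      print(f"inferred surface heights (m): {np.round(surface, 4)}")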

  17. Embedded software verification and debugging

    CERN Document Server

    Winterholer, Markus

    2017-01-01

    This book provides comprehensive coverage of verification and debugging techniques for embedded software, which is frequently used in safety critical applications (e.g., automotive), where failures are unacceptable. Since the verification of complex systems needs to encompass the verification of both hardware and embedded software modules, this book focuses on verification and debugging approaches for embedded software with hardware dependencies. Coverage includes the entire flow of design, verification and debugging of embedded software and all key approaches to debugging, dynamic, static, and hybrid verification. This book discusses the current, industrial embedded software verification flow, as well as emerging trends with focus on formal and hybrid verification and debugging approaches. Includes in a single source the entire flow of design, verification and debugging of embedded software; Addresses the main techniques that are currently being used in the industry for assuring the quality of embedded softw...

  18. Image-based fingerprint verification system using LabVIEW

    Directory of Open Access Journals (Sweden)

    Sunil K. Singla

    2008-09-01

    Full Text Available Biometric-based identification/verification systems provide a solution to the security concerns in the modern world, where the machine is replacing the human in every aspect of life. Fingerprints, because of their uniqueness, are the most widely used and highly accepted biometrics. Fingerprint biometric systems are either minutiae-based or pattern learning (image-based). The minutiae-based algorithm depends upon the local discontinuities in the ridge flow pattern and is used when template size is important, while the image-based matching algorithm uses both the micro and macro features of a fingerprint and is used if fast response is required. In the present paper an image-based fingerprint verification system is discussed. The proposed method uses a learning phase, which is not present in conventional image-based systems. The learning phase uses pseudo-random sub-sampling, which reduces the number of comparisons needed in the matching stage. This system has been developed using the LabVIEW (Laboratory Virtual Instrument Engineering Workbench) toolbox version 6i. The availability of datalog files in LabVIEW makes it one of the most promising candidates for usage as a database. Datalog files can access and manipulate data and complex data structures quickly and easily, making writing and reading much faster. After extensive experimentation involving a large number of samples and different learning sizes, high accuracy with a learning image size of 100 × 100 and a threshold value of 700 (1000 being the perfect match) has been achieved.
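
    The sketch below shows one generic way an image-based matcher can produce a 0-1000 similarity score and apply the 700 acceptance threshold mentioned above; it uses plain normalized cross-correlation on synthetic 100 × 100 arrays and does not reproduce the paper's LabVIEW implementation or its pseudo-random sub-sampling learning phase.

      import numpy as np

      def match_score(probe, template):
          # Normalized cross-correlation between two equal-size fingerprint images,
          # scaled to 0..1000 (1000 = perfect match), mimicking the score convention above.
          p = (probe - probe.mean()) / (probe.std() + 1e-12)
          t = (template - template.mean()) / (template.std() + 1e-12)
          ncc = float((p * t).mean())            # in [-1, 1]
          return max(0.0, ncc) * 1000.0

      THRESHOLD = 700.0                          # acceptance threshold from the abstract

      rng = np.random.default_rng(0)
      enrolled = rng.random((100, 100))                                  # stored 100x100 template
      genuine = np.clip(enrolled + rng.normal(0, 0.15, (100, 100)), 0, 1)
      impostor = rng.random((100, 100))

      for name, img in [("genuine", genuine), ("impostor", impostor)]:
          s = match_score(img, enrolled)
          print(f"{name:8s}: score {s:6.1f} -> {'accept' if s >= THRESHOLD else 'reject'}")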

  19. Validation and verification of MCNP6 against intermediate and high-energy experimental data and results by other codes

    International Nuclear Information System (INIS)

    Mashnik, Stepan G.

    2011-01-01

    MCNP6, the latest and most advanced LANL transport code, representing a recent merger of MCNP5 and MCNPX, has been validated and verified (V and V) against a variety of intermediate- and high-energy experimental data and against results by different versions of MCNPX and other codes. In the present work, we V and V MCNP6 using mainly the latest modifications of the Cascade-Exciton Model (CEM) and of the Los Alamos version of the Quark-Gluon String Model (LAQGSM) event generators, CEM03.02 and LAQGSM03.03. We found that MCNP6 describes reasonably well various reactions induced by particles and nuclei at incident energies from 18 MeV to about 1 TeV per nucleon, measured on thin and thick targets, and agrees very well with similar results obtained with MCNPX and with calculations by CEM03.02, LAQGSM03.01 (03.03), INCL4 + ABLA, and Bertini INC + Dresner evaporation, EPAX, ABRABLA, HIPSE, and AMD, used as stand-alone codes. Most of the computational bugs and the more serious physics problems observed in MCNP6/X during our V and V have been fixed; we continue our work to solve all known problems before MCNP6 is distributed to the public. (author)

  20. Continuous assessment of land mapping accuracy at High Resolution from global networks of atmospheric and field observatories -concept and demonstration

    Science.gov (United States)

    Sicard, Pierre; Martin-lauzer, François-regis

    2017-04-01

    In the context of global climate change and the design and implementation of adjustment/resilience policies, there is a need not only (i) for environmental monitoring, e.g. through a range of Earth Observation (EO) land "products", but also (ii) for a precise assessment of the uncertainties of the aforesaid information that feeds environmental decision-making (to be introduced in the EO metadata), and (iii) for a proper handling of the thresholds which help translate "environment tolerance limits" to match detected EO changes through ecosystem modelling. Insight into uncertainties means knowledge of precision and accuracy, and the subsequent ability to set thresholds for change detection systems. Traditionally, the validation of satellite-derived products has taken the form of intensive field campaigns to sanction the introduction of data processors in Payload Data Ground Segment chains. It is marred by logistical challenges and cost issues, which is why it is complemented by specific surveys at ground-based monitoring sites which can provide near-continuous observations at a high temporal resolution (e.g. RadCalNet). Unfortunately, most of the ground-level monitoring sites, numbering in the hundreds or thousands, which are part of wider observation networks (e.g. FLUXNET, NEON, IMAGINES), mainly monitor the state of the atmosphere and the radiation exchange at the surface, which are different from the products derived from EO data. In addition, they are "point-based" compared to the EO coverage to be obtained from Sentinel-2 or Sentinel-3. Yet, data from these networks, processed by spatial extrapolation models, are well-suited to the bottom-up approach and relevant to the validation of vegetation parameters' consistency (e.g. leaf area index, fraction of absorbed photosynthetically active radiation). Consistency means minimal errors on spatial and temporal gradients of EO products. A test of the procedure for land-cover products' consistency assessment with field measurements delivered by worldwide

  1. Improved accuracy of cortical bone mineralization measured by polychromatic microcomputed tomography using a novel high mineral density composite calibration phantom

    International Nuclear Information System (INIS)

    Deuerling, Justin M.; Rudy, David J.; Niebur, Glen L.; Roeder, Ryan K.

    2010-01-01

    Purpose: Microcomputed tomography (micro-CT) is increasingly used as a nondestructive alternative to ashing for measuring bone mineral content. Phantoms are utilized to calibrate the measured x-ray attenuation to discrete levels of mineral density, typically including levels up to 1000 mg HA/cm3, which encompasses levels of bone mineral density (BMD) observed in trabecular bone. However, levels of BMD observed in cortical bone and levels of tissue mineral density (TMD) in both cortical and trabecular bone typically exceed 1000 mg HA/cm3, requiring extrapolation of the calibration regression, which may result in error. Therefore, the objectives of this study were to investigate (1) the relationship between x-ray attenuation and an expanded range of hydroxyapatite (HA) density in a less attenuating polymer matrix and (2) the effects of the calibration on the accuracy of subsequent measurements of mineralization in human cortical bone specimens. Methods: A novel HA-polymer composite phantom was prepared comprising a less attenuating polymer phase (polyethylene) and an expanded range of HA density (0-1860 mg HA/cm3) inclusive of characteristic levels of BMD in cortical bone or TMD in cortical and trabecular bone. The BMD and TMD of cortical bone specimens measured using the new HA-polymer calibration phantom were compared to measurements using a conventional HA-polymer phantom comprising 0-800 mg HA/cm3 and the corresponding ash density measurements on the same specimens. Results: The HA-polymer composite phantom exhibited a nonlinear relationship between x-ray attenuation and HA density, rather than the linear relationship typically employed a priori, and obviated the need for extrapolation when calibrating the measured x-ray attenuation to high levels of mineral density. The BMD and TMD of cortical bone specimens measured using the conventional phantom was significantly lower than the measured ash density by 19% (p<0.001, ANCOVA) and 33% (p<0.05, Tukey's HSD
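
    To illustrate the calibration issue described above (not the authors' procedure or data), the sketch below fits a quadratic attenuation-to-density calibration over an expanded 0-1860 mg HA/cm3 range and compares it with a linear fit to 0-800 mg HA/cm3 that must be extrapolated to cortical-bone levels; all numbers are synthetic.

      import numpy as np

      # Hypothetical phantom data: known HA densities (mg HA/cm3) and the mean attenuation
      # measured in each phantom rod (arbitrary units). The gentle saturation mimics the
      # nonlinearity reported for the expanded-range phantom.
      rho_known = np.array([0.0, 250.0, 500.0, 800.0, 1200.0, 1550.0, 1860.0])
      atten = 0.95 * rho_known - 8.0e-5 * rho_known ** 2 + np.array([3, -4, 5, -2, 4, -3, 2])

      # Expanded-range, quadratic calibration (attenuation -> mineral density).
      quad = np.polynomial.Polynomial.fit(atten, rho_known, deg=2)

      # Conventional approach: linear fit to the 0-800 rods only, then extrapolated.
      low = rho_known <= 800.0
      lin = np.polynomial.Polynomial.fit(atten[low], rho_known[low], deg=1)

      # Apply both calibrations to a cortical-bone-like attenuation value.
      bone_atten = 0.95 * 1250.0 - 8.0e-5 * 1250.0 ** 2
      print(f"quadratic calibration: {quad(bone_atten):7.1f} mg HA/cm3")
      print(f"linear extrapolation : {lin(bone_atten):7.1f} mg HA/cm3")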

  2. High-accuracy continuous airborne measurements of greenhouse gases (CO2 and CH4) using the cavity ring-down spectroscopy (CRDS) technique

    NARCIS (Netherlands)

    Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; Van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.

    2010-01-01

    High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This

  3. CATS Deliverable 5.1: CATS verification of test matrix and protocol