WorldWideScience

Sample records for 10-16 level accuracy

  1. Dual cesium and rubidium atomic fountain with a 10^-16 level accuracy and applications

    Atomic fountains are the most accomplished development of atomic clocks based on the cesium atom, whose hyperfine resonance has defined the SI second since 1967. Today these systems are among those that realize the second with the best accuracy. We present the latest developments of the cold cesium and rubidium atom dual fountain experiment at LNE-SYRTE. This unique dual setup makes it possible to obtain an outstanding resolution in fundamental physics tests based on atomic transition frequency comparisons. To enable operation with both atomic species simultaneously, we designed, tested, and installed on the fountain new collimators that combine the laser light corresponding to each atom. By comparing our rubidium fountain to another cesium fountain over a decade, we performed a test of the stability of the fine structure constant at the level of 5 × 10^-16 per year. We continued the work on clock accuracy, focusing on phase gradient effects in the interrogation cavity and on microwave leakage. The fountain accuracy has been evaluated to 4 × 10^-16 for the cesium clock and 5 × 10^-16 for the refurbished rubidium clock. As a powerful metrology instrument, our fountain was involved in many clock comparisons and contributed many times to the calibration of International Atomic Time. Furthermore, we used the fountain to perform a new test of local Lorentz invariance. (author)

  2. Multi-Accuracy-Level Burning Plasma Simulations

    The design of a reactor-grade tokamak is based on a hierarchy of tools. We present here three codes that are presently used for the simulation of burning plasmas. At the first level there is a 0-dimensional code that allows one to choose a reasonable range of global parameters; in our case the HELIOS code was used for this task. For the second level we have developed a mixed 0-D / 1-D code called METIS that allows the main properties of a burning plasma to be studied, including profiles and all heat and current sources, but always under the constraint of energy and other empirical scaling laws. METIS is a fast code that makes it possible to perform a large number of runs (a run takes about one minute) to design the main features of a scenario, or to validate the results of the 0-D code over a full time evolution. At the top level, we used the full 1D1/2 suite of codes CRONOS, which gives access to a detailed study of the evolution of the plasma profiles. CRONOS can use a variety of modules for the computation of source terms and transport coefficients with different levels of complexity and accuracy: from simple estimators to highly sophisticated physics calculations. Thus it is possible to vary the accuracy of burning plasma simulations as a trade-off with computation time. A wide range of scenario studies can thus be made with CRONOS and then validated with post-processing tools such as MHD stability analysis. We present in this paper results of this multi-level analysis applied to the ITER hybrid scenario. This specific example illustrates the importance of having several tools for the study of burning plasma scenarios, especially in a domain that present devices cannot access experimentally. (Author)

  3. Convective Weather Forecast Accuracy Analysis at Center and Sector Levels

    Wang, Yao; Sridhar, Banavar

    2010-01-01

    This paper presents a detailed convective forecast accuracy analysis at center and sector levels. The study aims to provide more meaningful forecast verification measures to the aviation community, as well as to obtain useful information leading to improvements in weather translation capacity models. In general, the vast majority of forecast verification efforts over the past decades have computed traditional standard verification measure scores from forecast and observation data analyzed onto grids. These verification measures, based on binary classification, have been applied in quality assurance of weather forecast products at the national level for many years. Our research focuses on forecasts at the center and sector levels. We first calculate the standard forecast verification measure scores for en-route air traffic centers and sectors, and then conduct the forecast validation analysis and related verification measures for weather intensities and locations at the center and sector levels. An approach to improve the prediction of sector weather coverage using multiple sector forecasts is then developed. The severe weather intensity assessment was carried out using the correlations between forecast and actual weather observation airspace coverage. The weather forecast accuracy in horizontal location was assessed by examining the forecast errors. The improvement in prediction of weather coverage was determined by the correlation between actual sector weather coverage and its prediction. The analysis uses observed and forecast Convective Weather Avoidance Model (CWAM) data collected from June to September 2007. CWAM zero-minute forecast data with aircraft avoidance probabilities of 60% and 80% are used as the actual weather observation. All forecast measurements are based on 30-minute, 60-minute, 90-minute, and 120-minute forecasts with the same avoidance probabilities.
The forecast accuracy analysis for times under one-hour showed that the errors in
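
    The standard binary verification measures the abstract refers to are derived from a 2x2 contingency table of forecast versus observation. As an illustrative sketch (not the authors' code), the common scores can be computed as follows; the counts in the example are made up:

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """Standard binary (yes/no) forecast verification measures
    computed from a 2x2 contingency table."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + false_alarms + misses)     # critical success index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    total = hits + false_alarms + misses + correct_negatives
    accuracy = (hits + correct_negatives) / total
    return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias, "accuracy": accuracy}

# Hypothetical counts for one sector over a verification period
scores = verification_scores(hits=40, false_alarms=10, misses=20, correct_negatives=130)
```

    Scores of this kind, computed per center or per sector rather than per grid cell, are the building blocks of the analysis described above.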

  4. 24 CFR 10.16 - Adoption of a final rule.

    2010-04-01

    Title 24, Housing and Urban Development, Office of the Secretary, Department of Housing and Urban Development, RULEMAKING: POLICY AND PROCEDURES, Procedures, § 10.16 Adoption of a final rule. All timely...

  5. Anthropometric, physical and cardiorespiratory fitness of 10-16 years children

    Indranil Manna; Swadesh Ranjan Pan; Mohua Chowdhury

    2014-01-01

    Objectives: The present study was undertaken to investigate the anthropometric, physical and cardiorespiratory fitness of 10-16 yrs children. Background: Talent identification in sports is important because the identified children represent the future achievement level of a particular group. There are very limited studies available in the Indian context on talent identification in sports. Method: A total of 150 male children of 10-16 yrs age volunteered for this study; they were divided equally into 3 groups (i) Pr...

  6. The Carmichael numbers up to $10^{16}$

    Pinch, Richard G. E.

    1998-01-01

    We extend our previous computations to show that there are 246683 Carmichael numbers up to $10^{16}$. As before, the numbers were generated by a back-tracking search for possible prime factorisations together with a ``large prime variation''. We present further statistics on the distribution of Carmichael numbers.
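
    Korselt's criterion gives a fast way to verify that a given number is Carmichael: n must be composite, squarefree, and every prime p dividing n must satisfy (p - 1) | (n - 1). A small illustrative checker using trial division (practical only for modest n; not the authors' back-tracking search over prime factorisations):

```python
def is_carmichael(n):
    """Korselt's criterion: n is a Carmichael number iff it is composite,
    squarefree, and (p - 1) divides (n - 1) for every prime p dividing n."""
    if n < 3 or n % 2 == 0:
        return False
    factors = []
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:       # p^2 divides n: not squarefree
                return False
            factors.append(p)
        else:
            p += 1
    if m > 1:
        factors.append(m)
    if len(factors) < 2:         # n is prime (or 1): not Carmichael
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)
```

    For example, 561 = 3 · 11 · 17 is the smallest Carmichael number, while 1001 = 7 · 11 · 13 fails the criterion because 6 does not divide 1000.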

  7. Static beacons based indoor positioning method for improving room-level accuracy

    Miekk-oja, Ville

    2015-01-01

    Demand for indoor positioning applications has been growing lately. Indoor positioning is used, for example, in hospitals for patient tracking and in airports for finding the correct gates. Requirements in indoor positioning have become stricter, with demands for higher accuracy. This thesis presents a method for improving the room-level accuracy of a positioning system by using static beacons. As static beacons, Bluetooth low energy modules will be used to test how much they can improve...

  8. Acceptance and Accuracy of Multiple Choice, Confidence-Level, and Essay Question Formats for Graduate Students

    Swartz, Stephen M.

    2006-01-01

    The confidence level (information-referenced testing; IRT) design is an attempt to improve upon the multiple choice format by allowing students to express a level of confidence in the answers they choose. In this study, the author evaluated student perceptions of the ease of use and accuracy of and general preference for traditional multiple…

  9. Study the effect of gray component replacement level on reflectance spectra and color reproduction accuracy

    Spiridonov, I.; Shopova, M.; Boeva, R.

    2013-03-01

    The aim of this study is to investigate the effect of gray component replacement (GCR) levels on the reflectance spectra of different ink overprints and on color reproduction accuracy. GCR is the method most commonly implemented in practice for generating an achromatic composition. The experiments in this study have been performed under real production conditions with a special test form generated by specialized software. Measuring the reflection spectra of the printed colors gives a complete picture of the effect of different GCR levels on color reproduction accuracy. For better data analysis and modeling of the processes, we have calculated (converted) the CIE L*a*b* color coordinates from the reflection spectra data. The assessment of color accuracy for different GCR amounts has been made by calculating the color difference ΔE*ab. In addition, for the specific printing conditions we have created ICC profiles with different GCR amounts, and a comparison of the color gamuts has been performed. For the first time, a methodology is implemented for examining and estimating the effect of GCR levels on color reproduction accuracy by studying a large number of colors across the entire visible spectrum. Implementing the results achieved in this experiment in practice will lead to improved gray balance and better color accuracy. Another important effect of this research is a reduction in the financial costs of printing production by decreasing ink consumption, indirectly reducing emissions during the manufacture of inks, and facilitating deinking during paper recycling.
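
    In its classic CIE76 form, the ΔE*ab metric used above is simply the Euclidean distance between two colors in CIELAB coordinates. A minimal sketch (the L*a*b* values below are made-up illustrative patches, not data from the study):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    dL, da, db = (x - y for x, y in zip(lab1, lab2))
    return math.sqrt(dL**2 + da**2 + db**2)

# Hypothetical reference patch vs. the same patch printed at some GCR level
reference = (52.0, 41.5, 30.2)   # (L*, a*, b*)
printed = (50.0, 43.0, 28.7)
de = delta_e_ab(reference, printed)
```

    A ΔE*ab near 1 is roughly a just-noticeable difference; computing it per patch across the gamut quantifies how a given GCR setting shifts color reproduction.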

  10. The Influence of Overt Practice, Achievement Level, and Explanatory Style on Calibration Accuracy and Performance

    Bol, Linda; Hacker, Douglas J.; O'Shea, Patrick; Allen, Dwight

    2005-01-01

    The authors measured the influence of overt calibration practice, achievement level, and explanatory style on calibration accuracy and exam performance. Students (N = 356) were randomly assigned to either an overt practice or no-practice condition. Students in the overt practice condition made predictions and postdictions about their performance…

  11. Accuracy enhancement of the spherical actuator with a two-level geometric calibration method

    Zhang Liang; Chen Weihai; Liu Jingmeng; Wu Xingming; Chen I-Ming

    2014-01-01

    This paper presents a two-level geometric calibration method for the permanent magnet (PM) spherical actuator to improve its motion control accuracy. The proposed actuator is composed of a stator with circumferential coils and a rotor with multiple PM poles. Due to assembly and fabrication errors, the real geometric parameters of the actuator deviate from their design values. Hence, the identification of such errors is critical for motion control tasks. A two-level geometric cali...

  12. Fluency and accuracy levels in writing of Grade 12 ESL learners

    Johann L Van der Walt

    2011-08-01

    Full Text Available This study investigated two aspects of the level of second language development achieved by Grade 12 English Second Language (ESL learners in South Africa. It was inspired by the general concern about standards in the matriculation examination and calls for the improvement of ESL teaching and learning. The study involved an investigation and description of the fluency and accuracy levels of Grade 12 learners. We focussed on writing, since it is generally accepted that characteristic patterns of advanced learners are best studied in written production. 216 compositions were analysed in terms of T-units, and fluency and accuracy frequencies and ratios were calculated. Results show that fluency ratios (W/T and W/EFT and an accuracy ratio (EFT/T paint a poor picture of learners’ performance in writing, and suggest that Grade 12 ESL learners are ill-prepared for tertiary study. Better control of morphology and syntax is required, as this will lead to a general improvement of fluency and accuracy levels in ESL.

  13. Static and dynamic modelling of liquid level sensor with high accuracy

    Fock, K. [Budapest Univ. of Technology and Economics, Budapest (Hungary). Dept. of Control Engineering and Information Technology; Fock, B. [Dept. of Measurement and Information Systems, Budapest Univ. of Technology and Economics, Budapest (Hungary)

    2001-07-01

    This category of continuous level sensors is related to the float type. An angular-position transducer is used to indicate the number of turns of a drum as a plumb line wound on the drum is unwound until a weight (for granular solids) or a float (for fluids) touches the surface. When this occurs the plumb line loses tension; a tension sensor (force transducer) detects the loss in tension and sends a signal to a direction-changing device that controls the drum drive motor. Beyond the question of point or continuous level control, operating variables play a major role in determining accuracy and repeatability requirements. The paper contains the dynamic analysis and the identification of the sensor system to increase the static and dynamic accuracy. (orig.)

  14. Accuracy enhancement of the spherical actuator with a two-level geometric calibration method

    Zhang Liang

    2014-04-01

    Full Text Available This paper presents a two-level geometric calibration method for the permanent magnet (PM) spherical actuator to improve its motion control accuracy. The proposed actuator is composed of a stator with circumferential coils and a rotor with multiple PM poles. Due to assembly and fabrication errors, the real geometric parameters of the actuator deviate from their design values. Hence, the identification of such errors is critical for motion control tasks. A two-level geometric calibration approach is proposed to identify such errors. In the first level, the calibration model is formulated from the differential form of the kinematic equation, to identify the geometric errors in the spherical joint. In the second level, the calibration model is formulated from the differential form of the torque formula, to calibrate the geometric parameters of the magnetization axes of the PM poles and the coil axes. To demonstrate the robustness and effectiveness of the calibration algorithm, simulations are conducted. The results show that the proposed two-level calibration method can effectively compensate the geometric parameter errors and improve the positioning accuracy of the spherical actuator.

  15. Diagnostic accuracy at several reduced radiation dose levels for CT imaging in the diagnosis of appendicitis

    Zhang, Di; Khatonabadi, Maryam; Kim, Hyun; Jude, Matilda; Zaragoza, Edward; Lee, Margaret; Patel, Maitraya; Poon, Cheryce; Douek, Michael; Andrews-Tang, Denise; Doepke, Laura; McNitt-Gray, Shawn; Cagnon, Chris; DeMarco, John; McNitt-Gray, Michael

    2012-03-01

    Purpose: While several studies have investigated the tradeoffs between radiation dose and image quality (noise) in CT imaging, the purpose of this study was to take this analysis a step further by investigating the tradeoffs between patient radiation dose (including organ dose) and diagnostic accuracy in the diagnosis of appendicitis using CT. Methods: This study was IRB approved and utilized data from 20 patients who underwent clinical CT exams for indications of appendicitis. Medical record review established the true diagnosis of appendicitis, with 10 positives and 10 negatives. A validated software tool used raw projection data from each scan to create simulated images at lower dose levels (70%, 50%, 30%, 20% of the original). An observer study was performed with 6 radiologists reviewing each case at each dose level in random order over several sessions. Readers assessed image quality and provided confidence in their diagnosis of appendicitis, each on a 5-point scale. Liver doses for each case and each dose level were estimated using Monte Carlo simulation based methods. Results: Overall diagnostic accuracy varied across dose levels: 92%, 93%, 91%, 90% and 90% at the 100%, 70%, 50%, 30% and 20% dose levels, respectively, and 93%, 95%, 88%, 90% and 90% across the 13.5-22 mGy, 9.6-13.5 mGy, 6.4-9.6 mGy, 4-6.4 mGy, and 2-4 mGy liver dose ranges, respectively. Only 4 out of 600 observations were rated "unacceptable" for image quality. Conclusion: The results from this pilot study indicate that diagnostic accuracy does not change dramatically even at significantly reduced radiation dose.

  16. Dissolution Of 3013-DE Sample 10-16

    The HB-Line Facility has a long-term mission to dissolve and disposition legacy fissile materials. HB-Line dissolves plutonium dioxide (PuO2) from K-Area in support of the 3013 Destructive Examination (DE) program. The PuO2-bearing solids originate from a variety of unit operations and processing facilities, but all of the material is assumed to be high-fired (i.e., calcined in air for a minimum of two hours at ≥ 750 C). The Savannah River National Laboratory (SRNL) conducted dissolution flowsheet studies on 3013 DE Sample 10-16 (can R610826), which contains weapons-grade plutonium (Pu) as the fissile material. The dissolution flowsheet study was performed for 4 hours at 108 C on unwashed material using 12 M nitric acid (HNO3) containing 0.20 M potassium fluoride (KF). After 4 hours at 108 C, the 239Pu Equivalent concentration was 32.5 g/L (gamma, 5.0% uncertainty). The insoluble residue comprised 9.88 wt % of the initial bulk weight and contained 5.31-5.95 wt % of the initial Pu. The residue contained Pu in the highest concentration, followed by tungsten (W). Analyses detected 2,770 mg/L chloride (Cl-) in the final dissolver solution (3.28 wt %), which is significantly lower than the amount of Cl- detected by prompt gamma (9.86 wt %) and the 3013 DE Surveillance program (14.7 wt %). A low bias in the chloride measurement is anticipated due to volatilization during the experiment. Gas generation studies found approximately 60 mL of gas per gram of sample produced during the first 30 minutes of dissolution. Little to no gas was produced after the first 30 minutes. Hydrogen gas (H2) was not detected in the sample. Based on detection limits and accounting for dilution, the generated gas contained < 0.12 vol % H2, which is well below the 4.0 vol % flammability limit for H2 in air. Filtration of the dissolver solution occurred readily. When aluminum nitrate nonahydrate (ANN) was added to the filtered dissolver solution at a 3:1 Al:F molar ratio, and stored at room temperature

  17. Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy

    Edwards, Lawrence G.; Haberbusch, Mark

    1993-01-01

    The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperature and pressure. Significant liquid level measurement errors were found to occur due to the changes in the fluids' dielectric constants that develop over the operating temperature and pressure ranges of cryogenic storage tanks. The level measurement inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated based on the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric-constant-corrected liquid levels agreed within 0.5 percent of the temperature-profile-estimated liquid level. The uncorrected capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 due to temperature and pressure effects on the dielectric constants over tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived. The improved accuracy obtained with the correction factors is experimentally verified by comparing liquid levels derived from fluid temperature profiles.
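
    The correction described above can be sketched from the ideal coaxial-probe model: the measured capacitance is linear in the wetted length, with a slope proportional to (eps_r - 1), so substituting a dielectric constant appropriate to the measured temperature and pressure in place of a fixed calibration value directly reduces the level error. The probe geometry and eps_r values below are illustrative assumptions, not the paper's numbers:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def liquid_level(C_measured, eps_r, L, a, b):
    """Invert the ideal coaxial-probe capacitance for liquid level h (m):
        C(h) = (2*pi*eps0 / ln(b/a)) * (eps_r * h + (L - h))
    An error in eps_r (which drifts with temperature and pressure)
    therefore maps directly into a level error."""
    k = 2 * math.pi * EPS0 / math.log(b / a)
    return (C_measured / k - L) / (eps_r - 1)

# Round trip with an assumed geometry: 1 m probe, 5 mm / 10 mm radii
L_probe, a, b = 1.0, 0.005, 0.010
k = 2 * math.pi * EPS0 / math.log(b / a)
eps_true = 1.23                 # LH2-like dielectric constant at one (T, P) point
h_true = 0.6                    # actual liquid level, m
C = k * (eps_true * h_true + (L_probe - h_true))   # capacitance the probe reads
h_uncorrected = liquid_level(C, 1.25, L_probe, a, b)  # stale calibration eps_r
h_corrected = liquid_level(C, eps_true, L_probe, a, b)
```

    Even the small eps_r mismatch in this sketch (1.25 vs. 1.23) shifts the inferred level by several percent, which is the scale of error the paper's temperature- and pressure-based correction removes.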

  18. DISSOLUTION OF 3013-DE SAMPLE 10-16

    Taylor-Pashow, K.

    2011-05-24

    The HB-Line Facility has a long-term mission to dissolve and disposition legacy fissile materials. HB-Line dissolves plutonium dioxide (PuO2) from K-Area in support of the 3013 Destructive Examination (DE) program. The PuO2-bearing solids originate from a variety of unit operations and processing facilities, but all of the material is assumed to be high-fired (i.e., calcined in air for a minimum of two hours at ≥ 750 C). The Savannah River National Laboratory (SRNL) conducted dissolution flowsheet studies on 3013 DE Sample 10-16 (can R610826), which contains weapons-grade plutonium (Pu) as the fissile material. The dissolution flowsheet study was performed for 4 hours at 108 C on unwashed material using 12 M nitric acid (HNO3) containing 0.20 M potassium fluoride (KF). After 4 hours at 108 C, the 239Pu Equivalent concentration was 32.5 g/L (gamma, 5.0% uncertainty). The insoluble residue comprised 9.88 wt % of the initial bulk weight and contained 5.31-5.95 wt % of the initial Pu. The residue contained Pu in the highest concentration, followed by tungsten (W). Analyses detected 2,770 mg/L chloride (Cl-) in the final dissolver solution (3.28 wt %), which is significantly lower than the amount of Cl- detected by prompt gamma (9.86 wt %) and the 3013 DE Surveillance program (14.7 wt %). A low bias in the chloride measurement is anticipated due to volatilization during the experiment. Gas generation studies found approximately 60 mL of gas per gram of sample produced during the first 30 minutes of dissolution. Little to no gas was produced after the first 30 minutes. Hydrogen gas (H2) was not detected in the sample. Based on detection limits and accounting for dilution, the generated gas contained < 0.12 vol % H2, which is well below the 4.0 vol % flammability limit for H2 in air. Filtration of the dissolver solution occurred readily.
When aluminum nitrate nonahydrate (ANN) was added to the filtered dissolver

  19. Accuracy of Self-Reported College GPA: Gender-Moderated Differences by Achievement Level and Academic Self-Efficacy

    Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.

    2014-01-01

    Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…

  20. Accuracy of different abutment level impression techniques in All-On-4 dental implants

    Marzieh Alikhasi

    2012-01-01

    Full Text Available Background and Aims: Passive fit of prosthetic frameworks is a major concern in implant dentistry. Impression technique is one of the several variables that may affect the outcome of dental implants. The purpose of this study was to compare the three-dimensional accuracy of direct and indirect abutment-level implant impressions of the All-On-4 treatment plan. Materials and Methods: A reference acrylic resin model with four Branemark fixtures was made according to the All-On-4 treatment plan. Multiunit abutments were screwed into the fixtures and two special trays were made for the direct and indirect impression techniques. Ten direct and ten indirect impressions were made with the respective impression transfers. Impressions were poured with stone, and the positional accuracy of the abutment analogues along the x, y, and z axes, as well as the angular displacement (Δθ), were evaluated using a Coordinate Measuring Machine (CMM). Data were analyzed using the t-test. Results: The results showed that the direct impression technique was significantly more accurate than the indirect technique (P<0.001). Conclusion: The accuracy of the direct impression technique was significantly higher than that of the indirect technique in Δθ and Δr as well as in Δx, Δy, and Δz.

  1. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering a high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator can attain a simulation speed within a factor of 35 of actual hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve a high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, which previously could be attained only by cycle-accurate models.

  2. Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing

    The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze the rotational accuracy of high-precision machine tool bearings under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air cylinder, controlled by a proportional pressure regulator, was used to apply precise, non-contact axial forces through an air bearing. Apart from axial loading and rotation, the five remaining degrees of freedom were left completely unconstrained and uninfluenced by the instrument's structure. Dual capacitance displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were obtained using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed that the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%, respectively. Future studies will analyze the relationship between geometrical errors and NRRO, such as ball diameter differences and geometrical errors in the grooves of the rings

  3. Soft mean spherical approximation for dusty plasma liquids: Level of accuracy and analytic expressions

    Tolias, P. [Space and Plasma Physics, Royal Institute of Technology, Stockholm SE-100 44 (Sweden); Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Napoli, Naples 80126 (Italy); Ratynskaia, S. [Space and Plasma Physics, Royal Institute of Technology, Stockholm SE-100 44 (Sweden); Angelis, U. de [Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Napoli, Naples 80126 (Italy)

    2015-08-15

    The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities.

  4. Soft mean spherical approximation for dusty plasma liquids: Level of accuracy and analytic expressions

    The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities

  5. Accuracy assessment of airphoto interpretation of vegetation types and disturbance levels on winter seismic trails, Arctic National Wildlife Refuge, Alaska

    US Fish and Wildlife Service, Department of the Interior — An accuracy assessment was conducted to evaluate the photointerpretation of vegetation types and disturbance levels along seismic trails in the Arctic National...

  6. Accuracy assessment of airphoto interpretation of vegetation types and disturbance levels on winter seismic trails, Arctic National Wildlife Refuge, Alaska

    US Fish and Wildlife Service, Department of the Interior — An accuracy assessment was conducted to evaluate the photo-interpretation of vegetation types and disturbance levels along seismic trails in the Arctic National...

  7. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    Thompson, Aidan P.; Schultz, Peter A.; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen M.; Tucker, Garritt J. (Drexel University)

    2014-09-01

    This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel
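
    The weighted least-squares fit at the heart of SNAP can be illustrated in miniature: treat each configuration's descriptor vector (bispectrum components in the real method) as a row of a design matrix and regress the QM energies on it, scaling rows by the square root of per-configuration weights. This is a toy sketch with random features, not the FitSnap.py implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_configs, n_desc = 200, 5

B = rng.normal(size=(n_configs, n_desc))      # descriptors per configuration
beta_true = np.array([1.0, -2.0, 0.5, 3.0, -1.5])
E = B @ beta_true                              # stand-in "QM" training energies
w = rng.uniform(0.5, 2.0, size=n_configs)      # per-configuration fit weights

# Weighted least squares: minimize sum_i w_i * (E_i - B_i . beta)^2.
# Scaling rows by sqrt(w_i) reduces it to an ordinary lstsq problem.
sqw = np.sqrt(w)
beta_fit, *_ = np.linalg.lstsq(B * sqw[:, None], E * sqw, rcond=None)
```

    Because the toy energies are noiseless, the fit recovers beta_true exactly; with real QM data the weights trade off accuracy across configuration types (energies vs. forces, solids vs. liquids).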

  8. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    Baker, J.E.

    1994-09-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques attempt to resolve some of these ambiguities by appropriately coupling complementary images to eliminate possible inverse mappings. What constitutes the best MSI technique is dependent on the given application domain, available sensors, and task requirements. MSI techniques can be divided into three categories based on the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail.

  9. Accuracy Assessments of ATMS Upper-Level Temperature Sounding Channels Using COSMIC RO Data

    Lin, L.; Weng, F.; Zou, X.

    2012-12-01

    The Advanced Technology Microwave Sounder (ATMS) on board the Suomi National Polar-orbiting Partnership (NPP) satellite is a 22-channel passive microwave radiometer that can provide high-spatial-resolution data for generating temperature and moisture soundings in cloudy conditions. Global Positioning System (GPS) radio occultation (RO) data have high vertical resolution, are not affected by clouds, and are most accurate from 8 to 30 km, making them ideally suited for estimating the precision of ATMS measurements for upper-level temperature sounding channels. In this study, Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO data are collocated with ATMS observations from December 10, 2011 to June 30, 2012. Compared with GPS simulations using the U.S. Joint Center for Satellite Data Assimilation (JCSDA) Community Radiative Transfer Model (CRTM), the global biases of brightness temperatures from ATMS measurements are within 0.5 K for channels 6 to 13 for clear-sky data over ocean. This value is well within the pre-launch specification, indicating that the ATMS upper-level temperature sounding channels have high accuracy. The monthly variation and angular dependence of ATMS bias are also examined.
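The bias statistic described here is simply the mean observed-minus-simulated brightness temperature difference over the collocations; a minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical collocated brightness temperatures (K) for one channel:
# observed ATMS values and CRTM-simulated values from COSMIC RO profiles.
obs = np.array([220.1, 221.4, 219.8, 220.9, 221.0])
sim = np.array([219.9, 221.2, 220.1, 220.6, 220.8])

omb = obs - sim                 # observation-minus-simulation differences
bias = omb.mean()               # global bias for the channel
std = omb.std(ddof=1)           # scatter of the differences
within_spec = abs(bias) < 0.5   # the 0.5 K pre-launch specification
```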

  10. Screening accuracy of Level 2 autism spectrum disorder rating scales. A review of selected instruments.

    Norris, Megan; Lecavalier, Luc

    2010-07-01

    The goal of this review was to examine the state of Level 2, caregiver-completed rating scales for the screening of Autism Spectrum Disorders (ASDs) in individuals above the age of three years. We focused on screening accuracy and paid particular attention to comparison groups. Inclusion criteria required that scales be developed post ICD-10, be ASD-specific, and have published evidence of diagnostic validity in peer-reviewed journals. The five scales reviewed were: the Social Communication Questionnaire (SCQ), Gilliam Autism Rating Scale/Gilliam Autism Rating Scale-Second Edition (GARS/GARS-2), Social Responsiveness Scale (SRS), Autism Spectrum Screening Questionnaire (ASSQ), and Asperger Syndrome Diagnostic Scale (ASDS). Twenty total studies were located, most examining the SCQ. Research on the other scales was limited. Comparisons between scales were few and available evidence of diagnostic validity is scarce for certain subpopulations (e.g., lower functioning individuals, PDDNOS). Overall, the SCQ performed well, the SRS and ASSQ showed promise, and the GARS/GARS-2 and ASDS demonstrated poor sensitivity. This review indicates that Level 2 ASD caregiver-completed rating scales are in need of much more scientific scrutiny. PMID:20591956

  11. Education Level Predicts Retrospective Metamemory Accuracy in Healthy Aging and Alzheimer’s Disease

    Szajer, Jacquelyn; Murphy, Claire

    2013-01-01

    The current study investigated the effect of education on retrospective metamemory accuracy in 143 healthy older adults and 143 early to moderate AD patients, using retrospective measures of confidence in the accuracy of retrieval responses in an episodic odor recognition memory task. Relative confidence accuracy was computed as the difference between confidence judgments for correct and incorrect responses. In both AD patients and controls, individuals reporting 17 years of education or more...

  12. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on the relative information content of the original images with that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But, many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel rather than global-level features

  13. Experimental Investigation of Liquid-Level Measuring Accuracy in a Low Pressure Environment

    Dip Tubes, which are used for determining liquid level in many processes at SRS, will be used to measure the liquid level of the Am/Cm solution in the Feed Tank at the MPPF. The Feed Tank operates under a vacuum; therefore the Dip Tubes will operate under a vacuum. Uncertainty in how accurately the Dip Tubes would perform in a vacuum environment led to testing. The Am/Cm Melter Liquid-Feed Tank measurement test was mocked-up per Figure 1. The Feed Tank was designed to simulate the actual conditions in which the Dip Tubes would measure the differential pressure. The Feed Tank was made of stainless steel with a Lexan window to view inside the tank during testing. The Feed Tank was built per Drawing SRT-ETF-DD-96008, Revision A. The accuracy of the Dip Tubes was checked first by filling the Feed Tank at a flow rate of 3.5 L/min and venting it to the atmosphere. Figure 2 shows that the Dip Tubes were responsive and accurate when compared to the data from the measuring scale on the view window. Tests were then conducted with 23" Hg vacuum inside the tank and water flow rates of 3.9 L/min, 1.8 L/min, and 0.7 L/min being fed to the tank. The data from each test are depicted in Figure 3, Figure 4, and Figure 5, respectively. The Dip Tubes responded accurately for the three tests, with a maximum error range of +0.31" to -0.19" when compared to the measuring scale located next to the view window on the Feed Tank.
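Dip tubes infer level from hydrostatic differential pressure; a minimal sketch of the underlying conversion (illustrative constants, not the test's calibration):

```python
# A dip-tube (bubbler) infers liquid level from differential pressure:
# level = dP / (rho * g). Values below are illustrative, not from the test.
RHO_WATER = 998.0   # kg/m^3 at roughly 20 C
G = 9.81            # m/s^2

def level_from_dp(dp_pa, rho=RHO_WATER, g=G):
    """Liquid level (m) inferred from differential pressure (Pa)."""
    return dp_pa / (rho * g)

# 1 kPa of differential pressure corresponds to roughly 10 cm of water.
h = level_from_dp(1000.0)
```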

  14. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh;

    2013-01-01

    Bluetooth sensors have a large detection zone compared to other static Vehicle Re-Identification Systems (VRIS). Although a larger detection zone increases the probability of detecting a Bluetooth-enabled device in a fast-moving vehicle, it increases the probability of multiple detection events... triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events...

  15. Accuracy of low-level tritium measurements in water samples by liquid scintillation method

    Quenching is always present in water samples and the degree of quenching can vary from one sample to another, even within the same batch. This means that quench correction should be carried out for each sample in order to determine the activity, so that comparisons can be made between samples and other batches. A comparative study of tritium measurements between two methods used to correct for quenching is presented in this paper. The methods used to determine counting efficiency in the presence of quenching are as follows: the Spectral Quench Parameter of the External standard method (SQP(E)) and the Internal Standard Method (ISM). In this work, a low-background liquid scintillation detector (Quantulus 1220) is used to determine the tritium activity concentration in heavy water with concentrations ranging from 99.66 D/(D+H)% to 1.65 D/(D+H)%. The standard calibration curve for the SQP(E) technique was established with a 3H low-level quenched PACKARD standard set that had an assayed value of 29240 dpm/std ± 1.6%. Quench correction for the Internal Standard Method was made for each heavy water sample with a Tritiated Water Internal Standard that had a tritium concentration of 2.51 x 10^6 dpm/g ± 3.0%. A comparison between dilution factors calculated from the D/(D+H)% concentration and dilution factors calculated from the tritium activity measured by the two methods is presented in the paper. The Internal Standard Method provides accurate results, especially for lower D/(D+H)% concentrations, which are similar to environmental samples. Commercial standard sets do not fulfill the requirements of an accurate environmental tritium measurement. One must also take into account the following circumstances: type of vial, type of scintillation cocktail, filled volume and counting geometry. Even if one can make one's own standard set for quenching calibration, one must also cope with another problem. The Compton electrons produced by the external standard act as energetic beta particles in the sample itself. Hence a
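The Internal Standard Method amounts to a per-sample efficiency calibration; a hedged sketch with made-up count rates (the helper names `ism_efficiency` and `activity_dpm` are illustrative, not from the paper):

```python
# Internal Standard Method (ISM) sketch: counting efficiency is obtained
# by re-counting the sample after spiking it with a known tritium activity.
# All numbers are illustrative, not taken from the study.
def ism_efficiency(cpm_spiked, cpm_sample, added_dpm):
    """Counting efficiency = net gain in count rate / added activity."""
    return (cpm_spiked - cpm_sample) / added_dpm

def activity_dpm(cpm_sample, cpm_background, efficiency):
    """Sample activity (dpm) corrected for background and efficiency."""
    return (cpm_sample - cpm_background) / efficiency

eff = ism_efficiency(cpm_spiked=650.0, cpm_sample=50.0, added_dpm=2400.0)
act = activity_dpm(cpm_sample=50.0, cpm_background=2.0, efficiency=eff)
```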

  16. The First Comprehensive Accuracy Assessment of GlobeLand30 at a National Level: Methodology and Results

    Maria Antonia Brovelli

    2015-04-01

    As a result of the "Global Land Cover Mapping at Finer Resolution" project led by the National Geomatics Center of China (NGCC), one of the first global land cover datasets at 30-meter resolution (GlobeLand30) has been produced for the years 2000 and 2010. The first comprehensive accuracy assessment of these data at a national level (excluding some comparisons in China) has been performed on the Italian area by means of a benchmarking against the more detailed land cover datasets available for some Italian regions. The accuracy evaluation was based on the cell-by-cell comparison between Italian maps and GlobeLand30 in order to obtain the confusion matrix and its derived statistics (overall accuracy, allocation and quantity disagreements, user and producer accuracy), which help to understand the classification quality. This paper illustrates the adopted methodology and procedures for assessing GlobeLand30 and reports the obtained statistics. The analysis has been performed in eight regions across Italy and shows very good results: the comparison of the datasets according to the first level of the Corine Land Cover nomenclature highlights overall accuracy values generally higher than 80%.
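The cell-by-cell comparison reduces to a confusion matrix and its derived statistics; a toy sketch (hypothetical class labels, not the Italian data):

```python
import numpy as np

# Cell-by-cell comparison sketch: confusion matrix between a reference
# land cover map and GlobeLand30-style classes (toy first-level labels).
ref  = np.array([1, 1, 2, 2, 3, 3, 1, 2])   # reference map cells
test = np.array([1, 1, 2, 3, 3, 3, 1, 2])   # evaluated map cells

classes = np.unique(np.concatenate([ref, test]))
cm = np.zeros((classes.size, classes.size), dtype=int)
for r, t in zip(ref, test):
    cm[np.searchsorted(classes, r), np.searchsorted(classes, t)] += 1

overall_accuracy = np.trace(cm) / cm.sum()   # fraction of agreeing cells
producer_acc = np.diag(cm) / cm.sum(axis=1)  # per reference class
user_acc = np.diag(cm) / cm.sum(axis=0)      # per mapped class
```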

  17. On the Accuracy of Iranian EFL Students' Reading Self-assessment and their Level of Reading Proficiency

    Moein Shokr

    2015-01-01

    Reviewing the literature on self-assessment as an alternative method of assessment we find advocates claiming for the accuracy of the students’ self-assessments in general with little focus on their level of proficiency. With an eye on the students’ level of reading proficiency, the present study aimed at investigating the relationship between students’ reading self-assessment (as a formative and alternative method of assessment) on the one hand, and teacher assessment (as a formative type of...

  18. Accuracy of student performance while reading leveled books rated at their instructional level by a reading inventory.

    Burns, Matthew K; Pulles, Sandra M; Maki, Kathrin E; Kanive, Rebecca; Hodgson, Jennifer; Helman, Lori A; McComas, Jennifer J; Preast, June L

    2015-12-01

    Identifying a student's instructional level is necessary to ensure that students are appropriately challenged in reading. Informal reading inventories (IRIs) purport to assess the highest reading level at which a student can accurately decode and comprehend text. However, the use of IRIs in determining a student's instructional level has been questioned because of a lack of research. The current study examined the percentage of words read correctly with 64 second- and third-grade students while reading from texts at their instructional level as determined by an IRI. Students read for 1 min from three leveled texts that corresponded to their instructional level as measured by an IRI, and the percentage of words read correctly was recorded. The percentage read correctly correlated across the three books from r=.47 to r=.68 and instructional level categories correlated from tau=.59 to tau=.65. Percent agreement calculations showed that the categorical scores (frustration, instructional, and independent) for the three readings agreed approximately 67% to 70% of the time, which resulted in a kappa estimate of less than .50. Kappa coefficients of .70 are considered strong indicators of agreement. Moreover, more than half of the students with the lowest reading skills read at a frustration level when attempting to read books rated at their instructional level by an IRI. The current study questions how reliably and accurately IRIs identify students' instructional level for reading. PMID:26563597
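The agreement statistics used here (percent agreement and kappa) can be computed directly; a small sketch with hypothetical ratings (frustration=0, instructional=1, independent=2):

```python
import numpy as np

def cohens_kappa(a, b, k=3):
    """Cohen's kappa for two categorical ratings over k categories."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                   # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in range(k))
    return (po - pe) / (1 - pe)

# Hypothetical categorical scores for the same students on two books.
book1 = [1, 1, 0, 2, 1, 0, 1, 1]
book2 = [1, 0, 0, 2, 1, 1, 1, 1]
agreement = np.mean(np.asarray(book1) == np.asarray(book2))
kappa = cohens_kappa(book1, book2)
```

Kappa discounts the agreement expected by chance, which is why it can sit below 0.50 even when raw agreement is near 70%, as reported above.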

  19. Blood CEA levels for detecting recurrent colorectal cancer: A Diagnostic Test Accuracy Review.

    Nicholson, BD; Shinkins, B.; Pathiraja, I; Roberts, NW; James, T; Mallett, S.; Perera, R; Primrose, JN; Mant, D

    2015-01-01

    Background Testing for carcino-embryonic antigen (CEA) in the blood is a recommended part of follow-up to detect recurrence of colorectal cancer following primary curative treatment. There is substantial clinical variation in the cut-off level applied to trigger further investigation. Objectives To determine the diagnostic performance of different blood CEA levels in identifying people with colorectal cancer recurrence in order to inform clinical practice. Search methods W...

  20. High-level politically connected firms, corruption, and analyst forecast accuracy around the world

    Charles JP Chen; Yuan Ding; Chansog (Francis) Kim

    2010-01-01

    The international business (IB) literature has widely recognized political forces as major factors that complicate the strategic decisions of multinational enterprises (MNEs). Analyses by financial intermediaries can help to reduce the risk of information asymmetry caused by such factors. Using firm-level data from 17 jurisdictions between 1997 and 2001, this study investigates the association between a firm's high-level political connections and earnings forecasts made by financial analysts,...

  1. 46 CFR 31.10-16 - Inspection and certification of cargo gear-TB/ALL.

    2010-10-01

    § 31.10-16 Inspection and certification of cargo gear—TB/ALL. (a) The owner, operator... treatment of chains, rings, hooks, shackles, and swivels which require such treatment; and,...

  2. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Guanyi Sun; Shengnan Xu; Xu Wang; Dawei Wang; Eugene Tang; Yangdong Deng; Sun Chan

    2011-01-01

    Today's System-on-Chips (SoCs) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simu...

  3. Determination of the accuracy level for the elements Co, Ni, Cr, Cu in SRM using the multi-element method by AAS

    In nature, every element is associated with other elements, forming minerals in rocks. Minerals are formed by elements with similar characteristics that tend to occur together. In the analysis of rock samples, matrix elements (other dominant elements with similar characteristics) can influence the result of the analysis and cause deviations. The multi-element method is the standard approach for overcoming deviations caused by matrix-element influences. It is one of the standard methods used in the Sub Division of Geochemical Exploration, although its results had never been tested before. This research aims to test the accuracy level of the multi-element method for the elements Co, Ni, Cr, and Cu in six SRM samples by AAS. The analysis by Atomic Absorption Spectroscopy gave accuracy levels of 89.98%-92.93% for Co, 90.13%-92.82% for Ni, 89.71%-95.38% for Cr, and 90.09%-92.82% for Cu, with an average deviation of <10%. Student's t-tests of the results showed no significant difference from the SRM for Co, Ni, and Cr at the 95% confidence level, while Cu showed a significant difference from the SRM at the 99% confidence level. The low deviations prove that the materials, equipment, and human resources are in good condition and fit for use in element analysis. (author)
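The accuracy (recovery) figures and the Student t comparison against a certified value follow a standard recipe; a sketch with invented replicate data, not the paper's measurements:

```python
import math

# Recovery check of replicate measurements against a certified SRM value,
# plus a one-sample Student t statistic. All numbers are illustrative.
measured = [91.0, 92.5, 90.8, 92.0, 91.5]   # measured concentration, ppm
certified = 92.0                            # SRM certified value, ppm

n = len(measured)
mean = sum(measured) / n
recovery_pct = 100.0 * mean / certified
s = math.sqrt(sum((x - mean) ** 2 for x in measured) / (n - 1))
t_stat = (mean - certified) / (s / math.sqrt(n))
# |t_stat| is compared against t(alpha, n-1) to test for significant bias.
```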

  4. The Effect of Coded and Uncoded Written Corrective Feedback on the Accuracy of Learners' Writing at the Pre-intermediate Level

    Asghar Salimi

    2015-05-01

    To date, conflict exists in the literature on whether or not, and how, teachers should react to EFL learners' written grammar errors. Coded versus uncoded corrective feedback is one of the rarely explored areas of investigation in SLA. To shed light on the factors that may explain such conflicting results, this study investigated the effect of coded and uncoded written corrective feedback on possible improvements in the writing accuracy of pre-intermediate EFL learners. It further sought whether such an effect would last in the long run. Over the course of 14 weeks, learners' errors in two groups (i.e., coded and uncoded) received feedback. A paired-samples t-test was run to analyze the obtained data. Analysis of the written pieces in the immediate post-test and delayed post-test revealed that coded corrective feedback, compared to the uncoded group, had a significantly more positive influence on learners' accuracy improvement both in the short and in the long run. The findings imply that teachers should weigh the learners' abilities and interlanguage, proficiency level, and type of error before applying different feedback types. Moreover, the implications are discussed in terms of effective guidelines for teaching writing in EFL contexts. Keywords: written feedback, accuracy, EFL context

  5. Geometric Accuracy Investigations of SEVIRI High Resolution Visible (HRV) Level 1.5 Imagery

    Sultan Kocaman Aksakal

    2013-05-01

    GCOS (Global Climate Observing System) is a long-term program for monitoring the climate, detecting changes, and assessing their impacts. Remote sensing techniques are being increasingly used for climate-related measurements. Imagery of the SEVIRI instrument on board the European geostationary satellites Meteosat-8 and Meteosat-9 is often used for the estimation of essential climate variables. In a joint project between the Swiss GCOS Office and ETH Zurich, the geometric accuracy and temporal stability of the 1-km resolution HRV channel imagery of SEVIRI have been evaluated over Switzerland. A set of tools and algorithms has been developed for the investigations. Statistical analysis and blunder detection have been integrated into the process for robust evaluation. The relative accuracy is evaluated by tracking large numbers of feature points in consecutive HRV images taken at 15-minute intervals. For the absolute accuracy evaluation, lakes in Switzerland and its surroundings are used as reference. 20 lakes digitized from Landsat orthophotos are transformed into the HRV images and matched via 2D translation terms at sub-pixel level. The algorithms are tested using HRV images taken on 24 days in 2008 (2 days per month). The results show that 2D shifts of up to 8 pixels are present in both relative and absolute terms.
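The lake-matching step estimates a 2D translation between a reference mask and the image; an integer-pixel toy version of such matching (the study refines this to sub-pixel level):

```python
import numpy as np

# Toy sketch of estimating a 2D translation by exhaustive correlation,
# in the spirit of matching digitized lake masks to HRV imagery.
def best_shift(ref, img, max_shift=2):
    """Integer (dy, dx) shift of img that best matches ref."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            score = (ref * shifted).sum()
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

ref = np.zeros((8, 8))
ref[3:5, 3:5] = 1.0                                  # toy lake mask
img = np.roll(np.roll(ref, -1, axis=0), 2, axis=1)   # shifted copy
shift = best_shift(ref, img)                         # recovers (1, -2)
```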

  6. On the Accuracy of Iranian EFL Students' Reading Self-assessment and their Level of Reading Proficiency

    Moein Shokr

    2015-10-01

    Reviewing the literature on self-assessment as an alternative method of assessment, we find advocates claiming accuracy for students' self-assessments in general, with little focus on their level of proficiency. With an eye on the students' level of reading proficiency, the present study aimed at investigating the relationship between students' reading self-assessment (as a formative and alternative method of assessment) on the one hand, and teacher assessment (as a formative type of assessment) as well as students' final examination scores (as a summative and traditional method of assessment) on the other. To this end, 65 students of Islamic Azad University - Tehran South Branch were selected to participate in this study. Initially, participants received the PET test as a pretest for assigning them to different levels of reading proficiency. Based upon the results of the pretest, participants were assigned to elementary and intermediate levels. Throughout the whole semester the self-assessment questionnaire was employed five times. Descriptive statistics and Pearson correlation were the data analysis techniques performed. The results of the study revealed a significant relationship between the intermediate learners' self-ratings and teacher assessments; however, the results indicated no significant relationship between elementary learners' self-assessments and teacher assessments. Also, the correlations between students' self-assessments and their final examination scores were not significant for either level. Therefore, given the teacher assessment as the yardstick, the accuracy of the intermediate learners' and the inaccuracy of the elementary learners' self-assessments could be concluded. Finally, the low correlation between the learners' self-assessments and their scores on the traditional final examination led the researcher to attribute it to the different nature of these two assessment types.

  7. Accuracy of a 2-level scheme based on a subgroup method for pressurized water reactor fuel assembly models

    Highlights: • A 2-level computational scheme is developed and implemented in the DRAGON5 lattice code. • The first level uses a self-shielding method based on the Subgroup Projection Method with 295 energy groups. • A SALOME-generated geometry is used for the second level. • The neutron flux of the second level is obtained using the Method of Characteristics with 26 energy groups. • Zero-burnup and depletion-dependent validation is made with respect to the Monte Carlo code SERPENT2. - Abstract: Until now, a typical computational scheme for the DRAGON5 lattice code was based on a resonance self-shielding method using the Subgroup Projection Method (SPM) coupled with a flux calculation using the Method of Characteristics (MOC), both solved over a 295-group Santamarina–Hfaiedh energy mesh (SHEM). We are investigating the accuracy of an optimized 2-level computational scheme based on a condensation stage from 295 to 26 energy groups. A first-level flux calculation is performed using the Interface Current (IC) method on the 295-group mesh, followed by a detailed second-level flux calculation using the MOC on the 26-group mesh. Here, we validate the 2-level scheme by comparison with the 1-level scheme and with Monte Carlo calculations at burnup 0 and with isotopic depletion. Validation results were obtained using the Monte Carlo codes SERPENT2 and TRIPOLI4. This study shows that an optimized 2-level scheme is much faster than the corresponding 1-level scheme and leads to numerical results without a significant degradation in terms of precision. The proposed 2-level schemes are therefore candidates for CPU-efficient production tools for generating multi-parameter reactor databases.
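The condensation stage collapses fine-group cross sections onto the coarse mesh with flux weighting, Sigma_G = sum_g(Sigma_g * phi_g) / sum_g(phi_g); a toy 6-group to 2-group sketch with invented numbers:

```python
import numpy as np

# Energy condensation sketch: flux-weighted collapse of fine-group
# cross sections (e.g., 295 groups) to a coarse mesh (e.g., 26 groups).
sigma = np.array([1.0, 1.2, 1.1, 5.0, 6.0, 5.5])   # fine-group XS (1/cm)
phi   = np.array([2.0, 1.0, 1.0, 0.5, 0.3, 0.2])   # fine-group flux
coarse = [slice(0, 3), slice(3, 6)]                # condensation map

sigma_c = np.array([(sigma[s] * phi[s]).sum() / phi[s].sum()
                    for s in coarse])
```

Flux weighting preserves the reaction rate in each coarse group, which is the quantity the second-level MOC calculation needs to reproduce.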

  8. Investigation of factors influencing the accuracy of uranium enrichment level determination by the multi-group analysis method (MGAU)

    Statistical characteristics of the multi-group analysis technique for determining the isotopic composition of uranium samples, realised in the form of the portable 'U-Pu InSpector' spectrometry system and the MGAU V.1.0 code, have been studied with the help of uranium isotopic standard reference materials in the range 0.32 - 4.5 % of 235U and 0.004 - 0.036 % of 234U concentration. The influence of a number of factors that cause systematic bias of the measured values was also studied. The obtained results reveal a dependence of the measured 235U enrichment level on the isotopic composition of the uranium sample and the geometry of the measurement. The systematic underestimation of the measured 234U content turned out to be about 23 % over this concentration range. Possible sources of the revealed systematic biases are discussed, and some recommendations for improving the MGAU code are given.

  9. Estimation of background noise level on seismic station using statistical analysis for improved analysis accuracy

    Han, S. M.; Hahm, I.

    2015-12-01

    We evaluated the background noise level of seismic stations in order to collect high-quality observation data and produce accurate seismic information. The background noise level was determined using the PSD (Power Spectral Density) method of McNamara and Buland (2004). This method, which uses long-term data, is influenced not only by the innate electronic noise of the sensor and the pulse waves that arise while it stabilizes, but also by missing data, and it is controlled by the specified frequency, which is affected by irregular signals unrelated to site characteristics. It is hard and inefficient to implement a process that filters out such abnormal signals within an automated system. To solve these problems, we devised a method for extracting the data that are normally distributed within 90 to 99% confidence intervals at each period. The availability of the method was verified using 62 seismic stations with broadband and short-period sensors operated by the KMA (Korea Meteorological Administration). The evaluation standards were the NHNM (New High Noise Model) and NLNM (New Low Noise Model) published by the USGS (United States Geological Survey). These models were designed based on the western United States. However, the Korean Peninsula, surrounded by the ocean on three sides, has a complicated geological structure and a high population density. We therefore re-designed an appropriate model for the Korean Peninsula from the statistically combined results. The important feature is that the secondary-microseism peak appears at a higher frequency band. Acknowledgements: This research was carried out as a part of "Research for the Meteorological and Earthquake Observation Technology and Its Application" supported by the 2015 National Institute of Meteorological Research (NIMR) in the Korea Meteorological Administration.
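The screening idea, keeping only PSD values inside a central confidence band at each period before estimating the noise level, can be sketched as follows (synthetic PSD samples, not KMA data):

```python
import numpy as np

# At each period, keep only PSD values inside a central confidence band
# (here a 99% band) before computing the station noise level.
rng = np.random.default_rng(0)
psd = np.concatenate([rng.normal(-140, 3, 500),   # normal background (dB)
                      rng.normal(-100, 5, 20)])   # abnormal pulses/gaps

lo, hi = np.percentile(psd, [0.5, 99.5])          # 99% central band
kept = psd[(psd >= lo) & (psd <= hi)]
noise_level = np.median(kept)                     # robust level estimate
```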

  10. Primordial black holes with mass $10^{16}-10^{17}$ g and reionization of the Universe

    Belotsky, K M

    2014-01-01

    Primordial black holes (PBHs) with mass $10^{16}-10^{17}$ g almost escape constraints from observations and so could contribute essentially to the dark matter density. Hawking evaporation of such PBHs produces $\gamma$- and $e^{\pm}$-radiation in the MeV energy range at a steady rate, which can be absorbed by ordinary matter. Simplified estimates show that a small fraction of the evaporated energy had to be absorbed by baryonic matter, which can turn out to be enough to heat the matter so that it is fully ionized at redshift $z\sim 5\ldots 10$. The result is found to be close to a borderline case where the effect appears, which makes it sensitive to the approximation used. In our approximation, the degree of gas ionization reaches 50-100% by $z\sim 5$ for a PBH mass of $(3\ldots7)\times 10^{16}$ g with an abundance corresponding to the upper limit.
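The MeV energy range quoted above follows from the Hawking temperature $T = \hbar c^3/(8\pi G M k_B)$; a quick numerical check for the low end of the mass window:

```python
import math

# Hawking temperature T = hbar c^3 / (8 pi G M k_B), evaluated for
# masses in the 1e16-1e17 g window discussed above (CODATA constants).
HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
KB = 1.380649e-23        # J/K
EV = 1.602176634e-19     # J per eV

def hawking_T_keV(mass_g):
    """Hawking temperature in keV for a black hole of the given mass."""
    m_kg = mass_g * 1e-3
    t_kelvin = HBAR * C**3 / (8 * math.pi * G * m_kg * KB)
    return KB * t_kelvin / EV / 1e3

t = hawking_T_keV(1e16)   # roughly MeV-scale quanta at the low-mass end
```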

  11. Model-based correction algorithms improving the accuracy of hydrostatic level measurement on pressure vessels

    Precise process information is important for an optimised assessment of the plant process conditions. This information has high priority, particularly for emergency operation and post-accident management. The rapid and large transients that then arise are hard for the measuring devices in use to master. Spurious indications can occur during accidents, caused by a departure from design conditions, by specific process transients, or by damage to the measuring instrument itself. Furthermore, it would be desirable to obtain additional, not directly measurable state variables in such situations. To solve these problems, modern methods and procedures of process identification, parameter identification and plausibility analysis, comprising correction algorithms, become more and more important. These modern methods are used to solve the following problems: - diagnosis of the process state on the basis of combinations of measured variables, analytical redundancy and linguistic declarations; - reconstruction of variables and parameters that are not directly measurable; - detection and identification of process faults and instrumentation faults (diagnosis); - reconfiguration of measuring signals (correction). The reconstruction of the process state is thus a combination of measured quantities, reconstructed state variables and analytical redundancy using model-based measuring methods. The use of model-based measuring methods has been investigated on the example of hydrostatic level measurement on horizontal steam generators. The results of experiments on pilot plants, as well as comparisons with calculations by qualified programs such as ATHLET and with methods of parameter identification, serve as verification of the methods and algorithms that were developed. This paper describes the main facts of this work.
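The detection-and-reconfiguration logic based on analytical redundancy can be illustrated with a minimal residual check (hypothetical threshold and readings, not the steam-generator model):

```python
# Analytical-redundancy sketch: compare a measured signal against a
# model-based estimate; reconfigure to the estimate when the residual
# exceeds a threshold. All values are illustrative.
def monitor(measured, estimated, threshold=0.05):
    """Return (fault_detected, reconfigured_value) for one sample."""
    residual = abs(measured - estimated)
    fault = residual > threshold
    return fault, (estimated if fault else measured)

fault, value = monitor(measured=1.30, estimated=1.02)    # spurious reading
fault2, value2 = monitor(measured=1.03, estimated=1.02)  # plausible reading
```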

  12. Evaluation of the Diagnostic Accuracy of Serum D-Dimer Levels in Pregnant Women with Adnexal Torsion

    Hasan Onur Topçu

    2015-01-01

    We aimed to evaluate the diagnostic accuracy of serum D-dimer levels in pregnant women with adnexal torsion (AT). Pregnant women with ovarian cysts who suffered from pelvic pain were divided into two groups: the first group consisted of cases surgically proven to be AT (n = 17), and the second group of cases whose pain resolved during the follow-up period without surgery being required (n = 34). The clinical characteristics and serum D-dimer levels were compared between the groups. Patients with AT had a higher rate of elevated serum white blood cell (WBC) count (57% vs. 16%, p = 0.04) and elevated serum D-dimer levels (77% vs. 21%, p < 0.01) on admission. Elevated D-dimer and a cyst diameter larger than 5 cm yielded the highest sensitivity (82% each), whereas the presence of nausea and vomiting and elevated CRP had the highest specificity (85% and 88%, respectively). This is the first study to evaluate serum D-dimer levels in humans for the diagnosis of AT, and our findings support the use of D-dimer for the early diagnosis of AT in pregnant women.
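
    The reported accuracy figures derive from a standard 2x2 confusion table; a minimal sketch (hypothetical counts, chosen so that sensitivity is 14/17 ≈ 82%):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 table of
        test result vs. surgically confirmed diagnosis."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical counts: 17 torsion cases, 34 controls
    m = diagnostic_metrics(tp=14, fp=7, fn=3, tn=27)
    print(round(m["sensitivity"], 2))  # 0.82
    ```
    
    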

  13. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582
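
    The B-spline machinery underlying the level-set representation can be illustrated with the Cox-de Boor recursion (a minimal sketch, not the authors' implementation):

    ```python
    def bspline_basis(i, k, t, x):
        """Cox-de Boor recursion: value of the i-th B-spline basis function
        of degree k over knot vector t, evaluated at x."""
        if k == 0:
            return 1.0 if t[i] <= x < t[i + 1] else 0.0
        left = 0.0
        if t[i + k] != t[i]:
            left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
        right = 0.0
        if t[i + k + 1] != t[i + 1]:
            right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                     * bspline_basis(i + 1, k - 1, t, x))
        return left + right

    # A filament curve is a linear combination of such bases with control
    # coefficients; inside the valid span they form a partition of unity:
    t = [0, 1, 2, 3, 4, 5, 6]
    total = sum(bspline_basis(i, 2, t, 2.5) for i in range(4))
    print(round(total, 6))  # 1.0
    ```

    The sub-pixel accuracy discussed above comes from optimizing continuous spline coefficients rather than per-pixel labels.
    
    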

  14. Context, accuracy, and level of inclusion of nature of science concepts in current high school physics textbooks

    Alshamrani, Saeed Mohammed

    To improve K-12 students' images of the nature of science (NOS) through science textbooks, two issues must be addressed: (a) the level of NOS that ought to be included in science textbooks and (b) the treatment of this level in those textbooks. Science educators have reached a consensus regarding which NOS aspects should be taught to K-12 science learners; however, more clarification is needed regarding the actual treatment of NOS in science textbooks. The purpose of this study is to investigate NOS inclusion in high school physics textbooks. Specifically, this study examines the NOS aspects included, the frequency of NOS inclusion, the contexts in which NOS is included, and the accuracy of NOS inclusion. This study utilized 12 science education studies to develop the Master Aspects of Nature of Science [MA-NOS], which comprises 12 NOS aspects that ought to be included in the K-12 science curriculum. The textbooks analyzed in this study are seven textbooks identified by The American Institute of Physics as the most widely used high school physics textbooks in the United States in 2005. These textbooks were used in teaching five academic levels: (a) Regular First-Year Physics, (b) Physics for Non-Science Students, (c) Honors Physics, (d) AP-B Physics, and (e) AP-C Physics. The researcher selected physics textbooks exclusively because physics is his main interest. To facilitate the content analysis of the selected textbooks, the study developed The Collection Data Coding Guide, which includes six parts describing the MA-NOS aspects and the process of identifying and collecting data. For each NOS aspect, a description and one or more selected ideal indicators were provided to facilitate data collection and the judging of the accuracy of NOS inclusion. This coding guide was reviewed for content validity by two science educators who specialize in NOS. In addition, two types of reliability checks were conducted to assess the consistency of selecting NOS units.

  15. The energy-spectrum of light primaries in the range from 10^{16.6} to 10^{18.2} eV

    Schoo, S; Arteaga-Velazquez, J C; Bekk, K; Bertaina, M; Bluemer, J; Bozdog, H; Brancus, I M; Cantoni, E; Chiavassa, A; Cossavella, F; Curcio, C; Daumiller, K; de Souza, V; Di Pierro, F; Doll, P; Engel, R; Engler, J; Fuchs, B; Fuhrmann, D; Gils, H J; Glasstetter, R; Grupen, C; Haungs, A; Heck, D; Hoerandel, J R; Huber, D; Huege, T; Kampert, K -H; Kang, D; Klages, H O; Link, K; Luczak, P; Ludwig, M; Mathes, H J; Mayer, H J; Melissas, M; Milke, J; Mitrica, B; Morello, C; Oehlschlaeger, J; Ostapchenko, S; Palmieri, N; Petcu, M; Pierog, T; Rebel, H; Roth, M; Schieler, H; Schroeder, F G; Sima, O; Toma, G; Trinchero, G C; Ulrich, H; Weindl, A; Wochele, D; Wochele, J

    2013-01-01

    Data from the Grande extension of the KASCADE experiment allow us to study extensive air showers induced by primary cosmic rays with energies above 10^{16} eV. The energy of an event is estimated in terms of the number of charged particles (Nch) and the number of muons (Nμ) measured at an altitude of 110 m a.s.l. While a combination of the two numbers is used for the energy, their ratio defines the primary mass (group). The spectrum of the combined light and medium-mass components, recently measured with KASCADE-Grande, was found to be compatible with both a single power law and a broken power law in the energy range between 10^{16.3} and 10^{18} eV. In this contribution we present the investigation of possible structures in the spectrum of light primaries with increased statistics, both from a larger data set including more recent measurements and by using a larger fiducial area than in the previous study, together with better statistical accuracy and optimized selection criteria for enhancing light primaries.
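
    A broken power law of the kind fitted above can be written as a continuous piecewise function; a sketch with purely illustrative parameters:

    ```python
    def broken_power_law(E, phi0, E_b, gamma1, gamma2):
        """Differential flux with spectral index gamma1 below the break
        energy E_b and gamma2 above it; continuous at E = E_b."""
        if E < E_b:
            return phi0 * (E / E_b) ** (-gamma1)
        return phi0 * (E / E_b) ** (-gamma2)

    # Illustrative values only (not the KASCADE-Grande fit results):
    print(round(broken_power_law(1e16, 1.0, 1e17, 3.0, 3.3), 3))  # 1000.0
    ```

    A single power law is recovered when gamma1 = gamma2, which is why distinguishing the two hypotheses requires the improved statistics discussed above.
    
    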

  16. Use of Low-Level Sensor Data to Improve the Accuracy of Bluetooth-Based Travel Time Estimation

    Araghi, Bahar Namaki; Christensen, Lars Tørholm; Krishnan, Rajesh;

    2013-01-01

    Bluetooth sensors have a large detection zone compared with other static vehicle reidentification systems. A larger detection zone increases the probability of detecting a Bluetooth-enabled device in a fast-moving vehicle, yet increases the probability of multiple detection events being triggered by a single device. The latter situation could lead to location ambiguity and could reduce the accuracy of travel time estimation. Therefore, the accuracy of travel time estimation by Bluetooth technology depends on how location ambiguity is handled by the estimation method. The issue of multiple detection events in the context of travel time estimation by Bluetooth technology has been considered by various researchers. However, treatment of this issue has been simplistic. Most previous studies have used the first detection event (enter-enter) as the best estimate. No systematic analysis has been conducted...

  17. Particle distributions in approximately 10^14 - 10^16 eV air shower cores at sea level

    Hodson, A. L.; Ash, A. G.; Bull, R. M.

    1985-01-01

    Experimental evidence is reported on particle distributions at fixed distances (0, 1.0, 2.5 and 4.0 m) from the shower centres and on core flattening. The cores become flatter, on average, as the shower size (primary energy) increases. With improved statistics from 4192 cores, the previous results are fully confirmed.

  18. Impact of dose rate on accuracy of intensity modulated radiation therapy plan delivery using the pretreatment portal dosimetry quality assurance and setting up the workflow at hospital levels

    Karunakaran Kaviarasu; N Arunai Nambi Raj; Krishna Murthy, K.; A Ananda Giri Babu; Bhaskar Laxman Durga Prasad

    2015-01-01

    The aim of this study was to examine the impact of dose rate on the accuracy of intensity-modulated radiation therapy (IMRT) plan delivery by comparing the gamma agreement between calculated and measured portal doses in pretreatment quality assurance (QA) using electronic portal imaging device dosimetry, and to create a workflow for pretreatment IMRT QA at the hospital level. As improved gamma agreement increases the quality of IMRT treatment delivery, gamma evaluation was...

  19. Computational detection of allergenic proteins attains a new level of accuracy with in silico variable-length peptide extraction and machine learning

    Soeria-Atmadja, D.; Lundell, T.; Gustafsson, M. G.; Hammerling, U.

    2006-01-01

    The placing of novel or new-in-the-context proteins on the market, appearing in genetically modified foods, certain bio-pharmaceuticals and some household products leads to human exposure to proteins that may elicit allergic responses. Accurate methods to detect allergens are therefore necessary to ensure consumer/patient safety. We demonstrate that it is possible to reach a new level of accuracy in computational detection of allergenic proteins by presenting a novel detector, Detection based...

  20. The Development of Expertise in Radiology: In Chest Radiograph Interpretation, "Expert" Search Pattern May Predate "Expert" Levels of Diagnostic Accuracy for Pneumothorax Identification.

    Kelly, Brendan S; Rainford, Louise A; Darcy, Sarah P; Kavanagh, Eoin C; Toomey, Rachel J

    2016-07-01

    Purpose To investigate the development of chest radiograph interpretation skill through medical training by measuring both diagnostic accuracy and eye movements during visual search. Materials and Methods An institutional exemption from full ethical review was granted for the study. Five consultant radiologists were deemed the reference expert group, and four radiology registrars, five senior house officers (SHOs), and six interns formed four clinician groups. Participants were shown 30 chest radiographs, 14 of which had a pneumothorax, and were asked to give their level of confidence as to whether a pneumothorax was present. Receiver operating characteristic (ROC) curve analysis was carried out on diagnostic decisions. Eye movements were recorded with a Tobii TX300 (Tobii Technology, Stockholm, Sweden) eye tracker. Four eye-tracking metrics were analyzed. Variables were compared to identify any differences between groups. All data were compared by using the Friedman nonparametric method. Results The average area under the ROC curve for the groups increased with experience (0.947 for consultants, 0.792 for registrars, 0.693 for SHOs, and 0.659 for interns; P = .009). A significant difference in diagnostic accuracy was found between consultants and registrars (P = .046). All four eye-tracking metrics decreased with experience, and there were significant differences between registrars and SHOs. Total reading time decreased with experience; it was significantly lower for registrars compared with SHOs (P = .046) and for SHOs compared with interns (P = .025). Conclusion Chest radiograph interpretation skill increased with experience, both in terms of diagnostic accuracy and visual search. The observed level of experience at which there was a significant difference was higher for diagnostic accuracy than for eye-tracking metrics. (©) RSNA, 2016. Online supplemental material is available for this article. PMID:27322975
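
    The ROC area under the curve used above can be computed directly from confidence ratings as a Mann-Whitney statistic; a sketch with hypothetical 1-5 ratings:

    ```python
    def auc(pos_scores, neg_scores):
        """Area under the ROC curve via the Mann-Whitney U statistic:
        the probability that a positive case receives a higher confidence
        rating than a negative case (ties count one half)."""
        wins = 0.0
        for p in pos_scores:
            for n in neg_scores:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos_scores) * len(neg_scores))

    # Hypothetical ratings for images with / without a pneumothorax:
    print(auc([5, 4, 4, 3], [2, 3, 1, 2]))  # 0.96875
    ```
    
    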

  1. Principal Components of Superhigh-Dimensional Statistical Features and Support Vector Machine for Improving Identification Accuracies of Different Gear Crack Levels under Different Working Conditions

    Dong Wang

    2015-01-01

    Gears are widely used in gearboxes to transmit power from one shaft to another. Gear crack is one of the most frequent gear fault modes found in industry. Identification of different gear crack levels is beneficial for preventing unexpected machine breakdown and reducing economic loss, because gear cracks lead to gear tooth breakage. In this paper, an intelligent fault diagnosis method for identification of different gear crack levels under different working conditions is proposed. First, superhigh-dimensional statistical features are extracted from the continuous wavelet transform at different scales. The number of statistical features extracted by the proposed method is 920, so the extracted feature set is superhigh dimensional. To reduce the dimensionality of the extracted statistical features and generate new, significant low-dimensional features, a simple and effective method, principal component analysis, is used. To further improve identification accuracies of different gear crack levels under different working conditions, a support vector machine is employed. Three experiments are investigated to show the superiority of the proposed method. Comparisons with other existing gear crack level identification methods are conducted. The results show that the proposed method has the highest identification accuracies among all existing methods.
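
    The dimensionality-reduction step can be sketched with a plain SVD-based PCA (random stand-in data; the SVM classification stage that follows in the paper is omitted):

    ```python
    import numpy as np

    def pca_reduce(X, n_components):
        """Project the rows of X onto the top principal components,
        computed from the SVD of the mean-centred data matrix."""
        Xc = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T

    # Stand-in for 40 samples of 920 wavelet-based statistical features:
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 920))
    Z = pca_reduce(X, 10)
    print(Z.shape)  # (40, 10)
    ```

    The low-dimensional scores Z would then be fed to a support vector machine for crack-level classification.
    
    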

  2. The conservation value of elevation data accuracy and model sophistication in reserve design under sea‐level rise

    Zhu, Mingjian; Hoctor, Tom; Volk, Mike; Frank, Kathryn; Linhoss, Anna

    2015-01-01

    Many studies have explored the value of using more sophisticated coastal impact models and higher resolution elevation data in sea-level rise (SLR) adaptation planning. However, we know little about the extent to which the improved models and data could actually lead to better conservation outcomes under SLR. This is important to know because high-resolution data are likely to not be available in some data-poor coastal areas in the world and running more complicated coastal impact models ...

  3. Accuracy enhancement for forecasting water levels of reservoirs and river streams using a multiple-input-pattern fuzzification approach.

    Valizadeh, Nariman; El-Shafie, Ahmed; Mirzaei, Majid; Galavi, Hadi; Mukhlisin, Muhammad; Jaafar, Othman

    2014-01-01

    Water level forecasting is an essential topic in water management, affecting reservoir operations and decision making. Recently, modern methods utilizing artificial intelligence, fuzzy logic, and combinations of these techniques have been used in hydrological applications because of their considerable ability to map an input-output pattern without requiring prior knowledge of the criteria influencing the forecasting procedure. The adaptive neuro-fuzzy inference system (ANFIS) is one of the most accurate models used in water resource management. Because the membership functions (MFs) possess the characteristics of smoothness and mathematical components, each set of input data is able to yield the best result using a certain type of MF in the ANFIS models. The objective of this study is to define different ANFIS models by applying different types of MFs for each type of input to forecast the water level in two case studies, the Klang Gates Dam and the Rantau Panjang station on the Johor River in Malaysia, and to compare the traditional ANFIS model with the newly introduced one in two different situations, reservoir and stream; the new approach outperforms the traditional one in both case studies. This objective is accomplished by evaluating model fitness and performance in daily forecasting. PMID:24790567
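
    Fuzzification in ANFIS assigns each crisp input a membership degree; a sketch with a Gaussian MF (all water-level numbers hypothetical):

    ```python
    import math

    def gaussian_mf(x, c, sigma):
        """Gaussian membership function: degree to which crisp input x
        belongs to a fuzzy set with centre c and width sigma."""
        return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

    # Degree to which a hypothetical 12.3 m reading belongs to "high level"
    # (centre 14 m, width 2 m):
    print(round(gaussian_mf(12.3, c=14.0, sigma=2.0), 3))  # 0.697
    ```

    Swapping in a different MF shape (triangular, bell, etc.) for each input type is exactly the design variable the study explores.
    
    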

  4. The conservation value of elevation data accuracy and model sophistication in reserve design under sea-level rise.

    Zhu, Mingjian; Hoctor, Tom; Volk, Mike; Frank, Kathryn; Linhoss, Anna

    2015-10-01

    Many studies have explored the value of using more sophisticated coastal impact models and higher resolution elevation data in sea-level rise (SLR) adaptation planning. However, we know little about the extent to which the improved models and data could actually lead to better conservation outcomes under SLR. This is important to know because high-resolution data are likely to not be available in some data-poor coastal areas in the world, and running more complicated coastal impact models is relatively time-consuming, expensive, and requires assistance from qualified experts and technicians. We address this research question in the context of identifying conservation priorities in response to SLR. Specifically, we investigated the conservation value of using more accurate light detection and ranging (Lidar)-based digital elevation data and process-based coastal land-cover change models (Sea Level Affecting Marshes Model, SLAMM) to identify conservation priorities versus simple "bathtub" models based on the relatively coarse National Elevation Dataset (NED) in a coastal region of northeast Florida. We compared conservation outcomes identified by reserve design software (Zonation) using three different model-dataset combinations (Bathtub-NED, Bathtub-Lidar, and SLAMM-Lidar). The comparisons show that the conservation priorities are significantly different with different combinations of coastal impact models and elevation dataset inputs. The research suggests that it is valuable to invest in more accurate coastal impact models and elevation datasets in SLR adaptive conservation planning because this model-dataset combination could improve conservation outcomes under SLR. Less accurate coastal impact models, including ones created using coarser Digital Elevation Model (DEM) data, can still be useful when better data and models are not available or feasible, but results need to be appropriately assessed and communicated. A future research priority is to investigate how
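
    The "bathtub" screening approach mentioned above can be sketched in a few lines (hypothetical DEM values; real analyses also check hydrologic connectivity):

    ```python
    def bathtub_inundated(elevation_m, sea_level_rise_m):
        """Simple 'bathtub' SLR screen: a cell is flagged as flooded if its
        elevation is at or below the raised sea level. No connectivity or
        marsh-migration processes are modelled, unlike SLAMM."""
        return [[cell <= sea_level_rise_m for cell in row]
                for row in elevation_m]

    # Hypothetical 2x2 DEM (metres above datum) under 1.0 m of SLR:
    dem = [[0.2, 1.5], [0.8, 2.4]]
    print(bathtub_inundated(dem, 1.0))  # [[True, False], [True, False]]
    ```

    The study's comparison is essentially between this thresholding of coarse NED data and process-based SLAMM runs on Lidar data.
    
    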

  5. Exploring the Impact of Visual Complexity Levels in 3D City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they 'travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  6. Two Methods to Derive Ground-level Concentrations of PM2.5 with Improved Accuracy in North China, Calibrating MODIS AOD and CMAQ Model Predictions

    Lyu, Baolei; Hu, Yongtao; Chang, Howard; Russell, Armistead; Bai, Yuqi

    2016-04-01

    Reliable and accurate characterization of ground-level PM2.5 concentrations is essential for understanding pollution sources and evaluating human exposures. A monitoring network can only provide direct point-level observations at a limited number of locations. At locations without monitors, there are generally two ways to estimate PM2.5 pollution levels. One is observations of aerosol properties from satellite-based remote sensing, such as Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth (AOD). The other is deterministic atmospheric chemistry models, such as the Community Multi-Scale Air Quality Model (CMAQ). In this study, we used a statistical spatio-temporal downscaler to calibrate the two datasets to monitor observations and so derive fine-scale ground-level concentrations of PM2.5 with improved accuracy. We treated both MODIS AOD and CMAQ model predictions as biased proxy estimates of PM2.5 pollution levels. The downscaler uses a Bayesian framework to model spatially and temporally varying coefficients of the two types of estimates in a linear regression setting, in order to correct biases. For calibrating MODIS AOD in particular, a city-specific linear model was established to fill in missing AOD values, and a novel interpolation-based variable, the PM2.5 Spatial Interpolator, was introduced to account for the spatial dependence among grid cells. We selected the heavily polluted and densely populated North China region as our study area, on a grid of 81×81 12-km cells. In the evaluation of calibration performance for retrieved MODIS AOD, R2 was 0.61 for the full model with the PM2.5 Spatial Interpolator present, and 0.48 with it absent. The constructed AOD values effectively predicted PM2.5 concentrations under our model structure, with R2 = 0.78. In the evaluation of calibrated CMAQ predictions, R2 was 0.51, slightly less than that of the calibrated AOD. Finally we
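
    A toy version of the calibration idea, reduced to a single global linear adjustment rather than the paper's spatially and temporally varying Bayesian coefficients (all data hypothetical):

    ```python
    def ols_fit(x, y):
        """Ordinary least-squares intercept and slope for calibrating a
        biased proxy (e.g. AOD) against co-located PM2.5 observations."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        return intercept, slope

    # Hypothetical co-located AOD / PM2.5 (ug/m^3) pairs at monitor cells:
    a, b = ols_fit([0.2, 0.5, 0.9, 1.4], [20.0, 45.0, 80.0, 120.0])
    print(round(b, 2))  # 83.64
    ```

    The downscaler generalizes this by letting the intercept and slope vary smoothly over space and time under a Bayesian prior.
    
    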

  7. Comparison of predictive accuracy of pre surgical serum parathormone (PTH) level with that of parathyroid scan in case of primary hyperparathyroidism

    Full text: Aims and Objective: Parathyroid scintigraphy with Tc-99m Sestamibi is a sensitive and specific test for preoperative localization of parathyroid adenoma (PA) in patients with primary hyperparathyroidism. However, false-negative studies are not uncommon. Our aim was to compare the predictive accuracy of the presurgical parathormone (PTH) level with that of the parathyroid scan in primary hyperparathyroidism. Materials and Methods: A total of 54 patients (29 male, 25 female) with a mean age of 41.24 ± 14.26 years suspected of primary hyperparathyroidism were included in this study. All patients had serum PTH and calcium levels higher than the normal limit. Parathyroid scintigraphy was done by the subtraction method: 185 MBq of Tc-99m pertechnetate was given first and images were taken by planar gamma camera after 20 minutes, followed by Tc-99m Sestamibi (740 MBq) injection without moving the patient. We calculated the sensitivity and specificity at different cut-off values of PTH (>70 pg/ml, >80 pg/ml, >90 pg/ml and >100 pg/ml) and observed the changes in sensitivity, specificity, PPV and NPV against the scintigraphic diagnosis of PA. Results: Parathyroid scintigraphy revealed 15 positive cases (27.8%) amongst the 54 patients, which were surgically proven to be so. The sensitivity of PTH in predicting a positive parathyroid scan was 86.7% at a serum PTH level of 70-90 pg/ml, then declined steadily to 73.3% at a PTH level of >100 pg/ml. The specificity increased gradually from 20.5% at serum PTH >70 pg/ml to 53.8% at serum PTH >100 pg/ml. However, the PPV and NPV of serum PTH did not change as markedly with increasing cut-off values as the sensitivity and specificity did. Conclusion: A cut-off value of presurgical serum PTH of 90 pg/ml before parathyroid scanning offers maximum sensitivity with optimum specificity. It will help to predict the outcome of the scan and avoid unnecessary parathyroid scans and false-negative cases.

  8. A broadband chip-scale optical frequency synthesizer at 2.7 × 10^-16 relative uncertainty.

    Huang, Shu-Wei; Yang, Jinghui; Yu, Mingbin; McGuyer, Bart H; Kwong, Dim-Lee; Zelevinsky, Tanya; Wong, Chee Wei

    2016-04-01

    Optical frequency combs (coherent light sources that connect optical frequencies with microwave oscillations) have become the enabling tool for precision spectroscopy, optical clockwork, and attosecond physics over the past decades. Current benchmark systems are self-referenced femtosecond mode-locked lasers, but Kerr nonlinear dynamics in high-Q solid-state microresonators has recently demonstrated promising features as alternative platforms. The advance not only fosters studies of chip-scale frequency metrology but also extends the realm of optical frequency combs. We report the full stabilization of chip-scale optical frequency combs. The microcomb's two degrees of freedom, one of the comb lines and the native 18-GHz comb spacing, are simultaneously phase-locked to known optical and microwave references. Active comb spacing stabilization improves long-term stability by six orders of magnitude, reaching a record instrument-limited residual instability of [Formula: see text]. Comparing 46 nitride frequency comb lines with a fiber laser frequency comb, we demonstrate the unprecedented microcomb tooth-to-tooth relative frequency uncertainty down to 50 mHz and 2.7 × 10^-16, heralding novel solid-state applications in precision spectroscopy, coherent communications, and astronomical spectrography. PMID:27152341

  9. Comparing Accuracy of Airborne Laser Scanning and TerraSAR-X Radar Images in the Estimation of Plot-Level Forest Variables

    Juha Hyyppä

    2010-01-01

    In this study we compared the accuracy of low-pulse airborne laser scanning (ALS) data, multi-temporal high-resolution non-interferometric TerraSAR-X radar data, and a combined feature set derived from these data in the estimation of forest variables at plot level. The TerraSAR-X data set consisted of seven dual-polarized (HH/HV or VH/VV) Stripmap-mode images from all seasons of the year. We were especially interested in distinguishing between tree species. The dependent variables estimated included mean volume, basal area, mean height, mean diameter and tree species-specific mean volumes. Selection of the best possible feature set was based on a genetic algorithm (GA). The nonparametric k-nearest neighbour (k-NN) algorithm was applied for the estimation. The research material consisted of 124 circular plots measured at tree level and located in the vicinity of Espoo, Finland. There are large variations in elevation and forest structure in the study area, making it demanding for image interpretation. The best feature set contained 12 features, nine of them originating from the ALS data and three from the TerraSAR-X data. The relative RMSEs for the best-performing feature set were 34.7% (mean volume), 28.1% (basal area), 14.3% (mean height), 21.4% (mean diameter), 99.9% (mean volume of Scots pine), 61.6% (mean volume of Norway spruce) and 91.6% (mean volume of deciduous tree species). The combined feature set outperformed an ALS-based feature set only marginally; in fact, the latter was better in the case of species-specific volumes. Features from TerraSAR-X alone performed poorly. However, due to its favorable temporal resolution, satellite-borne radar imaging is a promising data source for updating large-area forest inventories based on low-pulse ALS.
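
    The k-NN estimation step can be sketched as follows (hypothetical ALS-style features and stem volumes; the actual study selects its 12 features with a genetic algorithm):

    ```python
    def knn_predict(train_X, train_y, query, k=3):
        """k-nearest-neighbour estimate: the mean response of the k training
        plots whose feature vectors are closest (squared Euclidean distance)
        to the query plot's features."""
        ranked = sorted(
            (sum((a - b) ** 2 for a, b in zip(x, query)), y)
            for x, y in zip(train_X, train_y)
        )
        return sum(y for _, y in ranked[:k]) / k

    # Hypothetical (height percentile, point density) features -> volume m^3/ha:
    X = [(10.0, 0.3), (22.0, 0.6), (24.0, 0.7), (15.0, 0.4)]
    v = [80.0, 240.0, 260.0, 140.0]
    print(knn_predict(X, v, (21.0, 0.6), k=2))  # 250.0
    ```
    
    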

  10. Neighborhood disorder and screen time among 10-16 year old Canadian youth: A cross-sectional study

    Carson Valerie

    2012-05-01

    Background: Screen time activities (e.g., television, computers, video games) have been linked to several negative health outcomes among young people. In order to develop evidence-based interventions to reduce screen time, the factors that influence the behavior need to be better understood. High neighborhood disorder, which may encourage young people to stay indoors where screen time activities are readily available, is one potential factor to consider. Methods: Results are based on 15,917 youth in grades 6-10 (aged 10-16 years old) who participated in the Canadian 2009/10 Health Behaviour in School-aged Children Survey (HBSC). Total hours per week of television, video game, and computer use were reported by the participating students in the HBSC student questionnaire. Ten items of neighborhood disorder (safety, neighbors taking advantage, drugs/drinking in public, ethnic tensions, gangs, crime, condition of buildings/grounds, abandoned buildings, litter, and graffiti) were measured using the HBSC student questionnaire, the HBSC administrator questionnaire, and Geographic Information Systems. Based upon these 10 items, social and physical neighborhood disorder variables were derived using principal component analysis. Multivariate multilevel logistic regression analyses were used to examine the relationship between social and physical neighborhood disorder and individual screen time variables. Results: High (top quartile) social neighborhood disorder was associated with an approximately 35-45% increased risk of high (top quartile) television, computer, and video game use. Physical neighborhood disorder was not associated with screen time activities after adjusting for social neighborhood disorder. However, high social and physical neighborhood disorder combined was associated with an approximately 40-60% increased likelihood of high television, computer, and video game use. Conclusion: High neighborhood disorder is one environmental

  11. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS.
Most of the errors were

  12. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, and vary in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS.
Most of the errors were
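The 3%/3 mm and 2%/2 mm passing rates discussed above come from the gamma comparison, which combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. As a rough illustration of why a local-normalization 2%/2 mm criterion is more sensitive than a global 3%/3 mm one, here is a minimal 1-D sketch of the gamma index; the function names and the brute-force nearest-point search are illustrative assumptions, not the QA software used in the study.

```python
import math

def gamma_1d(ref, meas, spacing_mm, dose_tol, dist_mm, local=False):
    """Simplified 1-D gamma index on a common grid.

    ref/meas: dose samples; spacing_mm: grid spacing; dose_tol: fractional
    dose criterion (0.03 = 3%); dist_mm: distance-to-agreement (mm).
    """
    d_max = max(ref)  # normalization dose for the "global" criterion
    gammas = []
    for i, dm in enumerate(meas):
        best = math.inf
        for j, dr in enumerate(ref):
            norm = dr if local else d_max
            if norm <= 0:
                continue  # local criterion is undefined at zero reference dose
            dd = (dm - dr) / (dose_tol * norm)   # dose-difference term
            dx = (i - j) * spacing_mm / dist_mm  # distance-to-agreement term
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

With these criteria, a 10% dose error confined to one point fails gamma there while every other point still passes, so the overall passing rate stays high even though a real systematic error is present, which is exactly the failure mode these case studies describe.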

  13. Effects of pH, lactate, hematocrit and potassium level on the accuracy of continuous glucose monitoring (CGM) in pediatric intensive care unit

    Marics, Gábor; Koncz, Levente; Eitler, Katalin; Vatai, Barbara; Szénási, Boglárka; Zakariás, David; Mikos, Borbála; Körner, Anna; Tóth-Heyn, Péter

    2015-01-01

Background: Continuous glucose monitoring (CGM) was originally developed for diabetic patients, and it may be a useful tool for monitoring glucose changes in the pediatric intensive care unit (PICU). Its use is, however, limited by the lack of sufficient data on its reliability under insufficient peripheral perfusion. We aimed to correlate the accuracy of CGM with laboratory markers relevant to disturbed tissue perfusion. Patients and Methods: In 38 pediatric patients (age range, 0–18 years) requiring ...

  14. Experiences on improving diagnostic accuracy of FDG PET by characterizing, reducing, and avoiding the high-level physiological uptakes at abdomen and pelvic region

ovaries, and were correlated well with the menstrual cycles. The 3 pseudo-negatives were gastric cancer, and although their SUVs decreased in the delayed imaging by gastric distension with foods, the lesions were actually more prominent with higher contrast. Discussions: Gastric cancer is one of the most common malignancies, especially in the Asian population. However, FDG PET routinely performed under fasting status demonstrates limited value for its diagnosis because of the high-level physiological uptake of the stomach. Gastric distension just before FDG PET scanning can benefit the early detection and accurate evaluation of the primary tumor of gastric cancer. Physiological uptake of the uterus and ovaries can mimic pelvic malignancies on FDG PET. For female patients with a regular menstrual cycle, PET studies at the late menstrual flow phase and early proliferative phase showed the least probability of intense or moderate physiological uptake. Delayed imaging, especially after changing physiological status by drinking, food ingestion, urination, and bowel movement, can help to exclude the physiological uptakes of the gastrointestinal and urinary systems. Conclusions: The diagnostic accuracy of FDG PET at the abdomen and pelvic region can be improved by good understanding of the physiological uptakes, intentionally changing the physiological status, and proper arrangement of PET scanning. (author)

  15. The Epidemiology of Primary Anterior Shoulder Dislocations in Patients Aged 10-16 Years and Age-Stratified Risk of Recurrence

    Leroux, Timothy; Ogilvie-Harris, Darrell; Veillette, Christian; Chahal, Jaskarndip; Dwyer, Tim; Henry, Patrick; Khoshbin, Amir; Mahomed, Nizar; Wasserstein, David

    2015-01-01

    Objectives: Most clinical studies pertaining to shoulder dislocation use age cutoffs of 16 years, and at present, only small case series of patients aged 10-16 years guide our management. Using a general population cohort aged 10 to 16 years, we sought to: 1) determine the overall and demographic-specific incidence density rate (IDR) of primary anterior shoulder dislocation requiring closed reduction (CR), and 2) determine the rate of and risk factors for repeat shoulder CR. Methods: Using ad...

  16. Beta-delayed proton emission: a new series of precursors and the measurement of 10-16 s nuclear lifetimes

We have now obtained results on a new series of even-Z precursors with T_z = +1/2. Like all known heavy precursors, the nuclei so far identified - 65Ge, 69Se, 73Kr, 77Sr, 81Zr and provisionally 85Mo - exhibit broad proton continua. However, the availability of such a series of nuclei makes it possible to extract a systematic picture of the beta-decay strength function as well as level densities, widths and decay energies from the observed spectra. By the addition of a new experimental technique we have also been able to determine the absolute values of the widths through direct measurement of the average lifetime of states populated in the emitter. These data all provide stringent tests of model calculations and mass formulae in a region of nuclei with N approximately Z, far removed from the valley of stability. (author)

  17. Low-level measuring techniques for neutrons: High accuracy neutron source strength determination and fluence rate measurement at an underground laboratory

    Zimbal, Andreas; Reginatto, Marcel; Schuhmacher, Helmut; Wiegel, Burkhard [Physikalisch-Technische Bundesanstalt, Bundesallee 100, 38116 Braunschweig (Germany); Degering, Detlev [Verein für Kernverfahrenstechnik und Analytik Rossendorf e. V. (VKTA), D-01314 Dresden (Germany); Zuber, Kai [Technische Universität Dresden, D-01069 Dresden (Germany)

    2013-08-08

We report on measuring techniques for neutrons that have been developed at the Physikalisch-Technische Bundesanstalt (PTB), the German National Metrology Institute. PTB has characterized radioactive sources used in the BOREXINO and XENON100 experiments. For the BOREXINO experiment, a ²²⁸Th gamma radiation source was required which would not emit more than 10 neutrons per second. The determination of the neutron emission rate of this specially designed ²²⁸Th source was challenging due to the low neutron emission rate and because the ratio of neutron to gamma radiation was expected to be extremely low, of the order of 10⁻⁶. For the XENON100 detector, PTB carried out a high accuracy measurement of the neutron emission rate of an AmBe source. PTB has also done measurements in underground laboratories. A two month measurement campaign with a set of ³He-filled proportional counters was carried out in PTB's former UDO underground laboratory at the Asse salt mine. The aim of the campaign was to determine the intrinsic background of the detectors, which is needed for the analysis of data taken in low-intensity neutron fields. At a later time, PTB did a preliminary measurement of the neutron fluence rate at the underground laboratory Felsenkeller operated by VKTA. By taking into account data from UDO, Felsenkeller, and detector calibrations made at the PTB facility, it was possible to estimate the neutron fluence rate at the Felsenkeller underground laboratory.

  18. Target Price Accuracy

    Alexander G. Kerl

    2011-01-01

    This study analyzes the accuracy of forecasted target prices within analysts’ reports. We compute a measure for target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positive...

  19. The study in the primary energy range 10^{16} - 10^{17} eV with the Muon Tracking Detector in the KASCADE-Grande experiment

    Łuczak, P; Arteaga-Velázquez, J C; Bekk, K; Bertaina, M; Blümer, J; Bozdog, H; Brancus, I M; Cantoni, E; Chiavassa, A; Cossavella, F; Curcio, C; Daumiller, K; de Souza, V; Di Pierro, F; Doll, P; Engel, R; Engler, J; Fuchs, B; Fuhrmann, D; Gils, H J; Glasstetter, R; Grupen, C; Haungs, A; Heck, D; Hörandel, J R; Huber, D; Huege, T; Kampert, K -H; Kang, D; Klages, H O; Link, K; Ludwig, M; Mathes, H J; Mayer, H J; Melissas, M; Milke, J; Mitrica, B; Morello, C; Oehlschläger, J; Ostapchenko, S; Palmieri, N; Petcu, M; Pierog, T; Rebel, H; Roth, M; Schieler, H; Schoo, S; Schröder, F G; Sima, O; Toma, G; Trinchero, G C; Ulrich, H; Weindl, A; Wochele, J; Zabierowski, J

    2013-01-01

The KASCADE-Grande Muon Tracking Detector enables high-accuracy measurement of the directions of EAS muons with energies above 0.8 GeV at distances of up to 700 m from the shower centre. Reconstructed muon tracks are used to investigate muon pseudorapidity (eta) distributions. These distributions are nearly identical to the pseudorapidity distributions of their parent mesons produced in hadronic interactions. Comparison of the eta distributions from measured and simulated showers can be used to test the quality of high-energy hadronic interaction models. In this context a comparison of the QGSJet-II-2 and QGSJet-II-4 models is shown. The pseudorapidity distributions reflect the longitudinal development of EAS and, as such, are sensitive to the mass of the primary cosmic-ray particles. With various parameters of the eta distribution, obtained from the MTD data, it is possible to calculate the mean logarithmic mass of CRs. The results of the analysis in the primary energy range 10^{16} eV - 10^{17} eV...
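The pseudorapidity mentioned above is a simple function of the muon's polar angle with respect to the shower axis, and the mean logarithmic mass is a composition-weighted average. A minimal sketch of both quantities; the function names and the two-component composition in the test are illustrative assumptions, not the KASCADE-Grande analysis code:

```python
import math

def pseudorapidity(theta_rad):
    """eta = -ln(tan(theta/2)), with theta the polar angle (radians)
    of the track relative to the shower axis."""
    return -math.log(math.tan(theta_rad / 2.0))

def mean_log_mass(fractions):
    """<ln A> for an assumed primary composition given as
    {mass number A: relative abundance}."""
    return sum(f * math.log(a) for a, f in fractions.items())
```

Smaller polar angles (tracks closer to the shower axis) map to larger eta, which is why the eta distribution traces the longitudinal development of the shower.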

  20. Dual-Energy CT-based Display of Bone Marrow Edema in Osteoporotic Vertebral Compression Fractures: Impact on Diagnostic Accuracy of Radiologists with Varying Levels of Experience in Correlation to MR Imaging.

    Kaup, Moritz; Wichmann, Julian L; Scholtz, Jan-Erik; Beeres, Martin; Kromen, Wolfgang; Albrecht, Moritz H; Lehnert, Thomas; Boettcher, Marie; Vogl, Thomas J; Bauer, Ralf W

    2016-08-01

    Purpose To evaluate whether a dual-energy (DE) computed tomographic (CT) virtual noncalcium technique can improve the detection rate of acute thoracolumbar vertebral compression fractures in patients with osteoporosis compared with that at magnetic resonance (MR) imaging depending on the level of experience of the reading radiologist. Materials and Methods This retrospective study was approved by the institutional ethics committee. Informed consent was obtained from all patients. Forty-nine patients with osteoporosis who were suspected of having acute vertebral fracture underwent DE CT and MR imaging. Conventional linear-blended CT scans and corresponding virtual noncalcium reconstructions were obtained. Five radiologists with varying levels of experience evaluated gray-scale CT scans for the presence of fractures and their suspected age. Then, virtual noncalcium images were evaluated to detect bone marrow edema. Findings were compared with those from MR imaging (the standard of reference). Sensitivity and specificity analyses for diagnostic performance and matched pair analyses were performed on vertebral fracture and patient levels. Results Sixty-two fractures were classified as fresh and 52 as old at MR imaging. The diagnostic performance of all readers in the detection of fresh fractures improved with the addition of virtual noncalcium reconstructions compared with that with conventional CT alone. Although the diagnostic accuracy of the least experienced reader with virtual noncalcium CT (accuracy with CT alone, 61%; accuracy with virtual noncalcium technique, 83%) was within the range of that of the most experienced reader with CT alone, the latter improved his accuracy with the noncalcium technique (from 81% to 95%), coming close to that with MR imaging. The number of vertebrae rated as unclear decreased by 59%-90% or from 15-53 to 2-13 in absolute numbers across readers. 
The number of patients potentially referred to MR imaging decreased by 36%-87% (from 11

  1. Magnetic resonance imaging and magnetic resonance arthrography of the shoulder: dependence on the level of training of the performing radiologist for diagnostic accuracy

    Theodoropoulos, John S. [University of Toronto, Division of Orthopaedics, Mount Sinai Hospital and the University Health Network, Toronto, ON (Canada); Andreisek, Gustav [University of Toronto, Department of Medical Imaging, Mount Sinai Hospital and the University Health Network, Toronto, ON (Canada); University Hospital Zuerich, Institute for Diagnostic Radiology, Zuerich (Switzerland); Harvey, Edward J. [McGill University, Division of Orthopaedics, MUHC - Montreal General Hospital, Montreal, Quebec (Canada); Wolin, Preston [Center for Athletic Medicine, Chicago, IL (United States)

    2010-07-15

Discrepancies were identified between magnetic resonance (MR) imaging and clinical findings in patients who had MR imaging examinations evaluated by community-based general radiologists. The purpose of this study was to evaluate the diagnostic performance of MR imaging examinations of the shoulder with regard to the training level of the performing radiologist. A review of patient charts identified 238 patients (male/female, 175/63; mean age, 40.4 years) in whom 250 arthroscopies were performed and who underwent MR imaging or direct MR arthrography in either a community-based or hospital-based institution prior to surgery. All MR imaging and surgical reports were reviewed and the diagnostic performance for the detection of labral, rotator cuff, biceps, and Hill-Sachs lesions was determined. Kappa and Student's t test analyses were performed in a subset of cases in which initial community-based MR images were re-evaluated by hospital-based musculoskeletal radiologists, to determine the interobserver agreement and any differences in image interpretation. The diagnostic performance of community-based general radiologists was lower than that of hospital-based sub-specialized musculoskeletal radiologists. A sub-analysis of re-evaluated cases showed that musculoskeletal radiologists performed better. κ values were 0.208, 0.396, 0.376, and 0.788 for labral, rotator cuff, biceps, and Hill-Sachs lesions (t test: p < 0.001, 0.004, 0.019, and 0.235). Our results indicate that the diagnostic performance of MR imaging and MR arthrography of the shoulder depends on the training level of the performing radiologist, with sub-specialized musculoskeletal radiologists having a better diagnostic performance than general radiologists. (orig.)

  2. Relative accuracy evaluation.

    Yan Zhang

The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to solve the problem that the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither metrics nor effective methods for relative accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms.

  3. Relative accuracy evaluation.

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to solve the problem that the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither metrics nor effective methods for relative accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
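The precision and recall of a query result against a known correct answer set, as used in the framework above, can be sketched as follows; the set-based formulation and function name are illustrative assumptions, not the paper's actual implementation:

```python
def relative_accuracy(result, truth):
    """Precision and recall of a query result against the correct answer
    set (assumed known, e.g. from a verified reference copy of the data)."""
    result, truth = set(result), set(truth)
    tp = len(result & truth)                       # correctly returned rows
    precision = tp / len(result) if result else 1.0  # fraction of result that is correct
    recall = tp / len(truth) if truth else 1.0       # fraction of truth that was returned
    return precision, recall
```

A query returning {1, 2, 3, 4} against the true answer {2, 3, 5} has precision 0.5 and recall 2/3: the whole table may be inaccurate, yet a particular query can still score high (or low) relative accuracy.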

  4. Yakutsk array radio emission registration results in the energy range of 3*10^16-5*10^18 eV

    Petrov, I; Petrov, Z; Kozlov, V; Pravdin, M

    2013-01-01

This paper presents a set of measurements of ultra-high energy air shower radio emission at a frequency of 32 MHz in the period 2008-2012. The showers are selected by geomagnetic and azimuth angles and then by energy in three intervals: 3*10^16–3*10^17 eV, 3*10^17–6*10^17 eV and 6*10^17–5*10^18 eV. In each energy interval, the average lateral distribution function is plotted using mathematically averaged data from antennas with different orientations. Using the experimental data, the dependence of the averaged radio-signal amplitude on the geomagnetic angle, the distance to the shower axis, and the energy is determined. The depth of shower maximum Xmax of cosmic ray showers for the given energy range is evaluated. The evaluation is made according to QGSJET model calculations and the shape of the average lateral distribution function.

  5. Rethinking Empathic Accuracy

    Meadors, Joshua

    2014-01-01

    The present study is a methodological examination of the implicit empathic accuracy measure introduced by Zaki, Ochsner, and Bolger (2008). Empathic accuracy (EA) is defined as the ability to understand another person's thoughts and feelings (Ickes, 1993). Because this definition is similar to definitions of cognitive empathy (e.g., Shamay-Tsoory, 2011) and because affective empathy does not appear to be related to empathic accuracy (Zaki et al., 2008), the Basic Empathy Scale--which measures...

  6. The Truth about Accuracy

    Buekens, Filip; Truyen, Frederik

    2014-01-01

    When we evaluate the outcomes of investigative actions as justified or unjustified, good or bad, rational or irrational, we make, in a broad sense of the term, evaluative judgments about them. We look at operational accuracy as a desirable and evaluable quality of the outcomes and explore how the concepts of accuracy and precision, on the basis of insights borrowed from pragmatics and measurement theory, can be seen to do useful work in epistemology. Operational accuracy (but not metaphysical...

  7. Classification Accuracy Is Not Enough

    Sturm, Bob L.

    2013-01-01

A recent review of the research literature evaluating music genre recognition (MGR) systems over the past two decades shows that most works (81%) measure the capacity of a system to recognize genre by its classification accuracy. We show here, by implementing and testing three categorically different state-of-the-art MGR systems, that classification accuracy does not necessarily reflect the capacity of a system to recognize genre in musical signals. We argue that a more comprehensive analysis of behavior at the level of the music is needed to address the problem of MGR, and that measuring classification accuracy obscures the aim of MGR: to select labels indistinguishable from those a person would choose.

  8. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    Valverde-Albacete, Francisco J.; Carmen Peláez-Moreno

    2014-01-01

The most widespread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is dep...
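The paradox described above is easy to reproduce from contingency matrices: a majority-class predictor can reach higher accuracy than a model that actually transfers information about the minority class. A minimal sketch using plain mutual information (the paper's normalized information transfer factor is a related but different quantity, and these matrices are invented for illustration):

```python
import math

def accuracy(cm):
    """Fraction of correct predictions from a contingency matrix
    (rows = true class, cols = predicted class)."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def mutual_information(cm):
    """Mutual information (bits) between true class and prediction."""
    total = sum(sum(row) for row in cm)
    row_sums = [sum(r) for r in cm]
    col_sums = [sum(c) for c in zip(*cm)]
    mi = 0.0
    for i, r in enumerate(cm):
        for j, n in enumerate(r):
            if n:  # 0 * log(0) contributes nothing
                mi += (n / total) * math.log2(n * total / (row_sums[i] * col_sums[j]))
    return mi

cm_majority = [[90, 0], [10, 0]]   # always predicts class 0
cm_informed = [[70, 20], [0, 10]]  # less accurate, but detects class 1
```

Here the 90%-accurate majority predictor carries zero bits of information about the true class, while the 80%-accurate model carries strictly positive information.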

  9. Chemical, physical, profile and laboratory analysis oceanographic data collected aboard the OCEAN VERITAS in the Gulf of Mexico from 2010-09-07 to 2010-10-16 in response to the Deepwater Horizon Oil Spill event (NODC Accession 0069109)

    National Oceanic and Atmospheric Administration, Department of Commerce — Chemical, physical, profile and laboratory analysis oceanographic data were collected aboard the OCEAN VERITAS in the Gulf of Mexico from 2010-09-07 to 2010-10-16...

  10. Genetic structure of different cat populations in Europe and South America at a microgeographic level: importance of the choice of an adequate sampling level in the accuracy of population genetics interpretations

    Manuel Ruiz-Garcia

    1999-12-01

The phenotypic markers coat color, pattern, and hair length of natural domestic cat populations observed in four cities (Barcelona, Catalonia; Palma, Majorca, Balearic Islands; Rimini, Italy; and Buenos Aires, Argentina) were studied at a microgeographical level. Various population genetics techniques revealed that the degree of genetic differentiation between populations of Felis catus within these cities is relatively low compared with that found between populations of other mammals. Two different levels of sampling were used: one was that of "natural" colonies of cat families living together at specific points within the cities, and the other referred to "artificial" subpopulations, or groups of colonies, inhabiting the same district within a city. For the two sampling levels, some of the results were identical: 1) little genic heterogeneity; 2) existence of panmixia; 3) similar levels of expected heterozygosity in all populations analyzed; 4) no spatial autocorrelation, with certain differentiation of the Buenos Aires population compared to the others; and 5) very high correlations between colonies and subpopulations with the first factors from a Q factor analysis. Nevertheless, other population genetic statistics were greatly affected by the choice of sampling level. This was the case for: 1) the amount of heterogeneity of the FST and GST statistics between the cities, which was greater at the subpopulation level than at the colony level; 2) the existence of correlations between genic differentiation statistics and size variables at the subpopulation level, but not at the colony level; and 3) the relationships between the genetic variables and the principal factors of the R factor analysis. This suggests that care should be taken in the choice of the sampling unit for inferences on population genetics to be valid at the microgeographical level.

  11. Crystal Stratigraphy of Two Basalts from Apollo 16: Unique Crystallization of Picritic Basalt 60603,10-16 and Very-Low-Titanium Basalt 65703,9-13

    Donohue, P. H.; Neal, C. R.; Stevens, R. E.; Zeigler, R. A.

    2014-01-01

A geochemical survey of Apollo 16 regolith fragments identified five basaltic samples among hundreds of 2-4 mm regolith fragments from the Apollo 16 site. These included a high-Ti vitrophyric basalt (60603,10-16) and one very-low-titanium (VLT) crystalline basalt (65703,9-13). Apollo 16 was the only highlands sample return mission distant from the maria (approx. 200 km). Identification of basaltic samples at the site that are not from the ancient regolith breccia indicates input of material via lateral transport by post-basin impacts. The presence of basaltic rocklets and glass at the site is not unprecedented and is required to satisfy mass-balance constraints on regolith compositions. However, preliminary characterization of olivine and plagioclase crystal size distributions indicated that the sample textures were distinct from other known mare basalts and instead had affinities to impact melt textures. Impact melt textures can appear qualitatively similar to pristine basalts, and quantitative analysis is required to distinguish between the two in thin section. The crystal stratigraphy method is a powerful tool in the study of igneous systems, utilizing geochemical analyses across minerals and textural analyses of phases. In particular, trace element signatures can aid in determining the ultimate origin of these samples, and variations document subtle changes occurring during their petrogenesis.

  12. Diagnosing Eyewitness Accuracy

    Russ, Andrew

    2015-01-01

    Eyewitnesses frequently mistake innocent people for the perpetrator of an observed crime. Such misidentifications have led to the wrongful convictions of many people. Despite this, no reliable method yet exists to determine eyewitness accuracy. This thesis explored two new experimental methods for this purpose. Chapter 2 investigated whether repetition priming can measure prior exposure to a target and compared this with observers’ explicit eyewitness accuracy. Across three experiments slower...

  13. Social Security Administration Data for Enumeration Accuracy

    Social Security Administration — This dataset provides data at the national level from federal fiscal year 2006 onwards for the accuracy of the assignment of Social Security numbers (SSN) based on...

  14. Accuracy of Approximate Eigenstates

    Lucha, Wolfgang; Lucha, Wolfgang

    2000-01-01

    Besides perturbation theory, which requires, of course, the knowledge of the exact unperturbed solution, variational techniques represent the main tool for any investigation of the eigenvalue problem of some semibounded operator H in quantum theory. For a reasonable choice of the employed trial subspace of the domain of H, the lowest eigenvalues of H usually can be located with acceptable precision whereas the trial-subspace vectors corresponding to these eigenvalues approximate, in general, the exact eigenstates of H with much less accuracy. Accordingly, various measures for the accuracy of the approximate eigenstates derived by variational techniques are scrutinized. In particular, the matrix elements of the commutator of the operator H and (suitably chosen) different operators, with respect to degenerate approximate eigenstates of H obtained by some variational method, are proposed here as new criteria for the accuracy of variational eigenstates. These considerations are applied to that Hamiltonian the eig...

  15. Diagnostic test accuracy

    Campbell, Jared M.; Klugar, Miloslav; Ding, Sandrine; Carmody, Dennis P.; Håkonsen, Sasja Jul; Jadotte, Yuri T.; White, Sarahlouise; Munn, Zachary

    2015-01-01

in providing methodological guidance for the conduct of systematic reviews and has developed methods and guidance for reviewers conducting systematic reviews of studies of diagnostic test accuracy. Diagnostic tests are used to identify the presence or absence of a condition for the purpose of developing an appropriate treatment plan. Owing to demands for improvements in speed, cost, ease of performance, patient safety, and accuracy, new diagnostic tests are continuously developed, and there are often several tests available for the diagnosis of a particular condition. In order to provide the evidence necessary for clinicians and other healthcare professionals to make informed decisions regarding the optimum test to use, primary studies need to be carried out on the accuracy of diagnostic tests and the results of these studies synthesized through systematic review. The Joanna Briggs Institute...

  16. To determine the accuracy of focused assessment with sonography for trauma done by nonradiologists and its comparative analysis with radiologists in emergency department of a level 1 trauma center of India

    Sanjeev Bhoi

    2013-01-01

Background: Focused assessment with sonography for trauma (FAST) is an important skill during trauma resuscitation. Use of point-of-care ultrasound among trauma teams working in emergency care settings is lacking in India. Objective: To determine the accuracy of FAST done by nonradiologists (NR) when compared to radiologists during the primary survey of trauma victims in the emergency department of a level 1 trauma center in India. Materials and Methods: A prospective study was done during the primary survey of resuscitation of nonconsecutive patients in the resuscitation bay. The study subjects were NRs: one emergency medicine consultant, two medicine residents, one orthopedic resident, and one surgery resident working as the trauma team. These subjects underwent training at a 3-day workshop on emergency sonography and performed 20 supervised positive and negative scans for free fluid. The FAST scans were first performed by the NRs and then by radiology residents (RR). The performers were blinded to each other's sonography findings. Computed tomography (CT) and laparotomy findings were used as the gold standard, whichever was feasible. Results were compared between the two groups. Intraobserver variability among NR and RR was noted. Results: Of 150 scans, 144 were analyzed. Mean age of the patients was 28 [1-70] years. Of the 24 true positive patients, 18 underwent CT and exploratory laparotomies were done in six patients. Sensitivity of FAST done by NR and RR was 100% and 95.6%, respectively, and specificity was 97.5% in both groups. Positive predictive values among NR and RR were 88.8% and 88.46%, and negative predictive values were 97.5% and 99.15%. Intraobserver performance variation ranged from 87 to 97%. Conclusion: FAST performed by NRs is accurate during initial trauma resuscitation in the emergency department of a level 1 trauma center in India.
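The sensitivity, specificity, PPV, and NPV quoted above all derive from a 2x2 contingency table. A minimal sketch; the counts in the test are chosen for illustration to roughly match the NR figures, since the abstract does not give the full table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix metrics, as fractions in [0, 1].

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives against the gold standard (here CT/laparotomy).
    """
    return {
        "sensitivity": tp / (tp + fn),  # detected fraction of true positives
        "specificity": tn / (tn + fp),  # correctly cleared fraction of negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

For example, with 24 true positives, no missed positives, 3 false positives, and 117 true negatives, the sketch gives sensitivity 100%, specificity 97.5%, and PPV about 88.9%, close to the NR figures reported above.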

  17. Does the experience level of the radiologist, assessment in consensus, or the addition of the abduction and external rotation view improve the diagnostic reproducibility and accuracy of MRA of the shoulder?

Aim: To prospectively evaluate the influence of observer experience, consensus assessment, and the abduction and external rotation (ABER) view on the diagnostic performance of magnetic resonance arthrography (MRA) in patients with traumatic anterior-shoulder instability (TASI). Materials and methods: Fifty-eight MRA examinations (of which 51 had additional ABER views) were assessed by six radiologists (R1–R6) and three teams (T1–T3) with different experience levels, using a seven-lesion standardized scoring form. Forty-five of the 58 MRA examination findings were surgically confirmed. Kappa coefficients, sensitivity, specificity, and differences in percent agreement or correct diagnosis (p-value, McNemar's test) were calculated per lesion and overall across the seven lesion types to assess diagnostic reproducibility and accuracy. Results: Overall kappa ranged from poor (k = 0.17) to moderate (k = 0.53), sensitivity from 30.6–63.5%, and specificity from 73.6–89.9%. Overall, the most experienced radiologists (R1–R2) and teams (T2–T3) agreed significantly more than the less experienced radiologists (R3–R4: p = 0.014; R5–R6: p = 0.018) and teams (T2–T3: p = 0.007). The most experienced radiologists (R1, R2, R3) and teams (T1, T2) were also consistently more accurate than the less experienced radiologists (R4, R5, R6) and team (T3). Significant differences were found between R1–R4 (p = 0.012), R3–R4 (p = 0.03), and T2–T3 (p = 0.014). The overall performance of consensus assessment was systematically higher than that of individual assessment. Significant differences were established between T1–T2 and radiologists R3–R4 (p<0.001, p = 0.001) and between T2 and R3 (p<0.001/p = 0.001) or R4 (p = 0.050). No overall significant differences were found between the radiologists' assessments with and without ABER. Conclusion: The addition of ABER does not significantly improve overall diagnostic performance. The radiologist
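The kappa coefficients reported above measure agreement corrected for chance. For two raters, Cohen's kappa can be sketched from an agreement table as follows; the function name and the example tables in the test are illustrative, since the study's per-lesion tables are not given in the abstract:

```python
def cohens_kappa(cm):
    """Cohen's kappa for an n x n agreement table between two raters
    (rows = rater 1's categories, cols = rater 2's categories)."""
    total = sum(sum(row) for row in cm)
    p_observed = sum(cm[i][i] for i in range(len(cm))) / total
    row_marg = [sum(r) / total for r in cm]
    col_marg = [sum(c) / total for c in zip(*cm)]
    # agreement expected by chance from the two raters' marginal rates
    p_expected = sum(r * c for r, c in zip(row_marg, col_marg))
    return (p_observed - p_expected) / (1 - p_expected)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa = 0; the 0.17-0.53 range above therefore corresponds to poor-to-moderate agreement beyond chance.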

  18. The Accuracy of Multiples

    Stauropoulos Antonios

    2011-01-01

    Problem statement: Equity valuation using multiples is widely employed by academics and practitioners on account of its practicality. This study aims to explore the sensitivity of three multiples in terms of accuracy. Approach: The Price-to-Sales (P/S) multiple, the price-to-book value of equity (P/B) multiple and the Price-to-Earnings (P/E) multiple are the three multiples under consideration, using both current and one-year-ahead earnings forecasts. Results: The empirical results show that the multiples P/mdfy1 and P/mnfy1 are effective in terms of accuracy, with their means being negatively biased and their medians positively biased. Finally, current earnings are identified as the more appropriate value driver for the calculation of the P/E ratio in terms of accuracy. The results can be considered reliable owing to the large sample and the procedure followed for its selection. Conclusion: This study offers a better understanding of the valuation approach through the use of multiples, so that analysts' assumptions can be more carefully and properly chosen and their results more accurately produced.

  19. Overlay accuracy fundamentals

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly on the basis of random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget, however, attention is shifting to the interaction of the overlay mark with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  20. Accuracy of tablet splitting.

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693
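The headline figures (41.3% of portions deviating from ideal weight by more than 10%, 12.4% by more than 20%) follow from a simple relative-deviation count over the weighed portions. A sketch with hypothetical portion weights, not the study's data:

```python
def deviation_fractions(weights_mg, ideal_mg, thresholds=(0.10, 0.20)):
    """Fraction of split-tablet portions whose weight deviates from the
    ideal (half-tablet) weight by more than each relative threshold."""
    result = {}
    for t in thresholds:
        n_over = sum(abs(w - ideal_mg) / ideal_mg > t for w in weights_mg)
        result[t] = n_over / len(weights_mg)
    return result

# Hypothetical portion weights (mg) for a 25 mg tablet split in half (ideal = 12.5 mg)
portions = [12.4, 11.0, 14.3, 12.6, 9.6, 12.5, 13.9, 12.2]
print(deviation_fractions(portions, 12.5))  # → {0.1: 0.5, 0.2: 0.125}
```

Here 4 of 8 toy portions miss the ideal weight by more than 10% and one by more than 20%, illustrating why hand-split tablets can drift outside a clinically acceptable dose range.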

  1. When Can Subscores Be Expected to Have Added Value? Results from Operational and Simulated Data. Research Report. ETS RR-10-16

    Sinharay, Sandip

    2010-01-01

    Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman (2008) suggested a method based on classical test theory to determine whether subscores have added value over total scores. This paper provides a literature review and reports when subscores were found to have added value for…

  2. Reticence, Accuracy and Efficacy

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions miss an important point: that reticence is a matter not only of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  3. Accuracy and repeatability of quantitative fluoroscopy for the measurement of sagittal plane translation and finite centre of rotation in the lumbar spine.

    Breen, Alexander; Breen, Alan

    2016-07-01

    Quantitative fluoroscopy (QF) was developed to measure intervertebral mechanics in vivo and has been found to have high repeatability and accuracy for the measurement of intervertebral rotations. However, sagittal plane translation and finite centre of rotation (FCR) are potential measures of stability but have not yet been fully validated for current QF. This study investigated the repeatability and accuracy of QF for measuring these variables. Repeatability was assessed from L2-S1 in 20 human volunteers. Accuracy was investigated using 10 consecutive measurements from each of two pairs of linked and instrumented dry human vertebrae as reference; one which tilted without translation and one which translated without tilt. Intra- and inter-observer repeatability for translation was 1.1 mm or less (SEM) with fair to substantial reliability (ICC 0.533-0.998). Intra-observer repeatability of FCR location for intervertebral rotations of 5° and above ranged from 1.5 mm to 1.8 mm (SEM) with moderate to substantial reliability (ICC 0.626-0.988). Inter-observer repeatability for FCR ranged from 1.2 mm to 5.7 mm, also with moderate to substantial reliability (ICC 0.621-0.878). Reliability was substantial (ICC>0.81) for 10/16 measures for translation and 5/8 for FCR location. Accuracy for translation was 0.1 mm (fixed centre) and 2.2 mm (moveable centre), with an FCR error of 0.3 mm (x) and 0.4 mm (y) (fixed centre). This technology was found to have a high level of accuracy and, with a few exceptions, moderate to substantial repeatability for the measurement of translation and FCR from fluoroscopic motion sequences. PMID:27129784
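The SEM values quoted alongside the ICCs above are linked by the standard psychometric identity SEM = SD·sqrt(1 − ICC). A sketch of that relation; the numbers below are illustrative, not taken from the study:

```python
import math

def sem(between_subject_sd, icc):
    """Standard error of measurement implied by a reliability coefficient:
    SEM = SD * sqrt(1 - ICC)."""
    return between_subject_sd * math.sqrt(1.0 - icc)

# Illustrative: a between-subject SD of 2.0 mm with ICC = 0.75 implies SEM = 1.0 mm,
# i.e. a quarter of the variance is measurement error.
print(sem(2.0, 0.75))  # → 1.0
```

This is why a high ICC can still coexist with a clinically meaningful SEM when the between-subject spread is large.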

  4. ACCURACY AND FLUENCY IN COMMUNICATIVE LANGUAGE TEACHING

    2000-01-01

    I. Introduction In English language teaching, at whatever level, teachers feel it very important to focus on accuracy and fluency in a pedagogic way. It is now widely accepted that neither of them should be focused on alone all the way through the teaching process. From our teaching experience, we can see that to some extent this is true.

  5. Particle distributions in approximately 10^13 - 10^16 eV air shower cores at mountain altitude and comparison with Monte Carlo simulations

    Ash, A. G.

    1985-01-01

    Photographs of 521 shower cores in an array of current-limited spark (discharge) chambers at Sacramento Peak (2900 m above sea level, 730 g/sq cm), New Mexico, U.S.A., have been analyzed and the results compared with similar data from Leeds (80 m above sea level, 1020 g/sq cm). It was found that the central density differential spectrum is consistent with a power-law index of -2 up to approx. 1500/sq m, where it steepens, and that shower cores become flatter on average with increasing size. Scaling-model predictions for proton primaries with an approximately E^-2.71 energy spectrum account well for the altitude dependence of the data at lower densities. However, deviations at higher densities indicate a change in hadron interaction characteristics between approximately a few x 10^14 and 10^15 eV primary energy, causing particles close to the shower axis to be spread further out.

  6. Astrophysics with Microarcsecond Accuracy Astrometry

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  7. Current Concept of Geometrical Accuracy

    Görög Augustín; Görögová Ingrid

    2014-01-01

    As part of the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts. They will use contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the fiel...

  8. Back-propagation of accuracy

    Senashova, M. Yu.; Gorban, A. N.; Wunsch II, D. C.

    2003-01-01

    In this paper we solve the following problem: how can one determine the maximal allowable errors, possible for signals and parameters of each element of a network, proceeding from the condition that the vector of output signals of the network should be calculated with given accuracy? "Back-propagation of accuracy" is developed to solve this problem. The calculation of allowable errors for each element of the network by back-propagation of accuracy is surprisingly similar to back-propagation of error, because it is...

  9. Observer accuracy in reading chest films

    Four board-certified radiologists have read and reread four groups of 40 chest radiographs containing nodule and infiltrate images to investigate interreader and intrareader variation. Responses were recorded using a six-point confidence-level scale. Accuracy was determined from the area under the receiver operating characteristic (ROC) curve. Accuracies ranged from 0.78 to 0.98, with an average of 0.83 and a mean uncertainty of 8.2%. Intraobserver uncertainties varied from 0.5% to 16%, with a mean of 5.9%. The data were analyzed for the significance of accuracy differences using correlated ROC techniques, the kappa statistic, and the Bonferroni criteria. Implications for using reader performance as a recertification measure are discussed
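Accuracy from confidence-rated readings, as in this study, is the area under the ROC curve; the nonparametric (Mann-Whitney) estimate is the probability that a randomly chosen positive case receives a higher rating than a randomly chosen negative one. A sketch with hypothetical six-point ratings, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """Nonparametric AUC: fraction of (positive, negative) pairs in which the
    positive case gets the higher confidence rating; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical six-point ratings (1 = definitely absent, 6 = definitely present)
with_nodule = [6, 5, 5, 4, 3]
without_nodule = [1, 2, 4, 2, 3]
print(auc(with_nodule, without_nodule))  # → 0.92
```

An AUC of 0.92 on these toy ratings would sit inside the 0.78 to 0.98 range the study reports for its readers.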

  10. Achievable precision and accuracy of dose determinations from routine dosemeters

    The concepts of accuracy and precision as associated with dose determinations from routine dosemeters are analyzed. The factors which are most important when considering the accuracy of such measurements are then discussed. These include environmental conditions such as humidity, temperature, dose rate and time since irradiation. Some examples are presented. It is concluded that precision (reproducibility under identical irradiation conditions) can be ± 2% at the 95% confidence level. The corresponding accuracy should be no worse than ± 5%. (U.K.)

  11. Diagnostic Accuracy of Procalcitonin in Bacterial Meningitis Versus Nonbacterial Meningitis

    Wei, Ting-Ting; Hu, Zhi-De; Qin, Bao-Dong; Ma, Ning; Tang, Qing-Qin; Wang, Li-li; ZHOU, Lin; Zhong, Ren-Qian

    2016-01-01

    Abstract Several studies have investigated the diagnostic accuracy of procalcitonin (PCT) levels in blood or cerebrospinal fluid (CSF) in bacterial meningitis (BM), but the results were heterogeneous. The aim of the present study was to ascertain the diagnostic accuracy of PCT as a marker for BM detection. A systematic search of the EMBASE, Scopus, Web of Science, and PubMed databases was performed to identify studies published before December 7, 2015 investigating the diagnostic accuracy of ...

  12. Hypersecretion of the alpha-subunit in clinically non-functioning pituitary adenomas: Diagnostic accuracy is improved by adding alpha-subunit/gonadotropin ratio to levels of alpha-subunit

    Andersen, Marianne; Ganc-Petersen, Joanna; Jørgensen, Jens O L;

    2010-01-01

    In vitro, the majority of clinically non-functioning pituitary adenomas (NFPAs) produce gonadotropins or their alpha-subunit; however, in vivo, measurements of alpha-subunit levels may not accurately detect the hypersecretion of the alpha-subunit.

  13. Hypersecretion of the alpha-subunit in clinically non-functioning pituitary adenomas: Diagnostic accuracy is improved by adding alpha-subunit/gonadotropin ratio to levels of alpha-subunit

    Andersen, Marianne; Ganc-Petersen, Joanna; Jørgensen, Jens Otto Lunde;

    2010-01-01

    when the alpha-ratios, rather than simply the alpha-subunit levels, were measured in patients with NFPAs. MATERIAL AND METHODS: Reference intervals for gonadotropin alpha-subunit serum levels and alpha-ratios were established in 231 healthy adults. The estimated cut-off limits were applied to 37 patients with NFPAs. Gonadotropin alpha-subunit, LH and FSH levels were measured and alpha-ratios were calculated. RESULTS: In healthy adults, the cut-offs for alpha-subunit levels were significantly different between men and pre- and postmenopausal women: the cut-offs were 1.10, 0.48 and 3.76 IU/l, respectively. Using these estimated cut-offs, increased alpha-subunit levels were identified in 10 out of 37 (27%) patients with NFPAs. By adding the alpha-ratio, in combination with alpha-subunit levels, 23 patients out of 37 (62%) were identified as having elevated alpha-subunit hypersecretion, and 22 out of...

  14. Kalendar nedeli : 10 - 16 maja / Natalja Lesnaja

    Lesnaja, Natalja

    1995-01-01

    From the contents: 10 May marks 235 years since the birth of Claude Joseph Rouget de Lisle (1760 - 1836); 13 May, 155 years since the birth of Alphonse Daudet (1840 - 1897); 15 May, 105 years since the birth of Katherine Anne Porter (1890 - 1980); 16 May, 85 years since the birth of Olga Fyodorovna Berggolts (1910 - 1975)

  15. 19 CFR 10.16 - Assembly abroad.

    2010-04-01

    ... electrolytic capacitor is assembled abroad from American-made aluminum foil, paper, tape, and Mylar film. In... fastened to the surface to prevent these components from unwinding. Wire or other electric connectors are... can, and the ends closed with a protective washer. As imported, the capacitor is subject to the...

  16. Is increasing complexity of algorithms the price for higher accuracy? virtual comparison of three algorithms for tertiary level management of chronic cough in people living with HIV in a low-income country

    Mukabatsinda Constance

    2012-01-01

    Abstract Background The algorithmic approach to guidelines has been introduced and promoted on a large scale since the 1970s. This study aims at comparing the performance of three algorithms for the management of chronic cough in patients with HIV infection, and at reassessing the current position of algorithmic guidelines in clinical decision making through an analysis of accuracy, harm and complexity. Methods Data were collected at the University Hospital of Kigali (CHUK) in a total of 201 HIV-positive hospitalised patients with chronic cough. We simulated management of each patient following the three algorithms. The first was locally tailored by clinicians from CHUK; the second and third were drawn from publications by Médecins sans Frontières (MSF) and the World Health Organisation (WHO). Semantic analysis techniques known as Clinical Algorithm Nosology were used to compare them in terms of complexity and similarity. For each of them, we assessed the sensitivity, delay to diagnosis and hypothetical harm of false positives and false negatives. Results The principal diagnoses were tuberculosis (21%) and pneumocystosis (19%). Sensitivity, representing the proportion of correct diagnoses made by each algorithm, was 95.7%, 88% and 70% for CHUK, MSF and WHO, respectively. Mean time to appropriate management was 1.86 days for CHUK and 3.46 days for the MSF algorithm. The CHUK algorithm was the most complex, followed by MSF and WHO. Total harm was by far the highest for the WHO algorithm, followed by MSF and CHUK. Conclusions This study confirms our hypothesis that sensitivity and patient safety (i.e. less expected harm) are proportional to the complexity of algorithms, though increased complexity may make them difficult to use in practice.

  17. Developing a Weighted Measure of Speech Sound Accuracy

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  18. Diagnostic accuracy in virtual dermatopathology

    Mooney, E.; Kempf, W.; Jemec, G.B.E.;

    2012-01-01

    ...diagnostic accuracy of dermatopathologists and pathologists using photomicrographs vs. digitized images, through a self-assessment examination, and to elucidate assessment of virtual dermatopathology. Methods Forty-five dermatopathologists and pathologists received a randomized combination of 15 virtual... slides and photomicrographs with corresponding clinical photographs and information in a self-assessment examination format. Descriptive data analysis and comparison of groups were performed using a chi-square test. Results Diagnostic accuracy in dermatopathology using virtual dermatopathology or...

  19. Field Accuracy Test of Rpas Photogrammetry

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data, to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 Ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS-coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude of 90 m above ground level. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 Ha.
This finding has shown
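The "95% reliably within 41 mm / 68 mm" figures correspond to a 95th-percentile bound on the check-point residuals (photogrammetry coordinate minus RTK GPS coordinate). A sketch using the nearest-rank convention, one of several percentile conventions; the residuals below are hypothetical, not the survey's data:

```python
import math

def percentile_95(residuals_mm):
    """95th-percentile absolute check-point residual (nearest-rank convention):
    the bound within which 95% of residuals fall."""
    s = sorted(abs(e) for e in residuals_mm)
    k = math.ceil(0.95 * len(s)) - 1  # 1-based nearest rank, converted to 0-based index
    return s[k]

# Hypothetical horizontal residuals (mm) at 20 check points
residuals = [5, -12, 8, 30, -41, 15, 22, -7, 18, 3,
             25, -33, 11, 9, -16, 28, 36, 14, -20, 6]
print(percentile_95(residuals))  # → 36
```

Computed separately for the horizontal and vertical components over all check points, this yields accuracy statements of exactly the form quoted above.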

  20. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Francisco J Valverde-Albacete

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.

  1. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Valverde-Albacete, Francisco J; Peláez-Moreno, Carmen

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282
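The EMA and NIT factor are built from the entropies of the classifier's contingency matrix. The building blocks can be sketched as below; note this computes plain mutual information and the ratio I(X;Y)/H(X) as an illustrative proxy for information transfer, not the paper's exact EMA/NIT definitions:

```python
import math

def entropies(cm):
    """Input, output and joint entropies (bits) of a contingency matrix,
    where cm[i][j] = count of true class i classified as class j."""
    n = sum(sum(row) for row in cm)
    h = lambda ps: -sum(p * math.log2(p) for p in ps if p > 0)
    px = [sum(row) / n for row in cm]                                   # true-class marginal
    py = [sum(cm[i][j] for i in range(len(cm))) / n for j in range(len(cm[0]))]  # output marginal
    pxy = [cm[i][j] / n for i in range(len(cm)) for j in range(len(cm[0]))]      # joint
    return h(px), h(py), h(pxy)

def info_transfer_ratio(cm):
    """Illustrative proxy: mutual information I(X;Y) over input entropy H(X)."""
    hx, hy, hxy = entropies(cm)
    return (hx + hy - hxy) / hx

# A perfectly informative 2-class classifier vs. one that ignores its input
print(info_transfer_ratio([[5, 0], [0, 5]]))  # → 1.0 (full transfer)
print(info_transfer_ratio([[5, 5], [5, 5]]))  # → 0.0 (no transfer)
```

The second matrix has 50% accuracy but transmits zero information; conversely, a majority-class "cheater" on skewed data can score high accuracy while this ratio stays near zero, which is the paradox the paper targets.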

  2. Accuracy in x-ray reflectivity analysis

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Sormunen, J; Lipsanen, H [Micro and Nanosciences Laboratory, Helsinki University of Technology, Micronova, PO Box 3500, FI-02015 TKK (Finland)

    2007-12-07

    The influence of Poisson noise on the accuracy of x-ray reflectivity analysis is studied with an aluminium oxide (AlO) layer on silicon. A null hypothesis which argues that other than the exact solution gives the best fitness is examined with a statistical p-value test using a significance level of α = 0.01. Simulations are performed for a fit instead of a measurement since the exact error caused by noise cannot be determined from the measurement. The p-value is studied by comparing trial curves to 1000 'measurements', each of them including synthetic Poisson noise. Confidence limits for the parameters of Parratt's formalism and the Nevot-Croce approximation are determined in (mass density, surface roughness), (thickness, surface roughness) and (thickness, mass density) planes. The most significant result is that the thickness determination accuracy of AlO is approximately ±0.09 nm, but the accuracy is better for materials having higher mass density. It is also shown that the accuracy of mass density determination can be significantly improved using a suitably designed fitness measure. Although the power of the presented method is demonstrated only in one case, it can be used in any parameter region for a plethora of single layer systems to find the lower limit of the error made in x-ray reflectivity analysis.

  3. Accuracy in x-ray reflectivity analysis

    The influence of Poisson noise on the accuracy of x-ray reflectivity analysis is studied with an aluminium oxide (AlO) layer on silicon. A null hypothesis which argues that other than the exact solution gives the best fitness is examined with a statistical p-value test using a significance level of α = 0.01. Simulations are performed for a fit instead of a measurement since the exact error caused by noise cannot be determined from the measurement. The p-value is studied by comparing trial curves to 1000 'measurements', each of them including synthetic Poisson noise. Confidence limits for the parameters of Parratt's formalism and the Nevot-Croce approximation are determined in (mass density, surface roughness), (thickness, surface roughness) and (thickness, mass density) planes. The most significant result is that the thickness determination accuracy of AlO is approximately ±0.09 nm, but the accuracy is better for materials having higher mass density. It is also shown that the accuracy of mass density determination can be significantly improved using a suitably designed fitness measure. Although the power of the presented method is demonstrated only in one case, it can be used in any parameter region for a plethora of single layer systems to find the lower limit of the error made in x-ray reflectivity analysis.

  4. Controlling the accuracy of chemical analysis

    Most of the IAEA reference materials are certified in intercomparisons by calculation of the overall mean of reported laboratory mean values. IAEA certification is provided at "A level" (satisfactory, or high degree of confidence) or at "B level" (acceptable, or reasonable degree of confidence). Careful sampling, storage and preliminary processing, the use of reliable analytical methods, and internal and external control of accuracy and reliability result in excellent certified reference materials for inorganic, geologic, environmental, biological and other quantitative analysis by means of conventional and nuclear methods. 34 refs, 4 figs, 3 tabs

  5. The accuracy assessment in areal interpolation: An empirical investigation

    2008-01-01

    Areal interpolation is the process of transferring data from source zones to target zones. While method development remains a top research priority in areal interpolation, the accuracy assessment aspect also begs for attention. This paper reports an empirical experience of probing an areal interpolation method to highlight the power and potential pitfalls of accuracy assessment. A kriging-based interpolation algorithm is evaluated by several approaches. It is found that accuracy assessment is a powerful tool for understanding an interpolation method, e.g. the utility of ancillary data and semi-variogram modeling in kriging in our case study. However, different assessment methods and the spatial units on which assessment is conducted can lead to rather different results. The typical practice of assessing accuracy at the source zone level may overestimate interpolation accuracy. Assessment at the target zone level is suggested as a supplement.

  6. Current Concept of Geometrical Accuracy

    Görög Augustín

    2014-06-01

    As part of the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts. They will use contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the field of geometric tolerance. It is necessary to bring an overview of the basic concepts and definitions in the field. This will prevent the use of outdated and invalidated terms and definitions in the field. The knowledge presented in the contribution will provide a new perspective on measurement, which will be evaluated according to the current standards.

  7. Current Concept of Geometrical Accuracy

    Görög, Augustín; Görögová, Ingrid

    2014-06-01

    Within the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts, using contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the field of geometric tolerancing and to provide an overview of the basic concepts and definitions in the field, preventing the use of outdated and invalidated terms and definitions. The knowledge presented in the contribution will provide a new perspective on measurements evaluated according to the current standards.

  8. Accuracy of Occupational Stereotypes of Grade-Twelve Boys

    Banducci, Raymond

    1970-01-01

    The results of the study on the accuracy of occupational stereotypes of grade-twelve boys showed that students with high academic development had more accurate stereotypes of high-level jobs than of low-level jobs, and those with low academic development and low socioeconomic status had more accurate stereotypes of low-level jobs. (Author)

  9. Improving Speaking Accuracy through Awareness

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  10. Radioactivity analysis of food and accuracy control

    Following the detection of radioactive substances in foods such as agricultural, livestock and marine products after the accident at the Fukushima Daiichi Nuclear Power Station of Tokyo Electric Power Company, the Ministry of Health, Labour and Welfare established new standards for radioactive cesium in general foods, replacing the previous interim standards. Various institutions began to measure radioactivity on the basis of this instruction, but a new challenge arose: the reliability of the data. Therefore, accuracy control that objectively demonstrates that the quality of the data is maintained at an appropriate level is important. In order to implement quality management activities continuously, each inspection agency needs to build an accuracy control system. This paper introduces a support service, as a new attempt, for establishing such an accuracy control system. The service is offered jointly by three organizations: TUV Rheinland Japan Ltd., Japan Frozen Foods Inspection Corporation, and the Japan Chemical Analysis Center. It consists of training for radioactivity measurement practitioners, a proficiency test for radioactive substance measurement, and personal certification. (O.A.)

  11. AMR, stability and higher accuracy

    Efforts to achieve better accuracy in numerical relativity have so far focused either on implementing second-order accurate adaptive mesh refinement or on defining higher order accurate differences and update schemes. Here, we argue for the combination, that is a higher order accurate adaptive scheme. This combines the power that adaptive gridding techniques provide to resolve fine scales (in addition to a more efficient use of resources) together with the higher accuracy furnished by higher order schemes when the solution is adequately resolved. To define a convenient higher order adaptive mesh refinement scheme, we discuss a few different modifications of the standard, second-order accurate approach of Berger and Oliger. Applying each of these methods to a simple model problem, we find these options have unstable modes. However, a novel approach to dealing with the grid boundaries introduced by the adaptivity appears stable and quite promising for the use of high order operators within an adaptive framework.

  12. AMR, stability and higher accuracy

    Lehner, Luis [Department of Physics and Astronomy, Louisiana State University, 202 Nicholson Hall, Baton Rouge, Louisiana 70803-4001 (United States); Liebling, Steven L [Department of Physics, Long Island University-C W Post Campus, Brookville, New York 11548 (United States); Reula, Oscar [FaMAF, Universidad Nacional de Cordoba, Cordoba, 5000 (Argentina)

    2006-08-21

    Efforts to achieve better accuracy in numerical relativity have so far focused either on implementing second-order accurate adaptive mesh refinement or on defining higher order accurate differences and update schemes. Here, we argue for the combination, that is a higher order accurate adaptive scheme. This combines the power that adaptive gridding techniques provide to resolve fine scales (in addition to a more efficient use of resources) together with the higher accuracy furnished by higher order schemes when the solution is adequately resolved. To define a convenient higher order adaptive mesh refinement scheme, we discuss a few different modifications of the standard, second-order accurate approach of Berger and Oliger. Applying each of these methods to a simple model problem, we find these options have unstable modes. However, a novel approach to dealing with the grid boundaries introduced by the adaptivity appears stable and quite promising for the use of high order operators within an adaptive framework.

  13. GPS kinematics measurements accuracy testing

    Miroslav Šimčák; Vladimír Sedlák; Gabriela Nemcová

    2007-01-01

    In the paper, the accuracy of GPS kinematic measurements is analyzed. The GPS (Global Positioning System) receivers Stratus (Sokkia) and ProMark2 (Ashtech) were tested. Testing was realized on points of the geodetic network at the testing station Badín, stabilized in the Central Slovak Region near Banská Bystrica. Of the kinematic GPS methods, the semi-kinematic "stop and go" method was used. Terrestrial geodetic measurements by means of the Nikon 352 total station were also reali...

  14. Accuracy of the geodetic plans

    Kekec, Tomislav

    2011-01-01

    This thesis investigates whether it is possible to give an overall assessment of the accuracy of a geodetic plan. The introductory part presents the legal basis of the geodetic plan and its definition under the Regulations of Land Survey Maps and Topographic Key. The content of the geodetic plan and a description of the geodetic and surveying information sources that form its graphical part are introduced in the second part. This part also describes the types of land survey plans, as the...

  15. Municipal water consumption forecast accuracy

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies of other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
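    The two descriptive metrics named above are straightforward to reproduce. The sketch below (with made-up consumption figures, not the study's data) scores a hypothetical model forecast and a naive random-walk benchmark by root-mean-square error and by Theil's U1 inequality coefficient:

```python
import numpy as np

def rmse(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def theil_u1(actual, forecast):
    """Theil's U1 inequality coefficient: 0 = perfect forecast, 1 = worst."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return rmse(actual, forecast) / (
        np.sqrt(np.mean(actual ** 2)) + np.sqrt(np.mean(forecast ** 2)))

# Hypothetical annual consumption (million gallons) and two forecasts.
actual    = [102.0, 105.0, 103.0, 108.0, 110.0]
model     = [101.0, 104.0, 105.0, 107.0, 111.0]
rand_walk = [100.0, 102.0, 105.0, 103.0, 108.0]  # previous year's value

print(rmse(actual, model), rmse(actual, rand_walk))
print(theil_u1(actual, model))
```

A model forecast is only worth using here if its RMSE beats the random-walk benchmark; the error-differential regression F tests mentioned in the abstract formalize that comparison.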

  16. Do Shared Interests Affect the Accuracy of Budgets?

    Ilse Maria Beuren; Franciele Beck; Fabiane Popik

    2015-01-01

    The creation of budgetary slack is a phenomenon associated with various behavioral aspects. This study focuses on accuracy in budgeting when the benefit of the slack is shared between the unit manager and his/her assistant. In this study, accuracy is measured by the level of slack in the budget, and the benefit of slack represents a financial consideration for the manager and the assistant. The study aims to test how shared interests in budgetary slack affect the accuracy of budget reports in...

  17. Precise Computation of Position Accuracy in GNSS Systems

    Garrido, Juan Pablo Boyero

    2011-01-01

    Accuracy and availability computations for a GNSS system, or a combination of systems, through service volume simulations take considerable time. Therefore, the computation of accuracy in 2D and 3D is often simplified by an approximate solution. The drawback is that such simplifications can lead to accuracy results that are too conservative (up to 25% in the 2D case and up to 43% in the 3D case, for a 95% confidence level), which in turn translates into pessimistic system availability. T...
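    A quick way to see how an approximate 2D accuracy figure can be conservative (our own Monte Carlo sketch; the covariance values are hypothetical and the simulators' exact approximation may differ) is to compare the empirical 95% horizontal error radius of an elongated Gaussian error ellipse with the common 2DRMS rule of thumb:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical horizontal position-error standard deviations (metres),
# deliberately elongated (sigma_x != sigma_y).
sigma_x, sigma_y = 3.0, 1.0
errors = rng.normal(size=(200_000, 2)) * np.array([sigma_x, sigma_y])
radii = np.hypot(errors[:, 0], errors[:, 1])

# "Exact" 95% horizontal accuracy from the empirical radial distribution:
r95_exact = float(np.quantile(radii, 0.95))

# Common conservative shortcut: 2DRMS = 2 * sqrt(sigma_x^2 + sigma_y^2),
# which over-states the 95% radius for elongated error ellipses.
r95_approx = 2.0 * float(np.hypot(sigma_x, sigma_y))

print(r95_exact, r95_approx)
```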

  18. Required accuracy and dose thresholds in individual monitoring

    Christensen, P.; Griffith, R.V.

    The paper follows the approach given in recent revisions of CEC and IAEA recommendations on requirements in individual monitoring for external radiations. The ICRP requirements on overall accuracy for individual monitoring, as given in ICRP Publication 35 (1982), form the basis for the specification of detailed accuracy requirements which are needed in practical routine monitoring. The ICRP overall accuracy requirement is defined as an allowable maximum uncertainty factor at the 95% confidence level for a single measurement of the relevant dose quantity, i.e. H(p)(10) and H(p)(0.07).

  19. Speed accuracy trade-off under response deadlines

    Balcı, Fuat; Karşılar, Hakan; Simen, Patrick; Papadakis, Samantha

    2014-01-01

    The majority of two-alternative forced choice (2AFC) psychophysics studies have examined speed-accuracy trade-offs either in free-response or fixed viewing time paradigms with no hard time constraints on responding. Under response deadlines, reward maximization requires participants to modulate decision thresholds over the course of a trial such that when the deadline arrives a response is ensured, despite the possible reduction of accuracy to the chance level. Importantly, this no...

  20. On the accuracy of judgmental interventions on forecasting support systems

    Nikolopoulos, K.; Lawrence, M.; Goodwin, P.; Fildes, R.A.

    2005-01-01

    Forecasting at the Stock Keeping Unit (SKU) disaggregate level in order to support operations management has proved a very difficult task. The levels of accuracy achieved have major consequences for companies at all levels in the supply chain; errors at each stage are amplified, resulting in poor service and overly high inventory levels. In most companies, the size and complexity of the forecasting task necessitates the use of Forecasting Support Systems (FSS). The present study examines month...

  1. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10(-6)°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408

  2. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Qingzhong Cai

    2016-06-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  3. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs. PMID:27338408

  4. Required accuracy and dose thresholds in individual monitoring

    Christensen, P.; Griffith, R.V.

    1994-01-01

    The paper follows the approach given in recent revisions of CEC and IAEA recommendations on requirements in individual monitoring for external radiations. The ICRP requirements on overall accuracy for individual monitoring, as given in ICRP Publication 35 (1982), form the basis for the specification of detailed accuracy requirements which are needed in practical routine monitoring. The ICRP overall accuracy requirement is defined as an allowable maximum uncertainty factor at the 95% confidence level for a single measurement of the relevant dose quantity, i.e. H(p)(10) and H(p)(0.07). From this uncertainty factor, a value of 21% can be evaluated for the allowable maximum overall standard deviation for dose measurements at dose levels near the annual dose limits, increasing to 45% for dose levels at the lower end of the dose range required to be monitored. A method is described for...
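    One plausible reading of where the quoted 21% can come from (our assumption; the paper's own derivation may differ in detail): take the ICRP uncertainty factor near the annual dose limits to be F = 1.5 at the 95% confidence level and assume a lognormal measurement-error distribution, so that

```latex
\sigma_{\ln} \;=\; \frac{\ln F}{1.96} \;=\; \frac{\ln 1.5}{1.96} \;\approx\; 0.207,
```

i.e. an allowable relative standard deviation of about 21% for doses near the limits.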

  5. Accuracy of the blood pressure measurement.

    Rabbia, F; Del Colle, S; Testa, E; Naso, D; Veglio, F

    2006-08-01

    Blood pressure measurement is the cornerstone of the diagnosis, the treatment and the research on arterial hypertension, and decisions about any one of these aspects may be dramatically influenced by the accuracy of the measurement. Over the past 20 years or so, the accuracy of the conventional Riva-Rocci/Korotkoff technique of blood pressure measurement has been questioned and efforts have been made to improve the technique with automated devices. In the same period, recognition of the phenomenon of white coat hypertension, whereby some individuals with an apparent increase in blood pressure have normal, or reduced, blood pressures when measurement is repeated away from the medical environment, has focused attention on methods of measurement that provide profiles of blood pressure behavior rather than relying on isolated measurements under circumstances that may in themselves influence the level of blood pressure recorded. These methodologies have included repeated measurements of blood pressure using the traditional technique, self-measurement of blood pressure in the home or work place, and ambulatory blood pressure measurement using innovative automated devices. The purpose of this review is to serve as a source of practical information about the commonly used methods for blood pressure measurement: the traditional Riva-Rocci method and the automated methods. PMID:17016412

  6. Curation accuracy of model organism databases.

    Keseler, Ingrid M; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C; Mladinich, Katherine M; Chow, Edmond D; Sherlock, Gavin; Karp, Peter D

    2014-01-01

    Manual extraction of information from the biomedical literature, or biocuration, is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org// PMID:24923819

  7. The IBIS / ISGRI Source Location Accuracy

    Gros, A; Soldi, S; Gotz, D; Caballero, I; Mattana, F; Heras, J A Zurita

    2013-01-01

    We present here results on the source location accuracy of the INTEGRAL IBIS/ISGRI coded mask telescope, based on ten years of INTEGRAL data and on recent developments in the data analysis procedures. Data were selected and processed with the new Off-line Scientific Analysis pipeline (OSA10.0), which benefits from the most accurate background corrections and the best-performing coding-noise cleaning and sky reconstruction algorithms available. We obtained updated parameters for evaluating the point source location error from the source signal-to-noise ratio. These results are compared to previous estimates and to theoretical expectations. Thanks to a new fitting procedure, the typical error at the 90% confidence level for a source at a signal-to-noise ratio of 10 is now estimated to be 1.5 arcmin. Prospects for future analysis of the Point Spread Function fitting procedure and of the evaluation of residual biases are also presented. The new consolidated parameters describing the source location accuracy that will...

  8. Enhancing Accuracy of Plant Leaf Classification Techniques

    C. S. Sumathi

    2014-03-01

    Plants have become an important source of energy and are a fundamental piece in the puzzle to solve the problem of global warming. Living beings also depend on plants for their food, hence it is of great importance to know about the plants growing around us and to preserve them. Automatic plant leaf classification is widely researched. This paper investigates the efficiency of MLP learning algorithms for plant leaf classification. Incremental back propagation, Levenberg-Marquardt and batch propagation learning algorithms are investigated. Plant leaf images are examined using three different Multi-Layer Perceptron (MLP) modelling techniques. Back propagation done in batch manner increases the accuracy of plant leaf classification. Results reveal that batch training is faster and more accurate than MLP with incremental training and Levenberg-Marquardt based learning for plant leaf classification. Various levels of semi-batch training used on 9 species with 15 samples each, a total of 135 instances, show a roughly linear increase in classification accuracy.

  9. Narrow-width approximation accuracy

    A study of general properties of the narrow-width approximation (NWA) with polarization/spin decorrelation is presented. We prove, for sufficiently inclusive differential rates of arbitrary resonant decay or scattering processes with an on-shell intermediate state decaying via a cubic or quartic vertex, that decorrelation effects vanish and the NWA is of order Γ. Its accuracy is then determined numerically for all resonant 3-body decays involving scalars, spin-1/2 fermions or vector bosons. We specialize the general results to MSSM benchmark scenarios. Significant off-shell corrections can occur, similar in size to QCD corrections. We qualify the configurations in which a combined consideration is advisable. For this purpose, we also investigate process-independent methods to improve the NWA.
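    For context, the order-Γ accuracy statement rests on the textbook narrow-width limit of the Breit-Wigner propagator (a standard relation, not specific to this record):

```latex
\frac{1}{\left(q^{2}-M^{2}\right)^{2}+M^{2}\Gamma^{2}}
\;\xrightarrow{\;\Gamma/M\,\to\,0\;}\;
\frac{\pi}{M\Gamma}\,\delta\!\left(q^{2}-M^{2}\right),
```

so that the full rate factorizes as the production cross section times the branching ratio, with relative corrections of order Γ/M; these are the off-shell effects whose size the study quantifies.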

  10. Increasing Accuracy in Environmental Measurements

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  11. Accuracy - a market in radiotherapy. Reasons, requirements, clinical practice

    Accuracy requirements are specified in accordance with survival curves drawn up on the basis of clinical experience and data. Sigmoidal dose-response curves are established with the aid of the survival curves, giving information on tumour decline and the radiation effects induced in patients. In accordance with the ICRU report of 1984, Quality Assurance of External Beam Therapy, accuracy verification takes into account the two different criteria of tolerance level and action level. The overall dosimetric uncertainty is to be kept below 8%. (DG)

  12. Data accuracy assessment using enterprise architecture

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  13. Investigation of child abuse in students aged 10-16 years in Zhuang nationality areas of Guangxi

    Liu, Ping; Li, Chunling; Song, Yuan; Cai, Qiuling

    2014-01-01

    Objective: To understand the prevalence of child abuse among students aged 10-16 years in Zhuang nationality areas of Guangxi, and to provide a reference for exploring factors associated with child abuse and for developing prevention and intervention measures suited to the local ethnic and cultural background. Methods: One Zhuang-dominated county each was selected from counties at high, middle and low levels of economic development, and a total of 3,936 primary and secondary school students aged 10-16 years were sampled. A general questionnaire collected personal and family information, and child abuse was screened with the Screen Questionnaire of Child Abuse (SQCA). Results: The overall positive screening rate of child abuse (CA) was 29.3%. The rates in the high, middle and low economic level counties were 22.8%, 31.6% and 33.9%, respectively, a statistically significant difference (P<0.01). The rates in urban and rural areas were 30.7% and 27.9%, respectively, with no statistically significant difference (P>0.05). The rates in primary and secondary school students were 41.8% and 17.7%, respectively (P<0.05), and the rates in boys and girls were 30.8% and 27.8%, respectively (P<0.05). Conclusion: The positive screening rate of child abuse among students in Zhuang nationality areas of Guangxi tends to decline with age, and child abuse may be related to the level of regional economic development.

  14. Dosimetric accuracy of a staged radiosurgery treatment

    Cernica, George; de Boer, Steven F.; Diaz, Aidnag; Fenstermaker, Robert A.; Podgorsak, Matthew B.

    2005-05-01

    For large cerebral arteriovenous malformations (AVMs), the efficacy of radiosurgery is limited since the large doses necessary to produce obliteration may increase the risk of radiation necrosis to unacceptable levels. An alternative is to stage the radiosurgery procedure over multiple stages (usually two), effectively irradiating a smaller volume of the AVM nidus with a therapeutic dose during each session. The difference between coordinate systems defined by sequential stereotactic frame placements can be represented by a translation and a rotation. A unique transformation can be determined based on the coordinates of several fiducial markers fixed to the skull and imaged in each stereotactic coordinate system. Using this transformation matrix, isocentre coordinates from the first stage can be displayed in the coordinate system of subsequent stages allowing computation of a combined dose distribution covering the entire AVM. The accuracy of this approach was tested on an anthropomorphic head phantom and was verified dosimetrically. Subtle defects in the phantom were used as control points, and 2 mm diameter steel balls attached to the surface were used as fiducial markers and reference points. CT images (2 mm thick) were acquired. Using a transformation matrix developed with two frame placements, the predicted locations of control and reference points had an average error of 0.6 mm near the fiducial markers and 1.0 mm near the control points. Dose distributions in a staged treatment approach were accurately calculated using the transformation matrix. This approach is simple, fast and accurate. Errors were small and clinically acceptable for Gamma Knife radiosurgery. Accuracy can be improved by reducing the CT slice thickness.
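    The frame-to-frame mapping described above (a rotation plus a translation estimated from matched fiducial markers) can be computed with the standard Kabsch/Procrustes least-squares fit. The sketch below uses hypothetical marker and isocentre coordinates, not data from the study:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t such that Q ~ P @ R.T + t,
    estimated from matched fiducial coordinates (Kabsch / Procrustes fit)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical fiducial markers in the first-stage frame (mm):
P = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.], [0., 0., 100.]])

# Same markers in the second-stage frame: a 5-degree rotation plus a shift.
theta = np.radians(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0.,             0.,            1.]])
t_true = np.array([2.0, -1.5, 0.8])
Q = P @ R_true.T + t_true

R, t = rigid_transform(P, Q)

# Map a first-stage isocentre into the second-stage coordinate system:
iso_stage1 = np.array([40.0, 55.0, 60.0])
iso_stage2 = R @ iso_stage1 + t
```

With real frame placements the marker coordinates carry digitization noise, so the fit is a least-squares estimate rather than exact; the residuals at the markers give a direct check on the sub-millimetre accuracy reported above.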

  15. Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.

    Krause, Jean C; Tessler, Morgan P

    2016-10-01

    Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370

  16. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au [Westmead Breast Cancer Institute, Westmead, New South Wales (Australia); Poulos, Ann E [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, Lidcombe, New South Wales (Australia); Houssami, Nehmat [Screening and Test Evaluation Program, School of Public Health (A27), Sydney Medical School, University of Sydney, Sydney, New South Wales (Australia); Turner, Robin M [School of Public Health and Community Medicine, University of New South Wales, Sydney, New South Wales (Australia); Boyages, John [Macquarie University Cancer Institute, Macquarie University Hospital, Australian School of Advanced Medicine, Macquarie University, Sydney, New South Wales (Australia); Westmead Breast Cancer Institute, Westmead, New South Wales (Australia)

    2015-03-15

This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has been to successfully encourage radiographers to take on the role of one of the two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes from pathology results, interval matching and 6-year client follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and between 74.8% and 96.2%, respectively. Pooled across the radiographers, screen-reader sensitivity was estimated at 82.2% and specificity at 89.5%. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting showed adequate accuracy levels when screen-reading mammograms. It is expected that accuracy levels will improve with formal screen-reading training and that, with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.
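The sensitivity and specificity figures above follow the standard definitions against a gold standard. A minimal sketch, with made-up counts rather than the study's data:

```python
# Hypothetical sketch: per-reader sensitivity and specificity against a
# gold standard, as in the screen-reading study above. Counts are
# illustrative, not taken from the paper.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. a reader who flags 41 of 50 cancers and clears 403 of 450 normals
sens, spec = sensitivity_specificity(tp=41, fn=9, tn=403, fp=47)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```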

  18. Tracking accuracy assessment for concentrator photovoltaic systems

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  19. Accuracy of GIPSY PPP from a denser network

    Gokhan Hayal, Adem; Ugur Sanli, Dogan

    2015-04-01

    Researchers need to know about the accuracy of GPS for the planning of their field surveys, and hence to obtain reliable positions as well as deformation rates. Geophysical applications such as monitoring the development of a fault creep, or crustal motion for global sea-level-rise studies, necessitate the use of continuous GPS, whereas applications such as determining co-seismic displacements where permanent GPS sites are sparsely scattered require the employment of episodic campaigns. Recently, real-time applications of GPS in relation to the early warning of earthquakes and tsunamis have come into focus. Studying the static positioning accuracy of GPS has been of interest to researchers for more than a decade now. Various software packages and modeling strategies have been tested so far, and relative positioning accuracy has been compared with PPP accuracy. For relative positioning, observing session duration and the network geometry of reference stations appear to be the dominant factors in GPS accuracy, whereas observing session duration seems to be the only factor influencing PPP accuracy. We believe that the latest developments concerning the accuracy of static GPS from well-established software will form a basis for the quality of the GPS field work mentioned above, especially for real-time applications, which are referred to more frequently nowadays. To assess GPS accuracy, conventionally a network of some 10 to 30 regionally or globally scattered GPS stations is used. In this study, we enlarge the GPS network to 70 globally scattered IGS stations to observe the changes relative to our previous accuracy model, which employed only 13 stations. We use the latest version 6.3 of the GIPSY/OASIS II software and download the data from the SOPAC archives. Noting the effect of the ionosphere on our previous accuracy modeling, here we selected GPS days on which the k-index values were lower than 4. This enabled us to extend the interval of observing session duration used for the

  20. Do Shared Interests Affect the Accuracy of Budgets?

    Ilse Maria Beuren

    2015-04-01

    The creation of budgetary slack is a phenomenon associated with various behavioral aspects. This study focuses on accuracy in budgeting when the benefit of the slack is shared between the unit manager and his/her assistant. In this study, accuracy is measured by the level of slack in the budget, and the benefit of slack represents a financial consideration for the manager and the assistant. The study aims to test how shared interests in budgetary slack affect the accuracy of budget reports in an organization. To this end, an experimental study was conducted with a sample of 90 employees in management and other leadership positions at a cooperative that has a variable compensation plan based on the achievement of organizational goals. The experiment is based on the study of Church, Hannan and Kuang (2012), which was conducted with a sample of undergraduate students in the United States and used a quantitative approach to analyze the results. In the first part of the experiment, the results show that when budgetary slack is not shared, managers tend to create greater slack when the assistant is not aware of the creation of slack; these managers thus generate a lower accuracy index than managers whose assistants are aware of the creation of slack. When budgetary slack is shared, average slack is higher when the assistant is aware of the creation of slack. In the second part of the experiment, the accuracy index is higher for managers who prepare the budget with the knowledge that their assistants prefer larger slack values. However, the accuracy level differs between managers who know that their assistants prefer to maximize slack values and managers who do not know their assistants' preference regarding slack. These results contribute to the literature by presenting evidence of managers' behavior in the creation of budgetary slack in scenarios in which they share the benefits of slack with their assistants.

  1. Diagnostic Accuracy of Procalcitonin in Bacterial Meningitis Versus Nonbacterial Meningitis

    Wei, Ting-Ting; Hu, Zhi-De; Qin, Bao-Dong; Ma, Ning; Tang, Qing-Qin; Wang, Li-Li; Zhou, Lin; Zhong, Ren-Qian

    2016-01-01

    Several studies have investigated the diagnostic accuracy of procalcitonin (PCT) levels in blood or cerebrospinal fluid (CSF) in bacterial meningitis (BM), but the results were heterogeneous. The aim of the present study was to ascertain the diagnostic accuracy of PCT as a marker for BM detection. A systematic search of the EMBASE, Scopus, Web of Science, and PubMed databases was performed to identify studies published before December 7, 2015 investigating the diagnostic accuracy of PCT for BM. The quality of the eligible studies was assessed using the revised Quality Assessment for Studies of Diagnostic Accuracy method. The overall diagnostic accuracy of PCT detection in CSF or blood was pooled using the bivariate model. Twenty-two studies involving 2058 subjects were included in this systematic review and meta-analysis. The overall specificities and sensitivities were 0.86 and 0.80 for CSF PCT, and 0.97 and 0.95 for blood PCT, respectively. Areas under the summary receiver operating characteristic curves were 0.90 and 0.98 for CSF PCT and blood PCT, respectively. The major limitation of this systematic review and meta-analysis was the small number of studies included and the heterogeneous diagnostic thresholds adopted by eligible studies. Our meta-analysis shows that PCT is a useful biomarker for BM diagnosis. PMID:26986140

  2. Audiovisual biofeedback improves motion prediction accuracy

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction.

  3. Accuracy analysis of distributed simulation systems

    Lin, Qi; Guo, Jing

    2010-08-01

    Existing simulation work emphasizes procedural verification, placing too much focus on the simulation models rather than on the simulation itself. As a result, research on improving simulation accuracy tends to be limited to individual aspects. As accuracy is key to simulation credibility assessment and fidelity studies, it is important to give a comprehensive discussion of the accuracy of distributed simulation systems themselves. First, the major elements of distributed simulation systems are summarized, providing a basis for the definition, classification and description of their accuracy. In Part 2, a framework for the accuracy of distributed simulation systems is presented in a comprehensive way, making it easier to analyze and assess the uncertainty of such systems. The concept of accuracy is then divided into four further factors, each analyzed in Part 3. In Part 4, based on the formalized description of the accuracy-analysis framework, a practical approach is put forward that can be applied to study unexpected or inaccurate simulation results. A real distributed simulation system based on HLA is then taken as an example to verify the usefulness of the proposed approach. The results show that the method works well and is applicable to accuracy analysis of distributed simulation systems.

  4. Optimizing the geometrical accuracy of curvilinear meshes

    Toulorge, Thomas; Remacle, Jean-François

    2015-01-01

    This paper presents a method to generate valid high order meshes with optimized geometrical accuracy. The high order meshing procedure starts with a linear mesh, which is subsequently curved without taking care of the validity of the high order elements. An optimization procedure is then used to both untangle invalid elements and optimize the geometrical accuracy of the mesh. Standard measures of the distance between curves are considered to evaluate the geometrical accuracy in planar two-dimensional meshes, but they prove computationally too costly for optimization purposes. A fast estimate of the geometrical accuracy, based on Taylor expansions of the curves, is introduced. An unconstrained optimization procedure based on this estimate is shown to yield significant improvements in the geometrical accuracy of high order meshes, as measured by the standard Hausdorff distance between the geometrical model and the mesh. Several examples illustrate the beneficial impact of this method on CFD solutions, with a part...

  5. Evaluation on the accuracy of digital elevation models

    2001-01-01

    There is growing interest in investigating the accuracy of digital elevation models (DEMs). However, people usually have an unbalanced view of DEM errors: they emphasize DEM sampling errors but ignore the impact of DEM resolution and terrain roughness on the accuracy of terrain representation. This research puts forward the concept of DEM terrain representation error (Et) and then investigates the generation, factors, measurement and simulation of DEM terrain representation errors. A multi-resolution, multi-relief comparative approach is used as the major methodology. The experiment reveals a quantitative relationship between the error and the variation of resolution and terrain roughness at a global level. Root mean square error (RMS Et) is regressed against surface profile curvature (V) and DEM resolution (R) at 10 resolution levels. It is found that the RMS Et may be expressed as RMS Et = (0.0061·V + 0.0052)·R − 0.022·V + 0.2415. This result may be very useful in forecasting DEM accuracy, as well as in determining the DEM resolution needed to meet the accuracy requirement of a particular application.
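The fitted relationship reported above can be applied directly to forecast DEM accuracy. A small sketch of that formula (function and variable names are ours):

```python
# Sketch of the regression result reported above:
#   RMS Et = (0.0061*V + 0.0052) * R - 0.022*V + 0.2415
# where V is surface profile curvature and R is DEM resolution.

def rms_terrain_error(v, r):
    """Forecast RMS terrain-representation error from curvature V and resolution R."""
    return (0.0061 * v + 0.0052) * r - 0.022 * v + 0.2415

# For a given curvature, a coarser grid (larger R) yields a larger
# resolution-dependent contribution to the error.
print(rms_terrain_error(v=1.0, r=10.0))  # coarser grid
print(rms_terrain_error(v=1.0, r=5.0))   # finer grid
```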

  6. Noise levels in Damascus city

    Outdoor noise levels were measured at 22 sites in Damascus city. A sound level meter (model NC-10) with a selectable 20-140 dBA range was used in the current investigation. At each site, noise data were collected from 07:00 to 21:00. The results showed that the noise levels were higher than the WHO (World Health Organization) standard by 5-24.7 dB, 10-16 dB, 10-11 dB and 12-17 dB in residential, commercial, commercial-industrial and heavy-traffic streets respectively. Indoor and outdoor noise levels in some hospitals were higher than the WHO standard by 15-28 dB and 19-23 dB respectively. The study showed that the authorities must take the necessary measures to reduce the noise levels in residential regions and in the regions surrounding the hospitals. (author)

  7. Accuracy and consistency of modern elastomeric pumps.

    Weisman, Robyn S; Missair, Andres; Pham, Phung; Gutierrez, Juan F; Gebhard, Ralf E

    2014-01-01

    Continuous peripheral nerve blockade has become a popular method of achieving postoperative analgesia for many surgical procedures. The safety and reliability of infusion pumps are dependent on their flow rate accuracy and consistency. Knowledge of pump rate profiles can help physicians determine which infusion pump is best suited for their clinical applications and specific patient population. Several studies have investigated the accuracy of portable infusion pumps. Using methodology similar to that used by Ilfeld et al, we investigated the accuracy and consistency of several current elastomeric pumps. PMID:25140510

  8. ACCURACY OF DIVIDEND DISCOUNT MODEL VALUATION AT MACEDONIAN STOCK- EXCHANGE

    Zoran Ivanovski; Zoran Narasanov; Nadica Ivanovska

    2015-01-01

    Many analysts believed that the Dividend Discount Model (DDM) is obsolete, but much of the intuition that drives discounted cash flow (DCF) valuation is embedded in the DDM. The basic task of this research is to test the accuracy of DDM valuation at the Macedonian Stock Exchange (MSE), an emerging market, by analyzing two “blue-chip” stocks, one from the banking sector and the other from industry. Descriptive statistics and regression analysis were used to determine the level of correlation between ...

  9. Improving Coverage Accuracy of Block Bootstrap Confidence Intervals

    Lee, Stephen M. S.; Lai, P. Y.

    2008-01-01

    The block bootstrap confidence interval based on dependent data can outperform the computationally more convenient normal approximation only with non-trivial Studentization which, in the case of complicated statistics, calls for highly specialist treatment. We propose two different approaches to improving the accuracy of the block bootstrap confidence interval under very general conditions. The first calibrates the coverage level by iterating the block bootstrap. The second calculates Student...

  10. Executive Functioning and Memory as Potential Mediators of the Episodic Feeling-of-Knowing Accuracy

    Perrotin, Audrey; Tournelle, Lydia; Isingrini, Michel

    2008-01-01

    The study focused on the cognitive determinants of the accuracy of feeling-of-knowing (FOK) judgments made on episodic memory information. An individual differences approach was used on a sample of healthy older adults assessed on an episodic FOK task and on several neuropsychological measures. At a global level of analysis of FOK accuracy, the…

  11. The analysis accuracy assessment of CORINE land cover in the Iberian coast

    Romano Grullón, Ramona Yraida; Alhaddad, Bahaa Eddin; Roca Cladera, Josep

    2009-01-01

    OBJECTIVES: 1. Evaluate the accuracy of the CORINE land cover databases; 2. Present methods to test map error that help to explain the observed differences between various categories of land cover; 3. Explain the observed differences between various data sources at different scale levels; 4. Calculate the accuracy assessments.

  12. Hostility and Facial Affect Recognition: Effects of a Cold Pressor Stressor on Accuracy and Cardiovascular Reactivity

    Herridge, Matt L.; Harrison, David W.; Mollet, Gina A.; Shenal, Brian V.

    2004-01-01

    The effects of hostility and a cold pressor stressor on the accuracy of facial affect perception were examined in the present experiment. A mechanism whereby physiological arousal level is mediated by systems which also mediate accuracy of an individual's interpretation of affective cues is described. Right-handed participants were classified as…

  13. Accuracy and Efficiency of Raytracing Photoionisation Algorithms

    Mackey, Jonathan

    2012-01-01

    Three non-equilibrium photoionisation algorithms for hydrodynamical grid-based simulation codes are compared in terms of accuracy, timestepping criteria, and parallel scaling. Explicit methods with first order time accuracy for photon conservation must use very restrictive timestep criteria to accurately track R-type ionisation fronts. A second order accurate algorithm is described which, although it requires more work per step, allows much longer timesteps and is consequently more efficient. Implicit methods allow ionisation fronts to cross many grid cells per timestep while maintaining photon conservation accuracy. It is shown, however, that errors are much larger for multi-frequency radiation than for monochromatic radiation with the implicit algorithm used here, and large errors accrue when an ionisation front crosses many optical depths in a single step. The accuracy and convergence rates of the different algorithms are tested with a large number of timestepping criteria to identify the best criterion fo...

  14. Critical thinking and accuracy of nurses' diagnoses.

    Lunney, Margaret

    2003-01-01

    Interpretations of patient data are complex and diverse, contributing to a risk of low accuracy nursing diagnoses. This risk is confirmed in research findings that accuracy of nurses' diagnoses varied widely from high to low. Highly accurate diagnoses are essential, however, to guide nursing interventions for the achievement of positive health outcomes. Development of critical thinking abilities is likely to improve accuracy of nurses' diagnoses. New views of critical thinking serve as a basis for critical thinking in nursing. Seven cognitive skills and ten habits of mind are identified as dimensions of critical thinking for use in the diagnostic process. Application of the cognitive skills of critical thinking illustrates the importance of using critical thinking for accuracy of nurses' diagnoses. Ten strategies are proposed for self-development of critical thinking abilities. PMID:14649031

  15. Required accuracy and dose thresholds in individual monitoring

    The paper follows the approach given in recent revisions of CEC and IAEA recommendations on requirements in individual monitoring for external radiation. The ICRP requirements on overall accuracy for individual monitoring, as given in ICRP Publication 35 (1982), form the basis for the specification of detailed accuracy requirements which are needed in practical routine monitoring. The ICRP overall accuracy requirement is defined as an allowable maximum uncertainty factor at the 95% confidence level for a single measurement of the relevant dose quantity, i.e. Hp(10) and Hp(0.07). From this uncertainty factor, a value of 21% can be evaluated for the allowable maximum overall standard deviation for dose measurements at dose levels near the annual dose limits, increasing to 45% for dose levels at the lower end of the dose range required to be monitored. A method is described for evaluating the overall standard deviation of the dosimetry system by combining random and systematic uncertainties in quadrature, and procedures are also given for determining each individual uncertainty connected to the dose measurement. In particular, attention is paid to the evaluation of the combined uncertainty due to energy and angular dependencies of the dosemeter. (Author)
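The quadrature combination described above can be sketched as follows; the component uncertainties here are invented for illustration, not taken from the paper:

```python
import math

# Illustrative sketch of combining independent uncertainty components
# (expressed as relative standard deviations) in quadrature to obtain
# an overall standard deviation. Component values are made up.

def combined_std(components):
    """Combine independent uncertainty components in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# e.g. 10% random scatter, 15% energy dependence, 8% angular dependence
overall = combined_std([0.10, 0.15, 0.08])
print(f"overall standard deviation: {overall:.1%}")  # ≈ 19.7%
```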

  16. Precise extension-mode resonant sensor with uniform and repeatable sensitivity for detection of ppm-level ammonia

    This paper presents a micromechanical resonant gas sensor, operated in an extensional bulk mode, for high-accuracy sensing of ultra-low concentration chemical vapors. The designed and fabricated gravimetric resonant microsensor has exhibited both a high Q-factor of 11157 in air and a high mass sensitivity of 10.16 Hz pg−1. More importantly, both theoretical analysis and sensing experiments have verified that such an extensional resonance mode is technically advantageous in sensing accuracy (e.g., output signal linearity in terms of gas concentration and reproducibility of sensitivity), which can be attributed to the uniform vibration amplitude at each point of the gas-adsorbing area. Loaded with –COOH functionalized mesoporous-silica nanoparticles as sensing material, which feature an ultra-large surface area, the chemical sensing micro-resonator delivers precise detection of ppm-level ammonia vapor. The limit of detection is as low as 1 ppm and the response time of the sensor is as short as 10 s. (paper)
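The reported mass sensitivity implies a simple linear relation between adsorbed mass and frequency shift for a gravimetric resonator. A hypothetical sketch (the 2.5 pg load is our example, not a value from the paper):

```python
# Sketch relating the figures above: with the reported mass sensitivity
# S = 10.16 Hz/pg, an adsorbed mass dm produces a frequency shift
# df = S * dm in this linear gravimetric model.

S_HZ_PER_PG = 10.16  # reported mass sensitivity

def freq_shift(delta_mass_pg):
    """Frequency shift (Hz) for a given adsorbed mass (pg)."""
    return S_HZ_PER_PG * delta_mass_pg

print(freq_shift(2.5))  # shift for a hypothetical 2.5 pg load
```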

  17. Moyamoya disease: diagnostic accuracy of MRI

    MRI may be employed to investigate moyamoya disease, since it provides vascular information without use of contrast medium. We reported the usefulness and limitations of MR angiography (MRA) in moyamoya disease. To our knowledge, no report has appeared dealing with the diagnostic accuracy of MRI in a large number of cases of moyamoya disease, although MRI is used more commonly than MRA. We therefore undertook to evaluate the accuracy of MRI in moyamoya disease. (orig.)

  18. Coding accuracy on the psychophysical scale

    Lubomir Kostal; Petr Lansky

    2016-01-01

    Sensory neurons are often reported to adjust their coding accuracy to the stimulus statistics. The observed match is not always perfect and the maximal accuracy does not align with the most frequent stimuli. As an alternative to a physiological explanation we show that the match critically depends on the chosen stimulus measurement scale. More generally, we argue that if we measure the stimulus intensity on the scale which is proportional to the perception intensity, an improved adjustment in...

  19. Development of an artillery accuracy model

    Fann, Chee Meng.

    2006-01-01

    This thesis explains the methodologies that predict the trajectory and accuracy of an unguided, indirect-fire launched projectile in predicted fire. The trajectory is the path that a projectile travels to the impact point, while the accuracy is the measurement of the deviation of the impact point from the target. In addition, this thesis describes the methodology for calculating the various factors such as drag and drift in the trajectory calculation. A three degree of freedom model will...

  20. Factors Influencing Science Content Accuracy in Elementary Inquiry Science Lessons

    Nowicki, Barbara L.; Sullivan-Watts, Barbara; Shim, Minsuk K.; Young, Betty; Pockalny, Robert

    2013-06-01

    Elementary teachers face increasing demands to engage children in authentic science process and argument while simultaneously preparing them with knowledge of science facts, vocabulary, and concepts. This reform is particularly challenging due to concerns that elementary teachers lack adequate science background to teach science accurately. This study examined 81 in-classroom inquiry science lessons for preservice education majors and their cooperating teachers to determine the accuracy of the science content delivered in elementary classrooms. Our results showed that 74 % of experienced teachers and 50 % of student teachers presented science lessons with greater than 90 % accuracy. Eleven of the 81 lessons (9 preservice, 2 cooperating teachers) failed to deliver accurate science content to the class. Science content accuracy was highly correlated with the use of kit-based resources supported with professional development, a preference for teaching science, and grade level. There was no correlation between the accuracy of science content and some common measures of teacher content knowledge (i.e., number of college science courses, science grades, or scores on a general science content test). Our study concluded that when provided with high quality curricular materials and targeted professional development, elementary teachers learn needed science content and present it accurately to their students.

  1. Accuracy Assessment Points for Valley Forge National Historical Park Vegetation Mapping Project

    National Park Service, Department of the Interior — This shapefile includes the accuracy assessment points used to assess the association-level vegetation map of Valley Forge National Historic Park developed by the...

  2. Eyewitness memory of a supermarket robbery: a case study of accuracy and confidence after 3 months.

    Odinot, Geralda; Wolters, Gezinus; van Koppen, Peter J

    2009-12-01

    In this case study, 14 witnesses of an armed robbery were interviewed after 3 months. Security camera recordings were used to assess memory accuracy. Of all information that could be remembered, about 84% was correct. Although accurately recalled information had a higher confidence level on average than inaccurately recalled information, the mean accuracy-confidence correlation was rather modest (0.38). These findings indicate that confidence is not a reliable predictor of accuracy. A higher level of self-reported, post-event thinking about the incident was associated with higher confidence levels, while a higher level of self-reported emotional impact was associated with greater accuracy. A potential source of (mis)information, a reconstruction of the robbery broadcast on TV, did not alter the original memories of the witnesses. PMID:18719983

  3. USE OF CHEMICAL INVENTORY ACCURACY MEASUREMENTS AS LEADING INDICATORS

    Kuntamukkula, M.

    2011-02-10

    Chemical safety and lifecycle management (CSLM) is a process that involves managing chemicals and chemical information from the moment someone begins to order a chemical and lasts through final disposition (1). Central to CSLM is tracking data associated with chemicals which, for the purposes of this paper, is termed the chemical inventory. Examples of data that could be tracked include chemical identity, location, quantity, date procured, container type, and physical state. The reason why so much data is tracked is that the chemical inventory supports many functions. These functions include emergency management, which depends upon the data to more effectively plan for, and respond to, chemical accidents; environmental management, which uses inventory information to aid in the generation of various federally mandated and other regulatory reports; and chemical management, which uses the information to increase the efficiency and safety with which chemicals are stored and utilized. All of the benefits of having an inventory are predicated upon that inventory being reasonably accurate. Because of the importance of ensuring one's chemical inventory is accurate, many have become concerned about measuring inventory accuracy. But beyond providing a measure of confidence in information gleaned from the inventory, does the inventory accuracy measurement provide any additional function? The answer is 'Yes'. It provides valuable information that can be used as a leading indicator to gauge the health of a chemical management system. In this paper, we will discuss: what properties make leading indicators effective, how chemical inventories can be used as a leading indicator, how chemical inventory accuracy can be measured, what levels of accuracy should realistically be expected in a healthy system, and what a subpar inventory accuracy measurement portends.

  4. Accuracy Assessment and Analysis for GPT2

    YAO Yibin

    2015-07-01

    GPT (global pressure and temperature) is a global empirical model usually used to provide temperature and pressure for the determination of tropospheric delay. GPT has some weaknesses, which have been addressed in a new empirical model named GPT2; GPT2 not only improves the accuracy of temperature and pressure but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters. No accuracy analysis of GPT2 had been made until now. In this paper, high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of the temperature, pressure and water vapor pressure given by GPT2. Testing results show that the mean bias of temperature is -0.59℃ and the average RMS is 3.82℃; the absolute values of the average bias of pressure and of water vapor pressure are less than 1 mb; GPT2 pressure has an average RMS of 7 mb, and water vapor pressure no more than 3 mb. Accuracy differs across latitudes, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
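The Bias and RMS statistics quoted above follow their usual definitions against reference data. A minimal sketch with invented values (not the study's data):

```python
import math

# Sketch of the Bias/RMS statistics used above to assess an empirical
# model against reference meteorological data. Values are illustrative.

def bias_and_rms(model, reference):
    """Mean difference (bias) and root-mean-square difference."""
    diffs = [m - r for m, r in zip(model, reference)]
    bias = sum(diffs) / len(diffs)
    rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return bias, rms

model_t = [14.2, 21.0, 2.5, 30.1]  # hypothetical model temperatures (deg C)
ref_t   = [15.0, 20.5, 4.0, 29.8]  # hypothetical reference values
b, r = bias_and_rms(model_t, ref_t)
print(f"bias={b:+.2f} degC, RMS={r:.2f} degC")
```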

  5. Activity monitor accuracy in persons using canes

    Deborah Michael Wendland, PT, DPT, CPed

    2012-12-01

    Full Text Available The StepWatch activity monitor has not been validated on multiple indoor and outdoor surfaces in a population using ambulation aids. The aims of this technical report are to report on strategies to configure the StepWatch activity monitor on subjects using a cane and to report the accuracy of both leg-mounted and cane-mounted StepWatch devices on people ambulating over different surfaces while using a cane. Sixteen subjects aged 67 to 85 yr (mean 75.6) who regularly use a cane for ambulation participated. StepWatch calibration was performed by adjusting sensitivity and cadence. Following calibration optimization, accuracy was tested for both the leg-mounted and cane-mounted devices on different surfaces, including linoleum, sidewalk, grass, ramp, and stairs. The leg-mounted device had an accuracy of 93.4% across all surfaces, while the cane-mounted device had an aggregate accuracy of 84.7% across all surfaces. The StepWatch was significantly less accurate on the stairs (p < 0.001) when comparing surfaces using repeated-measures analysis of variance. When monitoring community mobility, placement of a StepWatch on a person and his/her ambulation aid can accurately document both activity and device use.

  6. Accuracy of Trained Canines for Detecting Bed Bugs (Hemiptera: Cimicidae).

    Cooper, Richard; Wang, Changlu; Singh, Narinderpal

    2014-12-01

    Detection of low-level bed bug, Cimex lectularius L. (Hemiptera: Cimicidae), infestations is essential for early intervention, confirming eradication of infestations, and reducing the spread of bed bugs. Despite the importance of detection, few effective tools and methods exist for detecting low numbers of bed bugs. Scent dogs were developed as a tool for detecting bed bugs in recent years. However, there are no data demonstrating the reliability of trained canines under natural field conditions. We evaluated the accuracy of 11 canine detection teams in naturally infested apartments. All handlers believed their dogs could detect infestations at a very high rate (≥95%). In three separate experiments, the mean (min, max) detection rate was 44 (10-100)% and mean false-positive rate was 15 (0-57)%. The false-positive rate was positively correlated with the detection rate. The probability of a bed bug infestation being detected by trained canines was not associated with the level of bed bug infestations. Four canine detection teams evaluated on multiple days were inconsistent in their ability to detect bed bugs and exhibited significant variance in accuracy of detection between inspections on different days. There was no significant relationship between the team's experience or certification status of teams and the detection rates. These data suggest that more research is needed to understand factors affecting the accuracy of canine teams for bed bug detection in naturally infested apartments. PMID:26470083
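    The two headline figures in this record, detection rate and false-positive rate, are simple proportions over infested and non-infested apartments. A hedged sketch with invented inspection outcomes (not the study's data):

```python
def detection_rates(outcomes):
    """outcomes: list of (truly_infested, dog_alerted) booleans, one per apartment.
    Returns (detection_rate, false_positive_rate)."""
    infested_alerts = [alerted for truly, alerted in outcomes if truly]
    clean_alerts = [alerted for truly, alerted in outcomes if not truly]
    detection = sum(infested_alerts) / len(infested_alerts)
    false_positive = sum(clean_alerts) / len(clean_alerts)
    return detection, false_positive

# Toy data: 4 infested apartments (2 alerts), 5 clean apartments (1 false alert)
toy = [(True, True), (True, True), (True, False), (True, False),
       (False, False), (False, True), (False, False), (False, False), (False, False)]
det, fp = detection_rates(toy)
```

    The positive correlation the study reports between the two rates is what one would expect if some teams simply alert more liberally than others.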

  7. Efficient evaluation of accuracy of molecular quantum dynamics using dephasing representation

    Li, Baiqing; Mollica, Cesare; Vanicek, Jiri

    2009-01-01

    Ab initio methods for the electronic structure of molecules have reached a satisfactory accuracy for calculations of static properties but remain too expensive for quantum dynamics calculations. We propose an efficient semiclassical method for evaluating the accuracy of a lower level quantum dynamics, as compared to a higher level quantum dynamics, without having to perform any quantum dynamics. The method is based on the dephasing representation of quantum fidelity and its fe...

  8. Decreased interoceptive accuracy following social exclusion.

    Durlik, Caroline; Tsakiris, Manos

    2015-04-01

    The need for social affiliation is one of the most important and fundamental human needs. Unsurprisingly, humans display strong negative reactions to social exclusion. In the present study, we investigated the effect of social exclusion on interoceptive accuracy - accuracy in detecting signals arising inside the body - measured with a heartbeat perception task. We manipulated exclusion using Cyberball, a widely used paradigm of a virtual ball-tossing game, with half of the participants being included during the game and the other half of participants being ostracized during the game. Our results indicated that heartbeat perception accuracy decreased in the excluded, but not in the included, participants. We discuss these results in the context of social and physical pain overlap, as well as in relation to internally versus externally oriented attention. PMID:25701592

  9. Coordinate metrology accuracy of systems and measurements

    Sładek, Jerzy A

    2016-01-01

    This book focuses on effective methods for assessing the accuracy of both coordinate measuring systems and coordinate measurements. It mainly reports on original research work conducted by Sladek’s team at Cracow University of Technology’s Laboratory of Coordinate Metrology. The book describes the implementation of different methods, including artificial neural networks, the Matrix Method, the Monte Carlo method and the virtual CMM (Coordinate Measuring Machine), and demonstrates how these methods can be effectively used in practice to gauge the accuracy of coordinate measurements. Moreover, the book includes an introduction to the theory of measurement uncertainty and to key techniques for assessing measurement accuracy. All methods and tools are presented in detail, using suitable mathematical formulations and illustrated with numerous examples. The book fills an important gap in the literature, providing readers with an advanced text on a topic that has been rapidly developing in recent years. The book...

  10. Training in timing improves accuracy in golf.

    Libkuman, Terry M; Otani, Hajime; Steger, Neil

    2002-01-01

    In this experiment, the authors investigated the influence of training in timing on performance accuracy in golf. During pre- and posttesting, 40 participants hit golf balls with 4 different clubs in a golf course simulator. The dependent measure was the distance in feet that the ball ended from the target. Between the pre- and posttest, participants in the experimental condition received 10 hr of timing training with an instrument that was designed to train participants to tap their hands and feet in synchrony with target sounds. The participants in the control condition read literature about how to improve their golf swing. The results indicated that the participants in the experimental condition significantly improved their accuracy relative to the participants in the control condition, who did not show any improvement. We concluded that training in timing leads to improvement in accuracy, and that our results have implications for training in golf as well as other complex motor activities. PMID:12038497

  11. High Accuracy and Real-Time Gated Viewing Laser Radar

    Dong Li; Hua-Jun Yang; Shan-Pei Zhou

    2011-01-01

    A gated viewing laser radar has an excellent performance in underwater low light level imaging, and it also provides a viable solution to inhibit backscattering. In this paper, a gated viewing imaging system designed to meet the demand for real-time imaging is presented, and simulation is then used to analyze the performance of the real-time gated viewing system. The range accuracy is limited by the slice number, the width of the gate, the delay time step, the initial delay time, as well as the system noise and atmospheric turbulence. The simulation results indicate that the highest range accuracy can be achieved when the system works with the optimal parameters. Finally, how to choose the optimal parameters has been researched.

  12. Location Update Accuracy in Human Tracking system using Zigbee modules

    Amutha, B

    2009-01-01

    A location and tracking system becomes very important to our future world of pervasive computing. An algorithm for accurate location information is incorporated in the human walking model and in the blind human walking model. To implement an accurate location tracking mechanism using Zigbee along with GPS, we have incorporated a Markov chain algorithm for establishing accuracy. Normal and blind human walking steps were actually taken in a known environment within our campus, and the Markov chain algorithm was used to smooth the stepwise variation in location updates. A comparison module is also implemented to show the difference between normal and blind human walking step variations. This accuracy is used for designing a blind tracking device so that the device can be used by the blind for finding a path without obstacles. We present a system level approach to localizing and tracking human and blind users on the basis of different sources of location information [GPS plus Zigbee...

  13. Final Technical Report: Increasing Prediction Accuracy.

    King, Bruce Hardison [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  14. Systematic reviews of diagnostic test accuracy

    Leeflang, Mariska M G; Deeks, Jonathan J; Gatsonis, Constantine;

    2008-01-01

    More and more systematic reviews of diagnostic test accuracy studies are being published, but they can be methodologically challenging. In this paper, the authors present some of the recent developments in the methodology for conducting systematic reviews of diagnostic test accuracy studies. ... Restrictive electronic search filters are discouraged, as is the use of summary quality scores. Methods for meta-analysis should take into account the paired nature of the estimates and their dependence on threshold. Authors of these reviews are advised to use the hierarchical summary receiver...

  15. Accuracy analysis of automatic distortion correction

    Kolecki Jakub

    2015-06-01

    Full Text Available The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view, the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not the individual item) sufficient to achieve the demanded accuracy? In order to obtain a reliable answer to this problem, two kinds of tests were carried out for three lens models.

  16. ACCURACY ANALYSIS OF KINECT DEPTH DATA

    K. Khoshelham

    2012-09-01

    Full Text Available This paper presents an investigation of the geometric quality of depth data obtained by the Kinect sensor. Based on the mathematical model of depth measurement by the sensor a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimetres up to about 4 cm at the maximum range of the sensor. The accuracy of the data is also found to be influenced by the low resolution of the depth measurements.

  17. Electron impact excitation out of the metastable levels of argon into the 3p54p J = 3 level

    We have measured the direct cross section for electron impact excitation out of the metastable 3p54s[3/2]20 level (1s5 in Paschen's notation) into the 3p54p[5/2]3 level (2p9) of argon from threshold to 800 eV. The direct cross section is 40 x 10-16 cm2 at 10 eV. (author)

  18. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  19. Accuracy of References in Five Entomology Journals.

    Kristof, Cynthia

    ln this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  20. Bullet trajectory reconstruction - Methods, accuracy and precision.

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value of the precision by means of a confidence interval of the specific measurement. PMID:27044032
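    Of the methods compared in this record, the ellipse method is the easiest to sketch: in its generic textbook form, the impact angle is estimated from the ratio of the elliptical defect's minor and major axes via sin(angle) = width/length. This is the generic formula, not necessarily the exact variant evaluated in the study:

```python
import math

def ellipse_impact_angle(width, length):
    """Estimate a bullet's angle of incidence (degrees, measured from the
    surface) from the elliptical defect it left: sin(angle) = width / length."""
    if not 0 < width <= length:
        raise ValueError("expect 0 < width <= length")
    return math.degrees(math.asin(width / length))

# A defect twice as long as it is wide implies a 30-degree impact angle
angle = ellipse_impact_angle(5.0, 10.0)
```

    The formula also hints at why the ellipse method does relatively well at low angles of incidence: there the ellipse is strongly elongated and its axis ratio is easy to measure, whereas near-perpendicular impacts produce nearly circular defects.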

  1. 47 CFR 65.306 - Calculation accuracy.

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation accuracy. 65.306 Section 65.306 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Exchange Carriers § 65.306 Calculation...

  2. Accuracy of sampling during mushroom cultivation

    Baars, J.J.P.; Hendrickx, P.M.; Sonnenberg, A.S.M.

    2015-01-01

    Experiments described in this report were performed to increase the accuracy of the analysis of the biological efficiency of Agaricus bisporus strains. Biological efficiency is a measure of the efficiency with which the mushroom strains use dry matter in the compost to produce mushrooms (expressed as dry matter produced).

  3. High Accuracy Transistor Compact Model Calibrations

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic, such as current capacity. Correspondingly, when using this approach, a high degree of accuracy of the transistor models is not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  4. Accuracy of abdominal auscultation for bowel obstruction

    Breum, Birger Michael; Rud, Bo; Kirkegaard, Thomas; Nordentoft, Tyge

    2015-01-01

    AIM: To investigate the accuracy and inter-observer variation of bowel sound assessment in patients with clinically suspected bowel obstruction. METHODS: Bowel sounds were recorded in patients with suspected bowel obstruction using a Littmann(®) Electronic Stethoscope. The recordings were processed...

  5. The Diagnostic Accuracy of Digitized Mammography

    M. Guiti

    2008-06-01

    Full Text Available Background/Objective: Digitized mammography has several advantages over screen-film radiography in data storage and retrieval, making it a useful alternative to screen-film mammography in screening programs. The purpose of this study was to determine the diagnostic accuracy of digitized mammography in detecting breast cancer. Patients and Methods: 185 women (845 images) were digitized at 600 dpi. All images were reviewed by an expert radiologist. The mammograms were scored on the Breast Imaging Reporting and Data System (BIRADS) scale. The definite diagnosis was made either on the pathologic results of breast biopsy, or upon follow-up of at least one year. The overall diagnostic accuracy of digitized mammography was calculated by the area under the receiver operating characteristic curve. Results: 242 sets of mammograms had no lesions. The total counts of masses, microcalcifications or both in one breast were 39 (11%), 42 (12%), and 25 (7%), respectively. There were 321 (92%) benign and 27 (8%) definite malignant lesions. The diagnostic accuracy of digitized images was 96.34% (95% CI: 94%-98%). Conclusion: The diagnostic accuracy of digitized mammography is comparably good or even better than the published results. Digitized mammography is a good substitute modality for screen-film mammography in screening programs.

  6. Observed Consultation: Confidence and Accuracy of Assessors

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  7. Accuracy in Robot Generated Image Data Sets

    Aanæs, Henrik; Dahl, Anders Bjorholm

    2015-01-01

    In this paper we present a practical innovation concerning how to achieve high accuracy of camera positioning, when using a 6 axis industrial robots to generate high quality data sets for computer vision. This innovation is based on the realization that to a very large extent the robots positioning...... in using robots for image data set generation....

  8. Direct Behavior Rating: Considerations for Rater Accuracy

    Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.

    2014-01-01

    Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…

  9. Bayesian Methods for Medical Test Accuracy

    Lyle D. Broemeling

    2011-05-01

    Full Text Available Bayesian methods for medical test accuracy are presented, beginning with the basic measures for tests with binary scores: true positive fraction, false positive fraction, positive predictive values, and negative predictive value. The Bayesian approach is taken because of its efficient use of prior information, and the analysis is executed with a Bayesian software package WinBUGS®. The ROC (receiver operating characteristic curve gives the intrinsic accuracy of medical tests that have ordinal or continuous scores, and the Bayesian approach is illustrated with many examples from cancer and other diseases. Medical tests include X-ray, mammography, ultrasound, computed tomography, magnetic resonance imaging, nuclear medicine and tests based on biomarkers, such as blood glucose values for diabetes. The presentation continues with more specialized methods suitable for measuring the accuracies of clinical studies that have verification bias, and medical tests without a gold standard. Lastly, the review is concluded with Bayesian methods for measuring the accuracy of the combination of two or more tests.
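    The basic measures listed at the start of this record are linked through Bayes' theorem: the predictive values follow from sensitivity, specificity and disease prevalence. A small sketch (the numbers are chosen for illustration, not drawn from the book):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value via Bayes' theorem."""
    ppv = (sensitivity * prevalence /
           (sensitivity * prevalence + (1 - specificity) * (1 - prevalence)))
    npv = (specificity * (1 - prevalence) /
           (specificity * (1 - prevalence) + (1 - sensitivity) * prevalence))
    return ppv, npv

# A fairly good test (90% sensitive, 88% specific) at 10% disease prevalence
ppv, npv = predictive_values(0.90, 0.88, 0.10)
```

    Note that even this fairly good test yields a PPV below 50% at low prevalence, which is exactly the kind of prior-dependent behavior that motivates a Bayesian treatment of test accuracy.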

  10. The impact of accuracy motivation on interpretation, comparison, and correction processes: accuracy x knowledge accessibility effects.

    Stapel, D A; Koomen, W; Zeelenberg, M

    1998-04-01

    Four studies provide evidence for the notion that there may be boundaries to the extent to which accuracy motivation may help perceivers to escape the influence of fortuitously activated information. Specifically, although accuracy motivations may eliminate assimilative accessibility effects, they are less likely to eliminate contrastive accessibility effects. It was found that the occurrence of different types of contrast effects (comparison and correction) was not significantly affected by participants' accuracy motivations. Furthermore, it was found that the mechanisms instigated by accuracy motivations differ from those ignited by correction instructions: Accuracy motivations attenuate assimilation effects because perceivers add target interpretations to the one suggested by primed information. Conversely, it was found that correction instructions yield contrast and prompt respondents to remove the priming event's influence from their reaction to the target. PMID:9569650

  11. High accuracy FIONA-AFM hybrid imaging

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≥8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.

  12. Diagnostic accuracy of MRCP in choledocholithiasis

    Purpose: To evaluate the accuracy of MRCP in diagnosing choledocholithiasis considering Endoscopic Retrograde Cholangiopancreatography (ERCP) as the gold standard. To compare the results achieved during the first two years of use (1999-2000) of Magnetic Resonance Cholangiopancreatography (MRCP) in patients with suspected choledocholithiasis with those achieved during the following two years (2001-2002) in order to establish the repeatability and objectivity of MRCP results. Materials and methods: One hundred and seventy consecutive patients underwent MRCP followed by ERCP within 72 h. In 22/170 (13%) patients ERCP was unsuccessful for different reasons. MRCP was performed using a 1.5 T magnet with both multi-slice HASTE sequences and a thick-slice projection technique. Choledocholithiasis was diagnosed in the presence of signal void images in the dependent portion of the duct surrounded by hyperintense bile and detected in at least two projections. The MRCP results, read independently from the ERCP results, were compared between the two subsequent periods. Results: ERCP confirmed choledocholithiasis in 87 patients. In these cases the results of MRCP were the following: 78 true positives, 53 true negatives, 7 false positives, and 9 false negatives. The sensitivity, specificity and accuracy were 90%, 88% and 89%, respectively. After the exclusion of stones with diameters smaller than 6 mm, the sensitivity, specificity and accuracy were 100%, 99% and 99%, respectively. MRCP accuracy was related to the size of the stones. There was no significant statistical difference between the results obtained in the first two-year period and those obtained in the second period. Conclusions: MRCP is sufficiently accurate to replace ERCP in patients with suspected choledocholithiasis. The results are related to the size of the stones. The use of well-defined radiological signs allows good diagnostic accuracy independent of the learning curve.
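    The sensitivity, specificity and accuracy percentages in this record follow directly from the reported counts (78 true positives, 53 true negatives, 7 false positives, 9 false negatives); a quick check:

```python
def diagnostic_summary(tp, tn, fp, fn):
    """Sensitivity, specificity and overall accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Counts reported in the MRCP study above
sens, spec, acc = diagnostic_summary(tp=78, tn=53, fp=7, fn=9)
# → about 0.90, 0.88 and 0.89, matching the reported 90%, 88% and 89%
```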

  13. Level sensing system

    García Fernández, Javier

    2011-01-01

    This report presents the design and development of a liquid level sensing system based on a SCADA system. It is a synthesis of the actions carried out for the construction of the liquid level measuring system, structured in five chapters that explain the steps in the realisation of the project. Knowing the level of liquid in tanks is very important in industry, although accuracy demands vary according to business requirements. Therefore, it becomes extremely important to mon...

  14. Radiometric and Geometric Accuracy Analysis of Rasat Pan Imagery

    Kocaman, S.; Yalcin, I.; Guler, M.

    2016-06-01

    RASAT is the second Turkish Earth observation satellite, launched in 2011. It operates on the pushbroom principle and acquires panchromatic and MS images with 7.5 m and 15 m resolutions, respectively. The swath width of the sensor is 30 km. The main aim of this study is to analyse the radiometric and geometric quality of RASAT images. A systematic validation approach for the RASAT imagery and its products is being applied. A RASAT image pair acquired over Kesan city in the Edirne province of Turkey is used for the investigations. The raw RASAT data (L0) are processed by the Turkish Space Agency (TUBITAK-UZAY) to produce higher level image products. The image products include radiometrically processed (L1), georeferenced (L2) and orthorectified (L3) data, as well as pansharpened images. The image quality assessments include visual inspections, noise, MTF and histogram analyses. The geometric accuracy assessment results are only preliminary, and the assessment is performed using the raw images. The geometric accuracy potential is investigated using 3D ground control points extracted from road intersections, which were measured manually in stereo from aerial images with 20 cm resolution and accuracy. The initial results of the study, obtained using one RASAT panchromatic image pair, are presented in this paper.

  15. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches

    Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa

    2015-01-01

    The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy. PMID:26075014

  16. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches

    Ufuk Çelik

    2015-01-01

    Full Text Available The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy.

  17. Global discriminative learning for higher-accuracy computational gene prediction.

    Axel Bernal

    2007-03-01

    Full Text Available Most ab initio gene predictors use a probabilistic sequence model, typically a hidden Markov model, to combine separately trained models of genomic signals and content. By combining separate models of relevant genomic features, such gene predictors can exploit small training sets and incomplete annotations, and can be trained fairly efficiently. However, that type of piecewise training does not optimize prediction accuracy and has difficulty in accounting for statistical dependencies among different parts of the gene model. With genomic information being created at an ever-increasing rate, it is worth investigating alternative approaches in which many different types of genomic evidence, with complex statistical dependencies, can be integrated by discriminative learning to maximize annotation accuracy. Among discriminative learning methods, large-margin classifiers have become prominent because of the success of support vector machines (SVM in many classification tasks. We describe CRAIG, a new program for ab initio gene prediction based on a conditional random field model with semi-Markov structure that is trained with an online large-margin algorithm related to multiclass SVMs. Our experiments on benchmark vertebrate datasets and on regions from the ENCODE project show significant improvements in prediction accuracy over published gene predictors that use intrinsic features only, particularly at the gene level and on genes with long introns.

  18. The Crustal Thickness of Mars: Accuracy and Resolution

    Smith, David E.; Zuber, Maria T.

    2002-01-01

    Since the arrival of the Mars Global Surveyor (MGS) spacecraft at Mars and its entry into its mapping orbit in February 1999, the radio tracking and altimetry data from the mission have been part of the systematic mapping of the planet and used to develop very precise models of the gravity field and topography of Mars. Until the altimetry function of the Mars Orbiter Laser Altimeter (MOLA) failed on June 30, 2001, the instrument had acquired close to 700 million measurements of the planet's radius, the majority of which have been used to develop a model of the topography with horizontal resolution of about 500 m and radial accuracy of better than 1 m. Concurrently, Doppler and range tracking of MGS by the Deep Space Network at X-band frequencies, with accuracies of about 50 microns/s and about 5 m respectively, have provided orbital knowledge of MGS to the few-meter level and enabled the gravity perturbations of the spacecraft to be used to develop improved gravity models of Mars. The recent models have horizontal resolutions of about 200 km, or degree 65, when expressed in spherical harmonics, and have accuracies of the order of a few mGals at the poles and about 10 mGals at the equator at the highest resolution.

  19. CERN Shop: Christmas Sale, 10 & 16.12.2004

    2004-01-01

    Looking for Christmas present ideas? Come to the Reception Shop Special Stand in Meyrin, Main Building, ground floor, on Friday 10 and/or Thursday 16 December from 10:30 to 16:00.
    CERN 50th Anniversary sweat-shirt (grey; M, L, XL) 30.-
    CERN 50th Anniversary T-shirt (S, M, L, XL) 20.-
    CERN 50th Anniversary silk tie (2 colours) 30.-
    Einstein silk tie (blue, grey) 45.-
    Silk scarf 40.-
    Swiss army knife with CERN logo 25.-
    Swiss Duo-Pack with CERN logo 30.-
    CERN 50th Anniversary watch (2 models) 40.-
    CERN pens (2 models) 5.-
    Small Open Day souvenirs (a few different items) 2.-
    CERN 50th Anniversary Book (English & French) 70.-
    "Prestigious Discoveries" at CERN (English) 32.-
    "Particle Odyssey" soft cover (English) 35.-
    If you miss this special occasion, the articles are also available at the Reception Shop in Building 33 from Monday to Saturday between 08:30 and 17:00 hrs. Education and Communica...

  20. Positional Accuracy Assessment of Googleearth in Riyadh

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map and geographical information program provided by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become a primary source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m for the horizontal coordinates and 1.51 m for the height coordinates.
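An RMSE figure like the one reported above is computed from coordinate differences between imagery-derived points and surveyed ground truth. A minimal sketch of that computation, using hypothetical residuals (not the study's data):

```python
import math

# Hypothetical (easting, northing, height) differences in metres between
# imagery-derived points and GPS-surveyed ground-truth points.
diffs = [(1.8, -1.2, 1.4), (-2.1, 0.9, -1.6), (1.5, -2.4, 1.2), (-0.7, 1.9, -1.8)]

def horizontal_rmse(deltas):
    # Horizontal RMSE combines the easting and northing residuals.
    return math.sqrt(sum(de**2 + dn**2 for de, dn, _ in deltas) / len(deltas))

def vertical_rmse(deltas):
    # Vertical RMSE uses the height residuals alone.
    return math.sqrt(sum(dh**2 for _, _, dh in deltas) / len(deltas))

print(f"horizontal RMSE: {horizontal_rmse(diffs):.2f} m")
print(f"vertical RMSE:   {vertical_rmse(diffs):.2f} m")
```

With a larger, well-distributed checkpoint sample, the same two numbers summarize the horizontal and height accuracy of the imagery.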

  1. Improving the accuracy of dynamic mass calculation

    Oleksandr F. Dashchenko

    2015-06-01

    With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock, in particular a wagon in motion, which increases the accuracy of dynamic mass measurement. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.

  2. Improvement in Rayleigh Scattering Measurement Accuracy

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  3. Evaluating measurement accuracy a practical approach

    Rabinovich, Semyon G

    2013-01-01

    The goal of Evaluating Measurement Accuracy: A Practical Approach is to present methods for estimating the accuracy of measurements performed in industry, trade, and scientific research. From developing the theory of indirect measurements to proposing new methods of reduction, transformation, and enumeration, this work encompasses the full range of measurement data processing. It includes many examples that illustrate the application of general theory to typical problems encountered in measurement practice. As a result, the book serves as an inclusive reference work for data processing of all types of measurements: single and multiple, combined and simultaneous, direct (both linear and nonlinear), and indirect (both dependent and independent). It is a working tool for experimental scientists and engineers of all disciplines who work with instrumentation. It is also a good resource for natural science and engineering students and for technicians performing measurements in industry. A key feature of the book is...

  4. FNAC ACCURACY IN DIAGNOSIS OF BREAST LESIONS

    Venugopal; Pratap; Nikshita

    2014-01-01

    BACKGROUND: Malignancy of the breast imposes a significant reduction in life span. The prognosis of breast cancer depends primarily on the extent of disease, and early diagnosis is important. FNAC is a widely accepted cytological technique for the early diagnosis of palpable breast lesions. There have been many studies of the accuracy of FNAC, which has been shown to be high in many centres. AIMS: To compare cytological and histopathological diagnosis of breast lesions and to ...

  5. Marginal accuracy of temporary composite crowns.

    Tjan, A H; Tjan, A H; Grant, B E

    1987-10-01

    An in vitro study was conducted to quantitatively compare the marginal adaptation of temporary crowns made from Protemp material with those made from Scutan, Provisional, and Trim materials. A direct technique was used to make temporary restorations on prepared teeth with an impression as a matrix. Protemp, Trim, and Provisional materials produced temporary crowns of comparable accuracy. Crowns made from Scutan material had open margins. PMID:2959770

  6. On the accuracy of language trees.

    Simone Pompei

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information typically consists of lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an inverse problem: starting from present, incomplete and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure has an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction.
Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve

  7. Do Investors Learn About Analyst Accuracy?

    Chang, Charles; Daouk, Hazem; Wang, Albert

    2008-01-01

    We study the impact of analyst forecasts on prices to determine whether investors learn about analyst accuracy. Our test market is the crude oil futures market. Prices rise when analysts forecast a decrease (increase) in crude supplies. In the 15 minutes following supply realizations, prices rise (fall) when forecasts have been too high (low). In both the initial price action relative to forecasts and in the subsequent reaction relative to realized forecast errors, the price response is stron...

  8. Earnings Forecast Accuracy And Career Concerns

    Roger, Tristan

    2015-01-01

    Previous studies show that analysts' compensation is not linked to earnings forecast accuracy. We evidence however that analysts have incentives to issue accurate forecasts. We show that brokerage houses reward their best forecasters by assigning them to large, mature firms. Covering such firms increases the potential for future compensation as these firms generate a great deal of investment banking and trading activities. The coverage of such firms also increases analysts' exposure to large ...

  9. The accuracy of portable peak flow meters.

    Miller, M. R.; Dickinson, S A; Hitchings, D J

    1992-01-01

    BACKGROUND: The variability of peak expiratory flow (PEF) is now commonly used in the diagnosis and management of asthma. It is essential for PEF meters to have a linear response in order to obtain an unbiased measurement of PEF variability. As the accuracy and linearity of portable PEF meters have not been rigorously tested in recent years this aspect of their performance has been investigated. METHODS: The response of several portable PEF meters was tested with absolute standards of flow ge...

  10. Limiting Accuracy of Inexact Saddle Point Solvers

    Rozložník, Miroslav; Jiránek, Pavel

    Dundee : University of Dundee, 2007 - (Griffith, D.; Watson , G.). s. 33-33 [Biennial Conference on Numerical Analysis /22./. 26.06.2007-29.06.2007, University of Dundee] R&D Projects: GA MŠk 1M0554; GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : saddle point systems * iterative methods * rounding error analysis * limiting accuracy

  11. Accuracy of radiocarbon analyses at ANTARES

    Lawson, E.M.; Fink, D.; Hotchkis, M.; Hua, Q.; Jacobsen, G.; Smith, A.M.; Tuniz, C. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1996-12-31

    Accuracy in Accelerator Mass Spectroscopy (AMS) measurements, as distinct from precision, requires the application of a number of corrections. Most of these are well known except in extreme circumstances and AMS can deliver radiocarbon results which are both precise and accurate in the 0.5 to 1.0% range. The corrections involved in obtaining final radiocarbon ages are discussed. 3 refs., 1 tab.

  12. FIELD ACCURACY TEST OF RPAS PHOTOGRAMMETRY

    Barry, P; Coakley, R.

    2013-01-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction i...

  13. Credit report accuracy and access to credit

    Avery, Robert B.; Paul S. Calem; Glenn B. Canner

    2004-01-01

    Data that credit-reporting agencies maintain on consumers' credit-related experiences play a central role in U.S. credit markets. Analysts widely agree that the data enable these markets to function more efficiently and at lower cost than would otherwise be possible. Despite the great benefits of the current system, however, some analysts have raised concerns about the accuracy, timeliness, completeness, and consistency of consumer credit records and about the effects of data problems on the ...

  14. Accuracy of stereolithographic models of human anatomy

    A study was undertaken to determine the dimensional accuracy of anatomical replicas derived from X-ray 3D computed tomography (CT) images and produced using the rapid prototyping technique of stereolithography (SLA). A dry bone skull and geometric phantom were scanned, and replicas were produced. Distance measurements were obtained to compare the original objects and the resulting replicas. Repeated measurements between anatomical landmarks were used for comparison of the original skull and replica. Results for the geometric phantom demonstrate a mean difference of +0.47 mm, representing an accuracy of 97.7-99.12%. Measurements of the skull produced a range of absolute differences (maximum +4.62 mm, minimum +0.1 mm, mean +0.85 mm). These results support the use of SLA models of human anatomical structures in such areas as pre-operative planning of complex surgical procedures. For applications where higher accuracy is required, improvements can be expected by utilizing smaller pixel resolution in the CT images. Stereolithographic models can now be confidently employed as accurate, three-dimensional replicas of complex, anatomical structures. 14 refs., 2 tabs., 8 figs

  15. Algorithms for improving accuracy of spray simulation

    ZHANG HuiYa; ZHANG YuSheng; XIAO HeLin; XU Bo

    2007-01-01

    Fuel spray is the pivotal process of direct injection engine combustion. The accuracy of spray simulation determines the reliability of combustion calculation. However, the traditional techniques of spray simulation in KIVA and commercial CFD codes are very susceptible to grid resolution. As a consequence, predicted engine performance and emissions can depend on the computational mesh. The two main causes of this problem are the droplet collision algorithm and the coupling between gas and liquid phases. In order to improve the accuracy of spray simulation, the original KIVA code is modified using the cross mesh droplet collision (CMC) algorithm and a gas phase velocity interpolation algorithm. In a constant volume apparatus and a D.I. Diesel engine, the improvements of the modified KIVA code in spray simulation accuracy are checked against spray structure, predicted average drop size, and spray tip penetration, respectively. The results show a dramatic decrease in grid dependency. With these changes, the distortion of the spray structure vanishes. The uncertainty in predicted average drop size is reduced from 30 to 5 μm in the constant volume apparatus calculation, and further to 2 μm in an engine simulation. The predicted spray tip penetrations in engine simulation also show better consistency between medium and fine meshes.

  16. HEIGHT ACCURACY BASED ON DIFFERENT RTK GPS METHOD FOR ULTRALIGHT AIRCRAFT IMAGES

    K. N. Tahar

    2015-01-01

    Height accuracy is one of the important elements in surveying work, especially for control point establishment, which requires accurate measurement. Many methods can be used to acquire height values, such as tacheometry, leveling and the Global Positioning System (GPS). This study investigated the effect on height accuracy of different observation approaches, namely single-based and network-based GPS methods. The GPS network is acquired from the local network namely Iskandar...

  17. Improving the Accuracy of Industrial Robots by offline Compensation of Joints Errors

    OLABI, Adel; Damak, Mohamed; BEAREE, Richard; Gibaru, Olivier; LELEU, Stéphane

    2012-01-01

    The use of industrial robots in many fields of industry, such as prototyping, pre-machining and end milling, is limited by their poor accuracy. Robot joints are mainly responsible for this poor accuracy: the flexibility of robot joints and the kinematic errors in the transmission systems produce a significant position error at the level of the end-effector. This paper presents these two types of joint errors. Identification methods are presented with experimen...

  18. The influence of subjective factors on the evaluation of singing voice accuracy

    Larrouy, Pauline; Morsomme, Dominique

    2013-01-01

    A previous study highlighted the objectivity of music experts when rating the vocal accuracy of sung performances (Larrouy-Maestri, Lévêque, Schön, Giovanni, & Morsomme, 2013). However, in an ecological context, numerous factors can influence the judges’ assessment of a music performance. This preliminary study aims to examine the effect of the music level of the performers on the evaluation of singing voice accuracy and to explore subjective factors which could influence the assessment. T...

  19. Visual Inspection Displays Good Accuracy for Detecting Caries Lesions

    Twetman, Svante

    2015-01-01

    ARTICLE TITLE AND BIBLIOGRAPHIC INFORMATION: Visual inspection for caries detection: a systematic review and meta-analysis. Gimenez T, Piovesan C, Braga MM, Raggio DP, Deery C, Ricketts DN, Ekstrand DR, Mendes FM. J Dent Res 2015;94(7):895-904. REVIEWER: Svante Twetman, DDS, PhD, Odont Dr. PURPOSE/QUESTION: To evaluate the overall accuracy of visual methods for detecting caries lesions. SOURCE OF FUNDING: Brazilian government (Process 2012/17888-1). TYPE OF STUDY/DESIGN: Systematic review with meta-analysis of data. LEVEL OF EVIDENCE: Level 1: Good-quality, patient-oriented evidence. STRENGTH OF RECOMMENDATION GRADE: Grade A: Consistent, good-quality patient-oriented evidence.

  20. Classification accuracy across multiple tests following item method directed forgetting.

    Goernert, Phillip N; Widner, Robert L; Otani, Hajime

    2007-09-01

    We investigated recall of line-drawing pictures paired at study with an instruction either to remember (TBR items) or to forget (TBF items). Across three 7-minute tests, net recall (items reported independent of accuracy in instructional designation) and correctly classified recall (recall conditional on correct instructional designation) showed directed forgetting. That is, for both measures, recall of TBR items always exceeded recall of TBF items. Net recall for both item types increased across tests at comparable levels showing hypermnesia. However, across tests, correct classification of both item types decreased at comparable levels. Collectively, hypermnesia as measured by net recall is possible for items from multiple sets, but at the cost of accurate source information. PMID:17676551

  1. Thematic accuracy of the National Land Cover Database (NLCD) 2001 land cover for Alaska

    Selkowitz, D.J.; Stehman, S.V.

    2011-01-01

    The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches. © 2011.
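The two agreement definitions used above (map class matching the primary label only, versus matching either the primary or the alternate label) differ only in which reference labels count as a hit. A minimal sketch with invented class labels, not the NLCD reference sample:

```python
# Each record: (map_class, primary_ref, alternate_ref). Hypothetical labels.
records = [
    ("forest", "forest", None),
    ("shrub",  "grass",  "shrub"),
    ("water",  "water",  None),
    ("grass",  "shrub",  "forest"),
    ("forest", "forest", "shrub"),
]

def overall_accuracy(recs, use_alternate):
    # Count a pixel as correct if the map class matches the primary reference
    # label, or (optionally) the alternate reference label.
    hits = 0
    for map_cls, primary, alternate in recs:
        if map_cls == primary or (use_alternate and map_cls == alternate):
            hits += 1
    return hits / len(recs)

print(overall_accuracy(records, use_alternate=False))  # primary-only agreement
print(overall_accuracy(records, use_alternate=True))   # primary-or-alternate agreement
```

Allowing the alternate label can only raise the hit count, which is why the primary-or-alternate accuracies reported above are uniformly higher than the primary-only figures.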

  2. Character Reading Fluency, Word Segmentation Accuracy, and Reading Comprehension in L2 Chinese

    Shen, Helen H.; Jiang, Xin

    2013-01-01

    This study investigated the relationships between lower-level processing and general reading comprehension among adult L2 (second-language) beginning learners of Chinese, in both target and non-target language learning environments. Lower-level processing in Chinese reading includes the factors of character-naming accuracy, character-naming speed,…

  3. Improving the accuracy of atomic emission determination of europium isotope ratio without reference samples

    A method has been developed for determining the isotopic composition of europium without standard reference samples. Estimation of the analysis accuracy has shown a statistically negligible systematic error. The confidence interval of a single determination (P = 0.95) is ±0.3 at.% for 151Eu contents from 47.47 to 98.2 at.%

  4. [True color accuracy in digital forensic photography].

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation). PMID:27386623

  5. Accuracy of velocities from repeated GPS measurements

    Akarsu, V.; Sanli, D. U.; Arslan, E.

    2015-04-01

    Today repeated GPS measurements are still in use, because we cannot always employ GPS permanent stations due to a variety of limitations. One area of study that uses velocities/deformation rates from repeated GPS measurements is the monitoring of crustal motion. This paper discusses the quality of the velocities derived using repeated GPS measurements for the aim of monitoring crustal motion. From a global network of International GNSS Service (IGS) stations, we processed GPS measurements repeated monthly and annually spanning nearly 15 years and estimated GPS velocities for GPS baseline components latitude, longitude and ellipsoidal height. We used web-based GIPSY for the processing. Assuming true deformation rates can only be determined from the solutions of 24 h observation sessions, we evaluated the accuracy of the deformation rates from 8 and 12 h sessions. We used statistical hypothesis testing to assess the velocities derived from short observation sessions. In addition, as an alternative control method we checked the accuracy of GPS solutions from short observation sessions against those of 24 h sessions referring to statistical criteria that measure the accuracy of regression models. Results indicate that the velocities of the vertical component are completely affected when repeated GPS measurements are used. The results also reveal that only about 30% of the 8 h solutions and about 40% of 12 h solutions for the horizontal coordinates are acceptable for velocity estimation. The situation is much worse for the vertical component in which none of the solutions from campaign measurements are acceptable for obtaining reliable deformation rates.
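A deformation rate from repeated campaigns, as discussed above, is essentially the slope of a straight-line fit of a coordinate component against time. A minimal least-squares sketch with hypothetical annual offsets, not the IGS series used in the study:

```python
# Hypothetical annual campaign series: (decimal year, north offset in mm).
# The site velocity is the slope of a straight line fitted through the series.
series = [(2000.5, 0.0), (2001.5, 5.2), (2002.5, 9.8), (2003.5, 15.1), (2004.5, 20.3)]

def velocity_mm_per_year(obs):
    # Ordinary least-squares slope: cov(t, x) / var(t).
    n = len(obs)
    mean_t = sum(t for t, _ in obs) / n
    mean_x = sum(x for _, x in obs) / n
    num = sum((t - mean_t) * (x - mean_x) for t, x in obs)
    den = sum((t - mean_t) ** 2 for t, _ in obs)
    return num / den

print(f"velocity: {velocity_mm_per_year(series):.2f} mm/yr")
```

Shorter observation sessions inflate the scatter of the individual coordinates, which propagates directly into the uncertainty of this slope; that is the effect the hypothesis tests in the study quantify.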

  6. Improvement of focus accuracy on processed wafer

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

    As feature size shrinkage in semiconductor devices progresses, process fluctuation, especially focus, strongly affects device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors are due to wafers, exposure tools, reticles, QCs, and so on. Few studies have addressed minimizing the measurement errors of the auto focus (AF) sensors of exposure tools, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor [1] has already been proposed; its basic principle is that the intensity of the diffracted light of the mask pattern is made asymmetric by arranging a π/2 phase shift area on a reticle. The resist pattern exposed at a defocus position is shifted on the wafer, and the shifted pattern can easily be measured using an overlay inspection tool. However, it is difficult to measure the shifted pattern on a processed wafer because of interruptions caused by other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, in which the shift of the PSG resist mark is measured with a critical dimension scanning electron microscope (CD-SEM) to determine the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure, we evaluate the focus accuracy on processed wafers. By applying SEM-PSG feedback, the focus accuracy on the processed wafer was improved from 40 to 29 nm in 3σ.

  7. Accuracy of the river discharge measurement

    Chung Yang, Han

    2013-04-01

    Recording discharge values for water conservancy and hydrological analysis is very important work. Flood control projects, watershed remediation and river environmental planning projects all depend on discharge measurement data. In Taiwan, we have 129 rivers, divided in accordance with watershed conditions, economic development and other factors into 24 major rivers, 29 minor rivers and 79 ordinary rivers. Measuring and recording discharge values for every river is an enormous task. In addition, Taiwan's rivers are characterized by steep slopes, rapid flow and high sediment concentrations, so high-flow measurement encounters real difficulties. When flood hazards occur, a way to reduce the time, manpower and material resources spent on river discharge measurement is very important. In this study, the river discharge measurement accuracy is used to determine the tolerance percentage for reducing the number of vertical velocity measurements, thereby reducing the time, manpower and material resources required. The velocity data used in this study come from Yang (1998), who used Fiber-optic Laser Doppler Velocimetry (FLDV) to obtain velocity data under different experimental conditions. In this study, we use these data to calculate the mean velocity of each vertical line by three different velocity profile formulas (the law of the wall, Chiu's theory, and Hu's theory), multiply by each sub-area to obtain the discharge measurement values, and compare with the true values (obtained by direct integration) to obtain the discharge accuracy. The results show that the discharge values obtained by Chiu's theory are closest to the true value, while the largest error comes from the law of the wall. The main reason is that the law of the wall cannot describe a maximum velocity occurring below the water surface.
In addition, the results also show
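The velocity-area computation described above (mean velocity per vertical multiplied by its sub-area, summed across the section) can be sketched with the standard mid-section method. The verticals below are invented illustrative values, not the FLDV data:

```python
# Hypothetical verticals across a river section:
# (distance from bank in m, depth in m, mean velocity on the vertical in m/s).
verticals = [(1.0, 0.5, 0.3), (3.0, 1.2, 0.8), (5.0, 1.5, 1.0), (7.0, 1.1, 0.7), (9.0, 0.4, 0.2)]

def midsection_discharge(vs):
    # Mid-section method: each vertical represents a panel extending halfway
    # to its neighbours; discharge is the sum of velocity * depth * panel width.
    q = 0.0
    for i, (x, depth, vel) in enumerate(vs):
        left = vs[i - 1][0] if i > 0 else x
        right = vs[i + 1][0] if i < len(vs) - 1 else x
        width = (right - left) / 2.0
        q += vel * depth * width
    return q

print(f"Q = {midsection_discharge(verticals):.3f} m^3/s")
```

Dropping verticals reduces measurement time but coarsens the panels, which is exactly the accuracy-versus-effort trade-off the study quantifies.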

  8. Accuracy of Ultrasonography in Diagnosing Acute Appendicitis

    Parisa Javidi Parsijani; Nima Pourhabibi Zarandi; Shahram Paydar; Hamidreza Abbasi; Shahram Bolandparvaz

    2013-01-01

    Objectives: To evaluate the accuracy of sonography in diagnosing acute appendicitis in patients with Alvarado scores of 4–7. Methods: This is a retrospective cross-sectional study performed in Namazee hospital, affiliated with Shiraz University of Medical Sciences, during a one-year period from 9/2007 to 9/2008. We evaluated all patients with Alvarado scores of 4–7 and divided them into two groups: those with an ultrasound study prior to surgery and those without any imaging modalities for diagnosis of...

  9. Proper motion accuracy of WFPDF stars

    Chapanov, Y.; Vondrák, Jan; Ron, Cyril; Štefka, Vojtěch

    Beograd : Astronomical Society "Rudjer Boškovič", 2012 - (Tsvetkov, M.; Dimitrijevič, M.; Tsvetkova, K.; Kounchev, O.; Mijajlovič, Ž.), s. 169-176 ISBN 9788689035018. [Bulgarian-Serbian Astronomical Conference /7./. Chepelare (BG), 01.06.2010-04.06.2010] R&D Projects: GA MŠk(CZ) LC506 Institutional research plan: CEZ:AV0Z10030501 Keywords : proper motions * accuracy Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics http://wfpdb.org/ftp/7_BSAC/pdfs/c06.pdf

  10. On the Accuracy of IGS Orbits

    Griffiths, J.; Ray, J.

    2007-12-01

    In order to explore the reliability of IGS internal orbit accuracy estimates, we have compared the geocentric satellite positions at the midnight epoch between consecutive days for the period since November 5, 2006, when the IGS changed its method of antenna calibration. For each pair of orbits, day "A" has been fitted to the extended CODE orbit model (three position and three velocity parameters plus nine nuisance solar radiation parameters), using the IGS05 Final orbits as pseudo-observations, and extrapolated to epoch 24:00 to compare with the 00:00 epoch from the IGS05 Final orbits of day "B". This yields a time series of orbit repeatability measures, analogous to the classical geodetic test for position determinations. To assess the error introduced by the fitting and extrapolation process, the same procedure has been applied to several days dropping the 23:45 epoch, fitting up to 23:30, extrapolating to 23:45, and comparing with reported positions for 23:45. The test differences range between 0 and 10 mm (mean = 3 mm) per geocentric component, with 3D differences of 3 to 10 mm (mean = 6 mm). So, the effect of the orbit fitting-extrapolation process nearly always adds insignificant noise to the day-boundary orbit comparisons. If we compare our average 1D position differences to the official IGS accuracy codes (derived from the internal agreement among combined orbit solutions), root-sum-squared for each pair of days, the actual discontinuities are not well correlated with the expected performance values. If instead the IGS RMS values from the Final combination long-arc analyses (which also use the extended CODE model) are taken as the measure of IGS accuracy, the actual orbit discontinuities are much better represented. This is despite the fact that our day-boundary offsets apply to a single epoch each day and the long-arc analyses consider variations over a day (compared to the satellite dynamics determined over the full week). 
Our method is not well suited
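The day-boundary test described above reduces, per satellite and day pair, to the 3D distance between day A's position extrapolated to 24:00 and day B's reported 00:00 position, with the resulting series summarized by an RMS. A minimal sketch with hypothetical geocentric XYZ tuples in metres (not the IGS processing software):

```python
import math

def day_boundary_offset(pos_a_24h, pos_b_00h):
    """3D discontinuity between day A's position extrapolated to 24:00
    and day B's reported 00:00 position (geocentric XYZ, same frame)."""
    return math.dist(pos_a_24h, pos_b_00h)

def rms(values):
    """Root-mean-square of a series of discontinuities, one per day pair."""
    return math.sqrt(sum(v * v for v in values) / len(values))
```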