WorldWideScience

Sample records for linearity specificity accuracy

  1. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1989-10-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  2. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and the ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.

  3. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  4. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
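
    The "multiplication by digital convolution" mentioned above can be illustrated compactly: when two numbers are represented as digit sequences in some base, their product is the convolution of the digit sequences followed by carry propagation, so a convolver only ever handles small digit values. A minimal Python sketch of that idea (illustrative only, not the report's architecture):

```python
import numpy as np

def digits(n, base=2):
    """Least-significant-first digit sequence of a non-negative integer."""
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out or [0]

def multiply_by_convolution(a, b, base=2):
    """Multiply two integers by convolving their digit sequences, then
    propagating carries -- the operation a digit-encoded convolver performs."""
    conv = np.convolve(digits(a, base), digits(b, base))  # mixed-radix partial products
    result, carry = 0, 0
    for place, d in enumerate(conv):
        total = int(d) + carry
        result += (total % base) * base ** place
        carry = total // base
    return result + carry * base ** len(conv)

assert multiply_by_convolution(1234, 5678, base=2) == 1234 * 5678
```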

  5. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Directory of Open Access Journals (Sweden)

    Santana Isabel

    2011-08-01

    Abstract. Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone. Conclusions: When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.
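
    A minimal sketch of the comparison protocol described above (stratified 5-fold cross-validation with sensitivity, specificity and AUC per fold) for three of the named classifiers, using scikit-learn on synthetic stand-in data rather than the study's neuropsychological scores:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in for 10 neuropsychological test scores (not the study data).
X, y = make_classification(n_samples=400, n_features=10, n_informative=6, random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    sens, spec, auc = [], [], []
    for train, test in cv.split(X, y):
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        tn, fp, fn, tp = confusion_matrix(y[test], pred).ravel()
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
        auc.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
    print(f"{name}: sensitivity={np.mean(sens):.2f} "
          f"specificity={np.mean(spec):.2f} AUC={np.mean(auc):.2f}")
```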

  6. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Yang

    2018-04-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.

  7. Bounds on achievable accuracy in analog optical linear-algebra processors

    Science.gov (United States)

    Batsell, Stephen G.; Walkup, John F.; Krile, Thomas F.

    1990-07-01

    Upper and lower bounds on the number of bits of accuracy achievable are determined by applying a second-order statistical model to the linear algebra processor. The use of bounds was found necessary due to the strong signal dependence of the noise at the output of the optical linear algebra processor (OLAP). One of the limiting factors in applying OLAPs to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective, which would include noise generated in the multiplication and addition operations, spatial variations across arrays, and crosstalk. We have previously examined these noise sources and determined a general model for the output noise mean and variance. The model demonstrates a strong signal dependence in the noise at the output of the processor, which has been confirmed by our experiments. We define accuracy similarly to its definition for an analog signal input to an analog-to-digital (A/D) converter. The number of bits of accuracy achievable is related to the log (base 2) of the number of separable levels at the A/D converter output. The number of separable levels is found by dividing the dynamic range by m times the standard deviation of the signal, where m determines the error rate in the A/D conversion.
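
    The accuracy definition quoted above reduces to a one-line computation: bits = log2(dynamic range / (m·σ)). A small illustrative sketch with hypothetical numbers (the dynamic range and noise level below are assumptions, not figures from the record):

```python
import math

def achievable_bits(dynamic_range, sigma, m=3.0):
    """Bits of accuracy: log2 of the number of separable output levels,
    where each level spans m standard deviations of the output noise."""
    separable_levels = dynamic_range / (m * sigma)
    return math.log2(separable_levels)

# Hypothetical numbers for illustration: a dynamic range of 100 output units
# and signal-dependent noise with sigma = 0.5 output units.
print(f"{achievable_bits(dynamic_range=100.0, sigma=0.5):.1f} bits")  # ~6.1 bits
```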

  8. Travel-time source-specific station correction improves location accuracy

    Science.gov (United States)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they may affect the triggering of a possible On Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network and using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.

  9. Accuracy of panoramic radiography and linear tomography in mandibular canal localization

    Directory of Open Access Journals (Sweden)

    Bashizadeh Fakhar H.

    2008-10-01

    Full Text Available "nBackground and Aim: Accurate bone measurements are essential to determine the optimal size and length of dental implants. The magnification factor of radiographic images may vary with the imaging technique used. The purpose of this study was to compare the accuracy of linear tomography and panoramic radiography in vertical measurements, as well as the accuracy of linear tomography in mandibular width estimation. "nMaterials and Methods: In this test evaluation study, the vertical distances between the crest and the superior border of the inferior alveolar canal, marked with a metal ball, was measured by linear tomography and panoramic radiography in 23 sites of four dry mandible bones. Also the mandibular width was measured at the same sites. Then, the bones were sectioned through the marked spots and the radiographic measurements were compared with actual values. "nResults: The vertical magnification factor in tomograms and panoramic radiographs was 1.79 (SD=0.17 and 1.69 (SD=0.23, respectively. The horizontal magnification of tomograms was 1.47 (SD=0.17. A significant correlation was found between the linear tomographic and actual values, regarding vertical dimensions (p<0.001, r=0.968 and width (p<0.001, r=0.813. The correlation was significant but lower in panoramic radiographs (p<0.001, r=0.795. Applying the magnification values suggested by the manufacturer, the mean difference of vertical measurements between the tomographic sections was 2.5 mm (SD=3.4 but 3.8 mm (SD=1.65 in panoramic radiographs. The mean of absolute difference in mandibular width between the tomographic sections and reality was 0.3mm (SD=1.13. In the linear tomograms, 4.3% of vertical and 56.5% of the width measurements were in the ±1mm error limit. Only 4.3% of the vertical measurements were within this range in the panthomographs. The linear regression equation between the actual values and those obtained by radiography in vertical dimensions showed that 87.5% of

  10. Improvement on the accuracy of beam bugs in linear induction accelerator

    International Nuclear Information System (INIS)

    Xie Yutong; Dai Zhiyong; Han Qing

    2002-01-01

    In linear induction accelerators, the resistive wall monitors known as 'beam bugs' have been used as essential diagnostics of beam current and location. The authors present a new method that can improve the accuracy of these beam bugs when used for beam position measurements. With a fine beam simulation set, this method locates the beam position with an accuracy of 0.02 mm and thus can scale the beam bugs very well. Experimental results prove that the precision of beam position measurements can reach the submillimeter level.

  11. Diagnostic accuracy of full-body linear X-ray scanning in multiple trauma patients in comparison to computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Joeres, A.P.W.; Heverhagen, J.T.; Bonel, H. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Exadaktylos, A. [Inselspital - University Hospital Bern (Switzerland). Dept. of Emergency Medicine; Klink, T. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Wuerzburg Univ. (Germany). Inst. of Diagnostic and Interventional Radiology

    2016-02-15

    The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple trauma patients in comparison to 128-multislice computed tomography (MSCT). 106 multiple trauma patients (female: 33; male: 73) were retrospectively included in this study. All patients underwent LS of the whole body, including extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated in comparison to MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2%, the specificity was 93.3%, the positive predictive value was 91%, and the negative predictive value was 57.5%. The overall sensitivity for vertebral fractures was 16.7%, and the specificity was 100%. The sensitivity was 48.7% and the specificity 98.2% for all other fractures. Pneumothoraces were detected in 12 patients by CT, but not by LS. Forty extremity fractures were detected by LS, of which 4 fractures were dislocated, and 2 were fully covered by MSCT. The diagnostic accuracy of LS is limited in the evaluation of acute trauma of the truncal skeleton. LS allows fast whole-body X-ray imaging, and may be valuable for detecting extremity fractures in trauma patients in addition to MSCT.

  12. The Accuracy and Reproducibility of Linear Measurements Made on CBCT-derived Digital Models.

    Science.gov (United States)

    Maroua, Ahmad L; Ajaj, Mowaffak; Hajeer, Mohammad Y

    2016-04-01

    To evaluate the accuracy and reproducibility of linear measurements made on cone-beam computed tomography (CBCT)-derived digital models. A total of 25 patients (44% female, 18.7 ± 4 years) who had CBCT images for diagnostic purposes were included. Plaster models were obtained and digital models were extracted from CBCT scans. Seven linear measurements from predetermined landmarks were measured and analyzed on plaster models and the corresponding digital models. The measurements included arch length and width at different sites. Paired t test and Bland-Altman analysis were used to evaluate the accuracy of measurements on digital models compared to the plaster models. Also, intraclass correlation coefficients (ICCs) were used to evaluate the reproducibility of the measurements in order to assess the intraobserver reliability. The statistical analysis showed significant differences on 5 out of 14 variables, and the mean differences ranged from -0.48 to 0.51 mm. The Bland-Altman analysis revealed that the mean difference between variables was (0.14 ± 0.56) and (0.05 ± 0.96) mm and limits of agreement between the two methods ranged from -1.2 to 0.96 and from -1.8 to 1.9 mm in the maxilla and the mandible, respectively. The intraobserver reliability values were determined for all 14 variables of two types of models separately. The mean ICC value for the plaster models was 0.984 (0.924-0.999), while it was 0.946 for the CBCT models (range from 0.850 to 0.985). Linear measurements obtained from the CBCT-derived models appeared to have a high level of accuracy and reproducibility.
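
    A minimal sketch of the Bland-Altman part of the analysis reported above: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 times their standard deviation. The measurements below are illustrative values, not the study data:

```python
import numpy as np

def bland_altman(plaster, digital):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(digital) - np.asarray(plaster)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Illustrative arch-width measurements in mm (not the study data).
plaster = [35.2, 41.0, 38.7, 44.1, 36.5]
digital = [35.0, 41.4, 38.9, 43.8, 36.7]
bias, lower, upper = bland_altman(plaster, digital)
print(f"bias = {bias:.2f} mm, limits of agreement = [{lower:.2f}, {upper:.2f}] mm")
```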

  13. A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation

    Directory of Open Access Journals (Sweden)

    Jinsong Hu

    2013-01-01

    We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has a theoretical accuracy of O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.
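
    For orientation, the Rosenau-RLW equation is commonly written in the related literature as follows (stated here as background; the record itself does not spell out the equation):

```latex
% Rosenau-RLW equation, as commonly stated in the related literature:
u_t - u_{xxt} + u_{xxxxt} + u_x + u\,u_x = 0, \qquad x \in (x_l, x_r),\; t \in (0, T],
```

    with initial datum u(x, 0) = u₀(x) and homogeneous boundary conditions at x_l and x_r; the proposed three-level scheme approximates this problem with accuracy O(τ² + h⁴).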

  14. RF system specifications for a linear accelerator

    International Nuclear Information System (INIS)

    Young, A.; Eaton, L.E.

    1992-01-01

    A linear accelerator contains many systems; however, the most complex and costly is the RF system. The goal of an RF system is usually simply stated as maintaining the phase and amplitude of the RF signal within a given tolerance to accelerate the charged particle beam. An RF system that drives a linear accelerator needs a complete system specification, which should contain specifications for all the subsystems (i.e., high-power RF, low-level RF, RF generation/distribution, and automation control). This paper defines a format for the specifications of these subsystems and discusses each RF subsystem independently to provide a comprehensive understanding of the function of each subsystem. This paper concludes with an example of a specification spreadsheet allowing one to input the specifications of a subsystem. Thus, some fundamental parameters (i.e., the cost and size) of the RF system can be determined

  15. Prostate-specific antigen testing accuracy in community practice

    Directory of Open Access Journals (Sweden)

    Adams-Cameron Meg

    2002-10-01

    Abstract. Background: Most data on prostate-specific antigen (PSA) testing come from urologic cohorts comprised of volunteers for screening programs. We evaluated the diagnostic accuracy of PSA testing for detecting prostate cancer in community practice. Methods: PSA testing results were compared with a reference standard of prostate biopsy. Subjects were 2,620 men 40 years and older undergoing PSA testing and biopsy from 1/1/95 through 12/31/98 in the Albuquerque, New Mexico metropolitan area. Diagnostic measures included the area under the receiver-operating characteristic curve, sensitivity, specificity, and likelihood ratios. Results: Cancer was detected in 930 subjects (35%). The area under the ROC curve was 0.67, and the PSA cutpoint of 4 ng/ml had a sensitivity of 86% and a specificity of 33%. The likelihood ratio was 1.28 for a positive test (LR+) and 0.42 for a negative test (LR-). PSA testing was most sensitive (90%) but least specific (27%) in older men. Age-specific reference ranges improved specificity in older men (49%) but decreased sensitivity (70%), with an LR+ of 1.38. Lowering the PSA cutpoint to 2 ng/ml resulted in a sensitivity of 95%, a specificity of 20%, and an LR+ of 1.19. Conclusions: PSA testing had fair discriminating power for detecting prostate cancer in community practice. The PSA cutpoint of 4 ng/ml was sensitive but relatively non-specific, and the associated likelihood ratios only moderately revised probabilities for cancer. Using age-specific reference ranges and a PSA cutpoint below 4 ng/ml improved test specificity and sensitivity, respectively, but did not improve the overall accuracy of PSA testing.
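
    The likelihood ratios quoted above follow directly from sensitivity and specificity; the quick check below (Python, illustrative only) reproduces the reported values for the 4 ng/ml cutpoint:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Values reported for the 4 ng/ml cutpoint: sensitivity 86%, specificity 33%.
lr_pos, lr_neg = likelihood_ratios(0.86, 0.33)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # ~1.28 and ~0.42, matching the abstract
```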

  16. Modeling of Geometric Error in Linear Guide Way to Improve the Vertical Three-Axis CNC Milling Machine's Accuracy

    Science.gov (United States)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during both the manufacturing process and the assembly phase, and which must be controlled in order to build machines with high accuracy. The accuracy of a three-axis vertical milling machine can be improved by knowing the geometric errors and identifying the error position parameters of the machine tool through mathematical modeling. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical model relates the calculated alignment and angular errors to the components supporting the machine motion, namely the linear guide way and the linear motion elements. The purpose of this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of CNC machine tools can illustrate the relationship between alignment error, position and angle on the linear guide ways of three-axis vertical milling machines.

  17. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material

    OpenAIRE

    Manoj, Smita Sara; Cherian, K. P.; Chitre, Vidya; Aras, Meena

    2013-01-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregu...

  18. Accuracy of linear measurement using cone-beam computed tomography at different reconstruction angles

    International Nuclear Information System (INIS)

    Nikneshan, Nikneshan; Aval, Shadi Hamidi; Bakhshalian, Neema; Shahab, Shahriyar; Mohammadpour, Mahdis; Sarikhani, Soodeh

    2014-01-01

    This study was performed to evaluate the effect of changing the orientation of a reconstructed image on the accuracy of linear measurements using cone-beam computed tomography (CBCT). Forty-two titanium pins were inserted in seven dry sheep mandibles. The length of these pins was measured using a digital caliper with readability of 0.01 mm. Mandibles were radiographed using a CBCT device. When the CBCT images were reconstructed, the orientation of slices was adjusted to parallel (i.e., 0 degrees), +10 degrees, +12 degrees, -12 degrees, and -10 degrees with respect to the occlusal plane. The length of the pins was measured by three radiologists, and the accuracy of these measurements was reported using descriptive statistics and one-way analysis of variance (ANOVA); p<0.05 was considered statistically significant. The differences in radiographic measurements ranged from -0.64 to +0.06 at the orientation of -12 degrees, -0.66 to -0.11 at -10 degrees, -0.51 to +0.19 at 0 degrees, -0.64 to +0.08 at +10 degrees, and -0.64 to +0.1 at +12 degrees. The mean absolute values of the errors were greater at negative orientations than at the parallel position or at positive orientations. The observers underestimated most of the variables by 0.5-0.1 mm (83.6%). In the second set of observations, the reproducibility at all orientations was greater than 0.9. Changing the slice orientation in the range of -12 degrees to +12 degrees reduced the accuracy of linear measurements obtained using CBCT. However, the error value was smaller than 0.5 mm and was, therefore, clinically acceptable.

  19. Accuracy and Reliability of Cone-Beam Computed Tomography for Linear and Volumetric Mandibular Condyle Measurements. A Human Cadaver Study.

    Science.gov (United States)

    García-Sanz, Verónica; Bellot-Arcís, Carlos; Hernández, Virginia; Serrano-Sánchez, Pedro; Guarinos, Juan; Paredes-Gallardo, Vanessa

    2017-09-20

    The accuracy of Cone-Beam Computed Tomography (CBCT) on linear and volumetric measurements on condyles has only been assessed on dry skulls. The aim of this study was to evaluate the reliability and accuracy of linear and volumetric measurements of mandibular condyles in the presence of soft tissues using CBCT. Six embalmed cadaver heads were used. CBCT scans were taken, followed by the extraction of the condyles. The water displacement technique was used to calculate the volumes of the condyles and three linear measurements were made using a digital caliper, these measurements serving as the gold standard. Surface models of the condyles were obtained using a 3D scanner, and superimposed onto the CBCT images. Condyles were isolated on the CBCT render volume using the surface models as reference and volumes were measured. Linear measurements were made on CBCT slices. The CBCT method was found to be reliable for both volumetric and linear measurements (CV  0.90). Highly accurate values were obtained for the three linear measurements and volume. CBCT is a reliable and accurate method for taking volumetric and linear measurements on mandibular condyles in the presence of soft tissue, and so a valid tool for clinical diagnosis.

  20. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Science.gov (United States)

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable in predicting stormwater discharge characteristics has remained controversial across studies. In this study, we examined the accuracy of general linear regression models in predicting the discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was run for 55 storm events, and the resulting total suspended solids (TSS) discharge loads and event mean concentrations (EMCs) were extracted. From these data, linear regression models were developed. The R² and p-values of the regression of ADD against both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data.
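
    A minimal sketch of the kind of multiple linear regression examined here (TSS load regressed on storm-event descriptors such as ADD), using synthetic stand-in data rather than the monitored or SWMM-simulated events; the predictor names and coefficients are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_events = 55
# Synthetic storm-event descriptors (stand-ins, not the monitored data):
# antecedent dry days, total rainfall (mm), peak intensity (mm/h).
X = np.column_stack([
    rng.uniform(0, 20, n_events),
    rng.uniform(5, 80, n_events),
    rng.uniform(2, 40, n_events),
])
tss_load = 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 5, n_events)  # weak ADD effect

model = LinearRegression().fit(X, tss_load)
print("coefficients (ADD, rainfall, intensity):", model.coef_.round(2))
print("R^2 =", round(r2_score(tss_load, model.predict(X)), 2))
```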

  1. A Linearized Relaxing Algorithm for the Specific Nonlinear Optimization Problem

    Directory of Open Access Journals (Sweden)

    Mio Horai

    2016-01-01

    We propose a new method for a specific nonlinear and nonconvex global optimization problem by using a linear relaxation technique. To simplify the specific nonlinear and nonconvex optimization problem, we transform the problem into a lower linear relaxation form, and we solve the linear relaxation optimization problem with a branch-and-bound algorithm. Under some reasonable assumptions, the global convergence of the algorithm is proved for the problem. Numerical results show that this method is more efficient than previous methods.

  2. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression

    Directory of Open Access Journals (Sweden)

    Lemieux Sébastien

    2006-08-01

    Abstract. Background: The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. Results: On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. Conclusion: The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
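
    A hedged sketch of the two-stage idea described above: estimate a per-gene treatment effect from probe-level data, then fit a two-component Gaussian mixture to the effect estimates to flag DEGs. This is not the authors' implementation; the per-gene linear model is simplified here to a mean contrast over probes, and the data are simulated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n_genes, n_probes, n_arrays = 500, 11, 6
condition = np.array([0, 0, 0, 1, 1, 1])            # 3 control vs 3 treated arrays

# Synthetic probe-level intensities: probe affinities plus a treatment effect for ~10% of genes.
effects_true = np.where(rng.random(n_genes) < 0.1, rng.normal(2.0, 0.5, n_genes), 0.0)
data = (rng.normal(8, 1, (n_genes, n_probes, 1))     # per-probe baseline affinities
        + effects_true[:, None, None] * condition    # treatment effect on treated arrays
        + rng.normal(0, 0.3, (n_genes, n_probes, n_arrays)))

# Stage 1: per-gene estimate of the treatment coefficient (simplified to a mean contrast).
effect_hat = (data[:, :, condition == 1].mean(axis=(1, 2))
              - data[:, :, condition == 0].mean(axis=(1, 2)))

# Stage 2: a two-component Gaussian mixture separates "unchanged" from "differential" genes.
gm = GaussianMixture(n_components=2, random_state=0).fit(effect_hat.reshape(-1, 1))
deg_component = np.argmax(gm.means_.ravel())
is_deg = gm.predict(effect_hat.reshape(-1, 1)) == deg_component
print(f"flagged {is_deg.sum()} genes as differentially expressed")
```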

  3. Accuracy of Linear Measurements in Stitched Versus Non-Stitched Cone Beam Computed Tomography Images

    International Nuclear Information System (INIS)

    Srimawong, P.; Krisanachinda, A.; Chindasombatjaroen, J.

    2012-01-01

    Cone-beam computed tomography (CBCT) images are useful in clinical dentistry, and linear measurements are necessary for accurate treatment planning; therefore, the accuracy of linear measurements on CBCT images needs to be verified. The current stitching program in Kodak 9000C 3D systems automatically combines up to three localized volumes to construct larger images with a small voxel size. The purpose of this study was to assess the accuracy of linear measurements from stitched and non-stitched CBCT images in comparison to direct measurements. This study was performed on 10 human dry mandibles. Gutta-percha rods were used to mark reference points in order to obtain 10 vertical and horizontal distances. Direct measurements with a digital caliper served as the gold standard. All distances on CBCT images obtained with and without the stitching program were measured and compared with the direct measurements. Intraclass correlation coefficients (ICCs) were calculated. The ICCs of the direct measurements were 0.998 to 1.000. The intraobserver ICCs of both non-stitched and stitched CBCT images were 1.000, indicating strong agreement for a single observer. The intermethod ICCs between direct measurements and non-stitched CBCT images, and between direct measurements and stitched CBCT images, ranged from 0.972 to 1.000 and from 0.967 to 0.998, respectively. There were no statistically significant differences between direct measurements and stitched or non-stitched CBCT images (P > 0.05). The results showed that linear measurements on non-stitched and stitched CBCT images were highly accurate, with no statistical difference compared to direct measurements. The ICC values of vertical distances in non-stitched and stitched CBCT images and direct measurements were slightly higher than those of horizontal distances, indicating that measurements in the vertical orientation were more accurate than those in the horizontal orientation; however, the differences were not statistically significant.

  4. Intra- and inter-observer variability and accuracy in the determination of linear and angular measurements in computed tomography

    International Nuclear Information System (INIS)

    Christiansen, E.L.; Thompson, J.R.; Kopp, S.

    1986-01-01

    The observer variability and accuracy of linear and angular computed tomography (CT) software measurements in the transaxial plane were investigated for the temporomandibular joint with the General Electric 8800 CT/N Scanner. A dried and measured human mandible was embedded in plastic and scanned in vitro. Sixteen observers participated in the study. The following measurements were tested: inter- and extra-condylar distances, transverse condylar dimension, condylar angulation, and the plastic base of the specimen. Three frozen cadaveric heads were similarly scanned and measured in situ. Intra- and inter-observer variabilities were lowest for the specimen base and highest for condylar angulation. Neuroradiologists had the lowest variability as a group, and the radiology residents and paramedical personnel had the highest, but the differences were small. No significant difference was found between CT and macroscopic measurement of the mandible. In situ measurement by CT of condyles with structural changes in the transaxial plane was, however, subject to substantial error. It was concluded that transaxial linear measurements of the condylar processes free of significant structural changes had an error and an accuracy well within acceptable limits. The error for angular measurements was significantly greater than the error for linear measurements.

  5. Cross-beam energy transfer: On the accuracy of linear stationary models in the linear kinetic regime

    Science.gov (United States)

    Debayle, A.; Masson-Laborde, P.-E.; Ruyer, C.; Casanova, M.; Loiseau, P.

    2018-05-01

    We present an extensive numerical study by means of particle-in-cell simulations of the energy transfer that occurs during the crossing of two laser beams. In the linear regime, when ions are not trapped in the potential well induced by the laser interference pattern, a very good agreement is obtained with a simple linear stationary model, provided the laser intensity is sufficiently smooth. These comparisons include different plasma compositions to cover the strong and weak Landau damping regimes as well as the multispecies case. The correct evaluation of the linear Landau damping at the phase velocity imposed by the laser interference pattern is essential to estimate the energy transfer rate between the laser beams, once the stationary regime is reached. The transient evolution obtained in kinetic simulations is also analysed by means of a full analytical formula that includes 3D beam energy exchange coupled with the ion acoustic wave response. Specific attention is paid to the energy transfer when the laser presents small-scale inhomogeneities. In particular, the energy transfer is reduced when the laser inhomogeneities are comparable with the Landau damping characteristic length of the ion acoustic wave.

  6. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A linear mixture simulated model experiment was generated to simulate real-world data with known end-member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized so that the sum of the class proportions equals 1.0, using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real-world data set. The atmospheric scattering error was computed for each pixel value for three generated images with SPOT data. Pixels can either be classified correctly or misclassified. The results showed a great improvement in classification accuracy; for example, in image 1, the proportion of pixels misclassified due to atmospheric noise was 41%. After degrading the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that classification accuracy can be improved by degradation of atmospheric noise.
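
    A small Python sketch of the linear mixture simulation described above (the original used MATLAB): random class cover proportions normalized to sum to 1 for each pixel, mixed through hypothetical end-member spectra, with additive noise standing in for atmospheric and sensor effects. All numeric values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_pixels, n_bands, n_classes = 1000, 4, 4   # four SPOT-like bands, four cover classes

# Hypothetical end-member spectra (rows: classes, columns: bands) -- illustrative values only.
endmembers = rng.uniform(20, 200, (n_classes, n_bands))

# Class cover proportions: random numbers normalized so each pixel's proportions sum to 1.
ccp = rng.random((n_pixels, n_classes))
ccp /= ccp.sum(axis=1, keepdims=True)

# Linear mixture model: pixel spectrum = proportions x end-members, plus additive noise.
noise_level = 5.0
pixels = ccp @ endmembers + rng.normal(0, noise_level, (n_pixels, n_bands))

# Simple check: unmix by least squares, assign each pixel to its dominant class,
# and compare with the known dominant class from the simulated proportions.
estimated = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)[0].T
accuracy = (estimated.argmax(axis=1) == ccp.argmax(axis=1)).mean()
print(f"dominant-class recovery under noise: {accuracy:.1%}")
```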

  7. Analysis on Accuracy of Bias, Linearity and Stability of Measurement System in Ball screw Processes by Simulation

    Directory of Open Access Journals (Sweden)

    Fan-Yun Pai

    2015-11-01

    To consistently produce high-quality products, a quality management system such as ISO 9001:2000 or TS 16949 must be practically implemented. One core instrument of the TS 16949 MSA (Measurement System Analysis) is to rank the capability of a measurement system and ensure that the quality characteristics of the product are carried through the whole manufacturing process. It is important to reduce the risk of Type I errors (acceptable goods misjudged as defective parts) and Type II errors (defective parts misjudged as good parts). An ideal measuring system would have the statistical characteristic of zero error, but such a system can hardly exist. Hence, to maintain better control of the variance that might occur in the manufacturing process, MSA is necessary for better quality control. Ball screws, which are a key component in precision machines, have significant attributes with respect to positioning and transmitting. Failures of lead accuracy and of the axial gap of a ball screw can have negative and expensive effects on machine positioning accuracy. Consequently, a functional measurement system can bring great savings by detecting Type I and Type II errors. If the measurement system fails with respect to the specification of the product, it will likely commit Type I and Type II errors. Inspectors normally follow the MSA regulations for accuracy measurement, but the choice of measuring system does not merely depend on a few simple indices. In this paper, we examine the stability of a measuring system by using a Monte Carlo simulation to establish the bias, the linearity, the variance of the normal distribution, and the probability density function. Further, we forecast the possible area distribution in the real case. After the simulation, the measurement capability is improved, which helps the user classify the measurement system and establish measurement regulations for better performance and monitoring of the precision of the ball screw.
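
    A hedged Monte Carlo sketch of the bias and linearity part of an MSA study: simulate repeated measurements of reference parts with an assumed gauge error model, then estimate the bias at each reference value and the linearity (the trend of bias versus reference value). The gauge model and reference values are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(7)

# Reference values of ball-screw samples (mm) -- hypothetical gauge-study parts.
reference = np.array([5.00, 10.00, 15.00, 20.00, 25.00])
n_trials = 10_000  # Monte Carlo repetitions per reference part

# Simulated gauge: a small constant offset plus an error growing with the measured value,
# plus Gaussian repeatability noise.
measured = (reference + 0.002 + 0.0004 * reference
            + rng.normal(0, 0.003, (n_trials, reference.size)))

bias = measured.mean(axis=0) - reference            # bias at each reference value
slope, intercept = np.polyfit(reference, bias, 1)   # linearity: trend of bias vs reference
print("bias per part (mm):", bias.round(4))
print(f"linearity slope = {slope:.5f}, intercept = {intercept:.5f}")
```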

  8. Best linear unbiased prediction of genomic breeding values using a trait-specific marker-derived relationship matrix.

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2010-09-01

    With the availability of high-density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome, the genetic merit of genotyped individuals can be predicted directly within the framework of mixed model equations, by using a matrix of relationships among individuals that is derived from the markers. Here we extend that approach by deriving a marker-based relationship matrix specifically for the trait of interest. In the framework of mixed model equations, a new best linear unbiased prediction (BLUP) method including a trait-specific relationship matrix (TA) was presented and termed TABLUP. The TA matrix was constructed on the basis of marker genotypes and their weights in relation to the trait of interest. A simulation study with 1,000 individuals as the training population and five successive generations as the candidate population was carried out to validate the proposed method. The proposed TABLUP method outperformed ridge regression BLUP (RRBLUP) and BLUP with a realized relationship matrix (GBLUP). It performed slightly worse than BayesB, with an accuracy of 0.79 in the standard scenario. The proposed TABLUP method is an improvement over the RRBLUP and GBLUP methods. It might be equivalent to the BayesB method, but it has additional benefits like the calculation of accuracies for individual breeding values. The results also showed that the TA matrix performs better in predictive ability than the classical numerator relationship matrix and the realized relationship matrix, which are derived solely from pedigree or markers without regard to the trait. This is because the TA matrix not only accounts for the Mendelian sampling term, but also puts greater emphasis on those markers that explain more of the genetic variance in the trait.
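
    A minimal sketch of the central idea: a marker-derived relationship matrix in which each marker is weighted by its relevance to the trait (TA-style), contrasted with the equally weighted GBLUP-style matrix. The genotype coding, scaling and weights below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_markers = 100, 500

# Simulated marker genotypes coded 0/1/2, then column-centered.
geno = rng.integers(0, 3, (n_ind, n_markers)).astype(float)
Z = geno - geno.mean(axis=0)

# GBLUP-style realized relationship matrix: all markers weighted equally.
G = Z @ Z.T / n_markers

# TABLUP-style trait-specific matrix: markers weighted by (for example) squared
# estimated marker effects for the trait -- random stand-in weights here.
marker_weights = rng.random(n_markers) ** 2
marker_weights /= marker_weights.sum()
TA = Z @ np.diag(marker_weights) @ Z.T

# Either matrix can be plugged into the mixed model equations in place of the
# pedigree-based numerator relationship matrix.
print("G diagonal mean:", G.diagonal().mean().round(2))
print("TA diagonal mean:", TA.diagonal().mean().round(2))
```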

  9. Evaluation of the accuracy of panoramic radiography in linear measurements of the jaws

    International Nuclear Information System (INIS)

    Hoseini, S. H.; Bagherpour, A.; Javadian Langaroodi, A.; Ahmadian Yazdi, A.; Safaei, A.

    2011-01-01

    Panoramic radiography holds an important place among imaging techniques because of its considerable advantages. One of the characteristics of an ideal imaging technique is that it supplies precise measurements. The purpose of the current study was to evaluate the accuracy of linear measurements of the jaws on panoramic radiographs. Patients and Methods: In this study, the vertical distances between metal markers were measured by panoramic radiography at seven sites of two skulls in various head positions. The radiographic measurements were then compared with the actual values. Results: Eighty-three percent of the measurements were underestimated and 8.5% were overestimated on panoramic radiography, while 8.5% of the measurements showed no difference from the real measurements. Overestimation was not greater than 1 mm. The difference between actual and radiographic measurements was smaller in the posterior areas and in the mandible. In all head positions, the greatest difference between actual and radiographic measurements occurred in the anterior area. Conclusion: Based on the results of this study, linear measurements on panoramic radiography are more reliable in the posterior areas and may be used in early clinical measurements.

  10. ACCURACY IN FOOTBALL: SCORING A GOAL AS THE ULTIMATE OBJECTIVE OF FOOTBALL GAME

    Directory of Open Access Journals (Sweden)

    Dejan Milenković

    2013-12-01

    The study included 60 young football players aged 10 and 11 years, with the aim of examining the influence of motor skills on specific accuracy in football. The following tests were used for the assessment of motor abilities: coordination: jumping over the horizontal rope, envelope test, figure „8“ with bending; flexibility: forward bend-standing upper-body rotation-touch, the splits, side-stepping with a baton; balance: standing on one leg along the balance bench, standing on one leg with eyes closed, flamingo test. Tests for specific accuracy in football included: elevational accuracy by foot - vertical target, elevational accuracy by foot - horizontal target, linear accuracy by foot - vertical target, hits of the ball by foot against the wall after it rebounds from the surface, elevational accuracy by head - vertical target, and elevational accuracy by head - horizontal target. Results obtained by processing the data with canonical correlation and regression analysis indicated a clear influence of the motor abilities of young football players on the majority of the specific accuracy tests.

  11. Accuracy and reliability of linear cephalometric measurements from cone-beam computed tomography scans of a dry human skull.

    Science.gov (United States)

    Berco, Mauricio; Rigali, Paul H; Miner, R Matthew; DeLuca, Stephelynn; Anderson, Nina K; Will, Leslie A

    2009-07-01

    The purpose of this study was to determine the accuracy and reliability of 3-dimensional craniofacial measurements obtained from cone-beam computed tomography (CBCT) scans of a dry human skull. Seventeen landmarks were identified on the skull. CBCT scans were then obtained, with 2 skull orientations during scanning. Twenty-nine interlandmark linear measurements were made directly on the skull and compared with the same measurements made on the CBCT scans. All measurements were made by 2 operators on 4 separate occasions. The method errors were 0.19, 0.21, and 0.19 mm in the x-, y- and z-axes, respectively. Repeated measures analysis of variance (ANOVA) showed no significant intraoperator or interoperator differences. The mean measurement error was -0.01 mm (SD, 0.129 mm). Five measurement errors were found to be statistically significantly different; however, all measurement errors were below the known voxel size and clinically insignificant. No differences were found in the measurements from the 2 CBCT scan orientations of the skull. CBCT allows for clinically accurate and reliable 3-dimensional linear measurements of the craniofacial complex. Moreover, skull orientation during CBCT scanning does not affect the accuracy or the reliability of these measurements.
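
    The "method error" quoted above for repeated landmark coordinates is conventionally computed with Dahlberg's formula, ME = sqrt(Σd² / 2n); the abstract does not name the formula, so treating it as Dahlberg's is an assumption. A small sketch with illustrative duplicate measurements:

```python
import numpy as np

def dahlberg_method_error(first, second):
    """Dahlberg's formula: ME = sqrt(sum(d^2) / (2n)) for duplicate measurements."""
    d = np.asarray(first) - np.asarray(second)
    return np.sqrt((d ** 2).sum() / (2 * d.size))

# Illustrative duplicate x-coordinates of a landmark (mm) -- not the study data.
first_session = [12.31, 45.02, 33.87, 27.55]
second_session = [12.10, 45.20, 33.95, 27.30]
print(f"method error = {dahlberg_method_error(first_session, second_session):.2f} mm")
```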

  12. The linear interplay of intrinsic and extrinsic noises ensures a high accuracy of cell fate selection in budding yeast

    Science.gov (United States)

    Li, Yongkai; Yi, Ming; Zou, Xiufen

    2014-01-01

    To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292

  13. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    NARCIS (Netherlands)

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models

  14. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    Science.gov (United States)

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model.
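
    For reference, the linearized Poisson-Boltzmann equation used as the benchmark model here is commonly written (in Gaussian units; stated for orientation, not taken from the record) as:

```latex
% Linearized Poisson-Boltzmann equation (Gaussian units):
\nabla \cdot \left[ \varepsilon(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \right]
  - \bar{\kappa}^{2}(\mathbf{r}) \, \phi(\mathbf{r}) = -4\pi \rho(\mathbf{r}),
\qquad \bar{\kappa}^{2} = \varepsilon \, \kappa^{2},
```

    where φ is the electrostatic potential, ε the position-dependent dielectric constant, κ the inverse Debye screening length, and ρ the solute charge density.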

  15. THE CONTRIBUTION OF COMPLEXITY, ACCURACY AND FLUENCY TO LANGUAGE FOR SPECIFIC PURPOSES

    Directory of Open Access Journals (Sweden)

    Anthony Rausch

    2016-03-01

    This paper will outline an instructional approach that proposes a Complexity, Accuracy, Fluency (CAF) paradigm as a means of providing learners with the CAF-based communication consciousness and CAF-oriented manipulative skills that are increasingly important in language use in Language for Specific Purposes. Given the complex combinations of communicative tasks, communicative formats and communicative circumstances that accompany the wide-ranging and various contexts of contemporary professional communication, communicative competence demands a combinative consciousness and informed application of Complexity, Accuracy and Fluency as a communication paradigm. Viewed as a combination of its three components, a CAF paradigm constitutes a fundamental ‘information, language and communication’ triad that can guide professional language use in any communicative circumstance. Viewed as a communicative skill set, the CAF triad implies the capability to adjust specific elements and aspects of information, language and communication as needed for a communicative task, whether in oral or print communication and regardless of task category. Adjusting complexity in this context refers to both content and language complexity. Adjusting accuracy refers to the conventions that dictate appropriate or acceptable language in a given context. Finally, adjusting fluency refers to a sense of communicative fluency, that which yields either smooth and persuasive language as in a native-speaker normative view or explicit and clearly explanatory language as necessary in some communicative encounters. The need to manipulate these three components depends on circumstance variables such as objective, available time, audience characteristics and the degree of detail desired. This paper will outline this combinative CAF notion as background to a materials development project being undertaken in a Japanese university, introducing the specifics of an Extended Reading Aloud

  16. Research of Face Recognition with Fisher Linear Discriminant

    Science.gov (United States)

    Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.

    2018-01-01

    Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a face recognition system that achieves high accuracy remains difficult. Human faces show diverse expressions and attribute changes such as eyeglasses, mustaches, beards and others. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes by maximizing the distance between classes while reducing the scatter within classes, so as to produce better classification.
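
    A minimal sketch of the Fisher criterion behind FLD for two classes: compute the within-class scatter matrix, take the discriminant direction w = S_w⁻¹(m₂ − m₁), and classify by projecting onto w. The two-dimensional "face feature" data below are synthetic stand-ins (e.g., for PCA-reduced face images), not a real face dataset:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic "face feature" classes -- illustrative only.
class_a = rng.normal([2.0, 1.0], 0.6, (50, 2))
class_b = rng.normal([4.0, 3.0], 0.6, (50, 2))

mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)

# Within-class scatter: sum of the per-class scatter matrices.
S_w = ((class_a - mean_a).T @ (class_a - mean_a)
       + (class_b - mean_b).T @ (class_b - mean_b))

# Fisher direction: maximizes between-class separation relative to within-class scatter.
w = np.linalg.solve(S_w, mean_b - mean_a)
threshold = w @ (mean_a + mean_b) / 2

# Classify by projecting onto w and comparing with the midpoint threshold.
projections = np.vstack([class_a, class_b]) @ w
labels = np.r_[np.zeros(50), np.ones(50)]
accuracy = ((projections > threshold) == labels).mean()
print(f"training accuracy of the Fisher discriminant: {accuracy:.1%}")
```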

  17. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Science.gov (United States)

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare the accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists, and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias, and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences in correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN - correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models - correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN - correlation: r = 0.88, RMSE: 1.12 METs. Linear models - correlations: r = 0.86, RMSE: 1.18-1.19 METs), and the ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN - correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models - correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs). In summary, ANN models offer a significant improvement in EE prediction accuracy over linear models for the wrist-worn accelerometers. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers.
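
    A compact sketch of the kind of model comparison described above: a linear regression and a small neural network predicting METs from accelerometer-derived features, compared by correlation and RMSE on held-out data. The features and data are synthetic stand-ins, not the study's accelerometer signals:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(9)
n = 2000
# Synthetic accelerometer features (e.g., summary statistics of raw acceleration per window).
X = rng.uniform(0, 1, (n, 4))
mets = 1.5 + 3.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + rng.normal(0, 0.8, n)  # nonlinear truth

X_tr, X_te, y_tr, y_te = train_test_split(X, mets, test_size=0.3, random_state=0)
models = {
    "linear regression": LinearRegression(),
    "neural network": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    r = np.corrcoef(y_te, pred)[0, 1]
    print(f"{name}: r = {r:.2f}, RMSE = {rmse:.2f} METs")
```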

  18. Sonography of the chest using linear-array versus sector transducers: Correlation with auscultation, chest radiography, and computed tomography.

    Science.gov (United States)

    Tasci, Ozlem; Hatipoglu, Osman Nuri; Cagli, Bekir; Ermis, Veli

    2016-07-08

    The primary purpose of our study was to compare the efficacies of two sonographic (US) probes, a high-frequency linear-array probe and a lower-frequency phased-array sector probe in the diagnosis of basic thoracic pathologies. The secondary purpose was to compare the diagnostic performance of thoracic US with auscultation and chest radiography (CXR) using thoracic CT as a gold standard. In total, 55 consecutive patients scheduled for thoracic CT were enrolled in this prospective study. Four pathologic entities were evaluated: pneumothorax, pleural effusion, consolidation, and interstitial syndrome. A portable US scanner was used with a 5-10-MHz linear-array probe and a 1-5-MHz phased-array sector probe. The first probe used was chosen randomly. US, CXR, and auscultation results were compared with the CT results. The linear-array probe had the highest performance in the identification of pneumothorax (83% sensitivity, 100% specificity, and 99% diagnostic accuracy) and pleural effusion (100% sensitivity, 97% specificity, and 98% diagnostic accuracy); the sector probe had the highest performance in the identification of consolidation (89% sensitivity, 100% specificity, and 95% diagnostic accuracy) and interstitial syndrome (94% sensitivity, 93% specificity, and 94% diagnostic accuracy). For all pathologies, the performance of US was superior to those of CXR and auscultation. The linear probe is superior to the sector probe for identifying pleural pathologies, whereas the sector probe is superior to the linear probe for identifying parenchymal pathologies. Thoracic US has better diagnostic performance than CXR and auscultation for the diagnosis of common pathologic conditions of the chest. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:383-389, 2016.
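
    For reference, the sensitivity, specificity and diagnostic accuracy figures quoted throughout these records follow from a 2x2 table against the gold standard (CT in this study); the counts below are made up for illustration.

```python
# Minimal helper computing the three diagnostic indices from raw counts.
def diagnostic_indices(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for one probe and one pathology, not the study's data:
sens, spec, acc = diagnostic_indices(tp=20, fp=0, fn=4, tn=31)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%}")
```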

  19. Linear sign in cystic brain lesions ≥5 mm. A suggestive feature of perivascular space

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Jinkyeong [The Catholic University of Korea, Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Seoul (Korea, Republic of); The Catholic University of Korea, Department of Radiology, St. Vincent's Hospital, College of Medicine, Seoul (Korea, Republic of); Jang, Jinhee; Choi, Hyun Seok; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo [The Catholic University of Korea, Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Seoul (Korea, Republic of)

    2017-11-15

    To determine the prevalence of a linear sign within enlarged perivascular space (EPVS) and chronic lacunar infarction (CLI) ≥ 5 mm on T2-weighted imaging (T2WI) and time-of-flight (TOF) magnetic resonance angiography (MRA), and to evaluate the diagnostic value of the linear signs for EPVS over CLI. This study included 101 patients with cystic lesions ≥ 5 mm on brain MRI including TOF MRA. After classification of cystic lesions into EPVS or CLI, two readers assessed linear signs on T2WI and TOF MRA. We compared the prevalence and the diagnostic performance of linear signs. Among 46 EPVS and 51 CLI, 84 lesions (86.6%) were in basal ganglia. The prevalence of T2 and TOF linear signs was significantly higher in the EPVS than in the CLI (P <.001). For the diagnosis of EPVS, T2 and TOF linear signs showed high sensitivity (> 80%). TOF linear sign showed significantly higher specificity (100%) and accuracy (92.8% and 90.7%) than T2 linear sign (P <.001). T2 and TOF linear signs were more frequently observed in EPVS than CLI. They showed high sensitivity in differentiation of them, especially for basal ganglia. TOF sign showed higher specificity and accuracy than T2 sign. (orig.)

  20. Synthesizing Configurable Biochemical Implementation of Linear Systems from Their Transfer Function Specifications.

    Directory of Open Access Journals (Sweden)

    Tai-Yin Chiu

    Full Text Available The ability to engineer synthetic systems in the biochemical context is constantly being improved and has a profound societal impact. Linear system design is one of the most pervasive methods applied in control tasks, and its biochemical realization has been proposed by Oishi and Klavins and advanced further in recent years. However, several technical issues remain unsolved. Specifically, the design process is not fully automated from specification at the transfer function level, systems once designed often lack dynamic adaptivity to environmental changes, matching rate constants of reactions is not always possible, and implementation may be approximative and greatly deviate from the specifications. Building upon the work of Oishi and Klavins, this paper overcomes these issues by introducing a design flow that transforms a transfer-function specification of a linear system into a set of chemical reactions, whose input-output response precisely conforms to the specification. This system is implementable using the DNA strand displacement technique. The underlying configurability is embedded into primitive components and template modules, and thus the entire system is adaptive. Simulation of DNA strand displacement implementation confirmed the feasibility and superiority of the proposed synthesis flow.
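
    The flow described above starts from a transfer-function specification. As a hedged illustration of that starting point only, SciPy can realize such a specification in state-space form; the mapping onto chemical reactions and DNA strand displacement is the paper's contribution and is not reproduced here. The example H(s) is invented.

```python
# Hedged sketch: realize an (invented) transfer-function specification
# H(s) = 1 / (s^2 + 3s + 2) in state-space form with SciPy.
import scipy.signal as sig

num, den = [1.0], [1.0, 3.0, 2.0]
A, B, C, D = sig.tf2ss(num, den)
print("A =\n", A)
print("B =", B.ravel(), " C =", C.ravel(), " D =", D.ravel())

# Step response of the realization, as a check against the specification:
t, y = sig.step(sig.StateSpace(A, B, C, D))
print("steady-state gain ~", round(float(y[-1]), 3))   # should approach 0.5 for this H(s)
```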

  1. Interlaboratory diagnostic accuracy of a Salmonella specific PCR-based method

    DEFF Research Database (Denmark)

    Malorny, B.; Hoorfar, Jeffrey; Hugas, M.

    2003-01-01

    A collaborative study involving four European laboratories was conducted to investigate the diagnostic accuracy of a Salmonella specific PCR-based method, which was evaluated within the European FOOD-PCR project (http://www.pcr.dk). Each laboratory analysed by the PCR a set of independently obtained...... presumably naturally contaminated samples and compared the results with the microbiological culture method. The PCR-based method comprised a preenrichment step in buffered peptone water followed by a thermal cell lysis using a closed tube resin-based method. Artificially contaminated minced beef and whole......-based diagnostic methods and is currently proposed as an international standard document....

  2. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    Science.gov (United States)

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
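
    A linear mixed-model sketch in the same spirit (the study used a generalized linear mixed model with more structure); the variables, effect sizes and sample sizes below are invented for illustration.

```python
# Hedged sketch: a linear mixed model with a random intercept per child.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, n_trials = 30, 8
child = np.repeat(np.arange(n_children), n_trials)
age = np.repeat(rng.choice([7, 9, 11], size=n_children), n_trials)
motor = np.repeat(rng.normal(size=n_children), n_trials)
child_effect = np.repeat(rng.normal(scale=0.05, size=n_children), n_trials)
accuracy = (0.6 + 0.02 * (age - 7) + 0.05 * motor + child_effect
            + rng.normal(scale=0.1, size=n_children * n_trials))
df = pd.DataFrame({"child": child, "age": age, "motor": motor, "accuracy": accuracy})

# Fixed effects: age and motor proficiency; random intercept: child.
fit = smf.mixedlm("accuracy ~ age + motor", df, groups=df["child"]).fit()
print(fit.summary())
```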

  3. Accuracy evaluation of the optical surface monitoring system on EDGE linear accelerator in a phantom study.

    Science.gov (United States)

    Mancosu, Pietro; Fogliata, Antonella; Stravato, Antonella; Tomatis, Stefano; Cozzi, Luca; Scorsetti, Marta

    2016-01-01

    Frameless stereotactic radiosurgery (SRS) requires dedicated systems to monitor the patient position during the treatment to avoid target underdosage due to involuntary shift. The optical surface monitoring system (OSMS) is here evaluated in a phantom-based study. The new EDGE linear accelerator from Varian (Varian, Palo Alto, CA) integrates, for cranial lesions, the common cone beam computed tomography (CBCT) and kV-MV portal images to the optical surface monitoring system (OSMS), a device able to detect real-time patient's face movements in all 6 couch axes (vertical, longitudinal, lateral, rotation along the vertical axis, pitch, and roll). We have evaluated the OSMS imaging capability in checking the phantom's position and monitoring its motion. With this aim, a home-made cranial phantom was developed to evaluate the OSMS accuracy in 4 different experiments: (1) comparison with CBCT in isocenter location, (2) capability to recognize predefined shifts up to 2° or 3cm, (3) evaluation at different couch angles, (4) ability to properly reconstruct the surface when the linac gantry visually blocks one of the cameras. The OSMS system showed, with a phantom, to be accurate for positioning with respect to the CBCT imaging system with differences of 0.6 ± 0.3mm for linear vector displacement, with a maximum rotational inaccuracy of 0.3°. OSMS presented an accuracy of 0.3mm for displacement up to 1cm and 1°, and 0.5mm for larger displacements. Different couch angles (45° and 90°) induced a mean vector uncertainty < 0.4mm. Coverage of 1 camera produced an uncertainty < 0.5mm. Translations and rotations of a phantom can be accurately detected with the optical surface detector system. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  4. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving mass calibration accuracy of linear delayed extraction matrix assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass data, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
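
    The correction described above amounts to fitting a parabola to the calibration residuals and subtracting its prediction from measured masses; a compact sketch with invented calibrant values follows (the sign convention chosen for Δm is an assumption).

```python
# Hedged sketch of the quadratic mass correction.
import numpy as np

theoretical = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0])  # known calibrant masses (Da)
measured    = np.array([1000.1, 1500.3, 2000.6, 2501.0, 3001.5])  # masses from the sqrt(m/z) fit

delta_m = measured - theoretical                 # residual of the linear calibration
coeffs = np.polyfit(measured, delta_m, deg=2)    # parabola: delta_m ~ a*m**2 + b*m + c

def correct(mass):
    """Subtract the deviation predicted by the quadratic from a measured mass."""
    return mass - np.polyval(coeffs, mass)

print(round(correct(1750.4), 3))   # corrected mass of an 'unknown' peak
```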

  5. Linear accuracy and reliability of volume data sets acquired by two CBCT-devices and an MSCT using virtual models : A comparative in-vitro study

    NARCIS (Netherlands)

    Wikner, Johannes; Hanken, Henning; Eulenburg, Christine; Heiland, Max; Groebe, Alexander; Assaf, Alexandre Thomas; Riecke, Bjoern; Friedrich, Reinhard E.

    2016-01-01

    Objective. To discriminate clinically relevant aberrance, the accuracy of linear measurements in three-dimensional (3D) reconstructed datasets was investigated. Materials and methods. Three partly edentulous human skulls were examined. Landmarks were defined prior to acquisition. Two CBCT-scanners

  6. Linear sign in cystic brain lesions ≥5 mm: A suggestive feature of perivascular space.

    Science.gov (United States)

    Sung, Jinkyeong; Jang, Jinhee; Choi, Hyun Seok; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-Soo

    2017-11-01

    To determine the prevalence of a linear sign within enlarged perivascular space (EPVS) and chronic lacunar infarction (CLI) ≥ 5 mm on T2-weighted imaging (T2WI) and time-of-flight (TOF) magnetic resonance angiography (MRA), and to evaluate the diagnostic value of the linear signs for EPVS over CLI. This study included 101 patients with cystic lesions ≥ 5 mm on brain MRI including TOF MRA. After classification of cystic lesions into EPVS or CLI, two readers assessed linear signs on T2WI and TOF MRA. We compared the prevalence and the diagnostic performance of linear signs. Among 46 EPVS and 51 CLI, 84 lesions (86.6%) were in basal ganglia. The prevalence of T2 and TOF linear signs was significantly higher in the EPVS than in the CLI (P < .001). For the diagnosis of EPVS, T2 and TOF linear signs showed high sensitivity (> 80%). TOF linear sign showed significantly higher specificity (100%) and accuracy (92.8% and 90.7%) than T2 linear sign (P < .001). T2 and TOF linear signs were more frequently observed in EPVS than CLI. They showed high sensitivity in differentiating the two, especially in the basal ganglia. TOF sign showed higher specificity and accuracy than T2 sign. • Linear sign is a suggestive feature of EPVS. • Time-of-flight magnetic resonance angiography can reveal the lenticulostriate artery within perivascular spaces. • Linear sign helps differentiation of EPVS and CLI, especially in basal ganglia.

  7. Evaluation of the accuracy of linear and angular measurements on panoramic radiographs taken at different positions

    Energy Technology Data Exchange (ETDEWEB)

    Nikneshan, Sima; Emadi, Naghmeh [Dept. of Oral and Maxillofacial Radiology, Dental School, Shahid Beheshti University of Medical Sciences, Tehran (Iran, Islamic Republic of); Sharafi, Mohamad [Dept. of Oral and Maxillofacial Radiology, Dental School, Ilam University of Medical Sciences, Ilam (Iran, Islamic Republic of)

    2013-09-15

    This study assessed the accuracy of linear and angular measurements on panoramic radiographs taken at different positions in vitro. Two acrylic models were fabricated from a cast with normal occlusion. Straight and 75 degree mesially and lingually angulated pins were placed, and standardized panoramic radiographs were taken at standard position, at an 8 degree downward tilt of the occlusal plane compared to the standard position, at an 8 degree upward tilt of the anterior occlusal plane, and at a 10 degree downward tilt of the right and left sides of the model. On the radiographs, the length of the pins above (crown) and below (root) the occlusal plane, total pin length, crown-to-root ratio, and angulation of pins relative to the occlusal plane were calculated. The data were subjected to repeated measures ANOVA and LSD multiple comparisons tests. Significant differences were noted between the radiographic measurements and true values in different positions on both models with linear (P<0.001) and those with angulated pins (P<0.005). No statistically significant differences were observed between the angular measurements and baselines of the natural head posture at different positions for the linear and angulated pins. Angular measurements on panoramic radiographs were sufficiently accurate and changes in the position of the occlusal plane equal to or less than 10 degree had no significant effect on them. Some variations could exist in the pin positioning (head positioning), and they were tolerable while taking panoramic radiographs. Linear measurements showed the least errors in the standard position and 8 degree upward tilt of the anterior part of the occlusal plane compared to other positions.

  8. The maximally achievable accuracy of linear optimal regulators and linear optimal filters

    NARCIS (Netherlands)

    Kwakernaak, H.; Sivan, Raphael

    1972-01-01

    A linear system with a quadratic cost function, which is a weighted sum of the integral square regulation error and the integral square input, is considered. What happens to the integral square regulation error as the relative weight of the integral square input reduces to zero is investigated. In

  9. Target Price Accuracy

    Directory of Open Access Journals (Sweden)

    Alexander G. Kerl

    2011-04-01

    Full Text Available This study analyzes the accuracy of forecasted target prices within analysts’ reports. We compute a measure for target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. The potential conflicts of interests between an analyst and a covered company do not bias forecast accuracy.

  10. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Directory of Open Access Journals (Sweden)

    Dongxu Ren

    2016-04-01

    Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m.

  11. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Science.gov (United States)

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank - up to 26mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd.. All rights reserved.

  12. Linear estimation of coherent structures in wall-bounded turbulence at Reτ = 2000

    Science.gov (United States)

    Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.

    2018-04-01

    The estimation problem for a fully-developed turbulent channel flow at Reτ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
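
    Schematically, the estimator is a standard linear Kalman filter; the sketch below uses small placeholder matrices rather than a Navier–Stokes-based model, and the measurements are simulated noise rather than DNS data.

```python
# Hedged sketch of a linear Kalman filter of the kind used as the estimator.
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 1                      # state and measurement dimensions
A = 0.9 * np.eye(n)              # state transition (placeholder dynamics)
C = rng.normal(size=(m, n))      # observation operator (e.g. a wall measurement)
Q = 0.01 * np.eye(n)             # process noise covariance
R = 0.1 * np.eye(m)              # measurement noise covariance

x_hat, P = np.zeros(n), np.eye(n)
for _ in range(100):
    y = C @ rng.normal(size=n) + rng.normal(scale=0.3, size=m)   # simulated measurement
    # predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # update
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(n) - K @ C) @ P

print("final state estimate:", x_hat.round(3))
```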

  13. Accuracy of specific BIVA for the assessment of body composition in the United States population.

    Science.gov (United States)

    Buffa, Roberto; Saragat, Bruno; Cabras, Stefano; Rinaldi, Andrea C; Marini, Elisabetta

    2013-01-01

    Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in the clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. Evaluating the accuracy of specific BIVA in the adult U.S. population, compared to the 'classic' BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21-49 years old) derived from the NHANES 2003-2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by a probit multiple-regression. Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84-0.92 and 0.49-0.61 respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and in FMDXA% and ECW/ICW (p = 0.869). Specific BIVA was shown to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population.
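
    Accuracy comparisons of this kind reduce to comparing areas under ROC curves for the competing indices against a reference classification; a toy example with synthetic scores and labels (not NHANES/DXA data) is sketched below.

```python
# Hedged sketch: comparing two indices by area under the ROC curve.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
reference = rng.integers(0, 2, size=300)                      # 1 = above the reference cut-off
score_specific = reference + rng.normal(scale=0.6, size=300)  # a more informative index
score_classic  = reference + rng.normal(scale=2.0, size=300)  # a less informative index

print("specific-index AUC:", round(roc_auc_score(reference, score_specific), 2))
print("classic-index AUC: ", round(roc_auc_score(reference, score_classic), 2))
```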

  14. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slope (p < 0.001) of the individual growth trajectories, and residual autocorrelation was modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19…

  15. Methodological quality of diagnostic accuracy studies on non-invasive coronary CT angiography: influence of QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) items on sensitivity and specificity

    International Nuclear Information System (INIS)

    Schueler, Sabine; Walther, Stefan; Schuetz, Georg M.; Schlattmann, Peter; Dewey, Marc

    2013-01-01

    To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item (''Uninterpretable Results'') showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with ''no fulfilment'' increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. (orig.)

  16. Methodological quality of diagnostic accuracy studies on non-invasive coronary CT angiography: influence of QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) items on sensitivity and specificity

    Energy Technology Data Exchange (ETDEWEB)

    Schueler, Sabine; Walther, Stefan; Schuetz, Georg M. [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite Medical School, Department of Radiology, Berlin (Germany); Schlattmann, Peter [University Hospital of Friedrich Schiller University Jena, Department of Medical Statistics, Informatics, and Documentation, Jena (Germany); Dewey, Marc [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite Medical School, Department of Radiology, Berlin (Germany); Charite, Institut fuer Radiologie, Berlin (Germany)

    2013-06-15

    To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item (''Uninterpretable Results'') showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with ''no fulfilment'' increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. (orig.)

  17. Accuracy of specific BIVA for the assessment of body composition in the United States population.

    Directory of Open Access Journals (Sweden)

    Roberto Buffa

    Full Text Available BACKGROUND: Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in the clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. OBJECTIVE: Evaluating the accuracy of specific BIVA in the adult U.S. population, compared to the 'classic' BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. DESIGN: A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21-49 years old) derived from the NHANES 2003-2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by a probit multiple-regression. RESULTS: Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84-0.92 and 0.49-0.61 respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and in FMDXA% and ECW/ICW (p = 0.869). CONCLUSIONS: Specific BIVA was shown to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population.

  18. Specificity of the Linear Array HPV Genotyping Test for detecting human papillomavirus genotype 52 (HPV-52)

    OpenAIRE

    Kocjan, Boštjan; Poljak, Mario; Oštrbenk, Anja

    2015-01-01

    Introduction: HPV-52 is one of the most frequent human papillomavirus (HPV) genotypes causing significant cervical pathology. The most widely used HPV genotyping assay, the Roche Linear Array HPV Genotyping Test (Linear Array), is unable to identify HPV-52 status in samples containing HPV-33, HPV-35, and/or HPV-58. Methods: Linear Array HPV-52 analytical specificity was established by testing 100 specimens reactive with the Linear Array HPV-33/35/52/58 cross-reactive probe, but not with the...

  19. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Science.gov (United States)

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

    The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into the gene expression differences on the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per group sample sizes, and variability of the proportions
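
    The core of such a deconvolution is an ordinary least-squares fit of bulk expression on cell-type proportions, done per gene and per group; the sketch below illustrates only that step with simulated data and is not the LRCDE implementation.

```python
# Hedged sketch of regression-based deconvolution for a single gene.
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_celltypes = 40, 3
props = rng.dirichlet(np.ones(n_celltypes), size=n_samples)   # cell-type proportions per sample
group = np.repeat([0, 1], n_samples // 2)                     # 0 = controls, 1 = cases

# One gene: cell type 0 is expressed more highly in the case group.
expr = props @ np.array([5.0, 2.0, 1.0]) + 2.0 * group * props[:, 0] \
       + rng.normal(scale=0.3, size=n_samples)

def celltype_coefs(P, e):
    """Least-squares estimate of per-cell-type expression from bulk expression."""
    beta, *_ = np.linalg.lstsq(P, e, rcond=None)
    return beta

print("controls:", celltype_coefs(props[group == 0], expr[group == 0]).round(2))
print("cases:   ", celltype_coefs(props[group == 1], expr[group == 1]).round(2))
```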

  20. An alternative test for verifying electronic balance linearity

    International Nuclear Information System (INIS)

    Thomas, I.R.

    1998-02-01

    This paper presents an alternative method for verifying electronic balance linearity and accuracy. This method is being developed for safeguards weighings (weighings for the control and accountability of nuclear material) at the Idaho National Engineering and Environmental Laboratory (INEEL). With regard to balance linearity and accuracy, DOE Order 5633.3B, Control and Accountability of Nuclear Materials, Paragraph 2, 4, e, (1), (a) Scales and Balances Program, states: ''All scales and balances used for accountability purposes shall be maintained in good working condition, recalibrated according to an established schedule, and checked for accuracy and linearity on each day that the scale or balance is used for accountability purposes.'' Various tests have been proposed for testing accuracy and linearity. In the 1991 Measurement Science Conference, Dr. Walter E. Kupper presented a paper entitled: ''Validation of High Accuracy Weighing Equipment.'' Dr. Kupper emphasized that tolerance checks for calibrated, state-of-the-art electronic equipment need not be complicated, and he presented four easy steps for verifying that a calibrated balance is operating correctly. These tests evaluate the standard deviation of successive weighings (of the same load), the off-center error, the calibration error, and the error due to nonlinearity. This method of balance validation is undoubtedly an authoritative means of ensuring balance operability, yet it could have two drawbacks: one, the test for linearity is not intuitively obvious, especially from a statistical viewpoint; and two, there is an absence of definitively defined testing limits. Hence, this paper describes an alternative means of verifying electronic balance linearity and accuracy that is being developed for safeguards measurements at the INEEL

  1. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    Science.gov (United States)

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing the Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter to determine Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and consecutive linearization increases the linearity of spectra in a wider frequency range of a drive signal, as generally harmonic movement is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line width parameters in comparison with those measured using a traditional triangle velocity signal.

  2. Effect of conductance linearity and multi-level cell characteristics of TaOx-based synapse device on pattern recognition accuracy of neuromorphic system

    Science.gov (United States)

    Sung, Changhyuck; Lim, Seokjae; Kim, Hyungjun; Kim, Taesu; Moon, Kibong; Song, Jeonghwan; Kim, Jae-Joon; Hwang, Hyunsang

    2018-03-01

    To improve the classification accuracy of an image data set (CIFAR-10) by using analog input voltage, synapse devices with excellent conductance linearity (CL) and multi-level cell (MLC) characteristics are required. We analyze the CL and MLC characteristics of TaOx-based filamentary resistive random access memory (RRAM) to implement the synapse device in neural network hardware. Our findings show that the number of oxygen vacancies in the filament constriction region of the RRAM directly controls the CL and MLC characteristics. By adopting a Ta electrode (instead of Ti) and the hot-forming step, we could form a dense conductive filament. As a result, a wide range of conductance levels with CL is achieved and significantly improved image classification accuracy is confirmed.

  3. Acute appendicitis: sensitivity, specificity and diagnostic accuracy of thin-section contrast-enhanced CT findings

    International Nuclear Information System (INIS)

    Lee, Ji Yon; Choi, Dong Il; Park, Hae Won; Lee, Young Rae; Kook, Shin Ho; Kwang, Hyon Joo; Kim, Seung Kwon; Chung, Eun Chul

    2002-01-01

    To assess the sensitivity, specificity, and diagnostic accuracy of individual contrast-enhanced helical CT findings of acute appendicitis. We retrospectively reviewed the appendiceal helical CT scans, obtained after intravenous contrast administration (abdomen; 7-mm collimation, abdominopelvic junction; 5-mm collimation), of 50 patients with surgically proven acute appendicitis and 112 with alternative diagnoses. The following parameters were analysed by three radiologists: enlarged appendix (>6 mm in diameter), appendiceal wall thickening, appendiceal wall enhancement, no identification of the appendix, appendicolith(s), (appendiceal) intraluminal air, abscess, lymphadenopathy, terminal ileal wall thickening, focal cecal apical thickening, focal colonic wall thickening, and segmental colonic wall thickening. The CT findings of acute appendicitis that statistically distinguished it from alternative diagnoses were an enlarged appendix (sensitivity: 92%, specificity: 93%, diagnostic accuracy: 93%), appendiceal wall thickening (for these three parameters: 68%, 96% and 88%, respectively), periappendiceal fat stranding (90%, 79%, 82%), appendiceal wall enhancement (72%, 86%, 82%), appendicolith (16%, 100%, 74%), and focal cecal apical thickening (14%, 100%, 74%) (for each, p < 0.05). On thin-section contrast-enhanced helical CT, an enlarged appendix and periappendiceal fat stranding were found in 90% or more of patients with acute appendicitis. Appendiceal wall thickening and enhancement were clearly demonstrated and were significant findings for diagnosis. Less common but specific findings, including appendicolith, focal cecal apical thickening and intramural air, can also help establish a diagnosis of acute appendicitis.

  4. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently need thus to consider only linear-linear data. IAEA1311/15: This version include the updates up to January 30, 2007. Changes in ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 Version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. The linear-linear is not altered. For the log-linear, linear-log and log- log laws, the cross section data are converted to linear by an interval halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table
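
    The interval-halving idea described in the method section can be sketched in a few lines: subdivide until linear-linear interpolation reproduces the underlying interpolation law within a requested tolerance. The 1/v example cross section, energy range and tolerance below are arbitrary, and the real code additionally thins points and handles all ENDF interpolation laws.

```python
# Hedged sketch of interval halving for linear-linear conversion.
import math

def linearize(f, e1, e2, tol=1e-3, depth=0, max_depth=20):
    """Return energies such that f is within tol of linear-linear interpolation."""
    em = math.sqrt(e1 * e2)                       # geometric midpoint suits log spacing
    f_lin = f(e1) + (f(e2) - f(e1)) * (em - e1) / (e2 - e1)
    if depth >= max_depth or abs(f_lin - f(em)) <= tol * abs(f(em)):
        return [e1, e2]
    left = linearize(f, e1, em, tol, depth + 1, max_depth)
    right = linearize(f, em, e2, tol, depth + 1, max_depth)
    return left[:-1] + right                      # do not duplicate the midpoint

sigma = lambda e: 10.0 / math.sqrt(e)             # a 1/v cross section (straight line in log-log)
grid = linearize(sigma, 1e-5, 1e3)
print(len(grid), "energy points for 0.1% linear-linear accuracy")
```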

  5. The accuracy of chest radiographs in the detection of congenital heart disease and in the diagnosis of specific congenital cardiac lesions

    International Nuclear Information System (INIS)

    Laya, Bernard F.; Goske, Marilyn J.; Morrison, Stuart; Reid, Janet R.; Swischuck, Leonard; Ey, Elizabeth H.; Murphy, Daniel J.; Lieber, Michael; Obuchowski, Nancy

    2006-01-01

    Congenital heart disease (CHD) is a significant cause of morbidity and mortality in pediatric patients. Traditional teaching holds that specific types of CHD can be diagnosed on the chest radiograph (CXR) through pattern recognition. To determine the accuracy of radiologists in detecting CHD on the CXR. This study was a blinded retrospective review of chest radiographs from 281 patients (<12 years) by five pediatric radiologists from three institutions. Thirteen groups were evaluated that included 12 categories of CHD and a control group of patients without heart disease. Radiographs were assessed for heart size, heart and mediastinal shape and vascularity. Clinical information, angiography, echocardiograms and surgery were used as the gold standard for definitive diagnosis. The average accuracy of the five readers in distinguishing normal from CHD patients was 78% (range of 72% to 82%). The overall measure of accuracy in distinguishing specific congenital cardiac lesions among 13 groups of patients was 71% (range of 63% to 79%). CXR alone is not diagnostic of specific cardiac lesions, with a low accuracy of only 71%. We believe that less emphasis should be placed on the use of radiographs alone in diagnosing specific congenital cardiac lesions. (orig.)

  6. The accuracy of linear measurements of maxillary and mandibular edentulous sites in cone-beam computed tomography images with different fields of view and voxel sizes under simulated clinical conditions

    Energy Technology Data Exchange (ETDEWEB)

    Ganguly, Rumpa; Ramesh, Aruna; Pagni, Sarah [Tufts University School of Dental Medicine, Boston (United States)

    2016-06-15

    The objective of this study was to investigate the effect of varying resolutions of cone-beam computed tomography images on the accuracy of linear measurements of edentulous areas in human cadaver heads. Intact cadaver heads were used to simulate a clinical situation. Fiduciary markers were placed in the edentulous areas of 4 intact embalmed cadaver heads. The heads were scanned with two different CBCT units using a large field of view (13 cm×16 cm) and small field of view (5 cm×8 cm) at varying voxel sizes (0.3 mm, 0.2 mm, and 0.16 mm). The ground truth was established with digital caliper measurements. The imaging measurements were then compared with caliper measurements to determine accuracy. The Wilcoxon signed rank test revealed no statistically significant difference between the medians of the physical measurements obtained with calipers and the medians of the CBCT measurements. A comparison of accuracy among the different imaging protocols revealed no significant differences as determined by the Friedman test. The intraclass correlation coefficient was 0.961, indicating excellent reproducibility. Inter-observer variability was determined graphically with a Bland-Altman plot and by calculating the intraclass correlation coefficient. The Bland-Altman plot indicated very good reproducibility for smaller measurements but larger discrepancies with larger measurements. The CBCT-based linear measurements in the edentulous sites using different voxel sizes and FOVs are accurate compared with the direct caliper measurements of these sites. Higher resolution CBCT images with smaller voxel size did not result in greater accuracy of the linear measurements.

  7. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Science.gov (United States)

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
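
    Operationally, "factoring" means replacing a continuous exposure term with indicator variables for its categories; a hedged sketch with simulated counts follows (the dose-response shape and sample size are assumptions, not the paper's simulation design).

```python
# Hedged sketch: Poisson rate models with factored vs. unfactored exposure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
exposure = rng.integers(0, 4, size=2000)            # ordered exposure with four levels
true_rate = np.exp(-3.0 + 0.4 * np.sqrt(exposure))  # true dose-response is non-linear
cases = rng.poisson(true_rate)
df = pd.DataFrame({"exposure": exposure, "cases": cases})

unfactored = smf.poisson("cases ~ exposure", df).fit(disp=False)      # linear trend
factored   = smf.poisson("cases ~ C(exposure)", df).fit(disp=False)   # indicator variables
print(unfactored.params, factored.params, sep="\n\n")
```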

  8. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    Science.gov (United States)

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum compared to those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate quantitatively, rapidly, cheaply and easily the BMP. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable compared to those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performances of the NIR prediction models. The accuracy of the BMP method should be improved to enhance further the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Linear Discriminant Analysis achieves high classification accuracy for the BOLD fMRI response to naturalistic movie stimuli.

    Directory of Open Access Journals (Sweden)

    Hendrik Mandelkow

    2016-03-01

    Full Text Available Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbour (NN), Gaussian Naïve Bayes (GNB), and (regularised) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularised by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2s apart during a 300s movie (chance level 0.7% = 2s/300s). The largest source of classification errors was autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these

  10. Accuracies Of Optical Processors For Adaptive Optics

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1992-01-01

    Paper presents analysis of accuracies and requirements concerning accuracies of optical linear-algebra processors (OLAP's) in adaptive-optics imaging systems. Much faster than digital electronic processor and eliminate some residual distortion. Question whether errors introduced by analog processing of OLAP overcome advantage of greater speed. Paper addresses issue by presenting estimate of accuracy required in general OLAP that yields smaller average residual aberration of wave front than digital electronic processor computing at given speed.

  11. Three-dimensional repositioning accuracy of semiadjustable articulator cast mounting systems.

    Science.gov (United States)

    Tan, Ming Yi; Ung, Justina Youlin; Low, Ada Hui Yin; Tan, En En; Tan, Keson Beng Choon

    2014-10-01

    In spite of its importance in prosthesis precision and quality, the 3-dimensional repositioning accuracy of cast mounting systems has not been reported in detail. The purpose of this study was to quantify the 3-dimensional repositioning accuracy of 6 selected cast mounting systems. Five magnetic mounting systems were compared with a conventional screw-on system. Six systems on 3 semiadjustable articulators were evaluated: Denar Mark II with conventional screw-on mounting plates (DENSCR) and magnetic mounting system with converter plates (DENCON); Denar Mark 330 with in-built magnetic mounting system (DENMAG) and disposable mounting plates; and Artex CP with blue (ARTBLU), white (ARTWHI), and black (ARTBLA) magnetic mounting plates. Test casts with 3 high-precision ceramic ball bearings at the mandibular central incisor (Point I) and the right and left second molar (Point R; Point L) positions were mounted on 5 mounting plates (n=5) for all 6 systems. Each cast was repositioned 10 times by 4 operators in random order. Nine linear (Ix, Iy, Iz; Rx, Ry, Rz; Lx, Ly, Lz) and 3 angular (anteroposterior, mediolateral, twisting) displacements were measured with a coordinate measuring machine. The mean standard deviations of the linear and angular displacements defined repositioning accuracy. Anteroposterior linear repositioning accuracy ranged from 23.8 ±3.7 μm (DENCON) to 4.9 ±3.2 μm (DENSCR). Mediolateral linear repositioning accuracy ranged from 46.0 ±8.0 μm (DENCON) to 3.7 ±1.5 μm (ARTBLU), and vertical linear repositioning accuracy ranged from 7.2 ±9.6 μm (DENMAG) to 1.5 ±0.9 μm (ARTBLU). Anteroposterior angular repositioning accuracy ranged from 0.0084 ±0.0080 degrees (DENCON) to 0.0020 ±0.0006 degrees (ARTBLU), and mediolateral angular repositioning accuracy ranged from 0.0120 ±0.0111 degrees (ARTWHI) to 0.0027 ±0.0008 degrees (ARTBLU). Twisting angular repositioning accuracy ranged from 0.0419 ±0.0176 degrees (DENCON) to 0.0042 ±0.0038 degrees

  12. Linearized method: A new approach for kinetic analysis of central dopamine D2 receptor specific binding

    International Nuclear Information System (INIS)

    Watabe, Hiroshi; Hatazawa, Jun; Ishiwata, Kiichi; Ido, Tatsuo; Itoh, Masatoshi; Iwata, Ren; Nakamura, Takashi; Takahashi, Toshihiro; Hatano, Kentaro

    1995-01-01

    The authors proposed a new method (Linearized method) to analyze neuroleptic ligand-receptor specific binding in a human brain using positron emission tomography (PET). They derived a linear equation to solve for the four rate constants k3, k4, k5 and k6 from PET data. This method does not require a plasma radioactivity curve as an input function to the brain, and allows fast calculation of the rate constants. They also tested a Nonlinearized method based on nonlinear equations, which is the conventional analysis using plasma radioactivity corrected for ligand metabolites as an input function. The authors applied these methods to evaluate dopamine D2 receptor specific binding of [11C]YM-09151-2. The value of Bmax/Kd = k3/k4 obtained by the Linearized method was 5.72 ± 3.1, which was consistent with the value of 5.78 ± 3.4 obtained by the Nonlinearized method.

  13. The Use of Linear Programming for Prediction.

    Science.gov (United States)

    Schnittjer, Carl J.

    The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)

  14. The robustness and accuracy of in vivo linear wear measurements for knee prostheses based on model-based RSA.

    Science.gov (United States)

    van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L

    2011-10-13

    Accurate in vivo measurement methods of wear in total knee arthroplasty are required for a timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom setup of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientations, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements and the effect of knee positioning and true separation distance was determined. The measurement method overestimated the separation distance by 0.1mm on average. The precision of the method was 0.10mm (2*SD) for the Duracon prosthesis and 0.20mm for the Triathlon prosthesis. A slight difference in error was found between the measurements with 0° and 10° anterior tilt (difference=0.08mm, p=0.04). The accuracy of 0.1mm and precision of 0.2mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is low and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations against a population-specific high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  16. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Science.gov (United States)

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies and has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to the data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection asymmetric tail dependence, and their computational feasibility despite the three dimensionality.

  17. Linear and non-linear autoregressive models for short-term wind speed forecasting

    International Nuclear Information System (INIS)

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.

    2016-01-01

    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.
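
    As a concrete illustration of the three performance metrics named above, the following minimal Python sketch computes MAE, RMSE and MAPE for a forecast series; the wind-speed values are made up and not taken from the study.

        import numpy as np

        def forecast_errors(actual, predicted):
            """Return MAE, RMSE and MAPE for a wind-speed forecast."""
            actual = np.asarray(actual, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            err = predicted - actual
            mae = np.mean(np.abs(err))
            rmse = np.sqrt(np.mean(err ** 2))
            mape = 100.0 * np.mean(np.abs(err / actual))  # assumes no zero wind speeds
            return mae, rmse, mape

        # Hypothetical 10-min wind speeds in m/s
        print(forecast_errors([5.1, 6.0, 4.8, 5.5], [5.4, 5.7, 5.0, 5.9]))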

  18. Alzheimer's Disease Detection by Pseudo Zernike Moment and Linear Regression Classification.

    Science.gov (United States)

    Wang, Shui-Hua; Du, Sidan; Zhang, Yin; Phillips, Preetha; Wu, Le-Nan; Chen, Xian-Qing; Zhang, Yu-Dong

    2017-01-01

    This study presents an improved method based on "Gorji et al. Neuroscience. 2015" by introducing a relatively new classifier, linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Copyright © Bentham Science Publishers.

  19. INFLUENCE OF STRUCTURE COMPONENTS ON MACHINE TOOL ACCURACY

    Directory of Open Access Journals (Sweden)

    Constantin SANDU

    2017-11-01

    Full Text Available For machine tools, the structural parts (after roughing) should be subjected to stress relief and natural or artificial aging. The accuracy currently achieved for machine tool structures, expressed as linearity or flatness, has been above 5 μm/m; below this value great difficulties arise. When structural parts of machine tools are manufactured to a flatness or linearity accuracy of about 2 μm/m, the form deviations of their semi-finished blanks become significant. This article deals with the influence of form errors of semi-finished parts on the machined parts, on their final shape, and on what happens to the machine tool structure when its components are assembled.

  20. Surgical accuracy of three-dimensional virtual planning

    DEFF Research Database (Denmark)

    Stokbro, Kasper; Aagaard, Esben; Torkov, Peter

    2016-01-01

    This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy...... and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome by using three linear and three rotational measurements. The influence of maxillary segmentation...

  1. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still provides a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
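
    A quick Monte Carlo check of the averaging idea: if each uncorrelated spot profile yields one noisy centroid estimate, averaging N of them should shrink the spread of the estimate by roughly √N. The sketch below assumes independent Gaussian centroid jitter, which is a simplification of real speckle statistics, and all values are made up.

        import numpy as np

        rng = np.random.default_rng(0)
        true_pos, sigma = 100.0, 2.0   # spot centroid (pixels) and assumed speckle-induced jitter

        def averaged_estimate(n_profiles):
            # Each uncorrelated spot profile gives one noisy centroid estimate;
            # the mean of N such estimates is the speckle-reduced reading.
            samples = true_pos + sigma * rng.standard_normal(n_profiles)
            return samples.mean()

        for n in (1, 4, 16):
            estimates = [averaged_estimate(n) for _ in range(5000)]
            print(n, np.std(estimates))   # spread falls roughly as sigma / sqrt(n)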

  2. Monitoring the injured brain: registered, patient specific atlas models to improve accuracy of recovered brain saturation values

    Science.gov (United States)

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid

    2015-07-01

    The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject specific reconstruction models. This study assesses the use of registered atlas models for situations where subject specific models are not available. Data simulated from subject specific models were reconstructed using the 8 registered atlas models implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values which were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer thickness mismatch was propagated through the reconstruction process decreasing the parameter accuracy.

  3. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material.

    Science.gov (United States)

    Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena

    2013-12-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™, 3M ESPE, and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance in difference of the distances between the master model and the stone models. One way analysis of variance (ANOVA) was used for multiple group comparison followed by Bonferroni's test for pairwise comparison. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller except for the dies produced from the one step double mix impression technique. The ANOVA revealed a highly

  4. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
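
    The contrast between a linear specification and a non-linear kernel method can be sketched with generic ridge regressors; this is only an illustrative stand-in (simulated markers, scikit-learn models), not the Bayesian LASSO, RKHS or neural-network implementations used in the study. Prediction accuracy is reported, as in the paper, as the correlation between observed and predicted values.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        X = rng.integers(0, 3, size=(300, 500)).astype(float)        # toy 0/1/2 genotype matrix
        y = X[:, :20] @ rng.normal(size=20) + rng.normal(size=300)   # toy phenotype

        for name, model in [("linear ridge", Ridge(alpha=1.0)),
                            ("RBF kernel ridge", KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3))]:
            pred = cross_val_predict(model, X, y, cv=5)
            print(name, np.corrcoef(y, pred)[0, 1])   # accuracy as observed-predicted correlation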

  5. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study.

    Science.gov (United States)

    Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R

    2017-10-01

    To assess the accuracy of patient-specific instruments (PSIs) versus standard manual technique and the precision of computer-assisted planning and PSI-guided osteotomies in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t -test and Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean 9.6 mm (p 5 mm and 27% (n = 8) were > 10 mm. In the PSI cases, deviations were 10% (n = 3) and 0 % (n = 0), respectively. For angular deviation from pre-operative plans, we observed a mean improvement of 7.06° (p Cite this article : A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583. DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.

  6. Assessment of the Roche Linear Array HPV Genotyping Test within the VALGENT framework.

    Science.gov (United States)

    Xu, Lan; Oštrbenk, Anja; Poljak, Mario; Arbyn, Marc

    2018-01-01

    Cervical cancer screening programs are switching from cytology-based screening to high-risk (hr) HPV testing. Only clinically validated tests should be used in clinical practice. To assess the clinical performance of the Roche Linear Array HPV genotyping test (Linear Array) within the VALGENT-3 framework. The VALGENT framework is designed for comprehensive comparison and clinical validation of HPV tests that have limited to extended genotyping capacity. The Linear Array enables type-specific detection of 37 HPV types. For the purpose of this study, Linear Array results were designated as positive only if one of the 13 hrHPV types also included in the Hybrid Capture 2 (HC2) was detected. The VALGENT-3 framework comprised 1600 samples obtained from Slovenian women (1300 sequential cases from routine cervical cancer screening enriched with 300 cytological abnormal samples). Sensitivity for cervical intraepithelial neoplasia of grade 2 or worse (CIN2+) (n=127) and specificity for Linear Array and for HC2 and non-inferiority of Linear Array relative to HC2 was checked. In addition, the prevalence of separate hrHPV types in the screening population, as well as the concordance for presence of HPV16, HPV18 and other hrHPV types between Linear Array and the Abbott RealTime High Risk HPV test (RealTime) were assessed. The clinical sensitivity and specificity for CIN2+ of the Linear Array in the total study population was 97.6% (95% CI, 93.3-99.5%) and 91.7% (95% CI, 90.0-93.2%), respectively. The relative sensitivity and specificity of Linear Array vs HC2 was 1.02 [95% CI, 0.98-1.05, (pLinear Array in the screening population was 10.5% (95% CI, 8.9-12.3%) with HPV16 and HPV18 detected in 2.3% and 0.9% of the samples, respectively. Excellent agreement for presence or absence of HPV16, HPV18 and other hrHPV between Linear Array and RealTime was observed. Linear Array showed similar sensitivity with higher specificity to detect CIN2+ compared to HC2. Detection of 13 hrHPV types

  7. Inclusion of Population-specific Reference Panel from India to the 1000 Genomes Phase 3 Panel Improves Imputation Accuracy.

    Science.gov (United States)

    Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R

    2017-07-27

    Imputation is a computational method based on the principle of haplotype sharing, allowing enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared the imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using the genome-wide data from 1880 individuals, we demonstrate that WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves the imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive task. Thus, our study stresses that imputation accuracy using 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.

  8. Integration of Attributes from Non-Linear Characterization of Cardiovascular Time-Series for Prediction of Defibrillation Outcomes.

    Directory of Open Access Journals (Sweden)

    Sharad Shandilya

    Full Text Available The timing of defibrillation is mostly at arbitrary intervals during cardio-pulmonary resuscitation (CPR), rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success. Multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. Performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded ROC-AUC of 83.2% and accuracy of 78.8%, for the model built with ECG data only. Utilizing 10-fold cross-validation, at 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI, for the available 48 defibrillations, boosted ROC-AUC to 93.8% and accuracy to 83.3% at 80% sensitivity. At clinically relevant sensitivity thresholds, the MDI provides improved performance as compared to AMSA, yielding fewer unsuccessful defibrillations

  9. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    Science.gov (United States)

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  10. Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Giacobello, Daniele; van Waterschoot, Toon

    2016-01-01

    In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitation of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction...... problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high accuracy solution both objectively, in terms of prediction gain, as well as with perceptual relevant measures, when...... evaluated in a speech reconstruction application....

  11. Application of Nearly Linear Solvers to Electric Power System Computation

    Science.gov (United States)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.

  12. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.

    1990-01-01

    A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle will take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance since the atmosphere continues to change during that time. Thus an optical processor may be well-suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.

  13. The linear sizes tolerances and fits system modernization

    Science.gov (United States)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses the pressing topic of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes the linear coordinating sizes that determine the location of part elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real part elements together with analytical and experimental methods is used in the research. It is shown that the linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select from the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system remain for normalizing the accuracy of the elements' linear sizes, with a fundamental change: the basic deviation of all tolerance intervals becomes the limit deviation corresponding to the maximum-material limit of the element, EI being the lower deviation for internal elements (holes) and es the upper deviation for external elements (shafts). It is the maximum-material sizes that take part in the mating of shafts and holes and that determine the type of fit.

  14. Improving shuffler assay accuracy

    International Nuclear Information System (INIS)

    Rinard, P.M.

    1995-01-01

    Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracies of assays from a shuffler are affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But for those cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on the distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving a set of overdetermined linear equations. Other approaches were studied to determine the distributions and are described briefly. Implementation of this correction is anticipated on an existing shuffler next year
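
    The correction step rests on solving an overdetermined linear system relating detector-bank count rates to the uranium masses in regions of the drum. A minimal least-squares sketch follows; the response matrix and count rates are hypothetical placeholders, not calibration data from the shuffler.

        import numpy as np

        # A[i, j]: assumed response of detector bank i to a unit uranium mass in drum region j;
        # counts[i]: measured neutron count rate in bank i.
        A = np.array([[0.9, 0.4, 0.1],
                      [0.5, 0.8, 0.3],
                      [0.2, 0.6, 0.9],
                      [0.4, 0.5, 0.4]])          # 4 banks, 3 regions -> overdetermined system
        counts = np.array([120.0, 150.0, 140.0, 110.0])

        masses, residuals, rank, _ = np.linalg.lstsq(A, counts, rcond=None)
        print(masses)   # estimated regional uranium distribution used to pick correction factors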

  15. Genomic prediction based on data from three layer lines using non-linear regression models.

    Science.gov (United States)

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional

  16. Accuracy assessment of an industrial actuator

    DEFF Research Database (Denmark)

    Dalla Costa, Giuseppe; Genta, Gianfranco; Barbato, Giulio

    2016-01-01

    A commercial linear actuator equipped with a 0.1 μm resolution encoder was used as a contact displacement sensor with adjustable force. The accuracy of the position reading of the actuator was evaluated from experimental data taking into account the uncertainty contributions. The tests consisted ...

  17. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    Science.gov (United States)

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used in modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that the single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
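
    For reference, a plain CPU sketch of the Jacobi-preconditioned conjugate gradient iteration mentioned above; the GPU implementations in the study rely on cuSPARSE/cuBLAS/CUSP and sparse storage formats, whereas this toy version uses a small dense symmetric positive-definite matrix standing in for a finite-difference PBE system.

        import numpy as np

        def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
            """Jacobi-preconditioned conjugate gradient for symmetric positive-definite A."""
            M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: inverse of the diagonal
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Small SPD test system standing in for a finite-difference linear system
        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(jacobi_pcg(A, b))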

  18. Accuracy and reliability of stitched cone-beam computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Egbert, Nicholas [Private Practice, Reconstructive Dental Specialists of Utah, Salt Lake (United States)]; Cagna, David R.; Ahuja, Swati; Wicks, Russell A. [Dept. of Prosthodontics, University of Tennessee Health Science Center College of Dentistry, Memphis (United States)]

    2015-03-15

    This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 - +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.

  19. Accuracy and reliability of stitched cone-beam computed tomography images

    International Nuclear Information System (INIS)

    Egbert, Nicholas; Cagna, David R.; Ahuja, Swati; Wicks, Russell A.

    2015-01-01

    This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 - +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.

  20. Accuracy and reliability of stitched cone-beam computed tomography images.

    Science.gov (United States)

    Egbert, Nicholas; Cagna, David R; Ahuja, Swati; Wicks, Russell A

    2015-03-01

    This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 - +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.

  1. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    International Nuclear Information System (INIS)

    Tehrani, Joubin Nasehi; Wang, Jing; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney–Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney–Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney–Rivlin material model along left-right, anterior–posterior, and superior–inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. (paper)

  2. Optimization of drift gases for accuracy in pressurized drift tubes

    CERN Document Server

    Kirchner, J J; Dinner, A R; Fidkowski, K J; Wyatt, J H

    2001-01-01

    Modern detectors such as ATLAS use pressurized drift tubes to minimize diffusion and achieve high coordinate accuracy. However, the coordinate accuracy depends on the exact knowledge of converting measured times into coordinates. Linear space-time relationships are best for reconstruction, but difficult to achieve in the E ∝ 1/r field. Previous mixtures, which contained methane or other organic quenchers, are disfavored because of ageing problems. From our studies of nitrogen and carbon dioxide, two mixtures with only small deviations from linearity were determined and measured. Scaling laws for different pressures and magnetic fields are also given.

  3. Optimization of drift gases for accuracy in pressurized drift tubes

    International Nuclear Information System (INIS)

    Kirchner, J.J.; Becker, U.J.; Dinner, R.B.; Fidkowski, K.J.; Wyatt, J.H.

    2001-01-01

    Modern detectors such as ATLAS use pressurized drift tubes to minimize diffusion and achieve high coordinate accuracy. However, the coordinate accuracy depends on the exact knowledge of converting measured times into coordinates. Linear space-time relationships are best for reconstruction, but difficult to achieve in the E∝1/r field. Previous mixtures, which contained methane or other organic quenchers, are disfavored because of ageing problems. From our studies of nitrogen and carbon dioxide, two mixtures with only small deviations from linearity were determined and measured. Scaling laws for different pressures and magnetic fields are also given

  4. Modelling a linear PM motor including magnetic saturation

    NARCIS (Netherlands)

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.

    2002-01-01

    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of the high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor applied in, for example, wafer steppers, including magnetic saturation. This is important

  5. Accuracy of prehospital transport time estimation.

    Science.gov (United States)

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p Google Maps, and 11.6 [±10.9] minutes for ArcGIS). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
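
    A minimal sketch of the simplest of the three estimators, the linear arc (great-circle) method: distance is computed with the haversine formula and converted to minutes with an assumed average road speed. The speed value and coordinates are placeholders, not parameters from the study.

        import math

        def linear_arc_minutes(lat1, lon1, lat2, lon2, avg_speed_kmh=48.0):
            """Great-circle ('linear arc') distance converted to a drive-time estimate.

            The average road speed is an assumed placeholder, not a value from the study.
            """
            r = 6371.0  # Earth radius, km
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
            dist_km = 2 * r * math.asin(math.sqrt(a))
            return 60.0 * dist_km / avg_speed_kmh

        print(round(linear_arc_minutes(47.61, -122.33, 47.66, -122.31), 1))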

  6. Imaging modalities in the diagnosis of pancreatic adenocarcinoma: A systematic review and meta-analysis of sensitivity, specificity and diagnostic accuracy.

    Science.gov (United States)

    Toft, James; Hadden, William J; Laurence, Jerome M; Lam, Vincent; Yuen, Lawrence; Janssen, Anna; Pleass, Henry

    2017-07-01

    Pancreatic cancer, primarily pancreatic ductal adenocarcinoma (PDAC), accounts for 2.4% of cancer diagnoses and 5.8% of cancer deaths annually. Early diagnosis can improve 5-year survival in PDAC. The aim of this systematic review was to determine the sensitivity, specificity and diagnostic accuracy values for MRI, CT, PET&PET/CT, EUS and transabdominal ultrasound (TAUS) in the diagnosis of PDAC. A systematic review was undertaken to identify studies reporting sensitivity, specificity and/or diagnostic accuracy for the diagnosis of PDAC with MRI, CT, PET, EUS or TAUS. Proportional meta-analysis was performed for each modality. A total of 5399 patients, 3567 with PDAC, from 52 studies were included. The sensitivity, specificity and diagnostic accuracy were 93% (95% CI=88-96), 89% (95% CI=82-94) and 90% (95% CI=86-94) for MRI; 90% (95% CI=87-93), 87% (95% CI=79-93) and 89% (95% CI=85-93) for CT; 89% (95% CI=85-93), 70% (95% CI=54-84) and 84% (95% CI=79-89) for PET; 91% (95% CI=87-94), 86% (95% CI=81-91) and 89% (95% CI=87-92) for EUS; and 88% (95% CI=86-90), 94% (95% CI=87-98) and 91% (95% CI=87-93) for TAUS. This review concludes that all modalities, except for PET, are equivalent within 95% confidence intervals for the diagnosis of PDAC. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Sensitivity, Specificity, Predictive Values, and Accuracy of Three Diagnostic Tests to Predict Inferior Alveolar Nerve Blockade Failure in Symptomatic Irreversible Pulpitis

    Directory of Open Access Journals (Sweden)

    Daniel Chavarría-Bolaños

    2017-01-01

    Full Text Available Introduction. The inferior alveolar nerve block (IANB) is the most common anesthetic technique used on mandibular teeth during root canal treatment. Its success in the presence of preoperative inflammation is still controversial. The aim of this study was to evaluate the sensitivity, specificity, predictive values, and accuracy of three diagnostic tests used to predict IANB failure in symptomatic irreversible pulpitis (SIP). Methodology. A cross-sectional study was carried out on the mandibular molars of 53 patients with SIP. All patients received a single cartridge of mepivacaine 2% with 1 : 100000 epinephrine using the IANB technique. Three diagnostic clinical tests were performed to detect anesthetic failure. Anesthetic failure was defined as a positive painful response to any of the three tests. Sensitivity, specificity, predictive values, accuracy, and ROC curves were calculated and compared and significant differences were analyzed. Results. IANB failure was determined in 71.7% of the patients. The sensitivity scores for the three tests (lip numbness, the cold stimuli test, and responsiveness during endodontic access) were 0.03, 0.35, and 0.55, respectively, and the specificity score was determined as 1 for all of the tests. Clinically, none of the evaluated tests demonstrated a high enough accuracy (0.30, 0.53, and 0.68 for lip numbness, the cold stimuli test, and responsiveness during endodontic access, respectively). A comparison of the areas under the curve in the ROC analyses showed statistically significant differences between the three tests (p<0.05). Conclusion. None of the analyzed tests demonstrated a high enough accuracy to be considered a reliable diagnostic tool for the prediction of anesthetic failure.

  8. Can Automatic Classification Help to Increase Accuracy in Data Collection?

    Directory of Open Access Journals (Sweden)

    Frederique Lang

    2016-09-01

    Full Text Available Purpose: The authors aim at testing the performance of a set of machine learning algorithms that could improve the process of data cleaning when building datasets. Design/methodology/approach: The paper is centered on cleaning datasets gathered from publishers and online resources by the use of specific keywords. In this case, we analyzed data from the Web of Science. The accuracy of various forms of automatic classification was tested here in comparison with manual coding in order to determine their usefulness for data collection and cleaning. We assessed the performance of seven supervised classification algorithms (Support Vector Machine (SVM), Scaled Linear Discriminant Analysis, Lasso and elastic-net regularized generalized linear models, Maximum Entropy, Regression Tree, Boosting, and Random Forest) and analyzed two properties: accuracy and recall. We assessed not only each algorithm individually, but also their combinations through a voting scheme. We also tested the performance of these algorithms with different sizes of training data. When assessing the performance of different combinations, we used an indicator of coverage to account for the agreement and disagreement on classification between algorithms. Findings: We found that the performance of the algorithms used varies with the size of the sample for training. However, for the classification exercise in this paper the best performing algorithms were SVM and Boosting. The combination of these two algorithms achieved a high agreement on coverage and was highly accurate. This combination performs well with a small training dataset (10%), which may reduce the manual work needed for classification tasks. Research limitations: The dataset gathered has significantly more records related to the topic of interest compared to unrelated topics. This may affect the performance of some algorithms, especially in their identification of unrelated papers. Practical implications: Although the
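
    A minimal stand-in for the best-performing combination reported above, soft-voting over an SVM and a boosting classifier, using scikit-learn on synthetic data; the study's own feature set (Web of Science records) and analysis pipeline are not reproduced here.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Toy stand-in for labelled bibliographic records (relevant vs. unrelated)
        X, y = make_classification(n_samples=500, n_features=50, random_state=0)

        svm = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
        boost = GradientBoostingClassifier(random_state=0)
        vote = VotingClassifier([("svm", svm), ("boost", boost)], voting="soft")

        print(cross_val_score(vote, X, y, cv=5, scoring="accuracy").mean())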

  9. Diagnostic accuracy of repetition tasks for the identification of specific language impairment (SLI) in bilingual children: evidence from Russian and Hebrew.

    Science.gov (United States)

    Armon-Lotem, Sharon; Meir, Natalia

    2016-11-01

    Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have addressed the diagnostic accuracy of repetition tasks in bilingual children, and most available research focuses on English-Spanish sequential bilinguals. To evaluate the efficacy of three repetition tasks (forward digit span (FWD), NWR and SRep) in order to distinguish mono- and bilingual children with and without SLI in Russian and Hebrew. A total of 230 mono- and bilingual children aged 5;5-6;8 participated in the study: 144 bilingual Russian-Hebrew-speaking children (27 with SLI); and 52 monolingual Hebrew-speaking children (14 with SLI) and 34 monolingual Russian-speaking children (14 with SLI). Parallel repetition tasks were designed in both Russian and Hebrew. Bilingual children were tested in both languages. The findings confirmed that NWR and SRep are valuable tools in distinguishing monolingual children with and without SLI in Russian and Hebrew, while the results for FWD were mixed. Yet, testing of bilingual children with the same tools using monolingual cut-off points resulted in inadequate diagnostic accuracy. We demonstrate, however, that the use of bilingual cut-off points yielded acceptable levels of diagnostic accuracy. The combination of SRep tasks in L1/Russian and L2/Hebrew yielded the highest overall accuracy (i.e., 94%), but even SRep alone in L2/Hebrew showed excellent levels of sensitivity (i.e., 100%) and specificity (i.e., 89%), reaching 91% of total diagnostic accuracy. The results are very promising for identifying SLI in bilingual children and for showing that testing in the majority language with bilingual cut-off points can provide an accurate classification. © 2016 Royal College of Speech and Language

  10. Targeting Accuracy of Image-Guided Radiosurgery for Intracranial Lesions: A Comparison Across Multiple Linear Accelerator Platforms.

    Science.gov (United States)

    Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Brown, Stephen; Gordon, James; Wen, Ning

    2016-04-01

    To evaluate the overall positioning accuracy of image-guided intracranial radiosurgery across multiple linear accelerator platforms. A computed tomography scan with a slice thickness of 1.0 mm was acquired of an anthropomorphic head phantom in a BrainLAB U-frame mask. The phantom was embedded with three 5-mm diameter tungsten ball bearings, simulating a central, a left, and an anterior cranial lesion. The ball bearings were positioned to radiation isocenter under ExacTrac X-ray or cone-beam computed tomography image guidance on 3 Linacs: (1) ExacTrac X-ray localization on a Novalis Tx; (2) cone-beam computed tomography localization on the Novalis Tx; (3) cone-beam computed tomography localization on a TrueBeam; and (4) cone-beam computed tomography localization on an Edge. Each ball bearing was positioned 5 times to the radiation isocenter with different initial setup error following the 4 image guidance procedures on the 3 Linacs, and the mean (µ) and one standard deviation (σ) of the residual error were compared. Averaged over all 3 ball bearing locations, the vector length of the residual setup error in mm (µ ± σ) was 0.6 ± 0.2, 1.0 ± 0.5, 0.2 ± 0.1, and 0.3 ± 0.1 on ExacTrac X-ray localization on a Novalis Tx, cone-beam computed tomography localization on the Novalis Tx, cone-beam computed tomography localization on a TrueBeam, and cone-beam computed tomography localization on an Edge, with their range in mm being 0.4 to 1.1, 0.4 to 1.9, 0.1 to 0.5, and 0.2 to 0.6, respectively. The congruence between imaging and radiation isocenters in mm was 0.6 ± 0.1, 0.7 ± 0.1, 0.3 ± 0.1, and 0.2 ± 0.1, for the 4 systems, respectively. Targeting accuracy comparable to frame-based stereotactic radiosurgery can be achieved with image-guided intracranial stereotactic radiosurgery treatment. © The Author(s) 2015.

  11. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    Science.gov (United States)

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
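
    A lookup table with piecewise-linear interpolation is one common software linearization approach (the abstract does not list the six algorithms, so whether it is among them is an assumption); the sketch below uses an invented calibration table for a hypothetical nonlinear distance sensor and stands in for what an integer microcontroller implementation would do.

```python
import numpy as np

# Hypothetical calibration table for a nonlinear optical distance sensor:
# raw ADC codes (must be increasing) and the distances measured at those codes.
adc_points  = np.array([ 80, 120, 180, 260, 380, 560, 820])   # raw counts
dist_points = np.array([150, 100,  70,  50,  35,  25,  18])   # centimetres

def linearize(adc_value):
    """Piecewise-linear interpolation between calibration points, the way a
    lookup-table algorithm on a small microcontroller typically works."""
    return float(np.interp(adc_value, adc_points, dist_points))

print(linearize(300))   # estimate for a reading between the 260 and 380 entries
```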

  12. Improved Haptic Linear Lines for Better Movement Accuracy in Upper Limb Rehabilitation

    Directory of Open Access Journals (Sweden)

    Joan De Boeck

    2012-01-01

    Full Text Available Force feedback has proven to be beneficial in the domain of robot-assisted rehabilitation. According to the patients' personal needs, the generated forces may either assist, support, or oppose their movements. In our current research project, we focus on upper limb training for MS (multiple sclerosis) and CVA (cerebrovascular accident) patients, for which we identified a basic building block from which many rehabilitation exercises can be implemented. This building block is a haptic linear path: a second-order continuous path, defined by a list of points in space. Different attempts to realize haptic linear paths have been investigated before. In order to achieve good training quality, it is important that the haptic simulation is continuous up to the second derivative, while the patient is required to follow the path tightly even when low or no guiding forces are provided. In this paper, we describe our best solution to these haptic linear paths, discuss the weaknesses found in practice, and propose and validate an improvement.
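
    As an illustration of the "continuous up to the second derivative" requirement, a cubic spline through a list of way-points is C2-continuous by construction; the sketch below is only a stand-in for the authors' haptic path implementation, and the way-points are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative way-points (x, y, z) in metres defining a training path.
points = np.array([[0.0, 0.0, 0.0],
                   [0.1, 0.2, 0.0],
                   [0.3, 0.2, 0.1],
                   [0.5, 0.0, 0.1]])

s = np.linspace(0.0, 1.0, len(points))    # path parameter
path = CubicSpline(s, points, axis=0)     # piecewise cubic, C2-continuous

# Position, tangent and second derivative at an arbitrary parameter value.
print(path(0.37), path(0.37, 1), path(0.37, 2))
```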

  13. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
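
    The two-stage idea (first maximize the ordinal fit via Kendall's τ, then resolve the remaining freedom by least squares) can be illustrated with a brute-force toy sketch; this is not the published OCLO algorithm, which builds on the maximum rank correlation estimator, and the grid search over weight directions below is purely illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.standard_t(df=2, size=50)   # fat-tailed noise

# Step 1 (toy stand-in for maximum rank correlation): search unit-norm weight
# directions and keep those that maximize Kendall's tau with y.
angles = np.linspace(0.0, np.pi, 181)
candidates = np.column_stack([np.cos(angles), np.sin(angles)])
taus = np.array([kendalltau(X @ w, y)[0] for w in candidates])
best = candidates[np.isclose(taus, taus.max())]

# Step 2: among tau-maximizing directions, fix scale and intercept (and pick the
# direction) by ordinary least squares on the projected scores.
def ls_fit(w):
    z = X @ w
    A = np.column_stack([z, np.ones_like(z)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2)), coef

fits = [ls_fit(w) for w in best]
i = int(np.argmin([sse for sse, _ in fits]))
print("direction:", best[i], "scale and intercept:", fits[i][1])
```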

  14. Accuracy of computer-assisted cervical pedicle screw installation

    International Nuclear Information System (INIS)

    Zhang Honglei; Zhou Dongsheng; Jang Zhensong

    2009-01-01

    Objective: To investigate the accuracy of computer-assisted cervical pedicle screw installation and the reasons for screw malposition. Methods: A total of 172 cervical pedicle screws were installed under computer-assisted navigation in 30 patients with lower cervical spinal diseases. All the patients were examined by X-ray and CT after the operation. Screw position and direction were measured on the sagittal and transectional images of intraoperative navigation and post-operative CT. Linear regression analysis was then performed between the navigational and post-operative CT images. Results: Two screws perforated the upper pedicle wall and 3 perforated the lateral pedicle wall. There was a positive linear correlation between the navigational and post-operative CT images. Conclusion: Computer-assisted navigation can provide high accuracy of cervical pedicle screw installation, and the excursion phenomenon is the reason for screw malposition. (authors)

  15. Comparative evaluation of the accuracy of linear measurements between cone beam computed tomography and 3D microtomography

    Directory of Open Access Journals (Sweden)

    Francesca Mangione

    2013-09-01

    Full Text Available OBJECTIVE: The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements estimated with a common cone beam computed tomography (CBCT) system used in dental clinical practice, by comparing it with a microCT system as the standard reference. MATERIALS AND METHODS: Ten bovine bone cylindrical samples, each containing one implant, able to provide both points of reference and image quality degradation, were scanned by the CBCT and microCT systems. Using the software of the two systems, two diameters were measured for each cylindrical sample, taken at different levels and using different points of the implant as references. Results were analyzed by ANOVA and a statistically significant difference was found. RESULTS AND DISCUSSION: Based on the results obtained, the measurements made with the two instruments are not yet statistically comparable, although similar performances were obtained for some samples, for which the differences were not statistically significant. CONCLUSION: With the improvement of the hardware and software of CBCT systems, the two instruments should be able to provide similar performances in the near future.

  16. A linear maglev guide for machine tools

    Energy Technology Data Exchange (ETDEWEB)

    Tieste, K D [Inst. of Mechanics, Univ. of Hannover (Germany); Popp, K [Inst. of Mechanics, Univ. of Hannover (Germany)

    1996-12-31

    Machine tools require linear guides with high slide velocity and very high position accuracy. The three tasks of a linear guide - supporting, guiding and driving - shall be realised by means of active magnetic bearings (AMB). The resulting linear magnetically levitated (maglev) guide has to provide the following characteristics: high stiffness, good damping and low noise as well as low heat production. Initial research on a one degree-of-freedom (DOF) support magnet unit aimed at the development of components and efficient control strategies for the linear maglev guide. Current research is directed at realising a five-DOF linear maglev guide for machine tools, without a drive, to answer the question of whether the maglev principle can be used for a linear axis in a machine tool. (orig.)

  17. Poster - Thur Eve - 50: Planning and delivery accuracy of stereotactic radiosurgery with Tomotherapy as compared to linear-accelerator and robotic based radiosurgery.

    Science.gov (United States)

    Thakur, V; Soisson, E; Ruo, R; Doucet, R; Parker, W; Seuntjens, J

    2012-07-01

    This study compares the planning and delivery of three stereotactic radiosurgery techniques: helical Tomotherapy (HT), circular collimator-based linear-accelerator radiosurgery, and robotic radiosurgery. Plans were generated for two spherical targets of diameter 6 mm and 10 mm contoured at the center of a Lucite phantom, using similar planning constraints. The planning comparison showed that the average conformality (1.0 is ideal) for the linear-accelerator, robotic-radiosurgery and HT plans was 1.43, 1.24, and 1.77, and the gradient index (lower is better) was 2.72, 4.50 and 13.56, respectively. For the delivery comparison, plans were delivered to radiochromic film and the measured dose was compared with the planned dose. For the linear-accelerator and robotic-radiosurgery plans, more than 99% of pixels passed a gamma criterion of 3% dose difference and 1 mm distance to agreement, whereas for HT this value was as low as 40% for off-axis targets. Further investigation of the delivery accuracy as a function of the location of the target within the bore was initiated using a small-volume A1SL (0.057 cm³) and a MicroLion liquid ion chamber (0.0017 cm³). Point dose measurements for targets located at the center and 10 cm away from the center of the bore showed that the delivered dose varied by more than 15% for targets placed away from the center of the bore as opposed to at the center. In conclusion, the linear-accelerator and robotic-radiosurgery techniques showed preferable gradient and conformality. For HT, point dose measurements were significantly lower than predicted by the TPS when the target was positioned away from the isocenter, while they were found to be higher at the isocenter. © 2012 American Association of Physicists in Medicine.

  18. Accuracy and repeatability of anthropometric facial measurements using cone beam computed tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Gerrits, Peter O.; Ren, Yijin

    Objective: The purpose of this study was to determine the accuracy and repeatability of linear anthropometric measurements on the soft tissue surface model generated from cone beam computed tomography scans. Materials and Methods: The study sample consisted of seven cadaver heads. The accuracy and

  19. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher-fidelity data for calibration of material models used in Glass-To-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  20. Interspecies hybridization on DNA resequencing microarrays: efficiency of sequence recovery and accuracy of SNP detection in human, ape, and codfish mitochondrial DNA genomes sequenced on a human-specific MitoChip

    Directory of Open Access Journals (Sweden)

    Carr Steven M

    2007-09-01

    Full Text Available Abstract Background Iterative DNA "resequencing" on oligonucleotide microarrays offers a high-throughput method to measure intraspecific biodiversity, one that is especially suited to SNP-dense gene regions such as vertebrate mitochondrial (mtDNA) genomes. However, the costs of single-species design and microarray fabrication are prohibitive. A cost-effective, multi-species strategy is to hybridize experimental DNAs from diverse species to a common microarray that is tiled with oligonucleotide sets from multiple, homologous reference genomes. Such a strategy requires that cross-hybridization between the experimental DNAs and reference oligos from the different species not interfere with the accurate recovery of species-specific data. To determine the pattern and limits of such interspecific hybridization, we compared the efficiency of sequence recovery and accuracy of SNP identification by a 15,452-base human-specific microarray challenged with human, chimpanzee, gorilla, and codfish mtDNA genomes. Results In the human genome, 99.67% of the sequence was recovered with 100.0% accuracy. Accuracy of SNP identification declines log-linearly with sequence divergence from the reference, from 0.067 to 0.247 errors per SNP in the chimpanzee and gorilla genomes, respectively. Efficiency of sequence recovery declines as the number of interspecific SNPs in the 25b interval tiled by the reference oligonucleotides increases. In the gorilla genome, which differs from the human reference by 10%, and in which 46% of these 25b regions contain 3 or more SNP differences from the reference, only 88% of the sequence is recoverable. In the codfish genome, which differs from the reference by > 30%, less than 4% of the sequence is recoverable, in short islands ≥ 12b that are conserved between primates and fish. Conclusion Experimental DNAs bind inefficiently to homologous reference oligonucleotide sets on a re-sequencing microarray when their sequences differ by

  1. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for reference interval (RI) construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance in properly identifying samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. The Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests; however, their specificity was poor at sample size n = 30. Applying methods that are robust to non-normality (for example, after a Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
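
    A minimal simulation in the spirit of the study (an assumed setup, not the authors' code) estimates how often the Shapiro-Wilk test lets truly Gaussian samples pass and how often it rejects lognormal samples at n = 30.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
n, reps, alpha = 30, 1000, 0.05

# Fraction of truly Gaussian samples that pass Shapiro-Wilk (P >= alpha).
gauss_pass = np.mean([shapiro(rng.normal(size=n)).pvalue >= alpha
                      for _ in range(reps)])

# Fraction of lognormal samples that are correctly rejected (P < alpha).
lognorm_reject = np.mean([shapiro(rng.lognormal(sigma=0.5, size=n)).pvalue < alpha
                          for _ in range(reps)])

print(f"Gaussian samples passing:   {gauss_pass:.2f}")
print(f"lognormal samples rejected: {lognorm_reject:.2f}")
```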

  3. A comparative study of linear measurements on facial skeleton with frontal and lateral cephalogram.

    Science.gov (United States)

    Gandikota, Chandra Sekhar; Rayapudi, Naveen; Challa, Padma Latha; Juvvadi, Shubhaker Rao; Yudhister, P V; Rao, Gutti Hariprasad

    2012-04-01

    To compare the accuracy of linear measurements on lateral and frontal cephalograms with gold-standard skull measurements. Based on specific criteria, including reliable occlusion and condyles fitting in the glenoid fossa, 15 dry human skulls were selected from a larger collection. Lateral and frontal cephalograms were taken of each skull by standardized methods. Steel ball bearings were used to identify the anatomic landmarks. Linear measurements in the midsagittal plane were made on all three records. Intraclass correlation coefficients, Pearson's correlation coefficient and regression constants were calculated to assess the records simultaneously. The frontal cephalometric measurements showed high correlation to the direct skull measurements (Pearson's coefficient 0.943). The linear measurements of the lateral cephalometric record are greater than the corresponding frontal cephalometric measurements. The overall findings of the present study showed that the frontal cephalometric measurements are closely related to the direct skull measures.

  4. Computed tomography angiogram. Accuracy in renal surgery

    International Nuclear Information System (INIS)

    Rabah, Danny M.; Al-Hathal, Naif; Al-Fuhaid, Turki; Raza, Sayed; Al-Yami, Fahad; Al-Taweel, Waleed; Alomar, Mohamed; Al-Nagshabandi, Nizar

    2009-01-01

    The objective of this study was to determine the sensitivity and specificity of computed tomography angiogram (CTA) in detecting the number and location of renal arteries and veins, as well as crossing vessels causing uretero-pelvic junction obstruction (UPJO), and to determine whether this can be used in decision-making algorithms for the treatment of UPJO. A prospective study was carried out in patients undergoing open, laparoscopic and robotic renal surgery from April 2005 until October 2006. All patients were imaged using CTA with 1.25 mm collimation of arterial and venous phases. Each multi-detector CTA was then read by one radiologist and his results were compared prospectively with the actual intra-operative findings. Overall, 118 patients were included. CTA had 93% sensitivity, 77% specificity and 90% overall accuracy for detecting a single renal artery, and 76% sensitivity, 92% specificity and 90% overall accuracy for detecting two or more renal arteries (Pearson χ² = 0.001). There was 95% sensitivity, 84% specificity and 85% overall accuracy for detecting the number of renal veins. CTA had 100% overall accuracy in detecting an early dividing renal artery (defined as branching less than 1.5 cm from the origin), and 83.3% sensitivity, specificity and overall accuracy in detecting crossing vessels at the UPJ. The percentage of surgeons stating CTA to be helpful as a pre-operative diagnostic tool was 85%. Computed tomography angiogram is simple and quick, and can provide an accurate pre-operative picture of the renal vascular anatomy in terms of the number and location of renal vessels, early dividing renal arteries and crossing vessels at the UPJ. (author)
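
    The reported sensitivity, specificity and overall accuracy follow directly from a 2x2 cross-tabulation of CTA findings against intra-operative findings; the counts below are illustrative only and are not the study's data.

```python
# Illustrative 2x2 counts for one finding (e.g., "single renal artery"):
# these are invented numbers, not taken from the study.
tp, fp, fn, tn = 80, 6, 6, 20

sensitivity = tp / (tp + fn)               # detected among truly present
specificity = tn / (tn + fp)               # excluded among truly absent
accuracy    = (tp + tn) / (tp + fp + fn + tn)

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"overall accuracy {accuracy:.0%}")
```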

  5. Direct Linear Transformation Method for Three-Dimensional Cinematography

    Science.gov (United States)

    Shapiro, Robert

    1978-01-01

    The ability of the Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)

  6. Linear signal noise summer accurately determines and controls S/N ratio

    Science.gov (United States)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.

  7. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexity is exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, named a revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its termination condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. The algorithm analysis and example study show that a proper safety factor parameter, accuracy parameter, and initial interior point may reduce the number of iterations, and that they can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
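
    As a sketch of the comparison step, one commonly used linear ranking function for a trapezoidal fuzzy number (a, b, c, d) is the average of its four defining points; the paper does not state which ranking function it uses, so this particular choice is an assumption.

```python
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzyNumber:
    a: float  # left foot
    b: float  # left shoulder
    c: float  # right shoulder
    d: float  # right foot

def linear_rank(t: TrapezoidalFuzzyNumber) -> float:
    """One common linear ranking function: the mean of the four defining points
    (an assumption; other linear ranking functions exist)."""
    return (t.a + t.b + t.c + t.d) / 4.0

# Comparing two fuzzy quantities by their ranks, as an interior point method
# would when checking feasibility or a termination condition.
x = TrapezoidalFuzzyNumber(2, 3, 4, 6)
y = TrapezoidalFuzzyNumber(1, 4, 5, 5)
print(linear_rank(x), linear_rank(y), linear_rank(x) <= linear_rank(y))
```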

  8. Linearization of the Bradford protein assay.

    Science.gov (United States)

    Ernst, Orna; Zor, Tsaffrir

    2010-04-12

    Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
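
    The effect can be sketched numerically: the synthetic absorbances below are constructed (by assumption, not from the paper's data) so that the A590/A450 ratio is exactly linear in protein while A590 alone saturates, and a straight-line fit quantifies the difference.

```python
import numpy as np

# Synthetic calibration points (BSA in micrograms); absorbances are constructed
# so the ratio is linear by design and are not the paper's measurements.
protein = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
a450 = 0.68 / (1.0 + 0.15 * protein)           # free-dye signal falls with protein
a590 = a450 * (0.70 + 0.15 * protein)          # bound-dye signal; ratio linear by design

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

print("R^2 of A590 alone     :", round(r_squared(protein, a590), 4))
print("R^2 of A590/A450 ratio:", round(r_squared(protein, a590 / a450), 4))
```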

  9. A comparative evaluation of linear dimensional accuracy of the dies obtained using three conceptually different die systems in the fabrication of implant prosthesis: An in vitro study

    Directory of Open Access Journals (Sweden)

    Manawar Ahmad

    2014-01-01

    Purpose: The purpose of the study was to evaluate the linear dimensional accuracy between the implant master die and three conceptually different die systems: the Pindex system, the Accu-trac precision die system, and the conventional brass dowel pin system. Materials and Methods: Thirty impressions of the implant master die were made with polyether impression material. Ten experimental implant casts were fabricated for each of the three die systems tested: the Accu-trac precision die tray system, the Pindex system, and the conventional brass dowel pin system. The solid experimental casts were sectioned and then removed from the die system 30 times. All six possible linear distances were measured from the centre of one transfer coping to another, using a coordinate measuring machine, in millimetres, with an accuracy of 0.5 microns. Data were tabulated and statistically analyzed by a binomial non-parametric test using SPSS version 15. Results: Significant differences were found for distances A-B (P = 0.002), A-C (P = 0.002), A-D (P = 0.002), and B-D (P = 0.021) in the conventional dowel pin system; for the Accu-trac precision die tray system the difference was significant only for distance A-D (P = 0.002), while for the Pindex system it was non-significant for all the distances measured. Conclusion: Within the limitations of this study, the use of the Pindex system is recommended when sectioned dies are needed for a multi-implant retained prosthesis.

  10. Structural linear measurements in the newborn brain: accuracy of cranial ultrasound compared to MRI

    International Nuclear Information System (INIS)

    Leijser, Lara M.; Srinivasan, Latha; Cowan, Frances M.; Rutherford, Mary A.; Counsell, Serena J.; Allsop, Joanna M.

    2007-01-01

    Structural size in the neonatal brain is of clinical importance. Cranial ultrasonography (cUS) is the primary method used for evaluating the neonatal brain, and it is important to know whether linear measurements made using this technique are accurate. To compare linear measurements of different cerebral structures made from neonatal cUS and contemporaneous MRI. Preterm and term infants scanned with cUS and MRI on the same day were studied. Linear measurements made using both techniques for many cerebral structures were compared using a paired t-test. A total of 44 sets of scans from 26 preterm and 8 term infants were assessed. Small but significant differences between the cUS and MRI measurements (P<0.05) were found for the ventricular index, the posterior horn depth of the lateral ventricle, the extracerebral space and interhemispheric fissure, and the cortex of the cingulate gyrus. No significant differences were found for any other measurements. Linear measurements from cUS are accurate for most neonatal cerebral structures. Significant differences compared to MRI were found for a few structures, but only for the cortex were the absolute differences marked and possibly of clinical importance. (orig.)
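
    The comparison amounts to a paired t-test on per-infant differences between the two modalities; a minimal sketch on synthetic paired measurements (not the study's data) could look like this.

```python
import numpy as np
from scipy.stats import ttest_rel

# Synthetic paired measurements (mm) of one structure on cUS and MRI for the
# same 44 scan pairs; values are illustrative, not the study's data.
rng = np.random.default_rng(7)
mri = rng.normal(loc=10.0, scale=1.5, size=44)
cus = mri + rng.normal(loc=0.3, scale=0.5, size=44)   # small systematic offset

res = ttest_rel(cus, mri)
print(f"mean difference {np.mean(cus - mri):.2f} mm, paired t-test P = {res.pvalue:.3f}")
```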

  11. Linear algebra

    CERN Document Server

    Stoll, R R

    1968-01-01

    Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand

  12. Linear Temporal Logic-based Mission Planning

    OpenAIRE

    Anil Kumar; Rahul Kala

    2016-01-01

    In this paper, we describe Linear Temporal Logic-based reactive motion planning. We address the problem of motion planning for mobile robots in complex environments, where the planning goal is given as a task specification. The desired task specification may consist of complex behaviors of the robot, including environment constraints, task optimality, obstacle avoidance, rescue, surveillance, and safety specifications. We use Linear Tem...

  13. A comparative study of linear measurements on facial skeleton with frontal and lateral cephalogram

    Directory of Open Access Journals (Sweden)

    Chandra Sekhar Gandikota

    2012-01-01

    Full Text Available Objective: To compare the accuracy of linear measurements on lateral and frontal cephalograms with gold-standard skull measurements. Materials and Methods: Based on specific criteria, including reliable occlusion and condyles fitting in the glenoid fossa, 15 dry human skulls were selected from a larger collection. Lateral and frontal cephalograms were taken of each skull by standardized methods. Steel ball bearings were used to identify the anatomic landmarks. Linear measurements in the midsagittal plane were made on all three records. Intraclass correlation coefficients, Pearson's correlation coefficient and regression constants were calculated to assess the records simultaneously. Results: The frontal cephalometric measurements showed high correlation to the direct skull measurements (Pearson's coefficient 0.943). The linear measurements of the lateral cephalometric record are greater than the corresponding frontal cephalometric measurements. The overall findings of the present study showed that the frontal cephalometric measurements are closely related to the direct skull measures.

  14. Accuracy enhancement of point triangulation probes for linear displacement measurement

    Science.gov (United States)

    Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun

    2000-03-01

    Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, several factors must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named EASDF (expanded average square difference function), for a newly designed PTB which is composed of an incoherent source (LED), a line scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, which is a modified correlation function, is able to calculate the displacement between the probe and the object surface effectively even in the presence of inclinations, power fluctuations, and noise.
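
    The EASDF itself is not given in the abstract; the sketch below instead illustrates the underlying idea of an average square difference function (ASDF) for locating the shift of a spot on a line-scan detector, with a synthetic Gaussian spot and a standard parabolic sub-pixel refinement, all of which are assumptions made for illustration.

```python
import numpy as np

def asdf(reference, signal, shifts):
    """Average square difference between a stored reference profile and the
    measured signal shifted by each candidate offset (in pixels)."""
    n = len(reference)
    return np.array([np.mean((reference - signal[k:k + n]) ** 2) for k in shifts])

# Synthetic line-scan data: a Gaussian spot stored as a template around pixel 40,
# later imaged at pixel 52.3 with a little noise (all values invented).
x = np.arange(200)

def spot(centre):
    return np.exp(-0.5 * ((x - centre) / 4.0) ** 2)

reference = spot(40.0)[20:61]            # template; spot centre sits at index 20
signal = spot(52.3) + 0.01 * np.random.default_rng(0).normal(size=200)

shifts = np.arange(0, 150)
scores = asdf(reference, signal, shifts)
k = int(np.argmin(scores))

# Parabolic interpolation around the minimum gives a sub-pixel estimate.
frac = (scores[k - 1] - scores[k + 1]) / (2 * (scores[k - 1] - 2 * scores[k] + scores[k + 1]))
print("estimated spot position (pixels):", 20 + k + frac)
```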

  15. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy.

    Science.gov (United States)

    Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A

    2013-02-01

    Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight parameters were determined

  16. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    International Nuclear Information System (INIS)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-01-01

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors’ unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight

  17. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A. [Department of Radiotherapy, Academic Medical Center, Meiberdreef 9, 1105 AZ Amsterdam (Netherlands); Bondar, L.; Zolnay, A. G.; Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2013-02-15

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight

  18. Development of pre-critical excore detector linear subchannel calibration method

    International Nuclear Information System (INIS)

    Choi, Yoo Sun; Goo, Bon Seung; Cha, Kyun Ho; Lee, Chang Seop; Kim, Yong Hee; Ahn, Chul Soo; Kim, Man Soo

    2001-01-01

    The improved pre-critical excore detector linear subchannel calibration method has been developed to improve the applicability of the pre-critical calibration method. The existing calibration method does not always guarantee the accuracy of pre-critical calibration, because the calibration results of the previous cycle are not reflected in the current cycle calibration. The developed method has the desirable feature that calibration errors are not propagated to the following cycles, since the calibration data determined in the previous cycle are incorporated in the current cycle calibration. The pre-critical excore detector linear calibration was tested for YGN unit 3 and UCN unit 3 to evaluate its characteristics and accuracy.

  19. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.

    Science.gov (United States)

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-30

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGA using 118 slices per sensor in each FPGA to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of -20 °C to 100 °C. The sensor consumed 95 μW using 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while it is fully synthesizable for future Very Large Scale Integration (VLSI) system.

  20. Accuracy analysis of indirect georeferencing about TH-1 satellite in Weinan test area

    International Nuclear Information System (INIS)

    Yunlan, Yang; Haiyan, Hu

    2014-01-01

    Optical linear scanning sensors can be divided into single-lens and multi-lens sensors according to the number of lenses. In order to provide stereo imaging, single-lens optical systems such as the aerial mapping cameras ADS40 and ADS80 place two or more parallel linear arrays on the focal plane, whereas a multi-lens optical system places only one linear CCD array at the center of the focal plane of each lens; such systems are often carried on spacecraft. The design differences between these two kinds of optical systems mean that their systematic errors, in-orbit calibration and data adjustment approaches are completely different. In recent years, domestic space optical sensor systems in China have focused on multi-lens linear CCD sensors; both TH-1 and ZY-3 belong to this class. Parameters influencing the positioning accuracy of the satellite system that are unknown, imprecisely known, or that change after launch can be estimated by self-calibration in orbit, so after in-orbit self-calibration the accuracy of a mapping satellite is often strongly improved. In contrast to direct georeferencing, indirect georeferencing is introduced as a research approach for the TH-1 satellite in this paper, taking the systematic errors fully into account. Parameters describing geometric position systematic errors are introduced into the basic collinearity equations for a multi-lens linear array CCD sensor, and based on this extended model, a self-calibration bundle adjustment method for spaceborne multi-lens linear array CCD sensors is presented. The test field is located in the Weinan area of Shaanxi province, where observation data for ground control points (GCPs) and the orbit were collected. The extended rigorous model is used in the bundle adjustment, and the accuracy analysis shows that TH-1 has satisfactory metric performance.

  1. Quantification of endocrine disruptors and pesticides in water by gas chromatography-tandem mass spectrometry. Method validation using weighted linear regression schemes.

    Science.gov (United States)

    Mansilha, C; Melo, A; Rebelo, H; Ferreira, I M P L V O; Pinho, O; Domingues, V; Pinho, C; Gameiro, P

    2010-10-22

    A multi-residue methodology based on solid phase extraction followed by gas chromatography-tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample, to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC-MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness. Copyright © 2010 Elsevier B.V. All rights reserved.
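
    A sketch of the weighting idea: with heteroscedastic calibration data (synthetic here, not the paper's), a 1/x² weighted fit typically back-calculates the lowest standard more accurately than an unweighted fit; the 1/x² weighting factor is an assumption, since several empirical weights are commonly evaluated.

```python
import numpy as np

rng = np.random.default_rng(3)
conc = np.array([1, 5, 10, 50, 100, 250, 500], dtype=float)   # ng/L, synthetic levels

def back_calc_error(n_rep=500):
    """Mean relative error of the back-calculated lowest standard, OLS vs. WLS."""
    err_ols, err_wls = [], []
    for _ in range(n_rep):
        # Heteroscedastic responses: roughly constant 5% relative noise (assumed).
        resp = 120.0 * conc + rng.normal(scale=0.05 * 120.0 * conc)
        m1, b1 = np.polyfit(conc, resp, 1)                # ordinary least squares
        # np.polyfit applies w to the unsquared residuals, so pass 1/x for 1/x^2 weights.
        m2, b2 = np.polyfit(conc, resp, 1, w=1.0 / conc)  # weighted least squares
        err_ols.append(abs((resp[0] - b1) / m1 - conc[0]) / conc[0])
        err_wls.append(abs((resp[0] - b2) / m2 - conc[0]) / conc[0])
    return np.mean(err_ols), np.mean(err_wls)

ols, wls = back_calc_error()
print(f"mean relative error at the lowest standard: OLS {ols:.1%}, WLS {wls:.1%}")
```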

  2. SLAC linear collider and a few ideas on future linear colliders

    International Nuclear Information System (INIS)

    Loew, G.A.

    1984-04-01

    This paper comes in two parts. The first part is a progress report on the SLAC Linear Collider (SLC) with emphasis on those systems which are of special interest to linear accelerator designers; it sets the stage for a number of contributed papers on specific topics which are also presented at this conference. The second part presents some ideas which are of interest to the design of future linear colliders of higher energies

  3. Clinical accuracy of a patient-specific femoral osteotomy guide in minimally-invasive posterior hip arthroplasty.

    Science.gov (United States)

    Schneider, Adrian K; Pierrepont, Jim W; Hawdon, Gabrielle; McMahon, Stephen

    2018-04-01

    Patient-specific guides can be a valuable tool in improving the precision of planned femoral neck osteotomies, especially in minimally invasive hip surgery, where bony landmarks are often inaccessible. The aim of our study was to validate the accuracy of a novel patient-specific femoral osteotomy guide for THR through a minimally invasive posterior approach, the direct superior approach (DSA). As part of our routine preoperative planning, 30 patients underwent low-dose CT scans of their arthritic hip. 3D-printed patient-specific femoral neck osteotomy guides were then produced. Intraoperatively, having cleared all soft tissue from the postero-lateral neck of the enlocated hip, the guide was placed and pinned onto the posterolateral femoral neck. The osteotomy was performed using an oscillating saw and the uncemented hip components were implanted as per routine. Postoperatively, the achieved level of the osteotomy at the medial calcar was compared with the planned level of resection using a 3D/2D matching analysis (Mimics X-ray module, Materialise, Belgium). A total of 30 patients undergoing uncemented Trinity™ acetabular and TriFit TS™ femoral component arthroplasty (Corin, UK) were included in our analysis. All but one of the analysed osteotomies were found to be within 3 mm of the planned height of osteotomy. In one patient the level of osteotomy deviated 5 mm below the planned level of resection. Preoperative planning and the use of patient-specific osteotomy guides provide an accurate method of performing femoral neck osteotomies in minimally invasive hip arthroplasty using the direct superior approach. Level of evidence: IV (case series).

  4. Improvement of the thermal behavior of linear motors through insulation layer

    International Nuclear Information System (INIS)

    Eun, I. U.; Lee, C. M.; Chung, W. J.; Choi, Y. H.

    2001-01-01

    Linear motors can drive linear motion without intermediate gears, screws or crank shafts. Linear motors can successfully replace the ball lead screw in machine tools, because they offer high velocity, high acceleration and good positioning accuracy. On the other hand, linear motors emit large amounts of heat and have low efficiency. In this paper, the heat sources of a synchronous linear motor with high velocity and force are measured and analyzed. To improve the thermal stiffness of the linear motor, an insulation layer with low thermal conductivity is inserted between the cooler and the machine table. Some effects of the insulation layer are presented.

  5. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C. [Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Hine, N. D. M. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Haynes, P. D. [Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  6. Direct linear driving systems; Les entrainements lineaires directs

    Energy Technology Data Exchange (ETDEWEB)

    Favre, E.; Brunner, C.; Piaget, D. [ETEL SA (France)

    1999-11-01

    The linear motor is one of the most important developments in electrical drive technology. However, it only began to be adopted on a large scale at the beginning of the 1990's and will not be considered a mature technology until well into the next millennium. Actuators based on linear motor technology have a number of technical advantages including high speed, high positional accuracy and fine resolution. They also require fewer component parts. Some precautions are necessary when using linear motors. Care must be taken to avoid overheating and excessive vibration, and the magnetic components must be protected.

  7. Precision, accuracy and linearity of radiometer EML 105 whole blood metabolite biosensors.

    Science.gov (United States)

    Cobbaert, C; Morales, C; van Fessem, M; Kemperman, H

    1999-11-01

    The analytical performance of a new whole blood glucose and lactate electrode system (EML 105 analyser, Radiometer Medical A/S, Copenhagen, Denmark) was evaluated. Between-day coefficients of variation were low for both glucose and lactate. Recoveries of glucose were 100 +/- 10% using either aqueous or protein-based standards. Recoveries of lactate depended on the matrix, being underestimated in aqueous standards (approximately -10%) and 95-100% in standards containing 40 g/L albumin at lactate concentrations of 15 and 30 mmol/L. However, recoveries were high (up to 180%) at low lactate concentrations in protein-based standards. Carry-over, investigated according to the National Committee for Clinical Laboratory Standards EP10-T2 protocol, was negligible (alpha = 0.01). Glucose and lactate biosensors equipped with new membranes were linear up to 60 and 30 mmol/L, respectively. However, linearity decreased with daily use as membrane age increased. We conclude that the Radiometer metabolite biosensor results are reproducible and do not suffer from specimen-related carry-over. However, lactate recovery depends on the protein content and the lactate concentration.

  8. Evaluation and analytical validation of a handheld digital refractometer for urine specific gravity measurement

    Directory of Open Access Journals (Sweden)

    Sara P. Wyness

    2016-08-01

    Full Text Available Objectives: Refractometers are commonly used to determine urine specific gravity (SG) in the assessment of hydration status and urine specimen validity testing. Few comprehensive performance evaluations are available demonstrating refractometer capability from a clinical laboratory perspective. The objective of this study was therefore to conduct an analytical validation of a handheld digital refractometer used for human urine SG testing. Design and methods: A MISCO Palm Abbe™ refractometer was used for all experiments, including device familiarization, carryover, precision, accuracy, linearity, analytical sensitivity, evaluation of potential substances which contribute to SG (i.e. “interference”), and reference interval evaluation. A manual refractometer, urine osmometer, and a solute score (sum of urine chloride, creatinine, glucose, potassium, sodium, total protein, and urea nitrogen; all in mg/dL) were used as comparative methods for accuracy assessment. Results: Significant carryover was not observed. A wash step was still included as good laboratory practice. Low imprecision (%CV <0.01) was demonstrated using low and high QC material. Accuracy studies showed strong correlation to manual refractometry. Linear correlation was also demonstrated between SG, osmolality, and solute score. Linearity of Palm Abbe performance was verified with observed error of ≤0.1%. Increases in SG were observed with increasing concentrations of albumin, creatinine, glucose, hemoglobin, sodium chloride, and urea. Transference of a previously published urine SG reference interval of 1.0020–1.0300 was validated. Conclusions: The Palm Abbe digital refractometer was a fast, simple, and accurate way to measure urine SG. Analytical validity was confirmed by the present experiments. Keywords: Specific gravity, Osmolality, Digital refractometry, Hydration, Sports medicine, Urine drug testing, Urine adulteration

  9. Age-related differences in the accuracy of web query-based predictions of influenza-like illness.

    Directory of Open Access Journals (Sweden)

    Alexander Domnich

    Full Text Available Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-square estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0-4 and 25-44-year age-groups, these models did well and outperformed exponential smoothing predictions; in the 15-24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. The accuracy of web query-based models in predicting ILI morbidity rates could differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI.
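
    A stripped-down stand-in for the modelling step: fit a linear model of ILI incidence on query volume over one season and evaluate it on a held-out season. Ordinary least squares is used here in place of the paper's generalized least-squares estimation, and both series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)
weeks = np.arange(104)
# Synthetic weekly ILI incidence with a winter peak, plus a correlated query series.
ili = 5 + 4 * np.exp(-0.5 * ((weeks % 52 - 5) / 4.0) ** 2) + rng.normal(0, 0.3, 104)
queries = 0.8 * ili + rng.normal(0, 0.5, 104)

train, test = weeks < 52, weeks >= 52
slope, intercept = np.polyfit(queries[train], ili[train], 1)   # OLS stand-in for GLS
pred = slope * queries[test] + intercept

corr = np.corrcoef(pred, ili[test])[0, 1]
print(f"hold-out correlation: {corr:.3f}")
print("predicted peak week:", int(weeks[test][np.argmax(pred)]),
      "| observed peak week:", int(weeks[test][np.argmax(ili[test])]))
```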

  10. Accuracy of magnetic resonance in identifying traumatic intraarticular knee lesions

    International Nuclear Information System (INIS)

    Vaz, Carlos Eduardo Sanches; Camargo, Olavo Pires de; Santana, Paulo Jose de; Valezi, Antonio Carlos

    2005-01-01

    Purpose: To evaluate the diagnostic accuracy of magnetic resonance imaging of the knee in identifying traumatic intraarticular knee lesions. Method: 300 patients with a clinical diagnosis of traumatic intraarticular knee lesions underwent prearthoscopic magnetic resonance imaging. The sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive test, likelihood ratio for a negative test, and accuracy of magnetic resonance imaging were calculated relative to the findings during arthroscopy in the studied structures of the knee (medial meniscus, lateral meniscus, anterior cruciate ligament, posterior cruciate ligament, and articular cartilage). Results: Magnetic resonance imaging produced the following results regarding detection of lesions: medial meniscus: sensitivity 97.5%, specificity 92.9%, positive predictive value 93.9%, negative predictive value 97%, positive likelihood ratio 13.7, negative likelihood ratio 0.02, and accuracy 95.3%; lateral meniscus: sensitivity 91.9%, specificity 93.6%, positive predictive value 92.7%, negative predictive value 92.9%, positive likelihood ratio 14.3, negative likelihood ratio 0.08, and accuracy 93.6%; anterior cruciate ligament: sensitivity 99.0%, specificity 95.9%, positive predictive value 91.9%, negative predictive value 99.5%, positive likelihood ratio 21.5, negative likelihood ratio 0.01, and accuracy 96.6%; posterior cruciate ligament: sensitivity 100%, specificity 99%, positive predictive value 80.0%, negative predictive value 100%, positive likelihood ratio 100, negative likelihood ratio 0.01, and accuracy 99.6%; articular cartilage: sensitivity 76.1%, specificity 94.9%, positive predictive value 94.7%, negative predictive value 76.9%, positive likelihood ratio 14.9, negative likelihood ratio 0.25, and accuracy 84.6%. Conclusion: Magnetic resonance imaging is a satisfactory diagnostic tool for evaluating meniscal and ligamentous lesions of the knee, but it is unable to clearly
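    The diagnostic measures quoted above all follow from a 2×2 table of imaging calls against the arthroscopy reference; a minimal Python sketch, with counts invented for illustration rather than taken from the study:

```python
# Hypothetical 2x2 counts: imaging result vs. arthroscopy reference (not the study's data).
tp, fp, fn, tn = 78, 5, 2, 65

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
lr_pos = sensitivity / (1 - specificity)  # likelihood ratio for a positive test
lr_neg = (1 - sensitivity) / specificity  # likelihood ratio for a negative test
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"sens={sensitivity:.3f} spec={specificity:.3f} PPV={ppv:.3f} "
      f"NPV={npv:.3f} LR+={lr_pos:.1f} LR-={lr_neg:.2f} acc={accuracy:.3f}")
```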

  11. Accuracy of magnetic resonance in identifying traumatic intraarticular knee lesions

    Directory of Open Access Journals (Sweden)

    Vaz Carlos Eduardo Sanches

    2005-01-01

    Full Text Available PURPOSE: To evaluate the diagnostic accuracy of magnetic resonance imaging of the knee in identifying traumatic intraarticular knee lesions. METHOD: 300 patients with a clinical diagnosis of traumatic intraarticular knee lesions underwent prearthoscopic magnetic resonance imaging. The sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive test, likelihood ratio for a negative test, and accuracy of magnetic resonance imaging were calculated relative to the findings during arthroscopy in the studied structures of the knee (medial meniscus, lateral meniscus, anterior cruciate ligament, posterior cruciate ligament, and articular cartilage). RESULTS: Magnetic resonance imaging produced the following results regarding detection of lesions: medial meniscus: sensitivity 97.5%, specificity 92.9%, positive predictive value 93.9%, negative predictive value 97%, positive likelihood ratio 13.7, negative likelihood ratio 0.02, and accuracy 95.3%; lateral meniscus: sensitivity 91.9%, specificity 93.6%, positive predictive value 92.7%, negative predictive value 92.9%, positive likelihood ratio 14.3, negative likelihood ratio 0.08, and accuracy 93.6%; anterior cruciate ligament: sensitivity 99.0%, specificity 95.9%, positive predictive value 91.9%, negative predictive value 99.5%, positive likelihood ratio 21.5, negative likelihood ratio 0.01, and accuracy 96.6%; posterior cruciate ligament: sensitivity 100%, specificity 99%, positive predictive value 80.0%, negative predictive value 100%, positive likelihood ratio 100, negative likelihood ratio 0.01, and accuracy 99.6%; articular cartilage: sensitivity 76.1%, specificity 94.9%, positive predictive value 94.7%, negative predictive value 76.9%, positive likelihood ratio 14.9, negative likelihood ratio 0.25, and accuracy 84.6%. CONCLUSION: Magnetic resonance imaging is a satisfactory diagnostic tool for evaluating meniscal and ligamentous lesions of the knee, but it is

  12. 3D-Printed Linear Positioner with Micrometer Accuracy

    Directory of Open Access Journals (Sweden)

    Kuo Yin-Yen

    2017-01-01

    Full Text Available This article presents a positioner whose flexure main body is made with a commercial 3D printer. With this method, manufacturing a positioner can be cost-efficient and much easier to customize. Integrating a laser displacement sensor, an electromagnetic actuator, and a feedback controller, this positioning system has a 100 micron translational stroke with 1 micron resolution. Experiments also demonstrate sinusoidal motions at different frequencies. Using the method developed in this article, micro-positioners with customized specifications can be implemented rapidly, iteratively, and cost-effectively.

  13. Experimental study of sector and linear array ultrasound accuracy and the influence of navigated 3D-reconstruction as compared to MRI in a brain tumor model.

    Science.gov (United States)

    Siekmann, Max; Lothes, Thomas; König, Ralph; Wirtz, Christian Rainer; Coburger, Jan

    2018-03-01

    Currently, intraoperative ultrasound in brain tumor surgery is a rapidly propagating option in imaging technology. We examined the accuracy and resolution limits of different ultrasound probes and the influence of 3D-reconstruction in a phantom and compared these results to MRI in an intraoperative setting (iMRI). An agarose gel phantom with predefined gel targets was examined with iMRI, a sector (SUS) and a linear (LUS) array probe with two-dimensional images. Additionally, 3D-reconstructed sweeps in perpendicular directions were made of every target with both probes, resulting in 392 measurements. Statistical calculations were performed, and comparative boxplots were generated. Every measurement of iMRI and LUS was more precise than SUS, while there was no apparent difference in height of iMRI and 3D-reconstructed LUS. Measurements with 3D-reconstructed LUS were always more accurate than in 2D-LUS, while 3D-reconstruction of SUS showed nearly no differences to 2D-SUS in some measurements. We found correlations of 3D-reconstructed SUS and LUS length and width measurements with 2D results in the same image orientation. LUS provides an accuracy and resolution comparable to iMRI, while SUS is less exact than LUS and iMRI. 3D-reconstruction showed the potential to distinctly improve accuracy and resolution of ultrasound images, although there is a strong correlation with the sweep direction during data acquisition.

  14. A Revised Piecewise Linear Recursive Convolution FDTD Method for Magnetized Plasmas

    International Nuclear Information System (INIS)

    Liu Song; Zhong Shuangying; Liu Shaobin

    2005-01-01

    The piecewise linear recursive convolution (PLRC) finite-difference time-domain (FDTD) method improves accuracy over the original recursive convolution (RC) FDTD approach and the current density convolution (JEC) approach but retains their advantages in speed and efficiency. This paper describes a revised piecewise linear recursive convolution (PLRC) FDTD formulation for magnetized plasma which incorporates both anisotropy and frequency dispersion at the same time, enabling the transient analysis of magnetized plasma media. The technique is illustrated by numerical simulations of the reflection and transmission coefficients through a magnetized plasma layer. The results show that the revised PLRC-FDTD method has improved accuracy over the original RC-FDTD and JEC-FDTD methods.

  15. Accuracy of fibula reconstruction using patient-specific CAD/CAM reconstruction plates and dental implants : A new modality for functional reconstruction of mandibular defects

    NARCIS (Netherlands)

    Schepers, Rutger H.; Raghoebar, Gerry M.; Vissink, Arjan; Stenekes, Martin W.; Kraeima, Joep; Roodenburg, Jan L.; Reintsema, Harry; Witjes, Max J.

    Background: The purpose of this study was to analyze the accuracy of mandibular reconstruction using patient-specific computer-aided designed and computer-aided manufactured (CAD/CAM) reconstruction plates as a guide to place fibula grafts and dental implants in a one-stage procedure using

  16. Multiple sequence alignment accuracy and phylogenetic inference.

    Science.gov (United States)

    Ogden, T Heath; Rosenberg, Michael S

    2006-04-01

    Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment rather than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.

  17. Laser measuring scanners and their accuracy limits

    Science.gov (United States)

    Jablonski, Ryszard

    1993-09-01

    Scanning methods have gained greater importance in recent years due to their short measuring time and wide range of application in flexible manufacturing processes. This paper sums up the author's creative scientific work in the field of measuring scanners. The research conducted allowed the elaboration of optimal configurations of measuring systems based on the scanning method. An important part of the work was the analysis of a measuring scanner as a transducer of angular rotation into linear displacement, which resulted in much higher accuracy and finally in a measuring scanner that eliminates the use of an additional reference standard. The work concludes with an attempt to determine the attainable accuracy limit of scanning measurement of both length and angle. Using a high stability deflector and a corrected scanning lens one can obtain the angle determination over 30 (or 2 mm) to an accuracy 0 (or 0 tm) when the measuring rate is 1000 Hz or the range d60 (4 mm) with accuracy 0 " (0 jim) and measurement frequency 6 Hz.

  18. Non-linear Dynamics of Speech in Schizophrenia

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Simonsen, Arndis; Weed, Ethan

    (regularity and complexity) of speech. Our aims are (1) to achieve a more fine-grained understanding of the speech patterns in schizophrenia than has previously been achieved using traditional, linear measures of prosody and fluency, and (2) to employ the results in a supervised machine-learning process......-effects inference. SANS and SAPS scores were predicted using a 10-fold cross-validated multiple linear regression. Both analyses were iterated 1000 to test for stability of results. Results: Voice dynamics allowed discrimination of patients with schizophrenia from healthy controls with a balanced accuracy of 85...

  19. Two linearization methods for atmospheric remote sensing

    International Nuclear Information System (INIS)

    Doicu, A.; Trautmann, T.

    2009-01-01

    We present two linearization methods for a pseudo-spherical atmosphere and general viewing geometries. The first approach is based on an analytical linearization of the discrete ordinate method with matrix exponential and incorporates two models for matrix exponential calculation: the matrix eigenvalue method and the Pade approximation. The second method referred to as the forward-adjoint approach is based on the adjoint radiative transfer for a pseudo-spherical atmosphere. We provide a compact description of the proposed methods as well as a numerical analysis of their accuracy and efficiency.

  20. Linear finite element method for one-dimensional diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Michele A.; Dominguez, Dany S.; Iglesias, Susana M., E-mail: micheleabrandao@gmail.com, E-mail: dany@labbi.uesc.br, E-mail: smiglesias@uesc.br [Universidade Estadual de Santa Cruz (LCC/DCET/UESC), Ilheus, BA (Brazil). Departamento de Ciencias Exatas e Tecnologicas. Laboratorio de Computacao Cientifica

    2011-07-01

    We describe in this paper the fundamentals of the Linear Finite Element Method (LFEM) applied to one-speed diffusion problems in slab geometry. We present the mathematical formulation to solve eigenvalue and fixed-source problems. First, we discretize the computational domain using a finite set of elements. At this point, we obtain the spatial balance equations for the zero-order and first-order spatial moments inside each element. Then, we introduce the linear auxiliary equations to approximate the neutron flux and current inside each element and construct a numerical scheme to obtain the solution. We offer numerical results for typical fixed-source model problems to illustrate the method's accuracy for coarse-mesh calculations in homogeneous and heterogeneous domains. We also compare the accuracy and computational performance of the LFEM formulation with the conventional Finite Difference Method (FDM). (author)
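    A minimal sketch of the kind of discretization described above, here reduced to a one-group, fixed-source diffusion problem assembled with linear elements; the cross sections, source, and boundary conditions are assumptions for illustration, not the paper's test problems.

```python
# Illustrative linear finite elements for  -D u'' + Sigma_a u = S  on [0, L]
# with u(0) = u(L) = 0 (boundary conditions assumed here for simplicity).
import numpy as np

D, sigma_a, S, L, n_el = 1.0, 0.5, 1.0, 10.0, 20       # assumed data
h = L / n_el
n_nodes = n_el + 1
A = np.zeros((n_nodes, n_nodes))
b = np.zeros(n_nodes)

k_diff = D / h * np.array([[1.0, -1.0], [-1.0, 1.0]])            # element diffusion term
k_abs = sigma_a * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])   # element absorption term
f_el = S * h / 2.0 * np.array([1.0, 1.0])                        # element source term

for e in range(n_el):                                  # assemble global system element by element
    idx = [e, e + 1]
    A[np.ix_(idx, idx)] += k_diff + k_abs
    b[idx] += f_el

A[0, :], A[-1, :], b[0], b[-1] = 0.0, 0.0, 0.0, 0.0    # impose u = 0 at both ends
A[0, 0] = A[-1, -1] = 1.0

flux = np.linalg.solve(A, b)
print(f"peak flux = {flux.max():.4f}")
```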

  1. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    Science.gov (United States)

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time using kinematic and kinetic variables, and prediction accuracy was assessed with the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to changing from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
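    The two error measures quoted above (mean absolute percentage error and mean absolute error) can be written out directly; the observed and predicted 5 m start times below are invented for illustration, not the study's data.

```python
# Hypothetical sketch of the error measures used above (MAPE and MAE) on made-up data.
import numpy as np

observed = np.array([1.92, 2.05, 1.88, 2.10, 1.97])      # seconds (illustrative)
predicted = np.array([1.95, 2.01, 1.90, 2.15, 1.93])     # model output (illustrative)

mape = np.mean(np.abs((observed - predicted) / observed)) * 100   # percent
mae = np.mean(np.abs(observed - predicted))                       # seconds
print(f"MAPE = {mape:.2f}%, MAE = {mae:.3f} s")
```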

  2. An extended sequence specificity for UV-induced DNA damage.

    Science.gov (United States)

    Chung, Long H; Murray, Vincent

    2018-01-01

    The sequence specificity of UV-induced DNA damage was determined with a higher precision and accuracy than previously reported. UV light induces two major damage adducts: cyclobutane pyrimidine dimers (CPDs) and pyrimidine(6-4)pyrimidone photoproducts (6-4PPs). Employing capillary electrophoresis with laser-induced fluorescence and taking advantage of the distinct properties of the CPDs and 6-4PPs, we studied the sequence specificity of UV-induced DNA damage in a purified DNA sequence using two approaches: end-labelling and a polymerase stop/linear amplification assay. A mitochondrial DNA sequence that contained a random nucleotide composition was employed as the target DNA sequence. With previous methodology, the UV sequence specificity was determined at a dinucleotide or trinucleotide level; however, in this paper, we have extended the UV sequence specificity to a hexanucleotide level. With the end-labelling technique (for 6-4PPs), the consensus sequence was found to be 5'-GCTC*AC (where C* is the breakage site); while with the linear amplification procedure, it was 5'-TCTT*AC. With end-labelling, the dinucleotide frequency of occurrence was highest for 5'-TC*, 5'-TT* and 5'-CC*; whereas it was 5'-TT* for linear amplification. The influence of neighbouring nucleotides on the degree of UV-induced DNA damage was also examined. The core sequences consisted of the pyrimidine nucleotides 5'-CTC* and 5'-CTT*, while an A at position "1" and C at position "2" enhanced UV-induced DNA damage. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  3. Picosecond, single pulse electron linear accelerator

    International Nuclear Information System (INIS)

    Kikuchi, Riichi; Kawanishi, Masaharu

    1979-01-01

    The picosecond, single-pulse electron linear accelerators installed in the Nuclear Engineering Laboratory of the University of Tokyo and in the Nuclear Radiation Laboratory of Osaka University are described. The purpose of these accelerators is to investigate the very short-time reactions of substances into which gamma rays or electron beams enter. When the electrons in a substance receive radiation energy, they gain high kinetic energy, and the energy and electric charge then shift toward a quasi-stable state. This transient state can be studied very accurately with these special accelerators on a picosecond timescale, by raising the accuracy of the time of incidence of the radiation and the accuracy of the observation time. The outline of the picosecond, single-pulse electron linear accelerators of the University of Tokyo and Osaka University, including their history, systems and components, and output beam characteristics, is given. For example, a maximum energy of 30 -- 35 MeV, a peak current of 1 -- 8 nC, a pulse width of 18 -- 40 ps, a pulse repetition rate of 200 -- 720 pps, an energy spectrum of 1 -- 1.8% and an output beam diameter of 2 -- 5 mm are quoted as the output beam characteristics of the accelerators at both universities. Investigations utilizing the picosecond, single-pulse electron linear accelerators are described, such as the investigation of short-lived excitation states produced by pulsed radiation, dosimetry studies of pulsed radiation, and the investigation of the conversion mechanism and the development of conversion technology from a picosecond, single-pulse electron beam to X-rays, vacuum ultraviolet and visible light. (Nakai, Y.)

  4. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background Little overlap between independently developed gene signatures and poor inter-study applicability of gene signatures are two of the major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform, and, using this data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using both patient groups, suggesting that sub-type-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  5. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation.

    Science.gov (United States)

    Dillon, Neal P; Balachandran, Ramya; Labadie, Robert F

    2016-03-01

    A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of [Formula: see text] and higher as well as longer cantilevered drill lengths. The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure.

  6. Constrained non-linear waves for offshore wind turbine design

    International Nuclear Information System (INIS)

    Rainey, P J; Camp, T R

    2007-01-01

    Advancements have been made in the modelling of extreme wave loading in the offshore environment. We give an overview of wave models used at present, and their relative merits. We describe a method for embedding existing non-linear solutions for large, regular wave kinematics into linear, irregular seas. Although similar methods have been used before, the new technique is shown to offer advances in computational practicality, repeatability, and accuracy. NewWave theory has been used to constrain the linear simulation, allowing best possible fit with the large non-linear wave. GH Bladed was used to compare the effect of these models on a generic 5 MW turbine mounted on a tripod support structure

  7. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4T.

    Science.gov (United States)

    Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine

    2017-12-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4T. Different phantoms, comprised of a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved-spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p < 0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p < 0.001). The regression model with the J-difference editing method, however, had a significantly higher explanatory value than the regression model with the PRESS method (p < 0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and with higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG with linear combination of metabolite signals shows high systematic errors particularly at low 2HG concentration and high concentration of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex postprocessing. Copyright © 2017. Published by Elsevier GmbH.

  8. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4 T

    Energy Technology Data Exchange (ETDEWEB)

    Neuberger, Ulf; Fischer, Manuel; Heiland, Sabine [Univ. of Heidelberg Medical Center, Heidelberg (Germany). Dept. of Neuroradiology; Univ. of Heidelberg Medical Center, Heidelberg (Germany). Div. of Experimental Radiology; Kickingereder, Philipp; Bendszus, Martin [Univ. of Heidelberg Medical Center, Heidelberg (Germany). Dept. of Neuroradiology; Helluy, Xavier [Bochum Univ. (Germany). Dept. of Biopsychology

    2017-07-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4 T. Different phantoms, comprised of a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved-spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p < 0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p < 0.001). The regression model with the J-difference editing method however had a significantly higher explanatory value over the regression model with the PRESS method (p < 0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernable below 2 mM and with higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG with linear combination of metabolite signals shows high systematic errors particularly at low 2HG concentration and high concentration of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex postprocessing.

  9. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study the recurrent neural network for solving linear programming problems. To achieve optimality in accuracy and also in computational effort, an algorithm is presented. We investigate the sensitivity analysis of linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.

  10. HPV genotype-specific concordance between EuroArray HPV, Anyplex II HPV28 and Linear Array HPV Genotyping test in Australian cervical samples.

    Science.gov (United States)

    Cornall, Alyssa M; Poljak, Marin; Garland, Suzanne M; Phillips, Samuel; Machalek, Dorothy A; Tan, Jeffrey H; Quinn, Michael A; Tabrizi, Sepehr N

    2017-12-01

    To compare human papillomavirus genotype-specific performance of two genotyping assays, Anyplex II HPV28 (Seegene) and EuroArray HPV (EuroImmun), with Linear Array HPV (Roche). DNA extracted from clinician-collected cervical brush specimens in PreservCyt medium (Hologic), from 403 women undergoing management for detected cytological abnormalities, was tested on the three assays. Genotype-specific agreement was assessed by Cohen's kappa statistic and Fisher's z-test of significance between proportions. Agreement between Linear Array and the other two assays was substantial to almost perfect (κ = 0.60 - 1.00) for most genotypes, and was almost perfect (κ = 0.81 - 0.98) for almost all high-risk genotypes. Linear Array overall detected most genotypes more frequently; however, this was only statistically significant for HPV51 (EuroArray; p = 0.0497), HPV52 (Anyplex II; p = 0.039) and HPV61 (Anyplex II; p = 0.047). EuroArray detected significantly more HPV26 (p = 0.002) and Anyplex II detected more HPV42 (p = 0.035) than Linear Array. Each assay performed differently for HPV68 detection: EuroArray and Linear Array were in moderate to substantial agreement with Anyplex II (κ = 0.46 and 0.62, respectively), but were in poor agreement with each other (κ = -0.01). EuroArray and Anyplex II had similar sensitivity to Linear Array for most high-risk genotypes, with slightly lower sensitivity for HPV 51 or 52. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
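    A minimal sketch of the agreement statistic used above (Cohen's kappa on paired genotype calls from two assays); the call vectors are invented for illustration, not study data.

```python
# Hypothetical genotype calls (1 = detected, 0 = not detected) from two assays on the
# same specimens; illustrative values only.
import numpy as np

assay_a = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
assay_b = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 1])

p_observed = np.mean(assay_a == assay_b)                 # observed agreement
p_a, p_b = assay_a.mean(), assay_b.mean()
p_expected = p_a * p_b + (1 - p_a) * (1 - p_b)           # agreement expected by chance
kappa = (p_observed - p_expected) / (1 - p_expected)     # Cohen's kappa
print(f"observed = {p_observed:.2f}, expected = {p_expected:.2f}, kappa = {kappa:.2f}")
```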

  11. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    Energy Technology Data Exchange (ETDEWEB)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved

  12. On the removal of boundary errors caused by Runge-Kutta integration of non-linear partial differential equations

    Science.gov (United States)

    Abarbanel, Saul; Gottlieb, David; Carpenter, Mark H.

    1994-01-01

    It has been previously shown that the temporal integration of hyperbolic partial differential equations (PDE's) may, because of boundary conditions, lead to deterioration of accuracy of the solution. A procedure for removal of this error in the linear case has been established previously. In the present paper we consider hyperbolic PDE's (linear and non-linear) whose boundary treatment is done via the SAT procedure. A methodology is presented for recovery of the full order of accuracy, and has been applied to the case of a 4th-order explicit finite difference scheme.

  13. Recovery coefficients as a test of system linearity of response in PET

    International Nuclear Information System (INIS)

    Geworski, L.; Munz, D.L.; Knoop, B.; Hofmann, M.; Knapp, W.H.

    2002-01-01

    Aim: New imaging protocols have created an increasing demand for quantitation in dedicated PET. Besides attenuation and scatter correction, the recovery correction, accounting for the instrument's limited spatial resolution, has gained importance. For clinical practicability these corrections should work independently of the object, i.e. of the actual distribution of emitter and absorber. The aim of the study was to test this object independence, i.e. system linearity of response, by comparing recovery coefficients (RC) determined for different object geometries. In fact, this comparison may serve as a final test of system linearity of response, as measured by the quantitative accuracy with which the activity concentration in small lesions can be recovered. Method: For hot and cold spot imaging situations the spatial distribution of activity is different; therefore, the scatter correction algorithm has to deal with different scatter distributions. If all factors disturbing system linearity, specifically scatter and attenuation, are corrected to a sufficient degree of accuracy, the system behaves linearly, resulting in the theoretical relationship CSRC = 1 - HSRC (1). Thus, this equation, applied to hot and cold spot measurements, will serve as a test of the effectiveness of the corrections and, hence, as a test of system linearity of response. Following IEC standard procedures (IEC 61675-1), measurements were done with and without interplane septa (2D/3D) on an ECAT EXACT 922 using a cylindrical phantom containing six spheres of different diameters (10 mm - 40 mm). All data were corrected for attenuation (transmission scan) and scatter (2D: deconvolution, 3D: scatter model), as implemented in the scanner's standard software. Recovery coefficients were determined for cold (CSRC) and hot (HSRC) lesions using both 2D and 3D acquisition modes. Results: CSRC directly measured versus CSRC calculated according to eq. (1) from HSRC resulted in an excellent agreement for both 2D and 3D data
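    A minimal sketch of the linearity check described above, using one common convention for hot- and cold-spot recovery coefficients; the concentrations are made up, and the exact RC definitions in the paper (per IEC 61675-1) may differ in detail, so this is an illustration rather than the study's computation.

```python
# Hypothetical hot- and cold-spot recovery coefficients, then a check of CSRC = 1 - HSRC.
# One common convention is assumed; values are illustrative only.
hot_background, hot_sphere_true, hot_sphere_measured = 1.0, 5.0, 3.8
cold_background, cold_sphere_measured = 5.0, 1.4          # true cold-sphere concentration is 0

hsrc = (hot_sphere_measured - hot_background) / (hot_sphere_true - hot_background)
csrc = 1.0 - cold_sphere_measured / cold_background

print(f"HSRC = {hsrc:.2f}, CSRC = {csrc:.2f}, 1 - HSRC = {1 - hsrc:.2f}")
# Agreement between CSRC and 1 - HSRC indicates that scatter and attenuation
# corrections leave the system's response linear.
```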

  14. Linear thermal expansion measurements on silicon from 6 to 340 K

    International Nuclear Information System (INIS)

    Lyon, K.G.; Salinger, G.L.; Swenson, C.A.; White, G.K.

    1977-01-01

    Linear thermal expansion measurements have been carried out from 6 to 340 K on a high-purity silicon sample using a linear absolute capacitance dilatometer. The accuracy of the measurements varies from ±0.01 × 10⁻⁸ K⁻¹ at the lowest temperatures to ±0.1 × 10⁻⁸ K⁻¹ or 0.1%, whichever is greater, near room temperature, and is sufficient to establish silicon as a thermal expansion standard for these temperatures. The agreement with previous data is satisfactory at low temperatures and excellent above room temperature, where laser-interferometry data of comparable accuracy exist. Thermal expansions calculated from ultrasonic and heat-capacity data are preferred below 13 K, where experimental problems occurred

  15. Force Characteristics of the H-module Linear Actuator with Varying Tooth-shift-distance

    DEFF Research Database (Denmark)

    Liu, Xiao; Chen, Zhe; Lu, Kaiyuan

    2013-01-01

    The large normal force of a single-sided linear actuator may cause vibration, noise and reduce the positioning accuracy. To overcome these disadvantages, a new H-module linear actuator (HMLA) is proposed to reduce effectively the normal force without using expensive air suspension system...

  16. Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators.

    Science.gov (United States)

    Kim, Pil-Jong; Kim, Hong-Gee; Cho, Byeong-Hoon

    2015-05-01

    The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies through a correlation analysis, in order to make explicit their contribution to the accuracy of electronic apex locators (EALs). The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All data acquired were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes of impedance ratios across frequencies were evaluated for a variety of file positions. Among the ten papers selected in the search process, the first-order relationships between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Using the ratio method, the APC was located within a linear interval. Therefore, using the impedance ratio between electrical impedance measurements at different frequencies was a robust method for detection of the APC.
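    A sketch of fitting a linear ramp function (left plateau, linear transition, right plateau) to impedance ratios recorded along the canal, assuming SciPy's curve_fit; the ramp parameterization and the data are illustrative, not taken from the reviewed studies.

```python
# Hypothetical sketch: fit a linear ramp to impedance-ratio values recorded at a series
# of file positions, as a way of locating the linear interval that contains the APC.
import numpy as np
from scipy.optimize import curve_fit

def linear_ramp(x, y_left, y_right, x_start, x_end):
    """Left plateau at y_left, linear transition to y_right between x_start and x_end."""
    t = np.clip((x - x_start) / (x_end - x_start), 0.0, 1.0)
    return y_left + (y_right - y_left) * t

# Illustrative file positions (mm short of the apex) and impedance ratios.
position = np.linspace(-3.0, 1.0, 25)
ratio = linear_ramp(position, 0.9, 0.5, -1.5, 0.0) + np.random.default_rng(1).normal(0, 0.01, 25)

params, _ = curve_fit(linear_ramp, position, ratio, p0=[0.9, 0.5, -1.0, 0.0])
print("fitted plateaus and transition interval:", np.round(params, 3))
```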

  17. Accuracy of a computed tomography scanning procedure to manufacture digital models.

    NARCIS (Netherlands)

    Darroudi, A.M.; Kuijpers-Jagtman, A.M.; Ongkosuwito, E.M.; Suttorp, C.M.; Bronkhorst, E.M.; Breuning, K.H.

    2017-01-01

    INTRODUCTION: Accurate articulation of the digital dental casts is crucial in orthodontic diagnosis and treatment planning. We aimed to determine the accuracy of manufacturing digital dental casts from computed tomography scanning of plaster casts regarding linear dimensions and interarch

  18. HPV genotype-specific concordance between EuroArray HPV, Anyplex II HPV28 and Linear Array HPV Genotyping test in Australian cervical samples

    Directory of Open Access Journals (Sweden)

    Alyssa M. Cornall

    2017-12-01

    Full Text Available Purpose: To compare human papillomavirus genotype-specific performance of two genotyping assays, Anyplex II HPV28 (Seegene) and EuroArray HPV (EuroImmun), with Linear Array HPV (Roche). Methods: DNA extracted from clinician-collected cervical brush specimens in PreservCyt medium (Hologic), from 403 women undergoing management for detected cytological abnormalities, was tested on the three assays. Genotype-specific agreement was assessed by Cohen's kappa statistic and Fisher's z-test of significance between proportions. Results: Agreement between Linear Array and the other 2 assays was substantial to almost perfect (κ = 0.60 − 1.00) for most genotypes, and was almost perfect (κ = 0.81 – 0.98) for almost all high-risk genotypes. Linear Array overall detected most genotypes more frequently; however, this was only statistically significant for HPV51 (EuroArray; p = 0.0497), HPV52 (Anyplex II; p = 0.039) and HPV61 (Anyplex II; p = 0.047). EuroArray detected significantly more HPV26 (p = 0.002) and Anyplex II detected more HPV42 (p = 0.035) than Linear Array. Each assay performed differently for HPV68 detection: EuroArray and LA were in moderate to substantial agreement with Anyplex II (κ = 0.46 and 0.62, respectively), but were in poor agreement with each other (κ = −0.01). Conclusions: EuroArray and Anyplex II had similar sensitivity to Linear Array for most high-risk genotypes, with slightly lower sensitivity for HPV 51 or 52. Keywords: Human papillomavirus, Genotyping, Linear Array, Anyplex II, EuroArray, Cervix

  19. Local Ray-Based Traveltime Computation Using the Linearized Eikonal Equation

    KAUST Repository

    Almubarak, Mohammed S.

    2013-05-01

    The computation of traveltimes plays a critical role in the conventional implementations of Kirchhoff migration. Finite-difference-based methods are considered one of the most effective approaches for traveltime calculations and are therefore widely used. However, these eikonal solvers are mainly used to obtain early-arrival traveltime. Ray tracing can be used to pick later traveltime branches, besides the early arrivals, which may lead to an improvement in velocity estimation or in seismic imaging. In this thesis, I improved the accuracy of the solution of the linearized eikonal equation by constructing a linear system of equations (LSE) based on finite-difference approximation, which is of second-order accuracy. The ill-conditioned LSE is initially regularized and subsequently solved to calculate the traveltime update. Numerical tests proved that this method is as accurate as the second-order eikonal solver. Later arrivals are picked using ray tracing. These traveltimes are binned to the nearest node on a regular grid and empty nodes are estimated by interpolating the known values. The resulting traveltime field is used as an input to the linearized eikonal algorithm, which improves the accuracy of the interpolated nodes and yields a local ray-based traveltime. This is a preliminary study and further investigation is required to test the efficiency and the convergence of the solutions.
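    The regularize-and-solve step described above can be illustrated generically: the sketch below builds a deliberately ill-conditioned linear system and solves it with damped (Tikhonov-regularized) least squares. The matrix, damping value, and choice of SciPy's lsqr are assumptions for illustration, not the thesis's implementation.

```python
# Generic illustration of regularizing and solving an ill-conditioned linear system of
# equations (LSE), as in the traveltime-update step described above.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 100))
A[:, -1] = A[:, 0] + 1e-8 * rng.normal(size=200)   # nearly dependent columns -> ill-conditioned
b = rng.normal(size=200)

# Damped least squares: minimizes ||A x - b||^2 + damp^2 ||x||^2 (Tikhonov regularization).
update = lsqr(A, b, damp=1e-2)[0]
print(f"norm of computed update: {np.linalg.norm(update):.4f}")
```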

  20. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, requires accurate and consistent control; neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and in the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
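    A minimal sketch of measuring a feed-forward network's accuracy by function approximation, as described above: fit a known non-linear function and read the residual error directly. The network size, target function, and training settings are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical sketch: train a feed-forward network on a noise-free non-linear function,
# so that any residual error reflects the network's approximation ability itself.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(x).ravel()                        # noise-free target

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(x, y)

x_test = np.linspace(-3, 3, 200).reshape(-1, 1)
err = net.predict(x_test) - np.sin(x_test).ravel()
print(f"max abs approximation error: {np.abs(err).max():.4f}")
```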

  1. ONETEP: linear-scaling density-functional theory with plane-waves

    International Nuclear Information System (INIS)

    Haynes, P D; Mostofi, A A; Skylaris, C-K; Payne, M C

    2006-01-01

    This paper provides a general overview of the methodology implemented in onetep (Order-N Electronic Total Energy Package), a parallel density-functional theory code for large-scale first-principles quantum-mechanical calculations. The distinctive features of onetep are linear scaling in both computational effort and resources, obtained by making well-controlled approximations which enable simulations to be performed with plane-wave accuracy. Titanium dioxide clusters of increasing size, designed to mimic surfaces, are studied to demonstrate the accuracy and scaling of onetep

  2. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  3. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  4. Bayes linear statistics, theory & methods

    CERN Document Server

    Goldstein, Michael

    2007-01-01

    Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers:The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...

  5. Evaluating measurement accuracy a practical approach

    CERN Document Server

    Rabinovich, Semyon G

    2013-01-01

    The goal of Evaluating Measurement Accuracy: A Practical Approach is to present methods for estimating the accuracy of measurements performed in industry, trade, and scientific research. From developing the theory of indirect measurements to proposing new methods of reduction, transformation, and enumeration, this work encompasses the full range of measurement data processing. It includes many examples that illustrate the application of general theory to typical problems encountered in measurement practice. As a result, the book serves as an inclusive reference work for data processing of all types of measurements: single and multiple, combined and simultaneous, direct (both linear and nonlinear), and indirect (both dependent and independent). It is a working tool for experimental scientists and engineers of all disciplines who work with instrumentation. It is also a good resource for natural science and engineering students and for technicians performing measurements in industry. A key feature of the book is...

  6. Accuracy of a reformulated fast-set vinyl polysiloxane impression material using dual-arch trays.

    Science.gov (United States)

    Kang, Alex H; Johnson, Glen H; Lepe, Xavier; Wataha, John C

    2009-05-01

    A common technique used for making crown impressions involves use of a vinyl polysiloxane impression material in combination with a dual-arch tray. A leading dental manufacturer has reformulated its vinyl polysiloxane (VPS) impression line, but the accuracy of the new material has not been verified. The purpose of this study was to assess the accuracy of reformulated VPS impression materials using the single-step dual-arch impression technique. Dual-arch impressions were made on a typodont containing a master stainless steel standard crown preparation die, from which gypsum working dies were formed, recovered, and measured. The impression materials evaluated were Imprint 3 Penta Putty with Quick Step Regular Body (IP-0); Imprint 3 Penta Quick Step Heavy Body with Quick Step Light Body (IP-1); Aquasil Ultra Rigid Fast Set with LV Fast Set (AQ-1); and Aquasil Ultra Heavy Fast Set with XLV Fast Set (AQ-2) (n=10). All impressions were disinfected with CaviCide spray for 10 minutes prior to pouring with type IV gypsum. Buccolingual (BL), mesiodistal (MD), and occlusogingival (OG) dimensions were measured and compared to the master die using an optical measuring microscope. Linear dimensional change was also assessed for IP-0 and AQ-1 at 1 and 24 hours based on ANSI/ADA Specification No. 19. Single-factor ANOVA with Dunnett's T3 multiple comparisons was used to compare BL, MD, and OG changes, with hypothesis testing at alpha=.05. A repeated-measures ANOVA was used to compare linear dimensional changes. There were statistical differences among the 4 impression systems for 3 of 4 dimensions of the master die. IP-0 working dies were significantly larger in MD and OG-L dimensions but significantly smaller in the BL dimension. IP-1 working dies were significantly smaller in the BL dimension compared to the master die. With the exception of IP-0, differences detected were small and clinically insignificant. No significant differences were observed for linear dimensional change

  7. Quality control methods for linear accelerator radiation and mechanical axes alignment.

    Science.gov (United States)

    Létourneau, Daniel; Keller, Harald; Becker, Nathan; Amin, Md Nurul; Norrlinger, Bernhard; Jaffray, David A

    2018-06-01

    The delivery accuracy of highly conformal dose distributions generated using intensity modulation and collimator, gantry, and couch degrees of freedom is directly affected by the quality of the alignment between the radiation beam and the mechanical axes of a linear accelerator. For this purpose, quality control (QC) guidelines recommend a tolerance of ±1 mm for the coincidence of the radiation and mechanical isocenters. Traditional QC methods for assessment of radiation and mechanical axes alignment (based on pointer alignment) are time consuming and complex tasks that provide limited accuracy. In this work, an automated test suite based on an analytical model of the linear accelerator motions was developed to streamline the QC of radiation and mechanical axes alignment. The proposed method used the automated analysis of megavoltage images of two simple task-specific phantoms acquired at different linear accelerator settings to determine the coincidence of the radiation and mechanical isocenters. The sensitivity and accuracy of the test suite were validated by introducing actual misalignments on a linear accelerator between the radiation axis and the mechanical axes using both beam steering and mechanical adjustments of the gantry and couch. The validation demonstrated that the new QC method can detect sub-millimeter misalignment between the radiation axis and the three mechanical axes of rotation. A displacement of the radiation source of 0.2 mm using beam steering parameters was easily detectable with the proposed collimator rotation axis test. Mechanical misalignments of the gantry and couch rotation axes of the same magnitude (0.2 mm) were also detectable using the new gantry and couch rotation axis tests. For the couch rotation axis, the phantom and test design allow detection of both translational and tilt misalignments with the radiation beam axis. For the collimator rotation axis, the test can isolate the misalignment between the beam radiation axis

  8. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  9. 68Ga-PSMA PET/CT in Patients with Rising Prostatic-Specific Antigen After Definitive Treatment of Prostate Cancer: Detection Efficacy and Diagnostic accuracy.

    Science.gov (United States)

    Hamed, Maged Abdel Galil; Basha, Mohammad Abd Alkhalik; Ahmed, Hussien; Obaya, Ahmed Ali; Afifi, Amira Hamed Mohamed; Abdelbary, Eman H

    2018-06-20

    68Ga-prostate-specific membrane antigen-11 (68Ga-PSMA-11) is a recently developed positron emission tomography (PET) tracer that can detect prostate cancer (PC) relapses and metastases with high contrast resolution. The aim of this study was to assess the detection efficacy and diagnostic accuracy of 68Ga-PSMA PET/CT imaging in patients with rising prostatic-specific antigen (PSA) after treatment of PC. The present prospective study included 188 patients who exhibited a rising PSA level on a routine follow-up examination after definitive treatment of PC. All patients underwent a 68Ga-PSMA PET/CT examination. For each patient, we determined the disease stage, the Gleason score, and the maximum standardized uptake value of the local recurrence and extraprostatic metastases. The detection efficacy and diagnostic accuracy of 68Ga-PSMA PET/CT were established using histopathology and clinical and imaging follow-up as the reference standards. 68Ga-PSMA PET/CT detected tumour relapse in 165 patients (35 patients had local recurrence, 106 patients had extraprostatic metastases, and 24 patients had combined lesions). The sensitivity, specificity, and accuracy of the 68Ga-PSMA PET/CT examination in the detection of PC recurrence were 98.8%, 100%, and 98.8%, respectively. 68Ga-PSMA PET/CT revealed an overall detection rate of 87.8% (165/188) in patients with rising PSA (median of 2.2 ng/mL, range of 0.01-70 ng/mL). 68Ga-PSMA PET/CT is a valuable tool for the detection of PC local recurrence or extraprostatic metastases following rising PSA levels after primary definitive therapy and should be incorporated during routine work-up. Copyright © 2018. Published by Elsevier Inc.
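
    The headline metrics in this record follow directly from a 2x2 classification table. The sketch below shows the arithmetic; the true/false positive and negative counts are assumed placeholders (only the 165/188 detection rate comes from the abstract), so the printed values are illustrative rather than the study's exact figures.

```python
# Illustrative 2x2 table; tp/fn/tn/fp are assumed values, not the study's raw counts.
tp, fn = 165, 2     # relapses detected / missed (assumed split of the 188 patients)
tn, fp = 21, 0      # disease-free patients correctly / incorrectly called positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
detection_rate = 165 / 188          # PSMA-positive scans / all scans (from the abstract)

print(f"sensitivity={sensitivity:.1%}  specificity={specificity:.1%}  "
      f"accuracy={accuracy:.1%}  detection rate={detection_rate:.1%}")
```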

  10. Modelling Loudspeaker Non-Linearities

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2007-01-01

    This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of original data, but also for the ability to work reasonably outside that range, and it is demonstrated...... that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the Gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying...
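
    As a rough illustration of the comparison described here, the sketch below fits both a localized (Gaussian) function and a plain polynomial to synthetic, bell-shaped force-factor data and then evaluates both slightly outside the fitted range; all data and parameter values are assumptions made up for the example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic, bell-shaped "Bl(x)" data standing in for measured force-factor values.
rng = np.random.default_rng(0)
x = np.linspace(-5e-3, 5e-3, 41)                       # displacement [m]
bl = 6.0 * np.exp(-(x / 3e-3) ** 2) + 0.05 * rng.standard_normal(x.size)

def gaussian(x, a, w):
    return a * np.exp(-(x / w) ** 2)

# Localized (Gaussian) fit
p_gauss, _ = curve_fit(gaussian, x, bl, p0=[5.0, 2e-3])

# 4th-order polynomial fit for comparison
p_poly = np.polyfit(x, bl, 4)

# Evaluate both models slightly beyond the fitted range: the Gaussian stays
# bounded and positive, while the polynomial typically diverges.
x_out = np.linspace(-8e-3, 8e-3, 5)
print("gaussian :", gaussian(x_out, *p_gauss))
print("polynomial:", np.polyval(p_poly, x_out))
```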

  11. Cavity characterization for general use in linear electron accelerators

    International Nuclear Information System (INIS)

    Souza Neto, M.V. de.

    1985-01-01

    The main objective of this work is to develop measurement techniques for the characterization of microwave cavities used in linear electron accelerators. Methods are developed for the measurement of parameters that are essential to the design of an accelerator structure, using conventional techniques of resonant cavities at low power. Disk-loaded cavities were designed and built, similar to those in most existing linear electron accelerators. As a result, the methods developed and the estimated accuracy were compared with those from other investigators. The results of this work are relevant for the design of cavities with the objective of developing linear electron accelerators. (author) [pt

  12. On the low-temperature specific heat of icosahedral and decagonal quasicrystals

    International Nuclear Information System (INIS)

    Chernikov, M.A.

    2005-01-01

    Calorimetric experiments on icosahedral (Al-Re-Pd, Al-Mn-Pd) and decagonal (Al-Cu-Co, Al-Ni-Co) quasicrystals are described. For quasicrystals of both classes, the coefficient γ of the linear term in the specific heat falls into the range of 0.1-0.6 mJ/g-atom K², indicating a low density of energy states at the Fermi level. For icosahedral Al-Mn-Pd, the cubic-in-temperature term in the specific heat is distinctly larger than the estimated contribution of long-wave acoustic excitations. On the contrary, the magnitude of the cubic-in-temperature term in the specific heat of decagonal Al-Ni-Co is in agreement, within the experimental accuracy, with the Debye acoustic contribution derived from the results of low-temperature measurements of the elastic moduli [ru
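
    For reference, the decomposition behind the quoted linear and cubic terms is the standard low-temperature form shown below; the Debye expression for the lattice coefficient (per gram-atom, with Debye temperature Θ_D) is the textbook relation assumed here, not a formula quoted in the record.

```latex
C(T) \simeq \gamma T + \beta T^{3},
\qquad
\beta_{\mathrm{Debye}} = \frac{12\pi^{4}}{5}\,\frac{R}{\Theta_{\mathrm{D}}^{3}}
\quad \text{(per g-atom, with } R = N_{\mathrm{A}} k_{\mathrm{B}}\text{)}.
```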

  13. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods of detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. This paper briefly describes theoretical aspects of several robust methods as well. Robust statistics is a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented.
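
    The diagnostics that such methods build on can be computed directly from the hat matrix of the fitted model. The sketch below does this for a simple linear regression on synthetic data with one planted outlier; the thresholds used (2p/n for leverage, 4/n for Cook's distance) are common rules of thumb, not values taken from the article.

```python
import numpy as np

# Simple-linear-regression diagnostics on synthetic data (illustrative values only).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 30)
y[-1] += 8.0                                        # plant one outlier

X = np.column_stack([np.ones_like(x), x])           # design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T                # hat matrix
leverage = np.diag(H)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
p = X.shape[1]
s2 = resid @ resid / (len(y) - p)                   # residual variance
cooks_d = (resid ** 2 / (p * s2)) * leverage / (1 - leverage) ** 2

print("high-leverage points:", np.where(leverage > 2 * p / len(y))[0])
print("influential points (Cook's D > 4/n):", np.where(cooks_d > 4 / len(y))[0])
```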

  14. Behavioral modeling of the dominant dynamics in input-output transfer of linear(ized) circuits

    NARCIS (Netherlands)

    Beelen, T.G.J.; Maten, ter E.J.W.; Sihaloho, H.J.; Eijndhoven, van S.J.L.

    2010-01-01

    We present a powerful procedure for determining both the dominant dynamics of the input-output transfer and the corresponding most influential circuit parameters of a linear(ized) circuit. The procedure consists of several steps in which a specific (sub)problem is solved and its solution is used in

  15. Wavefront Sensing for WFIRST with a Linear Optical Model

    Science.gov (United States)

    Jurling, Alden S.; Content, David A.

    2012-01-01

    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.

  16. Evaluation of electrical impedance ratio measurements in accuracy of electronic apex locators

    Directory of Open Access Journals (Sweden)

    Pil-Jong Kim

    2015-05-01

    Full Text Available Objectives The aim of this paper was to evaluate the ratios of electrical impedance measurements reported in previous studies through a correlation analysis, in order to establish them as a contributing factor to the accuracy of electronic apex locators (EALs). Materials and Methods The literature regarding electrical property measurements of EALs was screened using Medline and Embase. All data acquired were plotted to identify correlations between impedance and log-scaled frequency. The accuracy of the impedance ratio method used to detect the apical constriction (APC) in most EALs was evaluated using linear ramp function fitting. Changes of impedance ratios for various frequencies were evaluated for a variety of file positions. Results Among the ten papers selected in the search process, the first-order equations between log-scaled frequency and impedance had negative slopes. When the model for the ratios was assumed to be a linear ramp function, the ratio values decreased as the file went deeper, and the average ratio values of the left and right horizontal zones were significantly different in 8 out of 9 studies. The APC was located within the interval of linear relation between the left and right horizontal zones of the linear ramp model. Conclusions Using the ratio method, the APC was located within a linear interval. Therefore, using the impedance ratio between electrical impedance measurements at different frequencies was a robust method for detection of the APC.
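
    A linear ramp model of the kind described (flat zone, linear transition, flat zone) can be fitted with an ordinary nonlinear least-squares routine. The sketch below does this on synthetic ratio-versus-file-position data; the function form, parameter names, and all numbers are assumptions chosen for illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_ramp(d, r_left, r_right, d1, d2):
    """Flat left zone, linear transition between d1 and d2, flat right zone."""
    t = np.clip((d - d1) / (d2 - d1), 0.0, 1.0)
    return r_left + (r_right - r_left) * t

# Synthetic impedance-ratio vs. file-position data (illustrative only).
rng = np.random.default_rng(1)
depth = np.linspace(-3.0, 1.0, 60)                  # mm relative to the apex (assumed axis)
ratio = linear_ramp(depth, 3.0, 1.0, -1.5, 0.0) + 0.05 * rng.standard_normal(depth.size)

popt, _ = curve_fit(linear_ramp, depth, ratio, p0=[3.0, 1.0, -1.0, 0.5])
print("fitted transition interval (mm):", popt[2], "to", popt[3])
```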

  17. Predicting birth weight with conditionally linear transformation models.

    Science.gov (United States)

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.

  18. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    International Nuclear Information System (INIS)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A.; Caloba, L.P.; Mery, D.

    2004-01-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack or lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks, and as many radiographic patterns as possible were used, together with statistical inference techniques based on random selection of samples with and without repositioning (bootstrap), in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)
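
    Bootstrap resampling of a classifier's predictions, as described here, yields both a point estimate and an interval for the accuracy. The sketch below illustrates the procedure on hypothetical predicted-versus-true defect labels; the labels, the roughly 80% agreement rate, and the number of resamples are all invented for the example.

```python
import numpy as np

# Bootstrap estimate of classification accuracy (resampling with replacement),
# applied to hypothetical predicted vs. true defect labels.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 6, 500)                      # 6 defect classes (illustrative)
y_pred = np.where(rng.random(500) < 0.8, y_true, rng.integers(0, 6, 500))

n_boot = 2000
accs = np.empty(n_boot)
for b in range(n_boot):
    idx = rng.integers(0, len(y_true), len(y_true))   # resample indices with replacement
    accs[b] = np.mean(y_true[idx] == y_pred[idx])

print(f"accuracy = {accs.mean():.3f} "
      f"(95% bootstrap interval {np.percentile(accs, 2.5):.3f}-{np.percentile(accs, 97.5):.3f})")
```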

  19. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept., of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept., of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Unversidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack or lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks, and as many radiographic patterns as possible were used, together with statistical inference techniques based on random selection of samples with and without repositioning (bootstrap), in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  20. Increased-accuracy numerical modeling of electron-optical systems with space-charge

    International Nuclear Information System (INIS)

    Sveshnikov, V.

    2011-01-01

    This paper presents a method for improving the accuracy of space-charge computation for electron-optical systems. The method proposes to divide the computational region into two parts: a near-cathode region in which analytical solutions are used and a basic one in which numerical methods compute the field distribution and trace electron ray paths. A numerical method is used for calculating the potential along the interface, which involves solving a non-linear equation. Preliminary results illustrating the improvement of accuracy and the convergence of the method for a simple test example are presented.

  1. Factors affecting GEBV accuracy with single-step Bayesian models.

    Science.gov (United States)

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in terms of single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP with the scenarios of 5 and 50 QTL. SS-BayesB model obtained the lowest accuracy with the 500 QTL in the simulation. SS-BayesA model was the most efficient and robust considering all QTL scenarios. Generally, both the relationships between training and validation populations and LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.

  2. Key Performance Indicators and Analysts' Earnings Forecast Accuracy: An Application of Content Analysis

    OpenAIRE

    Alireza Dorestani; Zabihollah Rezaee

    2011-01-01

    We examine the association between the extent of change in key performance indicator (KPI) disclosures and the accuracy of forecasts made by analysts. KPIs are regarded as improving both the transparency and relevancy of public financial information. The results of using linear regression models show that, contrary to our prediction and the hypothesis of this paper, there is no significant association between the change in non-financial KPI disclosures and the accuracy of analysts' forecasts.

  3. Diagnostic accuracy of sonography for pleural effusion: systematic review

    Directory of Open Access Journals (Sweden)

    Alexandre Grimberg

    Full Text Available CONTEXT AND OBJECTIVE: The initial method for evaluating the presence of pleural effusion was chest radiography. Isolated studies have shown that sonography has greater accuracy than radiography for this diagnosis; however, no systematic reviews on this matter are available in the literature. Thus, the aim of this study was to evaluate the accuracy of sonography in detecting pleural effusion, by means of a systematic review of the literature. DESIGN AND SETTING: This was a systematic review with meta-analysis on accuracy studies. This study was conducted in the Department of Diagnostic Imaging and in the Brazilian Cochrane Center, Discipline of Emergency Medicine and Evidence-Based Medicine, Department of Medicine, Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil. METHOD: The following databases were searched: Cochrane Library, Medline, Web of Science, Embase and Literatura Latino-Americana e do Caribe em Ciências da Saúde (Lilacs). The references of relevant studies were also screened for additional citations of interest. Studies in which the accuracy of sonography for detecting pleural effusion was tested, with an acceptable reference standard (computed tomography or thoracic drainage), were included. RESULTS: Four studies were included. All of them showed that sonography had high sensitivity, specificity and accuracy for detecting pleural effusions. The mean sensitivity was 93% (95% confidence interval, CI: 89% to 96%), and specificity was 96% (95% CI: 95% to 98%). CONCLUSIONS: In different populations and clinical settings, sonography showed consistently high sensitivity, specificity and accuracy for detecting fluid in the pleural space.

  4. Linear ubiquitination signals in adaptive immune responses.

    Science.gov (United States)

    Ikeda, Fumiyo

    2015-07-01

    Ubiquitin can form eight different linkage types of chains using the intrinsic Met 1 residue or one of the seven intrinsic Lys residues. Each linkage type of ubiquitin chain has a distinct three-dimensional topology, functioning as a tag to attract specific signaling molecules, which are so-called ubiquitin readers, and regulates various biological functions. Ubiquitin chains linked via Met 1 in a head-to-tail manner are called linear ubiquitin chains. Linear ubiquitination plays an important role in the regulation of cellular signaling, including the best-characterized tumor necrosis factor (TNF)-induced canonical nuclear factor-κB (NF-κB) pathway. Linear ubiquitin chains are specifically generated by an E3 ligase complex called the linear ubiquitin chain assembly complex (LUBAC) and hydrolyzed by a deubiquitinase (DUB) called ovarian tumor (OTU) DUB with linear linkage specificity (OTULIN). LUBAC linearly ubiquitinates critical molecules in the TNF pathway, such as NEMO and RIPK1. The linear ubiquitin chains are then recognized by the ubiquitin readers, including NEMO, which control the TNF pathway. Accumulating evidence indicates an importance of the LUBAC complex in the regulation of apoptosis, development, and inflammation in mice. In this article, I focus on the role of linear ubiquitin chains in adaptive immune responses with an emphasis on the TNF-induced signaling pathways. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Diagnostic Accuracy of a Self-Report Measure of Patellar Tendinopathy in Youth Basketball.

    Science.gov (United States)

    Owoeye, Oluwatoyosi B A; Wiley, J Preston; Walker, Richard E A; Palacios-Derflingher, Luz; Emery, Carolyn A

    2018-04-27

    Study Design Prospective diagnostic accuracy validation study. Background Engaging clinicians for diagnosis of patellar tendinopathy in large surveillance studies is often impracticable. A self-report measure, the Oslo Sports Trauma Research Centre patellar tendinopathy (OSTRC-P) Questionnaire, an adaptation of the OSTRC Questionnaire, may provide a viable alternative. Objectives To evaluate the diagnostic accuracy of the OSTRC-P Questionnaire in detecting patellar tendinopathy in youth basketball players when compared to clinical evaluation. Methods Following the Standards for Reporting of Diagnostic Accuracy Studies guidelines, 208 youth basketball players (aged 13-18 years) were recruited. Participants completed the OSTRC-P Questionnaire (index test) prior to a clinical evaluation (reference standard) by a physiotherapist blinded to the OSTRC-P Questionnaire results. Sensitivity, specificity, predictive values (PVs), likelihood ratios (LRs) and posttest probabilities were calculated. Linear regression was used to examine the association between the OSTRC-P Questionnaire severity score and the patellar tendinopathy severity rating during a single leg decline squat (SLDS). Results The final analysis included 169 players. The OSTRC-P Questionnaire had a sensitivity of 79% (95%CI: 65%, 90%), a specificity of 98% (95%CI: 94%, 100%), a positive PV of 95%, a negative PV of 92%, a positive LR of 48 and a negative LR of 0.21. The posttest probabilities were 95% and 8% given positive and negative results, respectively. A positive association was found between the OSTRC-P Questionnaire and the SLDS rating (β = .08; 95%CI: .03, .12; p = .001). Conclusions The OSTRC-P Questionnaire is an acceptable alternative to clinical evaluation for self-reporting patellar tendinopathy and grading its severity in settings involving youth basketball players. Level of Evidence Diagnosis, level 1b. J Orthop Sports Phys Ther, Epub 27 Apr 2018. doi:10.2519/jospt.2018.8088.
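
    Likelihood ratios and posttest probabilities follow from the reported sensitivity and specificity plus a pretest probability. The sketch below shows the arithmetic; the 0.30 pretest probability is an assumed value for illustration, and the rounded sensitivity/specificity inputs give a slightly different LR+ than the study's value computed from exact counts.

```python
# Likelihood ratios and posttest probabilities from sensitivity/specificity.
sens, spec = 0.79, 0.98

lr_pos = sens / (1 - spec)          # rounded inputs give ~40; the study reports 48
lr_neg = (1 - sens) / spec          # ~0.21, matching the reported value

pretest = 0.30                      # assumed pretest probability (not from the study)
odds = pretest / (1 - pretest)
post_pos = (odds * lr_pos) / (1 + odds * lr_pos)
post_neg = (odds * lr_neg) / (1 + odds * lr_neg)
print(f"LR+={lr_pos:.0f}, LR-={lr_neg:.2f}, "
      f"posttest(+)={post_pos:.0%}, posttest(-)={post_neg:.0%}")
```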

  6. Verification of Accuracy of CyberKnife Tumor-tracking Radiation Therapy Using Patient-specific Lung Phantoms

    International Nuclear Information System (INIS)

    Jung, Jinhong; Song, Si Yeol; Yoon, Sang Min; Kwak, Jungwon; Yoon, KyoungJun; Choi, Wonsik; Jeong, Seong-Yun; Choi, Eun Kyung; Cho, Byungchul

    2015-01-01

    Purpose: To investigate the accuracy of the CyberKnife Xsight Lung Tracking System (XLTS) compared with that of a fiducial-based target tracking system (FTTS) using patient-specific lung phantoms. Methods and Materials: Three-dimensional printing technology was used to make individualized lung phantoms that closely mimicked the lung anatomy of actual patients. Based on planning computed tomographic data from 6 lung cancer patients who underwent stereotactic ablative radiation therapy using the CyberKnife, the volume above a certain Hounsfield unit (HU) was assigned as the structure to be filled uniformly with polylactic acid material by a 3-dimensional printer (3D Edison, Lokit, Korea). We evaluated the discrepancies between the measured and modeled target positions, representing the total tracking error, using 3 log files that were generated during each treatment for both the FTTS and the XLTS. We also analyzed the γ index between the film dose measured under the FTTS and XLTS. Results: The overall mean values and standard deviations of total tracking errors for the FTTS were 0.36 ± 0.39 mm, 0.15 ± 0.64 mm, and 0.15 ± 0.62 mm for the craniocaudal (CC), left–right (LR), and anteroposterior (AP) components, respectively. Those for the XLTS were 0.38 ± 0.54 mm, 0.13 ± 0.18 mm, and 0.14 ± 0.37 mm for the CC, LR, and AP components, respectively. The average of γ passing rates was 100% for the criteria of 3%, 3 mm; 99.6% for the criteria of 2%, 2 mm; and 86.8% for the criteria of 1%, 1 mm. Conclusions: The XLTS has segmentation accuracy comparable with that of the FTTS and small total tracking errors

  7. Verification of Accuracy of CyberKnife Tumor-tracking Radiation Therapy Using Patient-specific Lung Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Jinhong [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Department of Radiation Oncology, Kyung Hee University Medical Center, Kyung Hee University School of Medicine, Seoul (Korea, Republic of); Song, Si Yeol, E-mail: coocoori@gmail.com [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Yoon, Sang Min; Kwak, Jungwon; Yoon, KyoungJun [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Wonsik [Department of Radiation Oncology, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung (Korea, Republic of); Jeong, Seong-Yun [Asan Institute for Life Science, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Eun Kyung; Cho, Byungchul [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of)

    2015-07-15

    Purpose: To investigate the accuracy of the CyberKnife Xsight Lung Tracking System (XLTS) compared with that of a fiducial-based target tracking system (FTTS) using patient-specific lung phantoms. Methods and Materials: Three-dimensional printing technology was used to make individualized lung phantoms that closely mimicked the lung anatomy of actual patients. Based on planning computed tomographic data from 6 lung cancer patients who underwent stereotactic ablative radiation therapy using the CyberKnife, the volume above a certain Hounsfield unit (HU) was assigned as the structure to be filled uniformly with polylactic acid material by a 3-dimensional printer (3D Edison, Lokit, Korea). We evaluated the discrepancies between the measured and modeled target positions, representing the total tracking error, using 3 log files that were generated during each treatment for both the FTTS and the XLTS. We also analyzed the γ index between the film dose measured under the FTTS and XLTS. Results: The overall mean values and standard deviations of total tracking errors for the FTTS were 0.36 ± 0.39 mm, 0.15 ± 0.64 mm, and 0.15 ± 0.62 mm for the craniocaudal (CC), left–right (LR), and anteroposterior (AP) components, respectively. Those for the XLTS were 0.38 ± 0.54 mm, 0.13 ± 0.18 mm, and 0.14 ± 0.37 mm for the CC, LR, and AP components, respectively. The average of γ passing rates was 100% for the criteria of 3%, 3 mm; 99.6% for the criteria of 2%, 2 mm; and 86.8% for the criteria of 1%, 1 mm. Conclusions: The XLTS has segmentation accuracy comparable with that of the FTTS and small total tracking errors.

  8. Identification of lung cancer with high sensitivity and specificity by blood testing

    Directory of Open Access Journals (Sweden)

    Stephan Bernhard

    2010-02-01

    Full Text Available Abstract Background Lung cancer is a very frequent and lethal tumor with an identifiable risk population. Cytological analysis and chest X-ray failed to reduce mortality, and CT screenings are still controversially discussed. Recent studies provided first evidence for the potential usefulness of autoantigens as markers for lung cancer. Methods We used extended panels of arrayed antigens and determined autoantibody signatures of sera from patients with different kinds of lung cancer, different common non-tumor lung pathologies, and controls without any lung disease by a newly developed computer-aided image analysis procedure. The resulting signatures were classified using linear kernel Support Vector Machines and 10-fold cross-validation. Results The novel approach allowed for discriminating lung cancer patients from controls without any lung disease with a specificity of 97.0%, a sensitivity of 97.9%, and an accuracy of 97.6%. The classification of stage IA/IB tumors and controls yielded a specificity of 97.6%, a sensitivity of 75.9%, and an accuracy of 92.9%. The discrimination of lung cancer patients from patients with non-tumor lung pathologies reached an accuracy of 88.5%. Conclusion We were able to separate lung cancer patients from subjects without any lung disease with high accuracy. Furthermore, lung cancer patients could be separated from patients with other non-tumor lung diseases. These results provide clear evidence that blood-based tests open new avenues for the early diagnosis of lung cancer.
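
    The classification pipeline described (linear-kernel SVM with 10-fold cross-validation) can be sketched as below. The feature matrix here is synthetic noise with a weak injected class signal standing in for the autoantibody signatures; the dimensions, seed, and regularization constant are assumptions, not settings from the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for autoantibody reactivity signatures (patients x antigens).
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 300))
y = rng.integers(0, 2, 120)                    # 1 = lung cancer, 0 = control (illustrative)
X[y == 1, :10] += 0.8                          # inject a weak class signal

clf = SVC(kernel="linear", C=1.0)              # linear-kernel Support Vector Machine
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```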

  9. Investigating circular patterns in linear polarization observations of Venus

    NARCIS (Netherlands)

    Mahapatra, G.; Stam, D.M.; Rossi, L.C.G.; Rodenhuis, M.; Snik, Frans; Keller, C.U.

    2017-01-01

    In this work, we analyse linear polarization data of the planet at a distance, obtained with the Extreme Polarimeter (ExPo) on the William Herschel Telescope on La Palma. These spatially resolved, high-accuracy polarization observations of Venus show faint circular patterns centered on the

  10. Non Linear Beam Dynamics Studies at SPEAR

    International Nuclear Information System (INIS)

    Terebilo, A.; Pellegrini, C.; Cornacchia, M.; Corbett, J.; Martin, D.

    2011-01-01

    The frequency map analysis of a Hamiltonian system, recently introduced to accelerator physics, in combination with turn-by-turn phase space measurements opens new experimental opportunities for studying non-linear dynamics in storage rings. In this paper we report on the experimental program at SPEAR having the goal of measuring the frequency map of the machine. We discuss the accuracy of the instantaneous tune extraction from experimental data and demonstrate the possibility of the frequency map measurement. The instantaneous tune extraction technique can be applied to experimental tracking data with reasonable accuracy. The frequency map can be experimentally determined using the existing turn-by-turn phase space measurement techniques and NAFF instantaneous tune extraction.

  11. Improvement of linear reactivity methods and application to long range fuel management

    International Nuclear Information System (INIS)

    Woehlke, R.A.; Quan, B.L.

    1982-01-01

    The original development of the linear reactivity theory assumes flat burnup, batch by batch. The validity of this assumption is explored using multicycle burnup data generated with a detailed 3-D SIMULATE model. The results show that the linear reactivity method can be improved by correcting for batchwise power sharing. The application of linear reactivity to long range fuel management is demonstrated in several examples. Correcting for batchwise power sharing improves the accuracy of the analysis. However, with regard to the sensitivity of fuel cost to changes in various parameters, the corrected and uncorrected linear reactivity theories give remarkably similar results

  12. Stability and complexity of small random linear systems

    Science.gov (United States)

    Hastings, Harold

    2010-03-01

    We explore the stability of the small random linear systems, typically involving 10-20 variables, motivated by dynamics of the world trade network and the US and Canadian power grid.

  13. linear-quadratic-linear model

    Directory of Open Access Journals (Sweden)

    Tanwiwat Jaikuna

    2017-02-01

    Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The EQD2 from the software calculation and the manual calculation was not significantly different (0.00% difference), with p-values of 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
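
    For context, the conversions referred to here rest on the standard linear-quadratic relations below, with n fractions of dose d and a tissue-specific α/β ratio; the LQL model additionally modifies the quadratic term above a transition dose per fraction, a detail not reproduced in this sketch.

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\mathrm{EQD}_{2} = \frac{\mathrm{BED}}{1 + \dfrac{2\,\mathrm{Gy}}{\alpha/\beta}} .
```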

  14. Accuracy increase of the coordinate measurement based on the model production of geometrical parts specifications

    Science.gov (United States)

    Zlatkina, O. Yu

    2018-04-01

    There is a relationship between the service properties of component parts and their geometry; therefore, to predict and control the operational characteristics of parts and machines, it is necessary to measure their geometrical specifications. In modern production, the coordinate measuring machine is the most advanced instrument for measuring the geometrical specifications of products. The analysis of publications has shown that, for coordinate measurements, the problems of choosing the locating chart of parts and of coordination have not been sufficiently studied. A special role in the coordination of the part is played by the informational content of the coordinate axes. Informational content is the sum of the degrees of freedom limited by the elementary item of a part. The coordinate planes of a rectangular coordinate system have different informational content (three, two, and one). The coordinate axes have informational content of four, two and zero. The higher the informational content of a coordinate plane or axis, the higher its priority for reading angular and linear coordinates. Producing a geometrical model of the object of coordinate measurement that takes into account the informational content of the coordinate planes and coordinate axes allows us to clearly reveal the interrelationship between the coordinates, the deviations in location, the sizes, and the deviations of surface form. The geometrical model helps to select the optimal locating chart of parts for bringing the machine coordinate system to the part coordinate system. The article presents an algorithm for the production of a model of geometrical specifications, using the example of a compressor piston rod.

  15. Gauss Elimination: Workhorse of Linear Algebra.

    Science.gov (United States)

    1995-08-05

    linear algebra computation for solving systems, computing determinants and determining the rank of a matrix. All of these are discussed in varying contexts. These include different arithmetic or algebraic settings such as integer arithmetic or polynomial rings as well as conventional real (floating-point) arithmetic. These have effects on both the accuracy and complexity analyses of the algorithm. These, too, are covered here. The impact of modern parallel computer architecture on GE is also
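
    As a concrete reference for the floating-point setting mentioned above, the sketch below implements Gaussian elimination with partial pivoting for a small dense system; it is a textbook illustration, not code from the report.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (illustrative)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # choose pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):                    # eliminate below the pivot
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))            # expected [2, 3, -1]
print(np.linalg.solve(A, b))        # reference solution
```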

  16. Stability of Linear Equations--Algebraic Approach

    Science.gov (United States)

    Cherif, Chokri; Goldstein, Avraham; Prado, Lucio M. G.

    2012-01-01

    This article could be of interest to teachers of applied mathematics as well as to people who are interested in applications of linear algebra. We give a comprehensive study of linear systems from an application point of view. Specifically, we give an overview of linear systems and problems that can occur with the computed solution when the…

  17. Simulations of pulsating one-dimensional detonations with true fifth order accuracy

    International Nuclear Information System (INIS)

    Henrick, Andrew K.; Aslam, Tariq D.; Powers, Joseph M.

    2006-01-01

    A novel, highly accurate numerical scheme based on shock-fitting coupled with fifth order spatial and temporal discretizations is applied to a classical unsteady detonation problem to generate solutions with unprecedented accuracy. The one-dimensional reactive Euler equations for a calorically perfect mixture of ideal gases whose reaction is described by single-step irreversible Arrhenius kinetics are solved in a series of calculations in which the activation energy is varied. In contrast with nearly all known simulations of this problem, which converge at a rate no greater than first order as the spatial and temporal grid is refined, the present method is shown to converge at a rate consistent with the fifth order accuracy of the spatial and temporal discretization schemes. This high accuracy enables more precise verification of known results and prediction of heretofore unknown phenomena. To five significant figures, the scheme faithfully recovers the stability boundary, growth rates, and wave-numbers predicted by an independent linear stability theory in the stable and weakly unstable regime. As the activation energy is increased, a series of period-doubling events are predicted, and the system undergoes a transition to chaos. Consistent with general theories of non-linear dynamics, the bifurcation points are seen to converge at a rate for which the Feigenbaum constant is 4.66 ± 0.09, in close agreement with the true value of 4.669201... As activation energy is increased further, domains are identified in which the system undergoes a transition from a chaotic state back to one whose limit cycles are characterized by a small number of non-linear oscillatory modes. This result is consistent with behavior of other non-linear dynamical systems, but not typically considered in detonation dynamics. The period and average detonation velocity are calculated for a variety of asymptotically stable limit cycles. The average velocity for such pulsating detonations is found
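
    The convergence rate quoted here is the ratio of successive bifurcation intervals. Writing E_n for the activation energy at the n-th period-doubling point (notation assumed for this note), the estimate of the Feigenbaum constant is

```latex
\delta = \lim_{n \to \infty} \frac{E_{n} - E_{n-1}}{E_{n+1} - E_{n}} \approx 4.669201\ldots
```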

  18. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    Science.gov (United States)

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
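
    A minimal sketch of the leave-one-out evaluation of a power-fit model of this kind is given below, assuming a per-day log-linear relation between initial and current tumor volume; the synthetic volumes, the shrinkage rate, and the error metric are illustrative assumptions rather than the study's data or exact models.

```python
import numpy as np

# Leave-one-out evaluation of a per-day power-fit model V_t = c_t * V_0**p_t,
# fitted in log space. All numbers below are synthetic.
rng = np.random.default_rng(3)
n_tumors, n_days = 35, 30
V0 = rng.uniform(5.0, 60.0, n_tumors)                       # initial volumes (cc)
days = np.arange(n_days)
V = V0[:, None] * np.exp(-0.02 * days)[None, :] * rng.lognormal(0, 0.05, (n_tumors, n_days))

errors = []
for i in range(n_tumors):                                   # leave one tumor out
    train = np.delete(np.arange(n_tumors), i)
    for t in range(n_days):
        # log-linear (power) fit: log V_t = log c_t + p_t * log V_0
        p_t, logc_t = np.polyfit(np.log(V0[train]), np.log(V[train, t]), 1)
        pred = np.exp(logc_t) * V0[i] ** p_t
        errors.append(abs(pred - V[i, t]) / V[i, t])

print(f"mean absolute relative error: {np.mean(errors):.1%}")
```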

  19. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models

    International Nuclear Information System (INIS)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Court, Laurence E.

    2014-01-01

    Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography

  20. Geometric modeling in the problem of ball bearing accuracy

    Science.gov (United States)

    Glukhov, V. I.; Pushkarev, V. V.; Khomchenko, V. G.

    2017-06-01

    The manufacturing quality of ball bearings is an urgent problem for the machine-building industry. The aim of the research is to improve the accuracy of the geometric specifications of bearings based on an evidence-based systematic approach and a method of adequately modelling the size, location and form deviations of the rings and assembled ball bearings. The present work addressed the problem of identifying the geometric specifications of bearings and studying these specifications. The deviation from the plane of symmetry of the rings and of the bearing assembly, and the mounting width, are among these specifications. A systematic approach to normalizing the values and tolerances of ball-bearing geometric specifications in coordinate systems will improve the quality of bearings by optimizing and minimizing the number of specifications. The introduction of this systematic approach into the international standards on rolling bearings would guarantee a significant increase in the accuracy of bearings and in the quality of the products where they are applied.

  1. Accuracy of recommended sampling and assay methods for the determination of plasma-free and urinary fractionated metanephrines in the diagnosis of pheochromocytoma and paraganglioma: a systematic review.

    Science.gov (United States)

    Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme

    2017-06-01

    To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Accuracy Studies group criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling (95 vs. 89%) and for supine sampling compared with 24-h urine (95 vs. 90%); [...] for supine sampling, seated sampling, and urine. Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provides the highest accuracy at all prevalence rates.

  2. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the refining process of the geometry features in a geospatial dataset to improve their actual position. This actual position relates to the absolute position in a specific coordinate system and to the relation to the neighborhood features. With the growth of spatially based technology, especially Geographical Information System (GIS) and Global Navigation Satellite System (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset and a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Square Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset known as the National Digital Cadastral Database (NDCDB) is then used as the benchmark to validate the results. It was found that the proposed technique is highly feasible for positional accuracy improvement of legacy spatial datasets.

  3. Anthropomorphic Phantoms for Confirmation of Linear Accelerator-Based Small Animal Irradiation.

    Science.gov (United States)

    Perks, Julian R; Lucero, Steven; Monjazeb, Arta M; Li, Jian Jian

    2015-03-01

    Three dimensional (3D) scanning and printing technology is utilized to create phantom models of mice in order to assess the accuracy of ionizing radiation dosing from a clinical, human-based linear accelerator. Phantoms are designed to simulate a range of research questions, including irradiation of lung tumors and primary subcutaneous or orthotopic tumors for immunotherapy experimentation. The phantoms are used to measure the accuracy of dose delivery and then refine it to within 1% of the prescribed dose.

  4. Online correction of scanning probe microscopes with pixel accuracy

    DEFF Research Database (Denmark)

    Dirscherl, Kai

    2000-01-01

    of 10 nm and an opening angle of 30.0 °. Even atomic resolution can be achieved. The scan movement of the tip is not linear, however. This is caused by the propelling device of the SPM for the scan motion - a piezoelectric ceramic. The two major non-linear responses of the piezo to the applied control.... The algorithm typically contains 5 - 7 parameters which have to be calibrated manually. Still, non-linear errors remain in the order of 1-2%. One pixel in a 512 x 512 image corresponds to 0.2% per direction. This goal of measurement accuracy is reached with the algorithm developed in this thesis. Three different SPM are analyzed for their non-linearity. Two commercial tube scanners are applied with a maximum scan range in x and y of 40.0 µm and 160.0 µm as well as one specially designed stack scanner with a maximum range of 5.0 µm. For the tube scanners, a 1-dimensional line pattern with a reference

  5. Gain scheduled linear quadratic control for quadcopter

    Science.gov (United States)

    Okasha, M.; Shah, J.; Fauzi, W.; Hanouf, Z.

    2017-12-01

    This study exploits the dynamics and control of quadcopters using the Linear Quadratic Regulator (LQR) control approach. The quadcopter's mathematical model is derived using the Newton-Euler method. It is a highly manoeuvrable, nonlinear, coupled model with six degrees of freedom (DOF), which includes aerodynamics and detailed gyroscopic moments that are often ignored in much of the literature. The linearized model is obtained and characterized by the heading angle (i.e. yaw angle) of the quadcopter. The adopted control approach utilizes the LQR method to track several reference trajectories, including circle and helix curves with significant variation in the yaw angle. The controller is modified to overcome difficulties related to the continuous changes in the operating points and to eliminate the chattering and discontinuity observed in the control input signal. Numerical non-linear simulations are performed using MATLAB and Simulink to illustrate the accuracy and effectiveness of the proposed controller.
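
    For readers unfamiliar with the LQR machinery, the sketch below computes an LQR gain for a toy double-integrator altitude model rather than the paper's full six-DOF quadcopter dynamics; the A, B, Q, and R matrices are assumptions chosen only for illustration. Gain scheduling of the kind described would repeat this computation at several yaw-angle operating points and interpolate between the resulting gains.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy altitude model: state x = [z, z_dot], input u = net vertical thrust (assumed).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])          # state weighting
R = np.array([[0.1]])             # control weighting

P = solve_continuous_are(A, B, Q, R)          # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)               # LQR gain, control law u = -K x
print("LQR gain K:", K)

# Closed-loop eigenvalues should all have negative real parts (stable regulator).
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```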

  6. Radiation load of workers on linear accelerators

    International Nuclear Information System (INIS)

    Kralik, G.; Kantova, Z.; Fribertova, M.; Kontrisova, K.

    2014-01-01

    Burden of health care personnel working on linear accelerators. New examination and treatment methods enable a reduction in the number of health care personnel even in the case of increased numbers of examined patients. However, the question of determining the effective dose delivered to health care personnel remains open. The employment of several methods of evaluation of the received dose at one workplace makes it possible to compare the accuracy and reliability of the respective types of measuring devices, as well as to point out the pitfalls of their use. At the St. Elizabeth Cancer Institute we compared the results of measurements of TL dosimeters and OSL dosimeters at workplaces with linear accelerators. (authors)

  7. Method of Increasing Identification Accuracy under Experimental Tests of Dynamic Objects

    Directory of Open Access Journals (Sweden)

    Y. N. Pavlov

    2015-01-01

    Full Text Available The work concerns the problem of increasing the identification accuracy of linear dynamic systems on the basis of experimental data obtained by applying test signals to the system. The work is aimed at considering the possibility of using experimentally obtained hodograph points to determine the parameters of such a system, in the specific context of a second-order linear dynamic system. It was proposed to use the method of harmonic linearization and the described cut method. The type of the frequency transfer function of the identified system was assumed to be known. It was supposed that, when the frequency characteristics of a real system are obtained, disturbances interfere with the experiment, as a result of which the points of the experimentally obtained hodograph are randomly displaced. A solution of the identification problem was sought in the class of hodographs defined by a system model having the same type of frequency transfer function as the identified system. The unknown coefficients of the frequency transfer function of the system model were found by minimizing a proximity criterion (a measure of closeness between the experimentally obtained hodograph of the system and the hodograph of the system model over the entire set of points); one of the authors described this criterion in an earlier publication. The solution of the problem of nonlinear dynamic system identification by the frequency hodograph was thereby reduced to solving a system of equations with respect to the unknown linear parameters of the frequency transfer function of the system model. A program was developed for a second-order dynamic system to simulate pseudo-experimental data containing random errors and to determine the parameters of this system. A computing experiment was conducted to estimate the error with which the offered algorithm determines the values of the system parameters.

  8. Characterization and specificity of the linear epitope of the enterovirus 71 VP2 protein

    Directory of Open Access Journals (Sweden)

    Kiener Tanja K

    2012-02-01

    Full Text Available Abstract Background Enterovirus 71 (EV71) has emerged as a major causative agent of hand, foot and mouth disease in the Asia-Pacific region over the last decade. Hand, foot and mouth disease can be caused by different etiological agents from the enterovirus family, mainly EV71 and coxsackieviruses, which are genetically closely related. Nevertheless, infection with EV71 may occasionally lead to high fever, neurologic complications and the emergence of a rapidly fatal syndrome of pulmonary edema associated with brainstem encephalitis. The rapid progression and high mortality of severe EV71 infection has highlighted the need for EV71-specific diagnostic and therapeutic tools. Monoclonal antibodies are urgently needed to specifically detect EV71 antigens from patient specimens early in the infection process. Furthermore, the elucidation of viral epitopes will contribute to the development of targeted therapeutics and vaccines. Results We have identified the monoclonal antibody 7C7 from a screen of hybridoma cells derived from mice immunized with the EV71-B5 strain. The linear epitope of 7C7 was mapped to amino acids 142-146 (EDSHP) of the VP2 capsid protein and was characterized in detail. Mutational analysis of the epitope showed that the aspartic acid to asparagine mutation of the EV71 subgenogroup A (BrCr) strain did not interfere with antibody recognition. In contrast, the serine to threonine mutation at position 144 of VP2, present in recently emerged EV71-C4 China strains, abolished antigenicity. Mice injected with this virus strain did not produce any antibodies against the VP2 protein. Immunofluorescence and Western blotting confirmed that 7C7 specifically recognized EV71 subgenogroups and did not cross-react to Coxsackieviruses 4, 6, 10, and 16. 7C7 was successfully used as a detection antibody in an antigen-capture ELISA assay. Conclusions Detailed mapping showed that the VP2 protein of Enterovirus 71 contains a single, linear, non

  9. Patient-specific guides do not improve accuracy in total knee arthroplasty: a prospective randomized controlled trial.

    Science.gov (United States)

    Victor, Jan; Dujardin, Jan; Vandenneucker, Hilde; Arnout, Nele; Bellemans, Johan

    2014-01-01

    Recently, patient-specific guides (PSGs) have been introduced, claiming a significant improvement in accuracy and reproducibility of component positioning in TKA. Despite intensive marketing by the manufacturers, this claim has not yet been confirmed in a controlled prospective trial. We (1) compared three-planar component alignment and overall coronal mechanical alignment between PSG and conventional instrumentation and (2) logged the need for applying changes in the suggested position of the PSG. In this randomized controlled trial, we enrolled 128 patients. In the PSG cohort, surgical navigation was used as an intraoperative control. When the suggested cut deviated more than 3° from target, the use of PSG was abandoned and marked as an outlier. When cranial-caudal position or size was adapted, the PSG was marked as modified. All patients underwent long-leg standing radiography and CT scan. Deviation of more than 3° from the target in any plane was defined as an outlier. The PSG and conventional cohorts showed similar numbers of outliers in overall coronal alignment (25% versus 28%; p = 0.69), femoral coronal alignment (7% versus 14%; p = 0.24), and femoral axial alignment (23% versus 17%; p = 0.50). There were more outliers in tibial coronal (15% versus 3%; p = 0.03) and sagittal (21% versus 3%; p = 0.002) alignment in the PSG group than in the conventional group. PSGs were abandoned in 14 patients (22%) and modified in 18 (28%). PSGs do not improve accuracy in TKA and, in our experience, were somewhat impractical in that the procedure needed to be either modified or abandoned with some frequency.

  10. What is the Best Model Specification and Earth Observation Product for Predicting Regional Grain Yields in Food Insecure Countries?

    Science.gov (United States)

    Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.

    2017-12-01

    We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatial and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for prediction. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
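
    The comparison described above can be pictured with a small out-of-sample experiment. In the sketch below, a linear model and a non-linear model are scored by cross-validation on synthetic stand-ins for county-level predictors and yields; the actual panel structure, earth observation products and model specifications used by the authors are not reproduced.

        # Sketch: compare out-of-sample R^2 of a linear and a non-linear yield model
        # on synthetic data (three made-up remotely sensed predictors).
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 3))               # e.g. rainfall, temperature, soil moisture
        y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 2] + 0.3 * rng.standard_normal(300)

        for name, model in [("linear", LinearRegression()),
                            ("random forest", RandomForestRegressor(random_state=0))]:
            scores = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(f"{name}: mean out-of-sample R^2 = {scores.mean():.2f}")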

  11. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship.

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) for estimating bench press 1 repetition maximum (1 RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1 RM test and a force-velocity test using a bench press lifting task in random order. Mean 1 RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1 RM estimated by the Musclelab software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1 RM were very highly correlated (r = 0.93). The findings indicated that 1 RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but also that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow 1 RM to be estimated from the force-velocity relationship. These estimations are valid. However, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1 RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach in the monitoring of training-induced adaptations.
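
    A common way to obtain such an estimate, sketched below under stated assumptions, is to regress movement velocity on load and extrapolate to an assumed minimal-velocity threshold; the threshold value and the sample data are illustrative rather than taken from the study, and the Musclelab software may implement the extrapolation differently.

        # Sketch: estimate bench press 1 RM from a load-velocity regression,
        # using a hypothetical minimal velocity threshold of 0.17 m/s.
        import numpy as np

        loads = np.array([20.0, 30.0, 40.0, 50.0, 60.0])       # kg lifted during the test
        velocities = np.array([1.30, 1.05, 0.82, 0.60, 0.38])  # mean concentric velocity, m/s

        slope, intercept = np.polyfit(loads, velocities, 1)    # v = slope * load + intercept
        v_min = 0.17                                            # assumed velocity at 1 RM
        one_rm = (v_min - intercept) / slope
        print(f"estimated 1 RM: {one_rm:.1f} kg")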

  12. Diagnostic accuracy of MRCP in choledocholithiasis

    International Nuclear Information System (INIS)

    Guarise, Alessandro; Mainardi, Paride; Baltieri, Susanna; Faccioli, Niccolo'

    2005-01-01

    Purpose: To evaluate the accuracy of MRCP in diagnosing choledocholithiasis considering Endoscopic Retrograde Cholangiopancreatography (ERCP) as the gold standard. To compare the results achieved during the first two years of use (1999-2000) of Magnetic Resonance Cholangiopancreatography (MRCP) in patients with suspected choledocholithiasis with those achieved during the following two years (2001-2002) in order to establish the repeatability and objectivity of MRCP results. Materials and methods: One hundred and seventy consecutive patients underwent MRCP followed by ERCP within 72 h. In 22/170 (13%) patients ERCP was unsuccessful for different reasons. MRCP was performed using a 1.5 T magnet with both multi-slice HASTE sequences and a thick-slice projection technique. Choledocholithiasis was diagnosed in the presence of signal void images in the dependent portion of the duct surrounded by hyperintense bile and detected in at least two projections. The MRCP results, read independently from the ERCP results, were compared in the two different and subsequent periods. Results: ERCP confirmed choledocholithiasis in 87 patients. In these cases the results of MRCP were the following: 78 true positives, 53 true negatives, 7 false positives, and 9 false negatives. The sensitivity, specificity and accuracy were 90%, 88% and 89%, respectively. After the exclusion of stones with diameters smaller than 6 mm, the sensitivity, specificity and accuracy were 100%, 99% and 99%, respectively. MRCP accuracy was related to the size of the stones. There was no statistically significant difference between the results obtained in the first two-year period and those obtained in the second period. Conclusions: MRCP is sufficiently accurate to replace ERCP in patients with suspected choledocholithiasis. The results are related to the size of stones. The use of well-defined radiological signs allows good diagnostic accuracy independent of the learning curve.

  13. On the accuracy and efficiency of finite difference solutions for nonlinear waves

    DEFF Research Database (Denmark)

    Bingham, Harry B.

    2006-01-01

    -uniform grid. Time-integration is performed using a fourth-order Runge-Kutta scheme. The linear accuracy, stability and convergence properties of the method are analyzed in two-dimensions, and high-order schemes with a stretched vertical grid are found to be advantageous relative to second-order schemes...... on an even grid. Comparison with highly accurate periodic solutions shows that these conclusions carry over to nonlinear problems. The combination of non-uniform grid spacing in the vertical and fourth-order schemes is suggested as providing an optimal balance between accuracy and complexity for practical...

  14. Simulation of a medical linear accelerator for teaching purposes.

    Science.gov (United States)

    Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco

    2015-05-08

    Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.

  15. Sensitivity, specificity, predictive value and accuracy of ultrasonography in pregnancy rate prediction in Sahelian goats after progesterone impregnated sponge synchronization

    Directory of Open Access Journals (Sweden)

    Justin Kouamo

    2014-09-01

    Full Text Available Aim: This study aimed to evaluate the sensitivity, specificity, predictive value and accuracy of ultrasonography in pregnancy rate (PR) prediction in Sahelian goats after progesterone impregnated sponge synchronization within the framework of a caprine artificial insemination (AI) program in Fatick (Senegal). Materials and Methods: Of 193 candidate goats in the AI program, 167 were selected (day -50) in six villages. Estrus was synchronized by progesterone impregnated sponges installed for 11 days. Two days before the time of sponge removal (day -4), each goat was treated with 500 IU of equine chorionic gonadotropin and 50 μg of d-cloprostenol. All goats were inseminated (day 0) with Alpine goat semen from France at 45±3 h after sponge removal (day -2). Real-time B-mode ultrasonography was performed at day -50, day -13, day 0, day 40 and day 60 relative to AI. Results: Selection rate, estrus response rate, AI rate, and PR at day 40 and day 60 were 86.53%; 71.85%; 83.34%; 51% and 68% (p<0.05), respectively. Values of sensitivity, specificity, positive and negative predictive value, accuracy, total conformity, conformity of correct positive, conformity of correct negative and discordance of pregnancy diagnosis by trans-abdominal ultrasonography (TU) were 98.03%; 63.26%; 73.52%; 3.12%; 81%; 81%; 50%; 31% and 19%, respectively. Conclusion: These results indicate that TU can be performed in goats under traditional conditions and emphasize the importance of re-examination of goats with negative or doubtful TU diagnoses performed at day 40 post-AI.
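
    The diagnostic indices reported above all derive from the pregnancy-diagnosis confusion matrix. The sketch below shows the standard formulas; the counts are made up for illustration and are not the study's data.

        # Sketch: sensitivity, specificity, predictive values and accuracy from a
        # confusion matrix (tp/fp/fn/tn counts are illustrative only).
        def diagnostic_indices(tp, fp, fn, tn):
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),          # positive predictive value
                "npv": tn / (tn + fn),          # negative predictive value
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
            }

        print(diagnostic_indices(tp=40, fp=10, fn=5, tn=45))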

  16. Noiseless Vlasov–Poisson simulations with linearly transformed particles

    Energy Technology Data Exchange (ETDEWEB)

    Campos Pinto, Martin, E-mail: campos@ann.jussieu.fr [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Sonnendrücker, Eric, E-mail: sonnen@math.unistra.fr [IRMA, UMR 7501, Université de Strasbourg and CNRS, 7 rue René Descartes, F-67084 Strasbourg Cedex (France); Project-team CALVI, INRIA Nancy Grand Est, 7 rue René Descartes, F-67084 Strasbourg Cedex (France); Friedman, Alex, E-mail: af@llnl.gov [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Grote, David P., E-mail: grote1@llnl.gov [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lund, Steve M., E-mail: smlund@llnl.gov [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2014-10-15

    We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice, the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Benchmarked test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.

  17. A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.

    Science.gov (United States)

    Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua

    2017-01-01

    The number of Alzheimer's disease patients is increasing rapidly every year, and researchers increasingly use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using pseudo Zernike moments. They tested four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, by resilient backpropagation, and by scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier, linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Therefore, it can be used to detect Alzheimer's disease. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
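
    Linear regression classification, the classifier named above, assigns a query to the class whose training features reconstruct it best in a least-squares sense. The sketch below illustrates the idea on synthetic 256-dimensional feature vectors; the data, class labels and feature values are stand-ins, not the pseudo Zernike features or images of the study.

        # Sketch: linear regression classification (LRC) on synthetic features.
        import numpy as np

        def lrc_predict(query, class_matrices):
            """class_matrices maps label -> (d, n_c) matrix of training feature columns."""
            best_label, best_residual = None, np.inf
            for label, X in class_matrices.items():
                beta, *_ = np.linalg.lstsq(X, query, rcond=None)   # regress query on class subspace
                residual = np.linalg.norm(query - X @ beta)
                if residual < best_residual:
                    best_label, best_residual = label, residual
            return best_label

        rng = np.random.default_rng(0)
        classes = {0: rng.normal(0.0, 1.0, (256, 20)),   # toy "healthy" features
                   1: rng.normal(1.0, 1.0, (256, 20))}   # toy "pathological" features
        query = rng.normal(1.0, 1.0, 256)
        print("predicted class:", lrc_predict(query, classes))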

  18. Linear variable differential transformer sensor using glass-covered amorphous wires as active core

    International Nuclear Information System (INIS)

    Chiriac, H.; Hristoforou, E.; Neagu, Maria; Pieptanariu, M.

    2000-01-01

    Results are presented concerning a linear variable differential transformer (LVDT) displacement sensor that uses glass-covered amorphous wires as the movable core. The LVDT response is linear for displacements of the movable core up to about 14 mm, with an accuracy of 1 μm. The LVDT with a glass-covered amorphous wire as the active core presents high sensitivity and good mechanical and corrosion resistance

  19. Effects of gamma radiation from 60Co on dilute aqueous solutions of Linear Alkyl Sulfonate Surfactants and other organic pollutants

    International Nuclear Information System (INIS)

    Rohrer, D.M.

    1975-01-01

    This study is the result of research findings and operational experiences gained by the author in over four years of work associated with the use of 60 Co for the treatment of waste-water. The effects of 60 Co are discussed with regard to radiochemical destruction of specific organic pollutant species. The study deals specifically with the effects of gamma radiation from a 30,000 Ci 60 Co source upon aqueous solutions of Linear Alkyl Sulfonate Surfactants. The new Linear Alkyl Sulfonate (LAS) Surfactants, the major surfactant produced in the United States of America since June 1965, were developed to replace the old Alkyl Benzene Sulfonate (ABS) Surfactants. The reason for the removal of Alkyl Benzene Sulfonate Surfactants was their extreme environmental stability and the associated appearance of foam in waste-water treatment plants and receiving streams. Although the Linear Alkyl Sulfonate Surfactants are considered 'bio-degradable', the time required for 'bio-degradation' is impractically long within the present environmental guidelines. This led to research into alternative techniques of treatment for the destruction of Linear Alkyl Sulfonate Surfactants. Consideration is also given to similar effects of gamma radiation upon pesticides and to the practical aspects of the use of gamma radiation for the treatment of waste-water. Included are discussions of the general experimental procedures used, the sources and their calibration, and sampling techniques to ensure the accuracy of the data. (author)

  20. Electrodynamic linear motor

    Energy Technology Data Exchange (ETDEWEB)

    Munehiro, H

    1980-05-29

    When the carriage of a printer is driven by a rotary motor, there are problems with the limited accuracy of the carriage position due to elongation or contraction and ageing of the cable. In order to solve the problem, a direct drive system was proposed, in which the printer carriage is driven by a linear motor. If one wants to keep the motor circuit of such a motor compact, then the magnetic flux density in the air gap must be reduced or the motor travel must be reduced. It is the purpose of this invention to create an electrodynamic linear motor which, on the one hand, is compact and light and, on the other hand, has a relatively high constant force over a large travel. The invention is characterised by the fact that magnetic fields of alternating polarity are generated at equal intervals in the field magnet, and that the coil arrangement has two adjacent coils whose size corresponds to half the length of each magnetic pole. A logic circuit is provided to select one of the two coils and to determine the direction of the current depending on the signals of a magnetic field sensor on the coil arrangement.

  1. Role of interoceptive accuracy in topographical changes in emotion-induced bodily sensations

    Science.gov (United States)

    Jung, Won-Mo; Ryu, Yeonhee; Lee, Ye-Seul; Wallraven, Christian; Chae, Younbyoung

    2017-01-01

    The emotion-associated bodily sensation map is composed of a specific topographical distribution of bodily sensations to categorical emotions. The present study investigated whether or not interoceptive accuracy was associated with topographical changes in this map following emotion-induced bodily sensations. This study included 31 participants who observed short video clips containing emotional stimuli and then reported their sensations on the body map. Interoceptive accuracy was evaluated with a heartbeat detection task and the spatial patterns of bodily sensations to specific emotions, including anger, fear, disgust, happiness, sadness, and neutral, were visualized using Statistical Parametric Mapping (SPM) analyses. Distinct patterns of bodily sensations were identified for different emotional states. In addition, positive correlations were found between the magnitude of sensation in emotion-specific regions and interoceptive accuracy across individuals. A greater degree of interoceptive accuracy was associated with more specific topographical changes after emotional stimuli. These results suggest that the awareness of one’s internal bodily states might play a crucial role as a required messenger of sensory information during the affective process. PMID:28877218

  2. Photon caliper to achieve submillimeter positioning accuracy

    Science.gov (United States)

    Gallagher, Kyle J.; Wong, Jennifer; Zhang, Junan

    2017-09-01

    The purpose of this study was to demonstrate the feasibility of using a commercial two-dimensional (2D) detector array with an inherent detector spacing of 5 mm to achieve submillimeter accuracy in localizing the radiation isocenter. This was accomplished by delivering the Vernier ‘dose’ caliper to a 2D detector array where the nominal scale was the 2D detector array and the non-nominal Vernier scale was the radiation dose strips produced by the high-definition (HD) multileaf collimators (MLCs) of the linear accelerator. Because the HD MLC sequence was similar to the picket fence test, we called this procedure the Vernier picket fence (VPF) test. We confirmed the accuracy of the VPF test by offsetting the HD MLC bank by known increments and comparing the known offset with the VPF test result. The VPF test was able to determine the known offset within 0.02 mm. We also cross-validated the accuracy of the VPF test in an evaluation of couch hysteresis. This was done by using both the VPF test and the ExacTrac optical tracking system to evaluate the couch position. We showed that the VPF test was in agreement with the ExacTrac optical tracking system within a root-mean-square value of 0.07 mm for both the lateral and longitudinal directions. In conclusion, we demonstrated the VPF test can determine the offset between a 2D detector array and the radiation isocenter with submillimeter accuracy. Until now, no method to locate the radiation isocenter using a 2D detector array has been able to achieve such accuracy.

  3. Linear Temporal Logic-based Mission Planning

    Directory of Open Access Journals (Sweden)

    Anil Kumar

    2016-06-01

    Full Text Available In this paper, we describe Linear Temporal Logic-based reactive motion planning. We address the problem of motion planning for mobile robots, wherein the goal specification is given in complex environments. The desired task specification may consist of complex behaviors of the robot, including environment constraints, task optimality requirements, obstacle avoidance, rescue specifications, surveillance specifications, safety specifications, etc. We use Linear Temporal Logic to represent such complex task specifications and constraints. The specifications are used by a verification engine to judge the feasibility and suitability of plans. The planner gives a motion strategy as output. Finally, a controller is used to generate the desired trajectory to achieve such a goal. The approach is tested using simulations on the LTLMoP mission planning tool, operating over the Robot Operating System, in which high-level planners and low-level controllers work simultaneously to plan the mission and control the physical behavior of the robot.
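
    As a hedged illustration of the kind of specification described above (the propositions and mission are hypothetical, not taken from the paper), a patrol-and-rescue task might be written in LTL as follows, where the box operator means "always" and the diamond operator means "eventually":

        \varphi \;=\; \Box\Diamond\,\mathrm{room}_1 \;\wedge\; \Box\Diamond\,\mathrm{room}_2 \;\wedge\; \Box\,\neg\,\mathrm{hazard} \;\wedge\; \Box\,(\mathrm{survivor} \rightarrow \Diamond\,\mathrm{alarm})

    Read in order, the four conjuncts require the robot to visit room 1 infinitely often, visit room 2 infinitely often, never enter the hazard region, and eventually raise an alarm whenever a survivor is detected.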

  4. Linear Simulations of the Cylindrical Richtmyer-Meshkov Instability in Hydrodynamics and MHD

    KAUST Repository

    Gao, Song

    2013-05-01

    The Richtmyer-Meshkov instability occurs when density-stratified interfaces are impulsively accelerated, typically by a shock wave. We present a numerical method to simulate the Richtmyer-Meshkov instability in cylindrical geometry. The ideal MHD equations are linearized about a time-dependent base state to yield linear partial differential equations governing the perturbed quantities. Convergence tests demonstrate that second order accuracy is achieved for smooth flows, and the order of accuracy is between first and second order for flows with discontinuities. Numerical results are presented for cases of interfaces with positive Atwood number and purely azimuthal perturbations. In hydrodynamics, the Richtmyer-Meshkov instability growth of perturbations is followed by a Rayleigh-Taylor growth phase. In MHD, numerical results indicate that the perturbations can be suppressed for sufficiently large perturbation wavenumbers and magnetic fields.

  5. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Science.gov (United States)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods in modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has visual clarity, which is inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the stages of design and exploitation. Having researched the impact of the system dynamics on the component quality, the authors have developed a range of practical recommendations which have made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0...6000 min-1 and improve machining accuracy.

  6. High Accuracy Beam Current Monitor System for CEBAF'S Experimental Hall A

    International Nuclear Information System (INIS)

    J. Denard; A. Saha; G. Lavessiere

    2001-01-01

    The CEBAF accelerator delivers continuous wave (CW) electron beams to three experimental Halls. In Hall A, all experiments require continuous, non-invasive current measurements and a few experiments require an absolute accuracy of 0.2 % in the current range from 1 to 180 (micro)A. A Parametric Current Transformer (PCT), manufactured by Bergoz, has an accurate and stable sensitivity of 4 (micro)A/V, but its offset drifts at the (micro)A level over time preclude its direct use for continuous measurements. Two cavity monitors are calibrated against the PCT with at least 50 (micro)A of beam current. The calibration procedure suppresses the error due to the PCT's offset drifts by turning the beam on and off, which is invasive to the experiment. One of the goals of the system is to minimize the calibration time without compromising the measurement's accuracy. The linearity of the cavity monitors is a critical parameter for transferring the accurate calibration done at high currents over the whole dynamic range. The method for accurately measuring the linearity is described.

  7. Accuracy of Carotid Duplex Criteria in Diagnosis of Significant Carotid Stenosis in Asian Patients.

    Science.gov (United States)

    Dharmasaroja, Pornpatr A; Uransilp, Nattaphol; Watcharakorn, Arvemas; Piyabhan, Pritsana

    2018-03-01

    Extracranial carotid stenosis can be diagnosed by velocity criteria of carotid duplex. Whether they are accurately applied to define severity of internal carotid artery (ICA) stenosis in Asian patients needs to be proved. The purpose of this study was to evaluate the accuracy of 2 carotid duplex velocity criteria in defining significant carotid stenosis. Carotid duplex studies and magnetic resonance angiography were reviewed. Criteria 1 was recommended by the Society of Radiologists in Ultrasound; moderate stenosis (50%-69%): peak systolic velocity (PSV) 125-230 cm/s, diastolic velocity (DV) 40-100 cm/s; severe stenosis (>70%): PSV greater than 230 cm/s, DV greater than 100 cm/s. Criteria 2 used PSV greater than 140 cm/s, DV less than 110 cm/s to define moderate stenosis (50%-75%) and PSV greater than 140 cm/s, DV greater than 110 cm/s for severe stenosis (76%-95%). A total of 854 ICA segments were reviewed. There was moderate stenosis in 72 ICAs, severe stenosis in 50 ICAs, and occlusion in 78 ICAs. Criteria 2 had slightly lower sensitivity, whereas higher specificity and accuracy than criteria 1 were observed in detecting moderate stenosis (criteria 1: sensitivity 95%, specificity 83%, accuracy 84%; criteria 2: sensitivity 92%, specificity 92%, and accuracy 92%). However, in detection of severe ICA stenosis, no significant difference in sensitivity, specificity, and accuracy was found (criteria 1: sensitivity 82%, specificity 99.57%, accuracy 98%; criteria 2: sensitivity 86%, specificity 99.68%, and accuracy 99%). In the subgroup of moderate stenosis, the criteria using ICA PSV greater than 140 cm/s had higher specificity and accuracy than the criteria using ICA PSV 125-230 cm/s. However, there was no significant difference in detection of severe stenosis or occlusion of ICA. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  8. Linear-after-the-exponential polymerase chain reaction and allied technologies. Real-time detection strategies for rapid, reliable diagnosis from single cells.

    Science.gov (United States)

    Pierce, Kenneth E; Wangh, Lawrence J

    2007-01-01

    Accurate detection of gene sequences in single cells is the ultimate challenge to polymerase chain reaction (PCR) sensitivity. Unfortunately, commonly used conventional and real-time PCR techniques are often too unreliable at that level to provide the accuracy needed for clinical diagnosis. Here we provide details of linear-after-the-exponential-PCR (LATE-PCR), a method similar to asymmetric PCR in the use of primers at different concentrations, but with novel design criteria to ensure high efficiency and specificity. Compared with conventional PCR, LATE-PCR increases the signal strength and allele discrimination capability of oligonucleotide probes such as molecular beacons and reduces variability among replicate samples. The analysis of real-time kinetics of LATE-PCR signals provides a means for improving the accuracy of single cell genetic diagnosis.

  9. Balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2013-01-01

    In this paper, we present a theoretical analysis of the model reduction algorithm for linear switched systems from Shaker and Wisniewski (2011, 2009). This algorithm is reminiscent of the balanced truncation method for linear parameter-varying systems (Wood et al., 1996) [3]. Specifically...
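
    For a single stable linear subsystem, the balanced truncation operation that the switched-systems algorithm builds on can be sketched as follows; the matrices and reduced order are arbitrary illustrations, and the specialised Gramians and conditions of the switched case analysed in the paper are not reproduced.

        # Sketch: square-root balanced truncation of one stable LTI system.
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

        def balanced_truncation(A, B, C, r):
            Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
            Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
            Lc = cholesky(Wc, lower=True)
            Lo = cholesky(Wo, lower=True)
            U, s, Vt = svd(Lo.T @ Lc)                        # s = Hankel singular values
            T = Lc @ Vt.T @ np.diag(s ** -0.5)               # balancing transformation
            Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T
            return Tinv[:r] @ A @ T[:, :r], Tinv[:r] @ B, C @ T[:, :r], s

        A = np.array([[-1.0, 0.5], [0.0, -3.0]])
        B = np.array([[1.0], [1.0]])
        C = np.array([[1.0, 0.0]])
        Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
        print("Hankel singular values:", hsv)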

  10. Analysis of spatial distribution of land cover maps accuracy

    Science.gov (United States)

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. This research is the first to use the spectral domain as the explanatory feature space for interpolating classification accuracy. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain
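
    One of the prediction variants described above can be pictured as fitting a logistic function in the spectral domain to a sample of correctly and incorrectly classified pixels, then predicting per-pixel accuracy elsewhere. The sketch below uses synthetic band values and labels; the actual features, sample design and evaluation blocks of the study are not reproduced.

        # Sketch: logistic interpolation of classification accuracy in a spectral
        # feature space (all data are synthetic placeholders).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        bands = rng.uniform(0.0, 1.0, size=(500, 4))         # spectral features at sampled pixels
        p_correct = 0.5 + 0.4 * bands[:, 0]                  # accuracy rises with band 1 (made up)
        correct = (rng.uniform(size=500) < p_correct).astype(int)

        model = LogisticRegression().fit(bands, correct)
        new_pixels = rng.uniform(0.0, 1.0, size=(3, 4))
        print("predicted probability of correct classification:",
              model.predict_proba(new_pixels)[:, 1])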

  11. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    Science.gov (United States)

    Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman

    2017-07-01

    This research paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.

  12. Fourth order Douglas implicit scheme for solving three dimension reaction diffusion equation with non-linear source term

    Directory of Open Access Journals (Sweden)

    Shahid Hasnain

    2017-07-01

    Full Text Available This research paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.

  13. Statistical monitoring of linear antenna arrays

    KAUST Repository

    Harrou, Fouzi

    2016-11-03

    The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes and significant deviation from the desired design performance specifications can result. In this paper, the detection of faults is addressed from a statistical point of view as a fault detection problem. Specifically, a statistical method resting on the GLR principle is used to detect potential faults in linear arrays. To assess the strength of the GLR-based monitoring scheme, three case studies involving different types of faults were performed. Simulation results clearly show the effectiveness of the GLR-based fault-detection method in monitoring the performance of linear antenna arrays.
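
    For Gaussian residuals with known variance, a GLR test of "no fault" against "unknown constant deviation" reduces to a simple statistic in the squared sample mean, which the sketch below evaluates on simulated healthy and faulty data; the data, variance and decision threshold are illustrative assumptions, not the array model used in the paper.

        # Sketch: GLR statistic for a mean shift in Gaussian residuals with known
        # variance; under these assumptions log LR = n * mean^2 / (2 * sigma^2).
        import numpy as np

        def glr_statistic(residuals, sigma):
            n = residuals.size
            return n * residuals.mean() ** 2 / (2.0 * sigma ** 2)

        rng = np.random.default_rng(2)
        healthy = rng.normal(0.00, 0.1, size=50)   # element responses as designed
        faulty = rng.normal(0.05, 0.1, size=50)    # small systematic deviation (fault)
        threshold = 5.0                             # assumed decision threshold
        for name, r in [("healthy", healthy), ("faulty", faulty)]:
            g = glr_statistic(r, sigma=0.1)
            print(f"{name}: GLR = {g:.2f}, fault declared: {g > threshold}")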

  14. A simplified multi-particle model for lithium ion batteries via a predictor-corrector strategy and quasi-linearization

    International Nuclear Information System (INIS)

    Li, Xiaoyu; Fan, Guodong; Rizzoni, Giorgio; Canova, Marcello; Zhu, Chunbo; Wei, Guo

    2016-01-01

    The design of a simplified yet accurate physics-based battery model enables researchers to accelerate the processes of battery design, aging analysis and remaining useful life prediction. In order to reduce the computational complexity of the Pseudo Two-Dimensional mathematical model without sacrificing the accuracy, this paper proposes a simplified multi-particle model via a predictor-corrector strategy and quasi-linearization. In this model, a predictor-corrector strategy is used for updating two internal states, in particular for solving the electrolyte concentration approximation, to reduce the computational complexity and preserve a high accuracy of the approximation. Quasi-linearization is applied to the approximations of the Butler-Volmer kinetics equation and the pore wall flux distribution to predict the non-uniform electrochemical reaction effects without using any nonlinear iterative solver. Simulation and experimental results show that the isothermal model and the model coupled with thermal behavior greatly improve the computational efficiency with almost no loss of accuracy. - Highlights: • A simplified multi-particle model with high accuracy and computation efficiency is proposed. • The electrolyte concentration is solved based on a predictor-corrector strategy. • The non-uniform electrochemical reaction is solved based on quasi-linearization. • The model is verified by simulations and experiments at various operating conditions.

  15. Comparative Evaluation of Dimensional Accuracy of Elastomeric Impression Materials when Treated with Autoclave, Microwave, and Chemical Disinfection.

    Science.gov (United States)

    Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju

    2015-09-01

    Impression materials often become contaminated with infectious agents during the impression procedure. Hence, disinfection of impression materials with various disinfectants is advised to protect the dental team. Disinfection can, however, alter the dimensional accuracy of impression materials. The present study aimed to evaluate the dimensional accuracy of elastomeric impression materials when treated with different disinfection methods: autoclave, chemical, and microwave. The impression materials used for the study were Dentsply Aquasil (addition silicone polyvinylsiloxane, syringe and putty), Zetaplus (condensation silicone, putty and light body), and Impregum Penta Soft (polyether). All impressions were made according to the manufacturers' instructions. Dimensional changes were measured before and after the different disinfection procedures. Dentsply Aquasil showed the smallest dimensional change (-0.0046%) and Impregum Penta Soft the largest linear dimensional change (-0.026%). All the tested elastomeric impression materials showed some degree of dimensional change; however, the changes produced by all the disinfection procedures were minor and within the American Dental Association specification. Hence, steam autoclaving and the microwave method can be used as effective alternatives to chemical sterilization.

  16. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    Science.gov (United States)

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  17. The Q* Index: A Useful Global Measure of Dementia Screening Test Accuracy

    Directory of Open Access Journals (Sweden)

    A.J. Larner

    2015-06-01

    Full Text Available Background/Aims: Single, global or unitary, indicators of test diagnostic performance have intuitive appeal for clinicians. The Q* index, the point in receiver operating characteristic (ROC) curve space closest to the ideal top left-hand corner and where test sensitivity and specificity are equal, is one such measure. Methods: Datasets from four pragmatic accuracy studies which examined the Mini-Mental State Examination, Addenbrooke's Cognitive Examination-Revised, Montreal Cognitive Assessment, Test Your Memory test, and Mini-Addenbrooke's Cognitive Examination were examined to calculate and compare the Q* index, the maximal correct classification accuracy, and the maximal Youden index, as well as the sensitivity and specificity at these cutoffs. Results: Tests ranked similarly for the Q* index and the area under the ROC curve (AUC ROC). The Q* index cutoff was more sensitive (and less specific) than the maximal correct classification accuracy cutoff, and less sensitive (and more specific) than the maximal Youden index cutoff. Conclusion: The Q* index may be a useful global parameter summarising the test accuracy of cognitive screening instruments, facilitating comparison between tests, and defining a possible test cutoff value. As the point of equal sensitivity and specificity, its use may be more intuitive and appealing for clinicians than AUC ROC.
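
    In practice the Q* index can be read off an empirical ROC curve as the cutoff at which sensitivity and specificity are (approximately) equal, as in the sketch below; the scores and labels are synthetic and do not come from the cited screening instruments.

        # Sketch: locate the Q* operating point (sensitivity ~= specificity) on an
        # empirical ROC curve built from synthetic screening-test scores.
        import numpy as np

        def q_star(scores, labels):
            best = None
            for c in np.unique(scores):
                pred = scores <= c                       # low score taken as "impaired"
                sens = np.mean(pred[labels == 1])
                spec = np.mean(~pred[labels == 0])
                if best is None or abs(sens - spec) < abs(best[1] - best[2]):
                    best = (c, sens, spec)
            return best

        rng = np.random.default_rng(3)
        scores = np.concatenate([rng.normal(20, 4, 100),    # cases
                                 rng.normal(27, 3, 100)])   # controls
        labels = np.concatenate([np.ones(100, int), np.zeros(100, int)])
        cutoff, sens, spec = q_star(scores, labels)
        print(f"Q* cutoff {cutoff:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")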

  18. Accuracy of genomic selection in European maize elite breeding populations.

    Science.gov (United States)

    Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C

    2012-03-01

    Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data of six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping at unreplicated field trials in 3-4 locations. As for maize up to three generations are feasible per year, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
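
    The cross-validated prediction accuracy described above can be sketched with ridge regression standing in for RR-BLUP (the two are closely related); the simulated genotypes, phenotypes and shrinkage parameter below are stand-ins, not the experimental maize data.

        # Sketch: fivefold cross-validated genomic prediction with ridge-shrunken
        # marker effects; accuracy = correlation of observed and predicted values.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(4)
        n_lines, n_markers = 200, 960
        X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # SNP genotypes 0/1/2
        effects = rng.normal(0.0, 0.05, n_markers)
        y = X @ effects + rng.normal(0.0, 1.0, n_lines)                   # simulated testcross values

        accuracies = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = Ridge(alpha=100.0).fit(X[train], y[train])            # alpha: assumed shrinkage
            accuracies.append(np.corrcoef(y[test], model.predict(X[test]))[0, 1])
        print("mean prediction accuracy (r):", round(float(np.mean(accuracies)), 2))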

  19. Dose delivery verification and accuracy assessment of stereotaxy in stereotactic radiotherapy and radiosurgery

    International Nuclear Information System (INIS)

    Pelagade, S.M.; Bopche, T.T.; Namitha, K.; Munshi, M.; Bhola, S.; Sharma, H.; Patel, B.K.; Vyas, R.K.

    2008-01-01

    The outcome of stereotactic radiotherapy (SRT) and stereotactic radiosurgery (SRS) in both benign and malignant tumors within the cranial region highly depends on precision in dosimetry, dose delivery and the accuracy assessment of the stereotaxy associated with the unit. The BRW (Brown-Roberts-Wells) and GTC (Gill-Thomas-Cosman) frames can facilitate accurate patient positioning as well as precise targeting of tumours. The implementation of this technique may result in a significant benefit as compared to conventional therapy. As the target localization accuracy is improved, the demand for treatment planning accuracy of a TPS is also increased. The accuracy of the stereotactic X Knife treatment planning system has two components to verify: (i) verification of dose delivery and of the accuracy of stereotaxy; and (ii) assurance that the associated Cartesian coordinate system is well established within the TPS for accurate determination of a target position. Both dose delivery verification and target positional accuracy affect the accuracy of dose delivery to a defined target. Hence there is a need to verify these two components in the quality assurance protocol. The main intention of this paper is to present our dose delivery verification procedure using a cylindrical wax phantom, and the accuracy assessment (target position) of stereotaxy using a Geometric Phantom, on an Elekta Precise linear accelerator stereotactic installation.

  20. COMSAT: Residue contact prediction of transmembrane proteins based on support vector machines and mixed integer linear programming.

    Science.gov (United States)

    Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A

    2016-03-01

    In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and a MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM protein increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.

  1. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers

    Energy Technology Data Exchange (ETDEWEB)

    Treuer, H. [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany). E-mail: h.treuer at uni-koeln.de; Hoevels, M.; Luyken, K.; Gierich, A.; Sturm, V. [Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Cologne (Germany); Kocher, M.; Mueller, R.-P. [Department of Radiotherapy, University of Cologne, Cologne (Germany)

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems. (author)

  2. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers

    International Nuclear Information System (INIS)

    Treuer, H.; Kocher, M.; Mueller, R.-P.

    2000-01-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems. (author)

  3. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    Energy Technology Data Exchange (ETDEWEB)

    Wen, N., E-mail: nwen1@hfhs.org; Snyder, K. C.; Qin, Y.; Li, H.; Siddiqui, M. S.; Chetty, I. J. [Department of Radiation Oncology, Henry Ford Health System, 2799 West Brand Boulevard, Detroit, Michigan 48202 (United States); Scheib, S. G.; Schmelzer, P. [Varian Medical System, Täfernstrasse 7, Dättwil AG 5405 (Switzerland)

    2016-05-15

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, 0.6 ± 0.4 mm for multitarget treatment with shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed greater than 90% pass rate for all cases using a gamma criteria of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.

  4. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    International Nuclear Information System (INIS)

    Wen, N.; Snyder, K. C.; Qin, Y.; Li, H.; Siddiqui, M. S.; Chetty, I. J.; Scheib, S. G.; Schmelzer, P.

    2016-01-01

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, 0.6 ± 0.4 mm for multitarget treatment with shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed greater than 90% pass rate for all cases using a gamma criteria of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.

  5. Improvement of Diagnostic Accuracy by Standardization in Diuretic Renal Scan

    International Nuclear Information System (INIS)

    Hyun, In Young; Lee, Dong Soo; Lee, Kyung Han; Chung, June Key; Lee, Myung Chul; Koh, Chang Soon; Kim, Kwang Myung; Choi, Hwang; Choi, Yong

    1995-01-01

    We evaluated the diagnostic accuracy of the diuretic renal scan with standardization in 45 children (107 hydronephrotic kidneys) with 91 diuretic assessments. Sensitivity was 100%, specificity was 78%, and accuracy was 84% in 49 hydronephrotic kidneys with standardization. For diuretic renal scans without standardization, sensitivity was 100%, specificity was 38%, and accuracy was 57% in 58 hydronephrotic kidneys. False-positive results were observed in 25 cases without standardization and in 8 cases with standardization. In diuretic renal scans without standardization, the causes of false-positive results included 10 early injections of lasix before mixing of radioactivity in the dilated system, 6 extrarenal pelvis, and 3 immature kidneys of neonates. With standardization, the causes of false-positive results were 2 markedly dilated systems post-pyeloplasty, 2 extrarenal pelvis, 1 immature kidney of a neonate, 2 severe renal dysfunction, and 1 vesicoureteral reflux. In diuretic renal scans without standardization, false-positive results due to inadequate studies were common, but such false-positive results were not found after standardization. The false-positive results due to dilated pelvo-calyceal systems post-pyeloplasty, extrarenal pelvis, and immature kidneys of neonates were not resolved after standardization. In conclusion, standardization of the diuretic renal scan was useful in children with renal outflow tract obstruction, significantly improving specificity and diagnostic accuracy.

  6. Structure formation with massive neutrinos. Going beyond linear theory

    International Nuclear Information System (INIS)

    Blas, Diego; Garny, Mathias; Konstandin, Thomas; Lesgourgues, Julien; Institut de Theorie Phenomenes Physiques EPFL, Lausanne; Savoie Univ., CNRS, Annecy-le-Vieux

    2014-08-01

    We compute non-linear corrections to the matter power spectrum taking the time- and scale-dependent free-streaming length of neutrinos into account. We adopt a hybrid scheme that matches the full Boltzmann hierarchy to an effective two-fluid description at an intermediate redshift. The non-linearities in the neutrino component are taken into account by using an extension of the time-flow framework. We point out that this remedies a spurious behaviour that occurs when neglecting non-linear terms for neutrinos. This behaviour is related to how efficiently short modes decouple from long modes and can be traced back to the violation of momentum conservation if neutrinos are treated linearly. Furthermore, we compare our results at next to leading order to various other methods and quantify the accuracy of the fluid description. Due to the correct decoupling behaviour of short modes, the two-fluid scheme is a suitable starting point to compute higher orders in perturbations or for resummation methods.

  7. Structure formation with massive neutrinos: going beyond linear theory

    CERN Document Server

    Blas, Diego; Konstandin, Thomas; Lesgourgues, Julien

    2014-01-01

    We compute non-linear corrections to the matter power spectrum taking the time- and scale-dependent free-streaming length of neutrinos into account. We adopt a hybrid scheme that matches the full Boltzmann hierarchy to an effective two-fluid description at an intermediate redshift. The non-linearities in the neutrino component are taken into account by using an extension of the time-flow framework. We point out that this remedies a spurious behaviour that occurs when neglecting non-linear terms for neutrinos. This behaviour is related to how efficiently short modes decouple from long modes and can be traced back to the violation of momentum conservation if neutrinos are treated linearly. Furthermore, we compare our results at next to leading order to various other methods and quantify the accuracy of the fluid description. Due to the correct decoupling behaviour of short modes, the two-fluid scheme is a suitable starting point to compute higher orders in perturbations or for resummation methods.

  8. Modelling of a linear PM machine including magnetic saturation and end effects : maximum force to current ratio

    NARCIS (Netherlands)

    Polinder, H.; Slootweg, J.G.; Hoeijmakers, M.J.; Compter, J.C.

    2003-01-01

    The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of their high force density, robustness and accuracy. These linear PM motors are often heavily loaded during short intervals of high acceleration, so that magnetic saturation occurs. This paper

  9. Remote handling dynamical modelling: assessment on new approach to enhance positioning accuracy with heavy load manipulation

    International Nuclear Information System (INIS)

    Gagarina-Sasia, T.; David, O.; Dubus, G.; Perrot, Y.; Riwain, A.

    2007-01-01

    In-vessel maintenance work in fusion tokamaks will be carried out with the help of several sets of robotic devices. Heavy-load handling in constrained space is identified by all players of the remote handling (RH) community as a key issue in the latest fusion tokamak facilities. To deal with high-level dexterity tasks, high payload-to-mass ratios and limited operating space, RH equipment designers can only propose systems whose mechanical flexibility is no longer negligible and needs to be taken into account in the control scheme. Traditional approaches, in which the control system includes only a linear model of the deformation of the structure, lead to poor positioning accuracy. Uncontrolled or under-evaluated errors could be damaging for in-vessel components during maintenance operations in the tokamak facility. To address the control of complex flexible systems, we will investigate the use of specific mechanical software that combines both finite element and kinematical joint analyses, with a strongly coupled formulation, to perform system dynamics simulations. This procedure will be applied to a single-axis mock-up robotic joint with a highly flexible structure. A comparison of experimental results with the traditional linear approach and the specified software model will be carried out. Benefits introduced by this new approach will finally be assessed in view of RH design and specification for fusion tokamaks at the scale of ITER. (orig.)

  10. Power calculation of linear and angular incremental encoders

    Science.gov (United States)

    Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.

    2016-04-01

    Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method for the power calculation of linear and angular incremental photoelectric encoders, used to find the optimum parameters for their components, such as light emitters, photo-detectors, linear and angular scales, and optical components. It analyzes methods and devices that permit high resolutions of the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit and changes its value depending on the movement of a scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photo-detector block for processing in the electronics block. The starting point of the power calculation is therefore the required value of the optical signal at the input of the photo-detector block that can be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.

  11. Accuracy of implant impressions without impression copings: a three-dimensional analysis.

    Science.gov (United States)

    Kwon, Joo-Hyun; Son, Yong-Ha; Han, Chong-Hyun; Kim, Sunjai

    2011-06-01

    Implant impressions without impression copings can be used for cement-retained implant restorations. A comparison of the accuracy of implant impressions with and without impression copings is needed. The purpose of this study was to evaluate and compare the dimensional accuracy of implant definitive casts that are fabricated by implant impressions with and without impression copings. An acrylic resin maxillary model was fabricated, and 3 implant replicas were secured in the right second premolar, first, and second molars. Two impression techniques were used to fabricate definitive casts (n=10). For the coping group (Group C), open tray impression copings were used for the final impressions. For the no-coping group (Group NC), cementable abutments were connected to the implant replicas, and final impressions were made assuming the abutments were prepared teeth. Computerized calculation of the centroids and long axes of the implant or stone abutment replicas was performed. The Mann-Whitney U test analyzed the amount of linear and rotational distortion between groups (α =.05). At the first molar site, Group NC showed significantly greater linear distortion along the Y-axis, with a small difference between the groups (Group C, 7.8 ± 7.4 μm; Group NC, 19.5 ± 12.2). At the second molar site, increased distortion was noted in Group NC for every linear and rotational variable, except for linear distortion along the Z-axis. Implant impression with open tray impression copings produced more accurate definitive casts than those fabricated without impression copings, especially those with greater inter-abutment distance. Copyright © 2011 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  12. Hybrid Spectral Unmixing: Using Artificial Neural Networks for Linear/Non-Linear Switching

    Directory of Open Access Journals (Sweden)

    Asmau M. Ahmed

    2017-07-01

    Full Text Available Spectral unmixing is a key process in identifying the spectral signatures of materials and quantifying their spatial distribution over an image. The linear model is expected to provide acceptable results when two assumptions are satisfied: (1) the mixing process should occur at a macroscopic level and (2) photons must interact with a single material before reaching the sensor. However, these assumptions do not always hold and more complex nonlinear models are required. This study proposes a new hybrid method for switching between linear and nonlinear spectral unmixing of hyperspectral data based on artificial neural networks. The neural network was trained with parameters computed within a window around the pixel under consideration. These parameters represent the diversity of the neighboring pixels and are based on the Spectral Angular Distance, covariance and a non-linearity parameter. The endmembers were extracted using Vertex Component Analysis, while the abundances were estimated using the method identified by the neural network (Vertex Component Analysis, Fully Constrained Least Squares Method, Polynomial Post Nonlinear Mixing Model or Generalized Bilinear Model). Results show that the hybrid method performs better than each of the individual techniques, with high overall accuracy, while the abundance estimation error is significantly lower than that obtained using the individual methods. Experiments on both a synthetic dataset and real hyperspectral images demonstrated that the proposed hybrid switch method is efficient for solving spectral unmixing of hyperspectral images as compared to individual algorithms.
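
    The spectral angular distance used as one of the neighbourhood features is the angle between two spectra viewed as vectors; a minimal sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def spectral_angle_distance(x, y):
    """Angle (radians) between two spectra treated as vectors."""
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Hypothetical 5-band reflectance spectra of a pixel and one of its neighbours.
pixel = np.array([0.12, 0.18, 0.25, 0.31, 0.40])
neighbour = np.array([0.10, 0.20, 0.24, 0.33, 0.38])
print(spectral_angle_distance(pixel, neighbour))
```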

  13. Linear ubiquitination in immunity.

    Science.gov (United States)

    Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning

    2015-07-01

    Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.

  14. Acceptance test for the linear motion actuator for the scanning slit of the HIE-ISOLDE short diagnostic boxes

    CERN Document Server

    Cantero, E D; Bravin, E; Sosa, A

    2014-01-01

    We performed experimental tests to characterize the mechanical accuracy of a linear actuator designed by the company AVS for the movement of the scanning slit of the HIE-ISOLDE short diagnostic boxes. The mechanism consists of a linear actuator composed of two guiding rods and a lead screw, with a full stroke of 135 mm. A specially designed blade was mounted on the actuator and the transverse positioning of the blade was monitored with a camera-based optical system while moving the actuator at speeds of up to 10 mm/s. The repeatability of the positioning of the blade after several cycles around predefined positions was also measured. The results of the measurements and a general inspection of the device show that the proposed solution fulfils the specifications. A full prototype of short diagnostic box for the HIE-ISOLDE project can now be built for testing.

  15. Accuracy Assessment and Analysis for GPT2

    Directory of Open Access Journals (Sweden)

    YAO Yibin

    2015-07-01

    Full Text Available GPT (global pressure and temperature) is a global empirical model usually used to provide temperature and pressure for the determination of tropospheric delay. GPT has some weaknesses, which have been addressed by a new empirical model named GPT2 that not only improves the accuracy of temperature and pressure, but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters; however, no accuracy analysis of GPT2 had been made until now. In this paper, high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of the temperature, pressure and water vapor pressure given by GPT2. Testing results show that the mean bias of temperature is -0.59℃ and the average RMS is 3.82℃; the absolute values of the average biases of pressure and water vapor pressure are less than 1 mb; GPT2 pressure has an average RMS of 7 mb, and water vapor pressure of no more than 3 mb. Accuracy differs with latitude, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
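
    The bias and RMS statistics quoted for GPT2 follow directly from differences between model output and reference meteorological values; a minimal sketch with made-up numbers:

```python
import numpy as np

def bias_and_rms(model, reference):
    """Mean bias and RMS of model minus reference."""
    diff = np.asarray(model) - np.asarray(reference)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Hypothetical temperatures (deg C) from an empirical model and a reanalysis reference.
model_t = [14.2, 9.8, 21.5, -3.1, 30.2]
ref_t = [15.0, 10.5, 20.9, -2.0, 29.5]
bias, rms = bias_and_rms(model_t, ref_t)
print(f"bias={bias:.2f} degC, rms={rms:.2f} degC")
```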

  16. Evaluation of a multiple linear regression model and SARIMA model in forecasting heat demand for district heating system

    International Nuclear Information System (INIS)

    Fang, Tingting; Lahdelma, Risto

    2016-01-01

    Highlights: • Social factor is considered for the linear regression models besides weather file. • Simultaneously optimize all the coefficients for linear regression models. • SARIMA combined with linear regression is used to forecast the heat demand. • The accuracy of both linear regression and time series models is evaluated. - Abstract: Forecasting heat demand is necessary for production and operation planning of district heating (DH) systems. In this study we first propose a simple regression model in which the hourly outdoor temperature and wind speed forecast the heat demand. The weekly rhythm of heat consumption, as a social component, is added to the model to significantly improve the accuracy. The other type of model is the seasonal autoregressive integrated moving average (SARIMA) model with exogenous variables, which combines weather factors as exogenous inputs with the historical heat consumption data as the dependent variable. One outstanding advantage of this model is that it pursues high accuracy for both long-term and short-term forecasts by considering both exogenous factors and the time series. The forecasting performance of both the linear regression models and the time series model is evaluated on real-life heat demand data for the city of Espoo in Finland, by out-of-sample tests for the last 20 full weeks of the year. The results indicate that the proposed linear regression model (T168h), using a 168-h demand pattern with midweek holidays classified as Saturdays or Sundays, gives the highest accuracy and strong robustness among all the tested models for the tested forecasting horizon and corresponding data. Considering the parsimony of the input, the ease of use and the high accuracy, the proposed T168h model is the best in practice. The heat demand forecasting model can also be developed for individual buildings if automated meter reading customer measurements are available. This would allow forecasting the heat demand based on more accurate heat consumption
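
    A regression of the kind proposed, with hourly outdoor temperature, wind speed and a weekly social rhythm as predictors, can be sketched as follows; the data, variable names and dummy-coding choice are assumptions for illustration and not the authors' T168h implementation:

```python
import numpy as np

# Sketch: hourly heat demand explained by outdoor temperature, wind speed and
# an hour-of-week social rhythm encoded as 168-level dummy variables.
rng = np.random.default_rng(0)
n = 24 * 7 * 8                           # eight weeks of hourly data (hypothetical)
temp = rng.normal(5, 8, n)               # outdoor temperature [degC]
wind = rng.gamma(2.0, 2.0, n)            # wind speed [m/s]
hour_of_week = np.arange(n) % 168

X = np.column_stack([
    np.ones(n), temp, wind,
    np.eye(168)[hour_of_week][:, 1:],    # 167 dummies, first hour as baseline
])
# Synthetic demand with a weekly rhythm, only for demonstration purposes.
demand = (120 - 3.0 * temp + 1.5 * wind
          + 10 * np.sin(2 * np.pi * hour_of_week / 168) + rng.normal(0, 5, n))

coef, *_ = np.linalg.lstsq(X, demand, rcond=None)   # all coefficients fitted simultaneously
print("intercept, temperature, wind coefficients:", coef[:3])
```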

  17. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

    Science.gov (United States)

    Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

    2010-12-01

    Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.

  18. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  19. Matter power spectrum and the challenge of percent accuracy

    OpenAIRE

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2015-01-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day $N$-body methods, identifying main potential error sources from the set-up of initial conditions to...

  20. On Numerical Stability in Large Scale Linear Algebraic Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Liesen, J.

    2005-01-01

    Roč. 85, č. 5 (2005), s. 307-325 ISSN 0044-2267 R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear algebraic systems * eigenvalue problems * convergence * numerical stability * backward error * accuracy * Lanczos method * conjugate gradient method * GMRES method Subject RIV: BA - General Mathematics Impact factor: 0.351, year: 2005

  1. The Accuracy of 3D Optical Reconstruction and Additive Manufacturing Processes in Reproducing Detailed Subject-Specific Anatomy

    Directory of Open Access Journals (Sweden)

    Paolo Ferraiuoli

    2017-10-01

    Full Text Available 3D reconstruction and 3D printing of subject-specific anatomy is a promising technology for supporting clinicians in the visualisation of disease progression and planning for surgical intervention. In this context, the 3D model is typically obtained from segmentation of magnetic resonance imaging (MRI), computed tomography (CT) or echocardiography images. Although these modalities allow imaging of the tissues in vivo, assessment of quality of the reconstruction is limited by the lack of a reference geometry as the subject-specific anatomy is unknown prior to image acquisition. In this work, an optical method based on 3D digital image correlation (3D-DIC) techniques is used to reconstruct the shape of the surface of an ex vivo porcine heart. This technique requires two digital charge-coupled device (CCD) cameras to provide full-field shape measurements and to generate a standard tessellation language (STL) file of the sample surface. The aim of this work was to quantify the error of 3D-DIC shape measurements using the additive manufacturing process. The limitations of 3D printed object resolution, the discrepancy in reconstruction of the surface of cardiac soft tissue and a 3D printed model of the same surface were evaluated. The results obtained demonstrated the ability of the 3D-DIC technique to reconstruct localised and detailed features on the cardiac surface with sub-millimeter accuracy.

  2. Soil-to-plant transfer of elements is not linear: Results for five elements relevant to radioactive waste in five boreal forest species

    Energy Technology Data Exchange (ETDEWEB)

    Tuovinen, Tiina S.; Roivainen, Paeivi, E-mail: paivi.roivainen@uef.fi; Makkonen, Sari; Kolehmainen, Mikko; Holopainen, Toini; Juutilainen, Jukka

    2011-12-01

    Element-specific concentration ratios (CRs) assuming that plant uptake of elements is linear are commonly used in radioecological modelling to describe the soil-to-plant transfer of elements. The goal of this study was to investigate the validity of the linearity assumption in boreal forest plants, for which only limited relevant data are available. The soil-to-plant transfer of three essential (Mo, Ni, Zn) and two non-essential (Pb, U) elements relevant to the safety of radioactive waste disposal was studied. Three understory species (blueberry, narrow buckler fern and May lily) and two tree species (Norway spruce and rowan) were included. Examining CRs as a function of soil concentration showed that CR was not constant but decreased with increasing soil concentrations for all elements and plant species. A non-linear equation fitted fairly well with the empirical data; the R²-values for this equation were constantly higher than those for the linear fit. The difference between the two fits was most evident at low soil concentrations where the use of constant CRs underestimated transfer from soil to plants. Site-specific factors affected the transfer of Mo and Ni. The results suggested that systematic variation with soil concentrations explains a part of the large variation of empirically determined CRs, and the accuracy of modelling the soil-to-plant transfer might be improved by using non-linear methods. Non-linearity of soil-to-plant transfer has been previously reported for a few different species, elements and environments. The present study systematically tested the linearity assumption for five elements (both essential and non-essential) and in five boreal forest species representing different growth traits and phylogenies. The data supported non-linearity in all cases.
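
    The concentration ratio is defined as CR = C_plant/C_soil, so a constant CR corresponds to a straight line through the origin. The non-linear behaviour reported above can be illustrated with a power-law relation C_plant = a·C_soil^b (b < 1 makes CR fall as soil concentration rises); this functional form and the data below are assumptions for illustration, not the equation fitted in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(c_soil, a, b):
    """Illustrative non-linear soil-to-plant relation: C_plant = a * C_soil**b."""
    return a * c_soil ** b

# Hypothetical element concentrations (mg/kg) in soil and plant tissue.
c_soil = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
c_plant = np.array([0.9, 1.4, 2.1, 3.6, 5.2, 7.5, 12.8])

(a, b), _ = curve_fit(power_law, c_soil, c_plant, p0=(1.0, 0.5))
cr = c_plant / c_soil                     # empirical concentration ratios
print(f"a={a:.2f}, b={b:.2f}; CR falls from {cr[0]:.2f} to {cr[-1]:.2f}")
```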

  3. Soil-to-plant transfer of elements is not linear: Results for five elements relevant to radioactive waste in five boreal forest species

    International Nuclear Information System (INIS)

    Tuovinen, Tiina S.; Roivainen, Päivi; Makkonen, Sari; Kolehmainen, Mikko; Holopainen, Toini; Juutilainen, Jukka

    2011-01-01

    Element-specific concentration ratios (CRs) assuming that plant uptake of elements is linear are commonly used in radioecological modelling to describe the soil-to-plant transfer of elements. The goal of this study was to investigate the validity of the linearity assumption in boreal forest plants, for which only limited relevant data are available. The soil-to-plant transfer of three essential (Mo, Ni, Zn) and two non-essential (Pb, U) elements relevant to the safety of radioactive waste disposal was studied. Three understory species (blueberry, narrow buckler fern and May lily) and two tree species (Norway spruce and rowan) were included. Examining CRs as a function of soil concentration showed that CR was not constant but decreased with increasing soil concentrations for all elements and plant species. A non-linear equation fitted fairly well with the empirical data; the R²-values for this equation were constantly higher than those for the linear fit. The difference between the two fits was most evident at low soil concentrations where the use of constant CRs underestimated transfer from soil to plants. Site-specific factors affected the transfer of Mo and Ni. The results suggested that systematic variation with soil concentrations explains a part of the large variation of empirically determined CRs, and the accuracy of modelling the soil-to-plant transfer might be improved by using non-linear methods. Non-linearity of soil-to-plant transfer has been previously reported for a few different species, elements and environments. The present study systematically tested the linearity assumption for five elements (both essential and non-essential) and in five boreal forest species representing different growth traits and phylogenies. The data supported non-linearity in all cases.

  4. Plasma Arc Cutting Dimensional Accuracy Optimization employing the Parameter Design approach

    Directory of Open Access Journals (Sweden)

    Kechagias John

    2017-01-01

    Full Text Available Plasma Arc Cutting (PAC) is a thermal manufacturing process used for cutting metal plates. This work experimentally investigates the influence of process parameters on the dimensional accuracy performance of the plasma arc cutting process. The cutting parameters studied were cutting speed (mm/min), torch standoff distance (mm), and arc voltage (volts). Linear dimensions of a rectangular workpiece were measured after PAC cutting, following a full factorial design experimental approach. For each of the three process parameters, three parameter levels were used. Analysis of means (ANOM) and analysis of variances (ANOVA) were performed in order to assess the effect of each parameter on the linear dimensional accuracy.
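
    For a three-factor, three-level full factorial design, the analysis of means simply averages the measured dimensional error over all runs sharing a given factor level; a minimal sketch with synthetic responses (the parameter levels and errors below are hypothetical, not the study's measurements):

```python
import itertools
import numpy as np

levels = {
    "speed": [200, 350, 500],       # cutting speed, mm/min (hypothetical levels)
    "standoff": [2, 4, 6],          # torch standoff distance, mm
    "voltage": [120, 140, 160],     # arc voltage, V
}
runs = list(itertools.product(*levels.values()))            # 27 runs (3^3 full factorial)
rng = np.random.default_rng(1)
error = rng.normal(0.3, 0.05, len(runs))                    # synthetic dimensional error, mm

# Analysis of means: average response at each level of each factor.
for i, (name, vals) in enumerate(levels.items()):
    means = [np.mean([e for run, e in zip(runs, error) if run[i] == v]) for v in vals]
    print(name, [round(m, 3) for m in means])
```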

  5. Linear non-threshold (LNT) radiation hazards model and its evaluation

    International Nuclear Information System (INIS)

    Min Rui

    2011-01-01

    In order to introduce the linear non-threshold (LNT) model used in studies of the dose-effect relationship of radiation hazards and to evaluate its application, a comprehensive analysis of the literature was made. The results show that the LNT model describes the biological effects more accurately at high doses than at low doses. The repairable-conditionally repairable model of cell radiation effects can account well for cell survival curves over the whole range of high, medium and low absorbed doses. There are still many uncertainties in the assessment model of effective dose of internal radiation based on the LNT assumptions and individual mean organ equivalents, and it is necessary to establish gender-specific voxel human models that take gender differences into account. Thus, the advantages and disadvantages of the various models coexist. Until a new theory and new model are established, the LNT model still represents the most scientific attitude. (author)

  6. Smartphone application for mechanical quality assurance of medical linear accelerators

    Science.gov (United States)

    Kim, Hwiyoung; Lee, Hyunseok; In Park, Jong; Choi, Chang Heon; Park, So-Yeon; Kim, Hee Jung; Kim, Young Suk; Ye, Sung-Joon

    2017-06-01

    Mechanical quality assurance (QA) of medical linear accelerators consists of time-consuming and human-error-prone procedures. We developed a smartphone application system for mechanical QA. The system consists of two smartphones: one attached to a gantry for obtaining real-time information on the mechanical parameters of the medical linear accelerator, and another displaying real-time information via a Bluetooth connection with the former. Motion sensors embedded in the smartphone were used to measure gantry and collimator rotations. Images taken by the smartphone’s high-resolution camera were processed to evaluate accuracies of jaw-positioning, crosshair centering and source-to-surface distance (SSD). The application was developed using Android software development kit and OpenCV library. The accuracy and precision of the system was validated against an optical rotation stage and digital calipers, prior to routine QA measurements of five medical linear accelerators. The system accuracy and precision in measuring angles and lengths were determined to be 0.05  ±  0.05° and 0.25  ±  0.14 mm, respectively. The mean absolute errors (MAEs) in QA measurements of gantry and collimator rotation were 0.05  ±  0.04° and 0.05  ±  0.04°, respectively. The MAE in QA measurements of light field was 0.39  ±  0.36 mm. The MAEs in QA measurements of crosshair centering and SSD were 0.40  ±  0.35 mm and 0.41  ±  0.32 mm, respectively. In conclusion, most routine mechanical QA procedures could be performed using the smartphone application system with improved precision and within a shorter time-frame, while eliminating potential human errors.
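
    The gantry and collimator angles in a setup like this are, in essence, recovered from the gravity vector seen by the phone's accelerometer. The sketch below illustrates that trigonometry in Python (the actual application is an Android program; the axis convention and readings here are assumptions for illustration):

```python
import math

def tilt_angle_deg(ax, ay):
    """Rotation angle in the plane perpendicular to the rotation axis, from the
    two accelerometer components that sense gravity (assumed axis convention)."""
    return math.degrees(math.atan2(ax, ay))

# Hypothetical static accelerometer reading (m/s^2) with the gantry near 30 degrees.
ax, ay = 4.905, 8.496
print(f"estimated gantry angle: {tilt_angle_deg(ax, ay):.1f} deg")
```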

  7. Smartphone application for mechanical quality assurance of medical linear accelerators.

    Science.gov (United States)

    Kim, Hwiyoung; Lee, Hyunseok; Park, Jong In; Choi, Chang Heon; Park, So-Yeon; Kim, Hee Jung; Kim, Young Suk; Ye, Sung-Joon

    2017-06-07

    Mechanical quality assurance (QA) of medical linear accelerators consists of time-consuming and human-error-prone procedures. We developed a smartphone application system for mechanical QA. The system consists of two smartphones: one attached to a gantry for obtaining real-time information on the mechanical parameters of the medical linear accelerator, and another displaying real-time information via a Bluetooth connection with the former. Motion sensors embedded in the smartphone were used to measure gantry and collimator rotations. Images taken by the smartphone's high-resolution camera were processed to evaluate accuracies of jaw-positioning, crosshair centering and source-to-surface distance (SSD). The application was developed using Android software development kit and OpenCV library. The accuracy and precision of the system was validated against an optical rotation stage and digital calipers, prior to routine QA measurements of five medical linear accelerators. The system accuracy and precision in measuring angles and lengths were determined to be 0.05  ±  0.05° and 0.25  ±  0.14 mm, respectively. The mean absolute errors (MAEs) in QA measurements of gantry and collimator rotation were 0.05  ±  0.04° and 0.05  ±  0.04°, respectively. The MAE in QA measurements of light field was 0.39  ±  0.36 mm. The MAEs in QA measurements of crosshair centering and SSD were 0.40  ±  0.35 mm and 0.41  ±  0.32 mm, respectively. In conclusion, most routine mechanical QA procedures could be performed using the smartphone application system with improved precision and within a shorter time-frame, while eliminating potential human errors.

  8. An effective description of dark matter and dark energy in the mildly non-linear regime

    Energy Technology Data Exchange (ETDEWEB)

    Lewandowski, Matthew; Senatore, Leonardo [Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94306 (United States); Maleknejad, Azadeh, E-mail: matthew.lewandowski@cea.fr, E-mail: azade@ipm.ir, E-mail: senatore@stanford.edu [School of Physics, Institute for Research in Fundamental Sciences (IPM), P. Code. 19538-33511, Tehran (Iran, Islamic Republic of)

    2017-05-01

    In the next few years, we are going to probe the low-redshift universe with unprecedented accuracy. Among the various fruits that this will bear, it will greatly improve our knowledge of the dynamics of dark energy, though for this there is a strong theoretical preference for a cosmological constant. We assume that dark energy is described by the so-called Effective Field Theory of Dark Energy, which assumes that dark energy is the Goldstone boson of time translations. Such a formalism makes it easy to ensure that our signatures are consistent with well-established principles of physics. Since most of the information resides at high wavenumbers, it is important to be able to make predictions at the highest wavenumber that is possible. The Effective Field Theory of Large-Scale Structure (EFTofLSS) is a theoretical framework that has allowed us to make accurate predictions in the mildly non-linear regime. In this paper, we derive the non-linear equations that extend the EFTofLSS to include the effect of dark energy both on the matter fields and on the biased tracers. For the specific case of clustering quintessence, we then perturbatively solve to cubic order the resulting non-linear equations and construct the one-loop power spectrum of the total density contrast.

  9. An assessment of the accuracy of contrast enema for the diagnosis ...

    African Journals Online (AJOL)

    Diagnostic accuracy levels were calculated by comparing radiological results with histology results, which is the gold standard. Results: Diagnostic accuracy of contrast enema was 78%, sensitivity was 94.4% and the negative predictive value was 95.7%. Specificity (68.8%) and positive predictive values (63%) were ...

  10. Learning a Nonnegative Sparse Graph for Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung

    2015-09-01

    Previous graph-based semisupervised learning (G-SSL) methods have the following drawbacks: 1) they usually predefine the graph structure and then use it to perform label prediction, which cannot guarantee an overall optimum and 2) they only focus on the label prediction or the graph structure construction but are not competent in handling new samples. To this end, a novel nonnegative sparse graph (NNSG) learning method was first proposed. Then, both the label prediction and projection learning were integrated into linear regression. Finally, the linear regression and graph structure learning were unified within the same framework to overcome these two drawbacks. Therefore, a novel method, named learning a NNSG for linear regression was presented, in which the linear regression and graph learning were simultaneously performed to guarantee an overall optimum. In the learning process, the label information can be accurately propagated via the graph structure so that the linear regression can learn a discriminative projection to better fit sample labels and accurately classify new samples. An effective algorithm was designed to solve the corresponding optimization problem with fast convergence. Furthermore, NNSG provides a unified perceptiveness for a number of graph-based learning methods and linear regression methods. The experimental results showed that NNSG can obtain very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods.

  11. The number of subjects per variable required in linear regression analyses

    NARCIS (Netherlands)

    P.C. Austin (Peter); E.W. Steyerberg (Ewout)

    2015-01-01

    Objectives: To determine the number of independent variables that can be included in a linear regression model. Study Design and Setting: We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression
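
    A Monte Carlo experiment of this kind, varying the number of subjects per variable and checking how well the regression coefficients are recovered, can be sketched as follows (the simulation settings are arbitrary illustrations, not those used in the study):

```python
import numpy as np

rng = np.random.default_rng(42)
n_vars, n_sims = 10, 500
true_beta = np.ones(n_vars)

for spv in (2, 5, 10, 50):                       # subjects per variable
    n = spv * n_vars
    errs = []
    for _ in range(n_sims):
        X = rng.normal(size=(n, n_vars))
        y = X @ true_beta + rng.normal(size=n)   # linear model with unit noise
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs.append(np.mean((beta_hat - true_beta) ** 2))
    print(f"SPV={spv:3d}: mean squared coefficient error {np.mean(errs):.4f}")
```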

  12. Development of Sensitive and Specific Analysis of Vildagliptin in Pharmaceutical Formulation by Gas Chromatography-Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Ebru Uçaktürk

    2015-01-01

    Full Text Available A sensitive and selective gas chromatography-mass spectrometry (GC-MS) method was developed and fully validated for the determination of vildagliptin (VIL) in pharmaceutical formulation. Prior to GC-MS analysis, VIL was efficiently derivatized with MSTFA/NH4I/β-mercaptoethanol at 60°C for 30 min. The obtained O-TMS derivative of VIL was detected in selected ion monitoring mode using the diagnostic ions m/z 223 and 252. Nandrolone was chosen as the internal standard. The GC-MS method was fully validated with respect to the following parameters: limit of detection (LOD) and quantitation (LOQ), linearity, precision, accuracy, specificity, stability, robustness, and ruggedness. LOD and LOQ were found to be 1.5 and 3.5 ng mL−1, respectively. The GC-MS method is linear in the range of 3.5–300 ng mL−1. The intra- and interday precision values were ≤3.62%. The intra- and interday accuracy values were in the range of −0.26 to 2.06%. Finally, the GC-MS method was successfully applied to determine VIL in a pharmaceutical formulation.
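
    If LOD and LOQ are estimated from a calibration curve, a common convention takes 3.3 and 10 times the residual standard deviation divided by the slope (whether this was the authors' exact approach is not stated in the abstract); a sketch with hypothetical calibration data:

```python
import numpy as np

# Hypothetical calibration data: concentration (ng/mL) vs. peak-area ratio.
conc = np.array([3.5, 10, 30, 100, 200, 300])
signal = np.array([0.021, 0.060, 0.182, 0.605, 1.198, 1.810])

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                 # residual standard deviation of the fit

lod = 3.3 * sigma / slope                     # common 3.3*sigma/slope convention
loq = 10.0 * sigma / slope                    # common 10*sigma/slope convention
print(f"LOD ~ {lod:.2f} ng/mL, LOQ ~ {loq:.2f} ng/mL")
```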

  13. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    Science.gov (United States)

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  14. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    Directory of Open Access Journals (Sweden)

    Elena Bergamini

    2014-10-01

    Full Text Available Magnetic and inertial measurement units are an emerging technology for obtaining the 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the integration of gyroscope data by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering approaches, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and had a duration greater than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
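
    A complementary filter is the simplest member of the non-stochastic family referred to above: the integrated gyroscope angle (accurate over short intervals but drifting) is blended with the accelerometer-derived inclination (noisy but drift-free). The one-axis sketch below is illustrative and not the particular non-linear observer used in the study:

```python
import math

def complementary_filter(gyro_rate, accel, dt, alpha=0.98, angle0=0.0):
    """One-axis attitude estimate blending the integrated gyro rate with the
    accelerometer inclination. gyro_rate in rad/s, accel as (ax, az) in m/s^2."""
    angle = angle0
    estimates = []
    for w, (ax, az) in zip(gyro_rate, accel):
        accel_angle = math.atan2(ax, az)                   # drift-free but noisy
        angle = alpha * (angle + w * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Hypothetical 5-sample sequence at 100 Hz while slowly tilting.
gyro = [0.1, 0.1, 0.1, 0.1, 0.1]
acc = [(0.0, 9.81), (0.01, 9.81), (0.02, 9.81), (0.03, 9.81), (0.04, 9.81)]
print(complementary_filter(gyro, acc, dt=0.01))
```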

  15. Accuracy of automated classification of major depressive disorder as a function of symptom severity.

    Science.gov (United States)

    Ramasubbu, Rajamannar; Brown, Matthew R G; Cortese, Filmeno; Gaxiola, Ismael; Goodyear, Bradley; Greenshaw, Andrew J; Dursun, Serdar M; Greiner, Russell

    2016-01-01

    Growing evidence documents the potential of machine learning for developing brain based diagnostic methods for major depressive disorder (MDD). As symptom severity may influence brain activity, we investigated whether the severity of MDD affected the accuracies of machine learned MDD-vs-Control diagnostic classifiers. Forty-five medication-free patients with DSM-IV defined MDD and 19 healthy controls participated in the study. Based on depression severity as determined by the Hamilton Rating Scale for Depression (HRSD), MDD patients were sorted into three groups: mild to moderate depression (HRSD 14-19), severe depression (HRSD 20-23), and very severe depression (HRSD ≥ 24). We collected functional magnetic resonance imaging (fMRI) data during both resting-state and an emotional-face matching task. Patients in each of the three severity groups were compared against controls in separate analyses, using either the resting-state or task-based fMRI data. We use each of these six datasets with linear support vector machine (SVM) binary classifiers for identifying individuals as patients or controls. The resting-state fMRI data showed statistically significant classification accuracy only for the very severe depression group (accuracy 66%, p = 0.012 corrected), while mild to moderate (accuracy 58%, p = 1.0 corrected) and severe depression (accuracy 52%, p = 1.0 corrected) were only at chance. With task-based fMRI data, the automated classifier performed at chance in all three severity groups. Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.
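
    A binary linear SVM analysis of the kind described can be outlined with scikit-learn; the feature matrix below is a synthetic stand-in for the fMRI-derived measures, and the group sizes are illustrative rather than the study's exact split:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder feature matrix: 34 subjects (15 patients vs 19 controls assumed),
# 200 resting-state-derived features each (synthetic values).
X = rng.normal(size=(34, 200))
y = np.array([1] * 15 + [0] * 19)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validated accuracy
print(f"mean classification accuracy: {scores.mean():.2f}")
```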

  16. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Science.gov (United States)

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  17. Measurement of the specific heats of Santowax 'R', para-, meta- and ortho-terphenyl, diphenyl and dowtherm 'A'

    International Nuclear Information System (INIS)

    Bowring, R.W.; Garton, D.A.; Norris, H.F.

    1960-12-01

    New absolute measurements have been made of the specific heats of Santowax 'R', the terphenyl isomers, diphenyl and Dowtherm 'A'. An adiabatic calorimeter was used in which the sample was heated electrically while a surrounding jacket was maintained at the same temperature as the calorimeter. The specific heats of all materials tested were found to increase linearly with temperature, the slope being substantially the same for all the pure materials except para-terphenyl. The specific heat of Santowax 'R' was about 1/2% less than the weighted mean of its components. The probable accuracy of the measurements was ± 2% and this was confirmed by comparison with diphenyl ether. A summary of results is given in Table 1 and Figure 10. (author)
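
    A specific heat that increases linearly with temperature is summarised by a fit of the form cp(T) = a + b·T; a minimal sketch with invented data points (not the reported measurements):

```python
import numpy as np

# Hypothetical specific-heat measurements (J g^-1 K^-1) at several temperatures (degC).
T = np.array([100, 150, 200, 250, 300, 350])
cp = np.array([1.62, 1.71, 1.80, 1.90, 1.99, 2.08])

b, a = np.polyfit(T, cp, 1)        # least-squares fit of cp(T) ~ a + b*T
print(f"cp(T) = {a:.3f} + {b:.5f}*T  (J g^-1 K^-1, T in degC)")
```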

  18. Studies on the diagnostic accuracy of lymphography

    International Nuclear Information System (INIS)

    Luening, M.; Stargardt, A.; Abet, L.

    1979-01-01

    Contradictory reports in the literature on the reliability of lymphography stimulated the authors to test its diagnostic accuracy, employing methods approximately analogous to practice, with carcinoma of the cervix as the model on which the study was carried out. With 21 observers it was found that there was no correlation between their experience and the on-target accuracy of the diagnosis. Good observers obtained an accuracy of 85% with good proportions between sensitivity in the recognition of detail, specificity and readiness to arrive at a decision on the basis of discriminatory ability. With the help of the concept of ROC curves, the position taken up by the observers with respect to diagnostic decisions, and a complex manner of assessing the various characteristic factors determining diagnostic accuracy, are demonstrated. This form of test, which permits manipulation of different variants of diagnosis, is recommended, among other things, for performance control at the end of training and continuing education courses in other fields of x-ray diagnosis as well. (orig.)

  19. The relationships between rugby ground pass accuracy and ...

    African Journals Online (AJOL)

    Kathryn van Boom

    2 Department of Physiotherapy, School of Therapeutic Sciences, Faculty of Health Sciences ... upper body kinematics of the players, specifically the rotations of the torso and ... relationships between the body movements and pass accuracy ...

  20. Need for Linear Revitalization - Gdynia Case

    Science.gov (United States)

    Sas-Bojarska, Aleksandra

    2017-10-01

    The aim of the article is to discuss the need for defining and implementing linear revitalization - a new approach to revitalization processes. The results of the preliminary investigations indicate that this kind of revitalization can be an important tool supporting city management and planning, especially in cases of city fragmentation - which causes a lack of physical, social, economic and ecological cohesion. The problems that occur in such situations could, in the author's opinion, be solved with the use of linear revitalization. Linear revitalization relates to various linear city structures that need renewal. The article presents the idea of the new approach, the character of specific actions related to degraded linear structures, a draft classification, as well as the potential benefits to the city structure that could be achieved through the implementation of linear revitalization. The theoretical deliberations are supplemented by the description and assessment of a case study from Gdynia in Poland. The Kwiatkowskiego Route in Gdynia, which plays an important role in the city traffic as an external connection, creates a barrier in the city structure, causing many negative effects. The author presents the specific problems related to this example, and the ways to solve them and to reconnect the city structure. The main conclusion of the study is that the presented approach may be the beginning of a discussion on linear revitalization, which may become an important and effective tool of sustainable city development. It may help overcome physical barriers and minimise the functional, economic, social, mental and environmental conflicts caused by city fragmentation.

  1. Linear nanometric tunnel junction sensors with exchange pinned sensing layer

    International Nuclear Information System (INIS)

    Leitao, D. C.; Silva, A. V.; Cardoso, S.; Ferreira, R.; Paz, E.; Deepack, F. L.; Freitas, P. P.

    2014-01-01

    Highly sensitive nanosensors with high spatial resolution provide the necessary features for high accuracy imaging of isolated magnetic nanoparticles. In this work, we report the fabrication and characterization of MgO-barrier magnetic tunnel junction nanosensors, with two exchange-pinned electrodes. The perpendicular magnetization configuration for field sensing is set using a two-step annealing process, where the second annealing temperature was optimized to yield patterned sensors responses with improved linearity. The optimized circular nanosensors show sensitivities up to 0.1%/Oe, larger than previously reported for nanometric sensors and comparable to micrometric spin-valves. Our strategy avoids the use of external permanent biasing or demagnetizing fields (large for smaller structures) to achieve a linear response, enabling the control of the linear operation range using only the stack and thus providing a small footprint device

  2. Linear nanometric tunnel junction sensors with exchange pinned sensing layer

    Energy Technology Data Exchange (ETDEWEB)

    Leitao, D. C., E-mail: dleitao@inesc-mn.pt; Silva, A. V.; Cardoso, S. [INESC-MN and IN, Rua Alves Redol 9, 1000-029 Lisboa (Portugal); Instituto Superior Técnico (IST), Universidade de Lisboa, Av. Rovisco Pais, 1000-029 Lisboa (Portugal); Ferreira, R.; Paz, E.; Deepack, F. L. [INL, Av. Mestre Jose Veiga, 4715-31 Braga (Portugal); Freitas, P. P. [INESC-MN and IN, Rua Alves Redol 9, 1000-029 Lisboa (Portugal); INL, Av. Mestre Jose Veiga, 4715-31 Braga (Portugal)

    2014-05-07

    Highly sensitive nanosensors with high spatial resolution provide the necessary features for high accuracy imaging of isolated magnetic nanoparticles. In this work, we report the fabrication and characterization of MgO-barrier magnetic tunnel junction nanosensors, with two exchange-pinned electrodes. The perpendicular magnetization configuration for field sensing is set using a two-step annealing process, where the second annealing temperature was optimized to yield patterned sensors responses with improved linearity. The optimized circular nanosensors show sensitivities up to 0.1%/Oe, larger than previously reported for nanometric sensors and comparable to micrometric spin-valves. Our strategy avoids the use of external permanent biasing or demagnetizing fields (large for smaller structures) to achieve a linear response, enabling the control of the linear operation range using only the stack and thus providing a small footprint device.

  3. Solving the apparent diversity-accuracy dilemma of recommender systems.

    Science.gov (United States)

    Zhou, Tao; Kuscsik, Zoltán; Liu, Jian-Guo; Medo, Matús; Wakeling, Joseph Rushton; Zhang, Yi-Cheng

    2010-03-09

    Recommender systems use data on past user preferences to predict possible future likes and interests. A key challenge is that while the most useful individual recommendations are to be found among diverse niche objects, the most reliably accurate results are obtained by methods that recommend objects based on user or object similarity. In this paper we introduce a new algorithm specifically to address the challenge of diversity and show how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm. By tuning the hybrid appropriately we are able to obtain, without relying on any semantic or context-specific information, simultaneous gains in both accuracy and diversity of recommendations.
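
    One simple way to picture such a hybrid is as a tunable interpolation between the scores of an accuracy-oriented and a diversity-oriented recommender; the sketch below is a generic illustration of that idea, not the specific algorithm developed in the paper:

```python
import numpy as np

def hybrid_ranking(score_accuracy, score_diversity, lam=0.5, top_k=3):
    """Blend two per-item score vectors with weight lam and return the top items."""
    s_a = score_accuracy / np.linalg.norm(score_accuracy)
    s_d = score_diversity / np.linalg.norm(score_diversity)
    blended = lam * s_a + (1 - lam) * s_d
    return np.argsort(blended)[::-1][:top_k]

# Hypothetical scores for six items from two different recommenders.
acc_scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.05])
div_scores = np.array([0.1, 0.2, 0.3, 0.9, 0.8, 0.7])
print(hybrid_ranking(acc_scores, div_scores, lam=0.7))
```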

  4. Evaluation of the accuracy of linear measurements on multi-slice and cone beam computed tomography scans to detect the mandibular canal during bilateral sagittal split osteotomy of the mandible.

    Science.gov (United States)

    Freire-Maia, B; Machado, V deC; Valerio, C S; Custódio, A L N; Manzi, F R; Junqueira, J L C

    2017-03-01

    The aim of this study was to compare the accuracy of linear measurements of the distance between the mandibular cortical bone and the mandibular canal using 64-detector multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT). It was sought to evaluate the reliability of these examinations in detecting the mandibular canal for use in bilateral sagittal split osteotomy (BSSO) planning. Eight dry human mandibles were studied. Three sites, corresponding to the lingula, the angle, and the body of the mandible, were selected. After the CT scans had been obtained, the mandibles were sectioned and the bone segments measured to obtain the actual measurements. On analysis, no statistically significant difference was found between the measurements obtained through MSCT and CBCT, or when comparing the measurements from these scans with the actual measurements. It is concluded that the images obtained by CT scan, both 64-detector multi-slice and cone beam, can be used to obtain accurate linear measurements to locate the mandibular canal for preoperative planning of BSSO. The ability to correctly locate the mandibular canal during BSSO will reduce the occurrence of neurosensory disturbances in the postoperative period. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Linear accelerator stereotactic radiosurgery for trigeminal neuralgia.

    Science.gov (United States)

    Varela-Lema, Leonor; Lopez-Garcia, Marisa; Maceira-Rozas, Maria; Munoz-Garzon, Victor

    2015-01-01

    Stereotactic radiosurgery is accepted as an alternative for patients with refractory trigeminal neuralgia, but the existing evidence is fundamentally based on the Gamma Knife, a specific device for intracranial neurosurgery that is available in few facilities. Over the last decade it has been shown that the use of linear accelerators can achieve similar accuracy and an equivalent dose distribution. To assess the effectiveness and safety of linear-accelerator stereotactic radiosurgery for the treatment of patients with refractory trigeminal neuralgia. We carried out a systematic search of the literature in the main electronic databases (PubMed, Embase, ISI Web of Knowledge, Cochrane, Biomed Central, IBECS, IME, CRD) and reviewed grey literature. All original studies on the subject published in Spanish, French, English, and Portuguese were eligible for inclusion. The selection and critical assessment were carried out by 2 independent reviewers based on pre-defined criteria. In view of the impossibility of carrying out a pooled analysis, data were analyzed in a qualitative way. Eleven case series were included. In these, satisfactory pain relief (BNI I-IIIb or reduction in pain ≥ 50%) was achieved in 75% to 95.7% of the patients treated. The mean time to relief from pain ranged from 8.5 days to 3.8 months. The percentage of patients who presented with recurrences after one year of follow-up ranged from 5% to 28.8%. Facial swelling or hypoesthesia, mostly of a mild-moderate grade, appeared in 7.5% - 51.9% of the patients. Complete anaesthesia dolorosa was registered in only one study (5.3%). Isolated cases of hearing loss (2.5%), brainstem edema (5.8%), and neurotrophic keratopathy (3.5%) were also reported. The results suggest that stereotactic radiosurgery with linear accelerators could constitute an effective and safe therapeutic alternative for drug-resistant trigeminal neuralgia. However, existing studies leave important doubts as to optimal treatment doses or the

  6. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    International Nuclear Information System (INIS)

    Debono, Josephine C; Poulos, Ann E; Houssami, Nehmat; Turner, Robin M; Boyages, John

    2015-01-01

    This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes of pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2%, respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting has adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.
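
    As a rough illustration of how reader accuracy figures like those above are computed, the sketch below derives sensitivity, specificity, and a non-parametric area under the ROC curve from ordinal ratings against a gold standard. The ratings, threshold, and outcomes are invented for illustration only and are not the study's data.

```python
import numpy as np

# Hypothetical outcomes: 1 = cancer confirmed by pathology/follow-up, 0 = normal.
truth = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
# Reader ratings on an ordinal, BI-RADS-like 1-5 scale.
rating = np.array([5, 4, 2, 1, 3, 2, 1, 5, 2, 1])

# Dichotomise at a recall threshold (here: rating >= 3 means "recall").
recall = rating >= 3
tp = np.sum(recall & (truth == 1))
fn = np.sum(~recall & (truth == 1))
tn = np.sum(~recall & (truth == 0))
fp = np.sum(recall & (truth == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Non-parametric AUC: probability a cancer case is rated above a normal case
# (ties count one half), i.e. the Mann-Whitney U statistic scaled to [0, 1].
pos, neg = rating[truth == 1], rating[truth == 0]
wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
auc = wins / (len(pos) * len(neg))

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```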

  7. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    Energy Technology Data Exchange (ETDEWEB)

    Debono, Josephine C, E-mail: josephine.debono@bci.org.au [Westmead Breast Cancer Institute, Westmead, New South Wales (Australia); Poulos, Ann E [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, Lidcombe, New South Wales (Australia); Houssami, Nehmat [Screening and Test Evaluation Program, School of Public Health (A27), Sydney Medical School, University of Sydney, Sydney, New South Wales (Australia); Turner, Robin M [School of Public Health and Community Medicine, University of New South Wales, Sydney, New South Wales (Australia); Boyages, John [Macquarie University Cancer Institute, Macquarie University Hospital, Australian School of Advanced Medicine, Macquarie University, Sydney, New South Wales (Australia); Westmead Breast Cancer Institute, Westmead, New South Wales (Australia)

    2015-03-15

    This study aimed to evaluate the accuracy of radiographers’ screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers’ screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes of pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2%, respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting has adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes.

  8. Quad-copter UAV BLDC Motor Control: Linear v/s non-linear control maps

    Directory of Open Access Journals (Sweden)

    Deep Parikh

    2015-08-01

    This paper presents some investigations and a comparison of using linear versus non-linear static motor-control maps for the speed control of BLDC (Brushless Direct Current) motors used in quad-copter UAVs (Unmanned Aerial Vehicles). The motor-control map considered here is the inverse of the static map relating motor-speed output to motor-voltage input for a typical out-runner type Brushless DC Motor (BLDCM). Traditionally, quad-copter BLDC motor speed control uses a simple linear motor-control map defined by the motor-constant specification. However, practical BLDC motors show non-linear characteristics, particularly when operated across the wide speed range commonly required in quad-copter UAV flight operations. In this paper, our investigations comparing the performance of linear versus non-linear motor-control maps are presented. The investigations cover a simulation-based and experimental study of BLDC motor speed control systems for an available quad-copter vehicle. First, the non-linear map relating rotor RPM to motor voltage for the quad-copter BLDC motor is obtained experimentally using an optical speed encoder. The performance of the linear versus non-linear motor-control maps for speed control is then studied. The investigations also cover the study of time responses for various standard test input signals, e.g. step, ramp and pulse inputs, applied as the reference speed commands. A simple 2-degree-of-freedom test-bed was also developed in our laboratory to support the open-loop and closed-loop experimental investigations. The non-linear motor-control map is found to perform better in BLDC motor speed tracking control and thereby helps achieve better quad-copter roll-angle attitude control.
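
    As a sketch of the kind of non-linear motor-control map discussed above, the snippet below inverts a measured voltage-to-RPM characteristic by interpolation so that a commanded speed is mapped back to a drive voltage, and compares it with a simple linear (motor-constant) map. The sample data points and the motor constant are invented for illustration, not measurements from the paper.

```python
import numpy as np

# Hypothetical static characteristic measured with an optical encoder:
# drive voltage (V) versus steady-state rotor speed (RPM).
voltage = np.array([4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0])
speed   = np.array([1200, 2600, 4300, 5700, 6800, 7700, 8400, 8900, 9200])

def linear_map(rpm_cmd, kv=800.0):
    """Traditional linear map: voltage = commanded speed / motor constant (RPM per volt)."""
    return rpm_cmd / kv

def nonlinear_map(rpm_cmd):
    """Inverse static map: interpolate the measured speed-voltage characteristic.
    speed must be monotonically increasing for np.interp to be valid."""
    return np.interp(rpm_cmd, speed, voltage)

for cmd in (2000, 5000, 8000):
    print(cmd, "RPM ->", round(linear_map(cmd), 2), "V (linear),",
          round(nonlinear_map(cmd), 2), "V (non-linear)")
```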

  9. Breed-specific fetal biometry and factors affecting the prediction of whelping date in the German shepherd dog.

    Science.gov (United States)

    Groppetti, D; Vegetti, F; Bronzo, V; Pecile, A

    2015-01-01

    To date, many studies have been published on predicting parturition from ultrasonographic fetal measurements in the bitch. Although accuracy in such prediction is a key point for clinicians and breeders, formulas to calculate the whelping date have mainly been obtained from small and medium sized dogs, which means poor accuracy when applied to large or giant breeds. Based on the evidence that ethnicity significantly affects fetal biometry in humans, this study aimed at developing a breed-specific linear regression model for estimating the parturition date in the German shepherd dog. For this purpose, serial ultrasonographic measurements of the inner chorionic cavity diameter (ICC) and the fetal biparietal diameter (BP) were collected in 40 pregnant German shepherd bitches. The quality of the regression models for estimating the parturition date was further verified in 22 other pregnant German shepherd bitches. Accuracy of the prediction of the parturition date was higher than previously reported: 94.5% and 91.7% within a ±2 day interval based on ICC and BP measurements, respectively. Additional investigation was performed on the effects of maternal weight, age and litter size in relation to fetal biometry and to the accuracy of parturition estimation. Moreover, the study included a comparison between hormonal and fetal ultrasound (ICC and BP) measurements for the estimation of the whelping date. We suggest that specific equations from a single breed are likely to offer excellent accuracy, comparable to that of periovulatory progesteronemia, in parturition prediction, and to avoid the morphological variability present in dogs of different breeds even with the same size/weight. Copyright © 2014 Elsevier B.V. All rights reserved.
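
    To illustrate the breed-specific regression idea in the abstract, here is a minimal sketch that fits days-before-parturition against inner chorionic cavity diameter and predicts the remaining time to whelping. The measurements and fitted coefficients are hypothetical, not the published German shepherd equations.

```python
import numpy as np

# Hypothetical training data: ICC diameter (mm) vs. days before parturition (DBP).
icc = np.array([12.0, 16.0, 20.0, 25.0, 30.0, 36.0, 42.0])
dbp = np.array([44, 40, 36, 31, 26, 20, 14])

# Ordinary least-squares line DBP = a * ICC + b (one predictor, so polyfit of degree 1).
a, b = np.polyfit(icc, dbp, 1)

def days_to_whelping(icc_mm):
    """Predicted days remaining until parturition for a measured ICC (mm)."""
    return a * icc_mm + b

measured_icc = 28.0
print(f"DBP = {a:.2f}*ICC + {b:.1f}; predicted days to whelping: {days_to_whelping(measured_icc):.1f}")
```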

  10. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    Science.gov (United States)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This research work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece for the experiment. Stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi’s L16 orthogonal array design approach. Three levels were selected for each process parameter to conduct the experiment. In all experiments, a clockwise cut direction was followed. The results obtained through measurement were further analyzed. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA reveals the effect of the input process parameters on the linear dimension along the X axis, and the work identifies the optimal settings of the process parameter values for this dimension. The results of the investigation clearly show that a specific range of the input process parameters achieves improved machinability.
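
    As a sketch of the analysis-of-means (ANOM) step described above, the code below computes the main-effect averages for each parameter level from a small factorial table. The runs, levels, and response values are invented for illustration and are not the study's L16 data.

```python
import numpy as np

# Hypothetical runs: (arc voltage level, standoff level, speed level, measured X dimension in mm).
runs = [
    (1, 1, 1, 99.82), (1, 2, 2, 99.90), (2, 1, 2, 100.12), (2, 2, 1, 100.05),
    (3, 1, 1, 100.20), (3, 2, 2, 100.26), (4, 1, 2, 100.35), (4, 2, 1, 100.31),
]
data = np.array(runs, dtype=float)
names = ["arc_voltage", "standoff", "cutting_speed"]

# Analysis of means: average response at each level of each factor (column 3 is the response).
for col, name in enumerate(names):
    levels = np.unique(data[:, col])
    means = {int(lv): round(data[data[:, col] == lv, 3].mean(), 3) for lv in levels}
    print(name, means)
```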

  11. The JPL Hg⁺ Extended Linear Ion Trap Frequency Standard: Status, Stability, and Accuracy Prospects

    Science.gov (United States)

    Tjoelker, R. L.; Prestage, J. D.; Maleki, L.

    1996-01-01

    Microwave frequency standards based on room-temperature ¹⁹⁹Hg⁺ ions in a Linear Ion Trap (LITS) presently achieve a short-term frequency stability inferred from the signal-to-noise ratio and line Q. Long-term stability has been measured for averaging intervals up to 5 months, with apparent sensitivity to variations in ion number/temperature limiting the flicker floor.

  12. Accuracy of MFCC-Based Speaker Recognition in Series 60 Device

    Directory of Open Access Journals (Sweden)

    Pasi Fränti

    2005-10-01

    A fixed-point implementation of speaker recognition based on MFCC signal processing is considered. We analyze the numerical error of the MFCC and its effect on the recognition accuracy. Techniques to reduce the information loss in a converted fixed-point implementation are introduced. We increase the signal processing accuracy by adjusting the ratio of the representation accuracy of the operators and the signal. The signal processing error is found to be more important to the speaker recognition accuracy than the error in the classification algorithm. The results are verified by applying the alternative technique to speech data. We also discuss the specific programming requirements set by the Symbian OS and the Series 60 platform.
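
    A minimal sketch of the fixed-point representation trade-off discussed above: quantising a signal to Q15 and measuring the resulting numerical error. The scaling choice, test signal, and SNR metric are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def to_q15(x):
    """Quantise values in [-1, 1) to signed 16-bit Q15 fixed point."""
    return np.clip(np.round(x * 32768), -32768, 32767).astype(np.int16)

def from_q15(q):
    """Convert Q15 integers back to floating point."""
    return q.astype(np.float64) / 32768.0

rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * np.arange(256) / 64) + 0.01 * rng.standard_normal(256)

# Quantisation error and the corresponding signal-to-noise ratio in dB.
err = signal - from_q15(to_q15(signal))
snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(err**2))
print(f"quantisation SNR ~ {snr_db:.1f} dB")
```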

  13. Testing an Automated Accuracy Assessment Method on Bibliographic Data

    Directory of Open Access Journals (Sweden)

    Marlies Olensky

    2014-12-01

    This study investigates automated data accuracy assessment, as described in the data quality literature, for its suitability to assess bibliographic data. The data samples comprise the publications of two Nobel Prize winners in the field of Chemistry for a 10-year publication period, retrieved from the two bibliometric data sources Web of Science and Scopus. The bibliographic records are assessed against the original publication (gold standard), and an automatic assessment method is compared to a manual one. The results show that the manual assessment method yields accuracy scores closer to the true values. The automated assessment method would need to be extended by additional rules that reflect specific characteristics of bibliographic data. Both data sources had higher accuracy scores per field than accumulated per record. This study contributes to the research on finding a standardized assessment method for bibliographic data accuracy as well as on defining the impact of data accuracy on the citation matching process.
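
    To make the field-level versus record-level distinction concrete, here is a small sketch (with made-up records) that scores each bibliographic field against a gold standard and then aggregates per field and per record; the stricter per-record score is usually lower, as the abstract observes.

```python
# Hypothetical records: each is a dict of fields; gold is the verified original publication.
gold = {"title": "On X", "year": "2009", "volume": "12", "pages": "1-10"}
retrieved = [
    {"title": "On X", "year": "2009", "volume": "12", "pages": "1-10"},
    {"title": "On X", "year": "2009", "volume": "13", "pages": "1-10"},
]

fields = list(gold)
field_hits = {f: 0 for f in fields}
record_hits = 0
for rec in retrieved:
    correct = [rec.get(f) == gold[f] for f in fields]
    for f, ok in zip(fields, correct):
        field_hits[f] += ok
    record_hits += all(correct)   # a record counts only if every field matches

n = len(retrieved)
print({f: field_hits[f] / n for f in fields})   # per-field accuracy
print("per-record accuracy:", record_hits / n)  # stricter, hence usually lower
```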

  14. Petri Nets as Models of Linear Logic

    DEFF Research Database (Denmark)

    Engberg, Uffe Henrik; Winskel, Glynn

    1990-01-01

    The chief purpose of this paper is to appraise the feasibility of Girard's linear logic as a specification language for parallel processes. To this end we propose an interpretation of linear logic in Petri nets, with respect to which we investigate the expressive power of the logic...

  15. Sex determination using facial linear dimensions and angles among Hausa population of Kano State, Nigeria

    Directory of Open Access Journals (Sweden)

    Lawan H. Adamu

    2016-12-01

    The aim of the study was to determine sexual dimorphism, as well as to predict sex, using facial linear dimensions and angles among Hausas of Kano State, Nigeria. A total of 283 subjects, comprising 147 males and 136 females aged 18–25 years, participated. Photographic methods were used to capture the face. An independent sample t-test was used to test for sex differences in the variables. Binary logistic regression was applied to obtain a predicting equation (BLR model) for sex. The predicted probabilities of the BLR were analyzed using the receiver operating characteristic curve. The results showed that all the facial linear dimensions showed significant sexual dimorphism except interocular distance, upper facial width, philtrum length, lower vermilion width, and left and right orbital width. With regard to sex prediction, upper facial height was the single best predictor of sex, with an accuracy of 76.2% and a 24–33% contribution to the prediction. However, the percentage accuracy increased to 91% when six variables were pooled together in the equations. For facial angles, only the nasion and aperture modified angles did not show significant sex differences. However, among the variables with significant sexual dimorphism, only the nasomental angle showed a significant level of sex prediction, with an accuracy of 70.3%. In conclusion, sex discrimination using facial linear dimensions and angles was well established in this study. The sex of an individual of the Hausa ethnic group can be determined using facial linear dimensions. Despite the sexual dimorphism shown by facial angles, only the nasomental angle was a good discriminator of sex.
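
    The sketch below mirrors the modelling approach described above (binary logistic regression on facial dimensions, evaluated by classification accuracy and ROC) using synthetic data and the scikit-learn library. The feature names, effect sizes, and sample are invented, not the Hausa dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 283
sex = rng.integers(0, 2, n)                                  # 1 = male, 0 = female (synthetic)
upper_facial_height = 70 + 4 * sex + rng.normal(0, 3, n)     # mm, dimorphic by construction
nasomental_angle = 128 - 2 * sex + rng.normal(0, 4, n)       # degrees

X = np.column_stack([upper_facial_height, nasomental_angle])
model = LogisticRegression().fit(X, sex)

# In-sample accuracy at the 0.5 probability cut-off and area under the ROC curve.
prob_male = model.predict_proba(X)[:, 1]
accuracy = np.mean((prob_male >= 0.5) == sex)
print(f"accuracy={accuracy:.2%}  AUC={roc_auc_score(sex, prob_male):.2f}")
```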

  16. Applications of equivalent linearization approaches to nonlinear piping systems

    International Nuclear Information System (INIS)

    Park, Y.; Hofmayer, C.; Chokshi, N.

    1997-01-01

    The piping systems in nuclear power plants, even with conventional snubber supports, are highly complex nonlinear structures under severe earthquake loadings, mainly due to various mechanical gaps in support structures. Some type of nonlinear analysis is necessary to accurately predict the piping responses under earthquake loadings. The application of equivalent linearization approaches (ELAs) to seismic analyses of nonlinear piping systems is presented. Two types of ELAs are studied: one based on the response spectrum method and the other based on linear random vibration theory. The test results of main steam and feedwater piping systems supported by snubbers and energy absorbers are used to evaluate the numerical accuracy and limitations.

  17. Validity of a Commercial Linear Encoder to Estimate Bench Press 1 RM from the Force-Velocity Relationship

    Science.gov (United States)

    Bosquet, Laurent; Porta-Benache, Jeremy; Blais, Jérôme

    2010-01-01

    The aim of this study was to assess the validity and accuracy of a commercial linear encoder (Musclelab, Ergotest, Norway) to estimate bench press 1 repetition maximum (1RM) from the force-velocity relationship. Twenty-seven physical education students and teachers (5 women and 22 men) with a heterogeneous history of strength training participated in this study. They performed a 1RM test and a force-velocity test using a bench press lifting task in random order. Mean 1RM was 61.8 ± 15.3 kg (range: 34 to 100 kg), while 1RM estimated by the Musclelab’s software from the force-velocity relationship was 56.4 ± 14.0 kg (range: 33 to 91 kg). Actual and estimated 1RM were very highly correlated (r = 0.93, p<0.001) but largely different (bias: 5.4 ± 5.7 kg, p < 0.001, ES = 1.37). The 95% limits of agreement were ±11.2 kg, which represented ±18% of actual 1RM. It was concluded that 1RM estimated from the force-velocity relationship was a good measure for monitoring training-induced adaptations, but that it was not accurate enough to prescribe training intensities. Additional studies are required to determine whether accuracy is affected by age, sex or initial level. Key points: Some commercial devices allow estimation of 1RM from the force-velocity relationship. These estimations are valid. However, their accuracy is not high enough to be of practical help for training intensity prescription. Day-to-day reliability of force and velocity measured by the linear encoder has been shown to be very high, but the specific reliability of 1RM estimated from the force-velocity relationship has to be determined before concluding on the usefulness of this approach for monitoring training-induced adaptations. PMID:24149641
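
    A minimal sketch of the force-velocity extrapolation idea: fit a line through submaximal load-velocity points and extrapolate it to zero velocity as a simple 1RM proxy. The loads, velocities, and the zero-velocity extrapolation are illustrative assumptions, not the Musclelab algorithm.

```python
import numpy as np

# Hypothetical bench-press test: load lifted (kg) and mean concentric velocity (m/s).
load = np.array([20.0, 30.0, 40.0, 50.0])
velocity = np.array([1.30, 1.05, 0.78, 0.52])

# Linear load-velocity model: load = slope * velocity + intercept.
slope, intercept = np.polyfit(velocity, load, 1)

# Extrapolate to zero velocity as a crude 1RM estimate.
one_rm_estimate = intercept
print(f"estimated 1RM ~ {one_rm_estimate:.1f} kg (load at v = 0)")
```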

  18. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
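
    A sketch of the kind of comparison described above: compute the normalized RMSE of the battery-model voltage error and of the SOC estimation error across several conditions, then check how strongly the two track each other. The synthetic discharge profile and error shapes are placeholders, not the study's test data.

```python
import numpy as np

def nrmse(estimate, truth):
    """Root-mean-square error normalised by the range of the true signal."""
    return np.sqrt(np.mean((estimate - truth) ** 2)) / (truth.max() - truth.min())

t = np.linspace(0, 1, 200)
soc_true = 1.0 - 0.8 * t                       # synthetic discharge from 100% to 20% SOC
v_true = 3.2 + 0.9 * soc_true                  # crude linear OCV stand-in (V)

results = []
for model_bias in (0.01, 0.03, 0.06):          # increasingly poor battery models
    v_model = v_true + model_bias * np.sin(6 * t)                 # model voltage error
    soc_est = soc_true + 0.8 * model_bias * np.sin(6 * t + 0.2)   # resulting SOC error
    results.append((nrmse(v_model, v_true), nrmse(soc_est, soc_true)))

model_err, soc_err = np.array(results).T
print("correlation between model error and SOC error:",
      round(np.corrcoef(model_err, soc_err)[0, 1], 3))
```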

  19. Efficient strategies to find diagnostic test accuracy studies in kidney journals.

    Science.gov (United States)

    Rogerson, Thomas E; Ladhani, Maleeka; Mitchell, Ruth; Craig, Jonathan C; Webster, Angela C

    2015-08-01

    Nephrologists looking for quick answers to diagnostic clinical questions in MEDLINE can use a range of published search strategies or Clinical Query limits to improve the precision of their searches. We aimed to evaluate existing search strategies for finding diagnostic test accuracy studies in nephrology journals. We assessed the accuracy of 14 search strategies for retrieving diagnostic test accuracy studies from three nephrology journals indexed in MEDLINE. Two investigators hand searched the same journals to create a reference set of diagnostic test accuracy studies to compare search strategy results against. We identified 103 diagnostic test accuracy studies, accounting for 2.1% of all studies published. The most specific search strategy was the Narrow Clinical Queries limit (sensitivity: 0.20, 95% CI 0.13-0.29; specificity: 0.99, 95% CI 0.99-0.99). Using the Narrow Clinical Queries limit, a searcher would need to screen three (95% CI 2-6) articles to find one diagnostic study. The most sensitive search strategy was van der Weijden 1999 Extended (sensitivity: 0.95; 95% CI 0.89-0.98; specificity 0.55, 95% CI 0.53-0.56) but required a searcher to screen 24 (95% CI 23-26) articles to find one diagnostic study. Bachmann 2002 was the best balanced search strategy, which was sensitive (0.88, 95% CI 0.81-0.94), but also specific (0.74, 95% CI 0.73-0.75), with a number needed to screen of 15 (95% CI 14-17). Diagnostic studies are infrequently published in nephrology journals. The addition of a strategy for diagnostic studies to a subject search strategy in MEDLINE may reduce the records needed to screen while preserving adequate search sensitivity for routine clinical use. © 2015 Asian Pacific Society of Nephrology.
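
    For concreteness, the sketch below recomputes the kind of figures quoted above (sensitivity, specificity, and number needed to screen) from a hypothetical 2×2 retrieval table for one search filter; the counts are invented, not the hand-search reference set.

```python
# Hypothetical retrieval results for one search filter against a hand-searched reference set.
tp = 21    # diagnostic accuracy studies retrieved by the filter
fn = 82    # diagnostic accuracy studies missed by the filter
fp = 45    # non-diagnostic records retrieved
tn = 4700  # non-diagnostic records correctly excluded

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
number_needed_to_screen = 1 / precision  # records screened per relevant study found

print(f"Sn={sensitivity:.2f} Sp={specificity:.3f} NNS={number_needed_to_screen:.1f}")
```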

  20. Does aging impair first impression accuracy? Differentiating emotion recognition from complex social inferences.

    Science.gov (United States)

    Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini

    2014-09-01

    Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  1. Hadronic cross-sections in two photon processes at a future linear collider

    International Nuclear Information System (INIS)

    Godbole, Rohini M.; Roeck, Albert de; Grau, Agnes; Pancheri, Giulia

    2003-01-01

    In this note we address the issue of measurability of the hadronic cross-sections at a future photon collider as well as for the two-photon processes at a future high energy linear e⁺e⁻ collider. We extend, to higher energy, our previous estimates of the accuracy with which the γγ cross-section needs to be measured, in order to distinguish between different theoretical models of the energy dependence of the total cross-sections. We show that the necessary precision to discriminate among these models is indeed possible at future linear colliders in the Photon Collider option. Further we note that even in the e⁺e⁻ option a measurement of the hadron production cross-section via γγ processes, with an accuracy necessary to allow discrimination between different theoretical models, should be possible. We also comment briefly on the implications of these predictions for hadronic backgrounds at the future TeV energy e⁺e⁻ collider CLIC. (author)

  2. Modified linear predictive coding approach for moving target tracking by Doppler radar

    Science.gov (United States)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on a time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
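
    A rough sketch of linear predictive extension: estimate LPC coefficients from the autocorrelation (Yule-Walker) normal equations and use them to extrapolate the record beyond its last sample. The model order, test signal, and plain least-squares solution are illustrative simplifications of the approach described above, not the paper's modified algorithm.

```python
import numpy as np

def lpc_coefficients(x, order):
    """Solve the autocorrelation (Yule-Walker) normal equations for LPC coefficients."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])  # Toeplitz
    return np.linalg.solve(R, r[1:order + 1])

def lpc_extend(x, order, n_extra):
    """Extrapolate n_extra samples using the fitted linear predictor."""
    a = lpc_coefficients(x, order)
    y = list(x)
    for _ in range(n_extra):
        # predicted sample = weighted sum of the last `order` samples, most recent first
        y.append(float(np.dot(a, y[-1:-order - 1:-1])))
    return np.array(y), a

t = np.arange(128)
echo = np.cos(2 * np.pi * 0.07 * t) + 0.05 * np.random.default_rng(2).standard_normal(128)
extended, coeffs = lpc_extend(echo, order=8, n_extra=64)
print(len(extended), "samples; first LPC coefficients:", np.round(coeffs[:3], 3))
```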

  3. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates the linear complexity using the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of the linear complexity; therefore, the linear complexity is generally given as an estimated value. A linearization method, on the other hand, calculates from the algorithm of the PRNG itself, so it can determine the lower bound of the linear complexity.
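
    Since the abstract contrasts the linearization method with sequence-based algorithms, here is a short reference sketch of the Berlekamp-Massey algorithm over GF(2), the O(N^2) baseline mentioned above; the example sequence is arbitrary and only illustrates the interface.

```python
def berlekamp_massey_gf2(s):
    """Linear complexity of a binary sequence s (list of 0/1) over GF(2)."""
    n = len(s)
    c = [0] * n; b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the prediction of the current LFSR
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]                      # keep a copy before updating
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:                # the LFSR length must grow
                L, m, b = i + 1 - L, i, t
    return L

# Example: a short binary sequence (e.g. taken from a PRNG under test).
seq = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print("linear complexity:", berlekamp_massey_gf2(seq))
```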

  4. Optical linear algebra processors - Architectures and algorithms

    Science.gov (United States)

    Casasent, David

    1986-01-01

    Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.

  5. Plasma Arc Cutting Dimensional Accuracy Optimization employing the Parameter Design approach

    OpenAIRE

    Kechagias John; Petousis Markos; Vidakis Nectarios; Mastorakis Nikos

    2017-01-01

    Plasma Arc Cutting (PAC) is a thermal manufacturing process used for cutting metal plates. This work experimentally investigates the influence of process parameters on the dimensional accuracy of the plasma arc cutting process. The cutting parameters studied were cutting speed (mm/min), torch standoff distance (mm), and arc voltage (volts). Linear dimensions of a rectangular workpiece were measured after PAC cutting following the full factorial design experimental approach. For ...

  6. Sustainability analysis and systems of linear equations in the era of data abundance

    NARCIS (Netherlands)

    Heijungs, R.; de Koning, A.; Wegener Sleeswijk, A.

    2015-01-01

    Sustainability research has entered an era of data abundance, in which online repositories offer millions of facts on production, consumption, pollution, and impacts. The combination of such facts in linear models leads increasingly to computational problems, relating to memory, speed, accuracy, and

  7. Some Alignment Considerations for the Next Linear Collider

    International Nuclear Information System (INIS)

    Ruland, R

    2004-01-01

    Next Linear Collider type accelerators require a new level of alignment quality. The relative alignment of these machines is to be maintained in an error envelope dimensioned in micrometers and for certain parts in nanometers. In the nanometer domain our terra firma cannot be considered monolithic but compares closer to jelly. Since conventional optical alignment methods cannot deal with the dynamics and cannot approach the level of accuracy, special alignment and monitoring techniques must be pursued

  8. Are computer numerical control (CNC)-manufactured patient-specific metal templates available for posterior thoracic pedicle screw insertion? Feasibility and accuracy evaluation.

    Science.gov (United States)

    Kong, Xiangxue; Tang, Lei; Ye, Qiang; Huang, Wenhua; Li, Jianyi

    2017-11-01

    Accurate and safe posterior thoracic pedicle insertion (PTPI) remains a challenge. Patient-specific drill templates (PDTs) created by rapid prototyping (RP) can assist in posterior thoracic pedicle insertion, but pose biocompatibility risks. The aims of this study were to develop alternative PDTs with computer numerical control (CNC) and assess their feasibility and accuracy in assisting PTPI. Preoperative CT images of 31 cadaveric thoracic vertebrae were obtained and the optimal pedicle screw trajectories were planned. The PDTs with optimal screw trajectories were randomly assigned to be designed and manufactured by CNC or RP in each vertebra. With the guidance of the CNC- or RP-manufactured PDTs, the appropriate screws were inserted into the pedicles. Postoperative CT scans were performed to analyze any deviations at the entry point and midpoint of the pedicles. The CNC group showed significantly shorter manufacturing time and lower cost compared with the RP group (P < 0.05). The screw positions were grade 0 in 90.3% and grade 1 in 9.7% of the cases in the CNC group, and grade 0 in 93.5% and grade 1 in 6.5% of the cases in the RP group (P = 0.641). CNC-manufactured PDTs are viable for assisting in PTPI with good feasibility and accuracy.

  9. Linear thermal expansion coefficient of MgAl2O4(s)

    International Nuclear Information System (INIS)

    Dash, A.; Samui, P.; Naik, Y.P.; Chaudhary, Z.S.

    2011-01-01

    The coefficient of linear thermal expansion (α_av) of MgAl2O4(s) has been determined using a Netzsch 402 PC dilatometer with Al2O3(s) as the push-rod. The change in length per unit length was recorded as a function of temperature between room temperature and 1273 K at a heating rate of 8 K·min⁻¹, in a flowing argon atmosphere. The average of three measurements was quoted as the α_av for MgAl2O4(s). The linear thermal expansion was measured to an accuracy of ±3%. (author)
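
    A small sketch of how an average linear thermal expansion coefficient can be extracted from dilatometer data: fit the fractional length change against temperature and take the slope. The ΔL/L0 values below are invented for illustration, not the measured MgAl2O4(s) data.

```python
import numpy as np

# Hypothetical dilatometer record: temperature (K) and fractional length change dL/L0.
T = np.array([300, 500, 700, 900, 1100, 1273], dtype=float)
dL_over_L0 = np.array([0.0, 0.00160, 0.00330, 0.00510, 0.00690, 0.00850])

# Average coefficient of linear thermal expansion over the range: slope of dL/L0 versus T.
alpha_av, _ = np.polyfit(T, dL_over_L0, 1)
print(f"alpha_av ~ {alpha_av:.2e} K^-1 over {T[0]:.0f}-{T[-1]:.0f} K")
```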

  10. JET VELOCITY OF LINEAR SHAPED CHARGES

    Directory of Open Access Journals (Sweden)

    Vječislav Bohanek

    2012-12-01

    Shaped explosive charges with one dimension significantly larger than the other are called linear shaped charges. Linear shaped charges are used in various industries and are applied within specific technologies for metal cutting, such as demolition of steel structures, separating spent rocket fuel tanks, demining, cutting holes in barriers for the fire service, etc. According to existing theories and models, the efficiency of linear shaped charges depends on the kinetic energy of the jet, which is proportional to the square of the jet velocity. An original method for measuring the velocity of the linear shaped charge jet is applied in the research. Measurements were carried out for two different liner materials, and the results are graphically presented, analysed and compared. The measurement results show a discrepancy in the measured jet velocity for different materials with the same ratio of liner to explosive mass (M/C) per unit of surface, which is not described by the presented models (the paper is published in Croatian).

  11. Predicting hyperketonemia by logistic and linear regression using test-day milk and performance variables in early-lactation Holstein and Jersey cows.

    Science.gov (United States)

    Chandler, T L; Pralle, R S; Dórea, J R R; Poock, S E; Oetzel, G R; Fourdraine, R H; White, H M

    2018-03-01

    Although cowside testing strategies for diagnosing hyperketonemia (HYK) are available, many are labor intensive and costly, and some lack sufficient accuracy. Predicting milk ketone bodies by Fourier transform infrared spectrometry during routine milk sampling may offer a more practical monitoring strategy. The objectives of this study were to (1) develop linear and logistic regression models using all available test-day milk and performance variables for predicting HYK and (2) compare prediction methods (Fourier transform infrared milk ketone bodies, linear regression models, and logistic regression models) to determine which is the most predictive of HYK. Given the data available, a secondary objective was to evaluate differences in test-day milk and performance variables (continuous measurements) between Holsteins and Jerseys and between cows with or without HYK within breed. Blood samples were collected on the same day as milk sampling from 658 Holstein and 468 Jersey cows between 5 and 20 d in milk (DIM). Diagnosis of HYK was at a serum β-hydroxybutyrate (BHB) concentration ≥1.2 mmol/L. Concentrations of milk BHB and acetone were predicted by Fourier transform infrared spectrometry (Foss Analytical, Hillerød, Denmark). Thresholds of milk BHB and acetone were tested for diagnostic accuracy, and logistic models were built from continuous variables to predict HYK in primiparous and multiparous cows within breed. Linear models were constructed from continuous variables for primiparous and multiparous cows within breed that were 5 to 11 DIM or 12 to 20 DIM. Milk ketone body thresholds diagnosed HYK with 64.0 to 92.9% accuracy in Holsteins and 59.1 to 86.6% accuracy in Jerseys. Logistic models predicted HYK with 82.6 to 97.3% accuracy. Internally cross-validated multiple linear regression models diagnosed HYK of Holstein cows with 97.8% accuracy for primiparous and 83.3% accuracy for multiparous cows. Accuracy of Jersey models was 81.3% in primiparous and 83

  12. Assessing Genomic Selection Prediction Accuracy in a Dynamic Barley Breeding Population

    Directory of Open Access Journals (Sweden)

    A. H. Sallam

    2015-03-01

    Prediction accuracy of genomic selection (GS) has been previously evaluated through simulation and cross-validation; however, validation based on progeny performance in a plant breeding program has not been investigated thoroughly. We evaluated several prediction models in a dynamic barley breeding population comprised of 647 six-row lines, using four traits differing in genetic architecture and 1536 single nucleotide polymorphism (SNP) markers. The breeding lines were divided into six sets designated as one parent set and five consecutive progeny sets comprised of representative samples of breeding lines over a 5-yr period. We used these data sets to investigate the effect of model and training population composition on prediction accuracy over time. We found little difference in prediction accuracy among the models, confirming prior studies that found the simplest model, random regression best linear unbiased prediction (RR-BLUP), to be accurate across a range of situations. In general, we found that using the parent set was sufficient to predict progeny sets, with little to no gain in accuracy from generating larger training populations by combining the parent set with subsequent progeny sets. The prediction accuracy ranged from 0.03 to 0.99 across the four traits and five progeny sets. We explored characteristics of the training and validation populations (marker allele frequency, population structure, and linkage disequilibrium, LD) as well as characteristics of the trait (genetic architecture and heritability, h²). Fixation of markers associated with a trait over time was most clearly associated with reduced prediction accuracy for the mycotoxin trait DON. Higher trait heritability in the training population and simpler trait architecture were associated with greater prediction accuracy.
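
    A compact sketch of the RR-BLUP idea referenced above: shrink all marker effects with a common ridge penalty and predict unphenotyped lines from their genotypes. The simulated marker matrix, heritability, and ridge parameter are illustrative assumptions, not the barley data or the exact mixed-model machinery used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_valid, p = 300, 100, 1000                     # lines and SNP markers (synthetic)
Z = rng.integers(0, 3, size=(n_train + n_valid, p)).astype(float) - 1.0  # -1/0/1 genotype coding

true_u = rng.normal(0, 0.05, p)                          # small effects at every marker
g = Z @ true_u                                           # true genetic values
y = g + rng.normal(0, np.std(g), n_train + n_valid)      # phenotypes at roughly 50% heritability

Ztr, ytr = Z[:n_train], y[:n_train]
h2 = 0.5
lam = p * (1 - h2) / h2                                  # common ridge heuristic for RR-BLUP

# RR-BLUP marker effects: solve (Z'Z + lambda*I) u = Z'(y - mean)
u_hat = np.linalg.solve(Ztr.T @ Ztr + lam * np.eye(p), Ztr.T @ (ytr - ytr.mean()))

# Prediction accuracy: correlation of predicted and simulated genetic values in the validation set.
pred = Z[n_train:] @ u_hat
print("prediction accuracy r =", round(np.corrcoef(pred, g[n_train:])[0, 1], 3))
```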

  13. A method for estimation of accuracy of dose delivery with dynamic slit windows in medical linear accelerators

    International Nuclear Information System (INIS)

    Ravichandran, R.; Binukumar, J.P.; Sivakumar, S.S.; Krishnamurthy, K.; Davis, C.A.

    2008-01-01

    Intensity-modulated radiotherapy (IMRT) clinical dose delivery is based on computer-controlled multileaf movements at different velocities. To test the accuracy of modulation of the beam periodically, quality assurance (QA) methods are necessary. Using a cylindrical phantom, dose delivery was checked at a constant geometry for sweeping fields. Repeated measurements with an in-house designed methodology over a period of 1 year indicate that the method is very sensitive for checking the proper functioning of such dose delivery in medical linacs. A cylindrical perspex phantom with a facility to accurately position a 0.6-cc (FC 65) ion chamber at a constant depth at the isocenter (SA 24 constancy check tool phantom for MU check, Scanditronix Wellhofer) was used. Dosimeter readings were integrated for 4-mm, 10-mm, and 20-mm sweeping fields and for 3 angular positions of the gantry periodically. Consistency of the standard sweeping field output (10-mm slit width) and the ratios of outputs against other slit widths over a long period are reported. The 10-mm sweeping field output was found reproducible within an accuracy of 0.03% (n = 25) over 1 year. The 4-mm and 20-mm outputs, expressed as ratios with respect to the 10-mm sweep output, remained within a mean deviation of 0.2% and 0.03%, respectively. Outputs at 3 gantry angles remained within 0.5%, showing that the effect of dynamic movements of the multileaf collimator (MLC) on the output is minimal for angular positions of the gantry. This method of QA is very simple and is recommended in addition to individual patient QA measurements, which reflect the accuracy of the dose planning system. In addition to standard output and energy checks of linacs, the above measurements can be complemented so as to check the proper functioning of the multileaf collimator for dynamic field dose delivery. (author)

  14. Temporal subtraction in chest radiography: Automated assessment of registration accuracy

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Doshi, Devang J.; Engelmann, Roger; Croteau, Charles L.; MacMahon, Heber

    2006-01-01

    Radiologists routinely compare multiple chest radiographs acquired from the same patient over time to more completely understand changes in anatomy and pathology. While such comparisons are achieved conventionally through a side-by-side display of images, image registration techniques have been developed to combine information from two separate radiographic images through construction of a 'temporal subtraction image'. Although temporal subtraction images provide a powerful mechanism for the enhanced visualization of subtle change, errors in the clinical evaluation of these images may arise from misregistration artifacts that can mimic or obscure pathologic change. We have developed a computerized method for the automated assessment of registration accuracy as demonstrated in temporal subtraction images created from radiographic chest image pairs. The registration accuracy of 150 temporal subtraction images constructed from the computed radiography images of 72 patients was rated manually using a five-point scale ranging from '5-excellent' to '1-poor'; ratings of 3, 4, or 5 reflected clinically acceptable subtraction images, and ratings of 1 or 2 reflected clinically unacceptable images. Gray-level histogram-based features and texture measures are computed at multiple spatial scales within a 'lung mask' region that encompasses both lungs in the temporal subtraction images. A subset of these features is merged through a linear discriminant classifier. With a leave-one-out-by-patient training/testing paradigm, the automated method attained an A_z value of 0.92 in distinguishing between temporal subtraction images that demonstrated clinically acceptable and clinically unacceptable registration accuracy. A second linear discriminant classifier yielded an A_z value of 0.82 based on a feature subset selected from an independent database of digitized film images. These methods are expected to advance the clinical utility of temporal subtraction images for chest radiography.

  15. Accuracy and Training Population Design for Genomic Selection on Quantitative Traits in Elite North American Oats

    Directory of Open Access Journals (Sweden)

    Franco G. Asoro

    2011-07-01

    Genomic selection (GS) is a method to estimate the breeding values of individuals by using markers throughout the genome. We evaluated the accuracies of GS using data from five traits on 446 oat (Avena sativa L.) lines genotyped with 1005 Diversity Array Technology (DArT) markers and two GS methods (ridge regression–best linear unbiased prediction [RR-BLUP] and BayesCπ) under various training designs. Our objectives were to (i) determine accuracy under increasing marker density and training population size, (ii) assess accuracies when data are divided over time, and (iii) examine accuracy in the presence of population structure. Accuracy increased as the number of markers and training size became larger. Including older lines in the training population increased or maintained accuracy, indicating that older generations retained information useful for predicting validation populations. The presence of population structure affected accuracy: when training and validation subpopulations were closely related, accuracy was greater than when they were distantly related, implying that linkage disequilibrium (LD) relationships changed across subpopulations. Across many scenarios involving large training populations, the accuracy of BayesCπ and RR-BLUP did not differ. This empirical study provided evidence regarding the application of GS to hasten the delivery of cultivars through the use of inexpensive and abundant molecular markers available to the public sector.

  16. Proceeding of the 11th meeting on linear accelerators

    International Nuclear Information System (INIS)

    Nakahara, Kazuo; Anami, Shozo; Takasaki, Eiichi

    1986-08-01

    The study group on linear accelerators has now been active for 10 years. The worldwide changes in social structure and economic conditions during this period also affected linear accelerators. For a while, new installations of linear accelerators were limited to Japan and China, and a state of standstill continued in Europe and America. Subsequently, large-scale projects for electron-positron colliders were started, and LEP at CERN and HERA at DESY in Europe and the Linear Collider at SLAC in the USA compete for the lead together with TRISTAN in Japan. Large electron rings have moved toward a type that connects CW linear accelerators with electromagnets in a circular form, unlike the conventional type. Advanced superconducting CW linacs such as CEBAF in the USA are planned. In the coming large accelerators of CW or pulsed type, an RF system of high accuracy and large power output is the key to the success of the projects, rather than the individual accelerating structures, high-frequency sources, waveguides or controls. When the scale of projects exceeds a certain limit, they cannot be dealt with merely by the experience and means of the past. In this book, the abstracts of 62 presented papers and invited lectures are collected. (Kako, I.)

  17. Commissioning and Acceptance Testing of the existing linear accelerator upgraded to volumetric modulated arc therapy

    Science.gov (United States)

    Varadharajan, Ekambaram; Ramasubramanian, Velayudham

    2013-01-01

    Aim: The RapidArc commissioning and acceptance testing program will test and ensure accuracy in DMLC position, precise dose-rate control during gantry rotation and accurate control of gantry speed. Background: Recently, we have upgraded our linear accelerator, capable of performing IMRT and functional since 2007, with an image-guided RapidArc facility. The installation of VMAT on the existing linear accelerator is a tedious process which requires many quality assurance procedures before proper commissioning of the facility, and these procedures are discussed in this study. Materials and methods: The output of the machine at different dose rates was measured to verify its consistency. Monitor and chamber linearity at different dose rates were checked. DMLC QA, comprising MLC transmission factor measurement and dosimetric leaf gap measurements, was performed using 0.13 cm³ and 0.65 cm³ Farmer-type ionization chambers, a Dose 1 dosimeter, and an IAEA 30 cm × 30 cm × 30 cm water phantom. The picket fence test, garden fence test, tests to check leaf positioning accuracy due to carriage movement, calibration of the leaves, leaf speed stability effects due to the acceleration and deceleration of leaves, accuracy and calibration of leaves in producing complex fields, effects of interleaf friction, etc. were verified using EDR2 therapy films, a Vidar scanner, Omnipro accept software, an amorphous silicon based electronic portal imaging device and EPIQA software [1-8]. Results: All the DMLC-related quality assurance tests were performed and evaluated by film dosimetry, portal dosimetry and EPIQA [7]. Conclusion: The results confirmed that the linear accelerator is capable of performing accurate VMAT. PMID:24416566

  18. Commissioning and Acceptance Testing of the existing linear accelerator upgraded to volumetric modulated arc therapy.

    Science.gov (United States)

    Varadharajan, Ekambaram; Ramasubramanian, Velayudham

    2013-01-01

    The RapidArc commissioning and acceptance testing program will test and ensure accuracy in DMLC position, precise dose-rate control during gantry rotation and accurate control of gantry speed. Recently, we have upgraded our linear accelerator, capable of performing IMRT and functional since 2007, with an image-guided RapidArc facility. The installation of VMAT on the existing linear accelerator is a tedious process which requires many quality assurance procedures before proper commissioning of the facility, and these procedures are discussed in this study. The output of the machine at different dose rates was measured to verify its consistency. Monitor and chamber linearity at different dose rates were checked. DMLC QA, comprising MLC transmission factor measurement and dosimetric leaf gap measurements, was performed using 0.13 cm³ and 0.65 cm³ Farmer-type ionization chambers, a Dose 1 dosimeter, and an IAEA 30 cm × 30 cm × 30 cm water phantom. The picket fence test, garden fence test, tests to check leaf positioning accuracy due to carriage movement, calibration of the leaves, leaf speed stability effects due to the acceleration and deceleration of leaves, accuracy and calibration of leaves in producing complex fields, effects of interleaf friction, etc. were verified using EDR2 therapy films, a Vidar scanner, Omnipro accept software, an amorphous silicon based electronic portal imaging device and EPIQA software [1-8]. All the DMLC-related quality assurance tests were performed and evaluated by film dosimetry, portal dosimetry and EPIQA [7]. The results confirmed that the linear accelerator is capable of performing accurate VMAT.

  19. Linear and Non-Linear Response of Liquid and Solid Particles to Energetic Radiation

    Science.gov (United States)

    1991-03-11

    Only a fragment of the abstract is available in the source record; the recoverable text refers to a beam, linearly or circularly polarized (with fifth-order corrections), illuminating the surface of a spherical particle, and to a schematic (Figure 1) of an imaging system whose incident illuminating radiation is generated from a pulsed source.

  20. Genomic Prediction Accuracy for Resistance Against Piscirickettsia salmonis in Farmed Rainbow Trout

    Directory of Open Access Journals (Sweden)

    Grazyella M. Yoshida

    2018-02-01

    Salmonid rickettsial syndrome (SRS), caused by the intracellular bacterium Piscirickettsia salmonis, is one of the main diseases affecting rainbow trout (Oncorhynchus mykiss) farming. To accelerate genetic progress, genomic selection methods can be used as an effective approach to control the disease. The aims of this study were: (i) to compare the accuracy of estimated breeding values using pedigree-based best linear unbiased prediction (PBLUP) with genomic BLUP (GBLUP), single-step GBLUP (ssGBLUP), Bayes C, and Bayesian Lasso (LASSO); and (ii) to test the accuracy of genomic prediction and PBLUP using different marker densities (0.5, 3, 10, 20, and 27 K) for resistance against P. salmonis in rainbow trout. Phenotypes were recorded as number of days to death (DD) and binary survival (BS) from 2416 fish challenged with P. salmonis. A total of 1934 fish were genotyped using a 57 K single-nucleotide polymorphism (SNP) array. All genomic prediction methods achieved higher accuracies than PBLUP. The relative increase in accuracy for different genomic models ranged from 28 to 41% for both DD and BS at 27 K SNP. Between different genomic models, the highest relative increase in accuracy was obtained with Bayes C (∼40%), where 3 K SNP was enough to achieve a similar accuracy to that of the 27 K SNP for both traits. For resistance against P. salmonis in rainbow trout, we showed that genomic predictions using GBLUP, ssGBLUP, Bayes C, and LASSO can increase accuracy compared with PBLUP. Moreover, it is possible to use relatively low-density SNP panels for genomic prediction without compromising accuracy predictions for resistance against P. salmonis in rainbow trout.

  1. Improving calibration accuracy in gel dosimetry

    International Nuclear Information System (INIS)

    Oldham, M.; McJury, M.; Webb, S.; Baustert, I.B.; Leach, M.O.

    1998-01-01

    A new method of calibrating gel dosimeters (applicable to both Fricke and polyacrylamide gels) is presented which has intrinsically higher accuracy than current methods, and requires less gel. Two test-tubes of gel (inner diameter 2.5 cm, length 20 cm) are irradiated separately with a 10 × 10 cm² field end-on in a water bath, such that the characteristic depth-dose curve is recorded in the gel. The calibration is then determined by fitting the depth-dose measured in water against the measured change in relaxivity with depth in the gel. Increased accuracy is achieved in this simple depth-dose geometry by averaging the relaxivity at each depth. A large number of calibration data points, each with relatively high accuracy, are obtained. Calibration data over the full range of dose (1.6-10 Gy) are obtained by irradiating one test-tube to 10 Gy at dose maximum (Dmax), and the other to 4.5 Gy at Dmax. The new calibration method is compared with a 'standard method' where five identical test-tubes of gel were irradiated to different known doses between 2 and 10 Gy. The percentage uncertainties in the slope and intercept of the calibration fit are found to be lower with the new method by factors of about 4 and 10, respectively, when compared with the standard method and with published values. The gel was found to respond linearly within the error bars up to doses of 7 Gy, with a slope of 0.233 ± 0.001 s⁻¹ Gy⁻¹ and an intercept of 1.106 ± 0.005 Gy. For higher doses, nonlinear behaviour was observed. (author)
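
    To illustrate the depth-dose calibration idea, the sketch below pairs a measured water depth-dose curve with relaxation-rate readings at the same depths and fits the dose-relaxivity line, then inverts it to convert a measured relaxation rate into dose. The numbers are invented for illustration and are not the published Fricke/polyacrylamide data.

```python
import numpy as np

# Hypothetical profile: depth (cm), dose measured in water (Gy), mean R2 relaxation rate in gel (1/s).
depth = np.array([1.5, 3.0, 5.0, 7.0, 10.0, 14.0, 18.0])
dose  = np.array([10.0, 9.3, 8.2, 7.2, 5.9, 4.4, 3.3])
r2    = np.array([3.45, 3.28, 3.03, 2.79, 2.49, 2.13, 1.88])

# Calibration: R2 = slope * dose + intercept, using many points along the depth-dose curve.
slope, intercept = np.polyfit(dose, r2, 1)

def dose_from_r2(r2_measured):
    """Invert the calibration to convert a measured relaxation rate to dose (Gy)."""
    return (r2_measured - intercept) / slope

print(f"slope={slope:.3f} s^-1 Gy^-1, intercept={intercept:.3f} s^-1, "
      f"dose at R2=2.6: {dose_from_r2(2.6):.2f} Gy")
```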

  2. Diffusion versus linear ballistic accumulation: different models but the same conclusions about psychological processes?

    NARCIS (Netherlands)

    Donkin, C.; Brown, S.; Heathcote, A.; Wagenmakers, E.-J.

    2011-01-01

    Quantitative models for response time and accuracy are increasingly used as tools to draw conclusions about psychological processes. Here we investigate the extent to which these substantive conclusions depend on whether researchers use the Ratcliff diffusion model or the Linear Ballistic Accumulator model.

  3. Linear Malignant Melanoma In Situ: Reports and Review of Cutaneous Malignancies Presenting as Linear Skin Cancer.

    Science.gov (United States)

    Cohen, Philip R

    2017-09-18

    Melanomas usually present as oval lesions in which the borders may be irregular. Other morphological features of melanoma include clinical asymmetry, variable color, diameter greater than 6 mm and evolving lesions. Two males whose melanoma in situ presented as linear skin lesions are described, and cutaneous malignancies that may appear linear in morphology are summarized in this report. A medical literature search engine, PubMed, was used to search the following terms: cancer, cutaneous, in situ, linear, malignant, malignant melanoma, melanoma in situ, neoplasm, and skin. The 25 papers generated by the search, and their references, were reviewed; 10 papers were selected for inclusion. Cancer of the skin typically presents as round lesions. However, basal cell carcinoma and squamous cell carcinoma may arise from primary skin conditions or benign skin neoplasms such as linear epidermal nevus and linear porokeratosis. In addition, linear tumors such as basal cell carcinoma can occur. The development of linear cutaneous neoplasms may occur secondary to skin tension lines or embryonal growth patterns (as reflected by the lines of Langer and lines of Blaschko) or exogenous factors such as prior radiation therapy. Cutaneous neoplasms, and specifically melanoma in situ, can be added to the list of linear skin lesions.

  4. Base Isolation for Seismic Retrofitting of a Multiple Building Structure: Evaluation of Equivalent Linearization Method

    Directory of Open Access Journals (Sweden)

    Massimiliano Ferraioli

    2016-01-01

    Although the most commonly used isolation systems exhibit nonlinear inelastic behaviour, equivalent linear elastic analysis is commonly used in the design and assessment of seismically isolated structures. The paper investigates whether the linear elastic model is suitable for the analysis of a seismically isolated multiple building structure. To this aim, its computed responses were compared with those calculated by nonlinear dynamic analysis. A common base isolation plane connects the isolation bearings supporting the adjacent structures. In this situation, the conventional equivalent linear elastic analysis may have problems of accuracy, because this method is calibrated on single base-isolated structures. Moreover, the torsional characteristics of the combined system are significantly different from those of the separate isolated buildings. A number of numerical simulations and parametric studies under earthquake excitations were performed. The accuracy of the dynamic response obtained by the equivalent linear elastic model was quantified by the magnitude of the error with respect to the corresponding response obtained considering the nonlinear behaviour of the isolation system. The maximum displacements at the isolation level, the maximum interstorey drifts, and the peak absolute accelerations were selected as the most important response measures. The influence of mass eccentricity, torsion, and higher-mode effects was finally investigated.

  5. Physical-dosimetric enabling a dual linear accelerator 3D planning systems for radiotherapy

    International Nuclear Information System (INIS)

    Alfonso, Rodolfo; Martinez, William; Arelis, Lores; Morales, Jorge

    2009-01-01

    The commissioning of a dual clinical linear accelerator requires a comprehensive study of the therapeutic beam parameters, for both photon and electron beams. All physical and dosimetric information obtained by measuring these beams must be analyzed, processed and refined for subsequent modeling in the computer-based radiotherapy treatment planning system (RTPS). The accuracy and precision of the calculated prescribed doses depend on how rigorously this process is carried out. This paper aims to demonstrate the clinical availability of the linear accelerator-RTPS system for radiotherapy treatments with shaped photon and electron beams. (author)

  6. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  7. Diagnostic accuracy of the Eurotest for dementia: a naturalistic, multicenter phase II study

    Directory of Open Access Journals (Sweden)

    Frank Ana

    2006-04-01

    Full Text Available Abstract Background Available screening tests for dementia are of limited usefulness because they are influenced by the patient's culture and educational level. The Eurotest, an instrument based on the knowledge and handling of money, was designed to overcome these limitations. The objective of this study was to evaluate the diagnostic accuracy of the Eurotest in identifying dementia in customary clinical practice. Methods A cross-sectional, multi-center, naturalistic phase II study was conducted. The Eurotest was administered to consecutive patients, older than 60 years, in general neurology clinics. The patients' condition was classified as dementia or no dementia according to DSM-IV diagnostic criteria. We calculated sensitivity (Sn), specificity (Sp) and area under the ROC curves (aROC), with 95% confidence intervals. The influence of social and educational factors on scores was evaluated with multiple linear regression analysis, and the influence of these factors on diagnostic accuracy was evaluated with logistic regression. Results Sixteen neurologists recruited a total of 516 participants: 101 with dementia, 380 without dementia, and 35 who were excluded. Of the 481 participants who took the Eurotest, 38.7% were totally or functionally illiterate and 45.5% had received no formal education. Mean time needed to administer the test was 8.2 ± 2.0 minutes. The best cut-off point was 20/21, with Sn = 0.91 (0.84–0.96), Sp = 0.82 (0.77–0.85), and aROC = 0.93 (0.91–0.95). Neither the scores on the Eurotest nor its diagnostic accuracy were influenced by social or educational factors. Conclusion This naturalistic and pragmatic study shows that the Eurotest is a rapid, simple and useful screening instrument, which is free from educational influences, and has appropriate internal and external validity.
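
    As an illustration of the accuracy measures reported above, the sketch below computes sensitivity, specificity and the area under the ROC curve for a score-based screening test with a fixed cut-off. It uses made-up scores and diagnoses (not the Eurotest data) and assumes scikit-learn is available.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical screening scores and diagnoses (1 = dementia); illustrative only.
    scores = np.array([12, 25, 18, 30, 15, 27, 10, 22, 28, 14])
    dementia = np.array([1, 0, 1, 0, 1, 0, 1, 0, 0, 1])

    cutoff = 20                                  # "20/21" style cut-off: a score <= 20 flags dementia
    flagged = (scores <= cutoff).astype(int)

    tp = np.sum((flagged == 1) & (dementia == 1))
    fn = np.sum((flagged == 0) & (dementia == 1))
    tn = np.sum((flagged == 0) & (dementia == 0))
    fp = np.sum((flagged == 1) & (dementia == 0))

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = roc_auc_score(dementia, -scores)       # lower scores indicate higher risk, hence the sign flip
    print(f"Sn = {sensitivity:.2f}, Sp = {specificity:.2f}, aROC = {auc:.2f}")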

  8. Dosimetric accuracy at low monitor unit setting in electron beams at different dose rates

    International Nuclear Information System (INIS)

    Ravikumar, M.; Ravichandran, R.; Supe, Sanjay S.; Sharma, Anil K.

    1999-01-01

    As electron beams with low monitor unit (LMU) settings are used in some dosimetric studies, a better understanding of the accuracy of dose delivery at LMU settings is necessary. The dose measurements were carried out with a 0.6 cm³ Farmer-type ion chamber at dmax in a polystyrene phantom. Measurements at different MUs show that the dose linearity ratio (DLR) increases as the MU setting decreases below 20 MU, and DLRs are found to increase when the incident electron beams have higher energies. The increase in DLR is smallest at the low dose rate setting for all five electron beam energies (6, 9, 12, 16 and 20 MeV). As the variation in dose delivery is machine-specific, a detailed study should be conducted before low MU settings are implemented. Since errors in dose delivery are high at higher dose rates, a low dose rate may be preferable at low MU settings. (author)

  9. Linear programming using Matlab

    CERN Document Server

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book  are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus.  The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...
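
    For readers without access to the book's MATLAB® code, the toy example below shows the standard linear-programming form that simplex-type solvers handle. It is a minimal sketch in Python using SciPy's HiGHS backend (which includes a simplex-style solver); the coefficients are arbitrary and not taken from the book.

    from scipy.optimize import linprog

    # Toy problem (arbitrary coefficients): maximize 3*x1 + 5*x2
    # subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x1, x2 >= 0.
    c = [-3, -5]                       # linprog minimizes, so negate the objective
    A_ub = [[1, 0], [0, 2], [3, 2]]
    b_ub = [4, 12, 18]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x, -res.fun)             # optimal point (2, 6) with objective value 36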

  10. Accuracy Assessment of Different Digital Surface Models

    Directory of Open Access Journals (Sweden)

    Ugur Alganci

    2018-03-01

    Full Text Available Digital elevation models (DEMs), which can occur in the form of digital surface models (DSMs) or digital terrain models (DTMs), are widely used as important geospatial information sources for various remote sensing applications, including the precise orthorectification of high-resolution satellite images, 3D spatial analyses, multi-criteria decision support systems, and deformation monitoring. The accuracy of DEMs has direct impacts on specific calculations and process chains; therefore, it is important to select the most appropriate DEM by considering the aim, accuracy requirement, and scale of each study. In this research, DSMs obtained from a variety of satellite sensors were compared to analyze their accuracy and performance. For this purpose, freely available Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m, Shuttle Radar Topography Mission (SRTM) 30 m, and Advanced Land Observing Satellite (ALOS) 30 m resolution DSM data were obtained. Additionally, 3 m and 1 m resolution DSMs were produced from tri-stereo images from the SPOT 6 and Pleiades high-resolution (PHR) 1A satellites, respectively. Elevation reference data provided by the General Command of Mapping, the national mapping agency of Turkey—produced from 30 cm spatial resolution stereo aerial photos, with a 5 m grid spacing and ±3 m or better overall vertical accuracy at the 90% confidence interval (CI)—were used to perform accuracy assessments. Gross errors and water surfaces were removed from the reference DSM. The relative accuracies of the different DSMs were tested using a different number of checkpoints determined by different methods. In the first method, 25 checkpoints were selected from bare lands to evaluate the accuracies of the DSMs on terrain surfaces. In the second method, 1000 randomly selected checkpoints were used to evaluate the methods' accuracies for the whole study area. In addition to the control point approach, vertical cross

  11. The diagnostic accuracy of integrated positron emission tomography ...

    African Journals Online (AJOL)

    The PET-CT findings, including maximum standardised uptake value (SUVmax), were compared with the gold standard (tissue or microbiological diagnosis). The sensitivity, specificity, positive and negative predictive values and diagnostic accuracy for malignant disease were calculated according to the SUVmax cut-off of ...

  12. [Comparative measurement of urine specific gravity: reagent strips, refractometry and hydrometry].

    Science.gov (United States)

    Costa, Christian Elías; Bettendorff, Carolina; Bupo, Sol; Ayuso, Sandra; Vallejo, Graciela

    2010-06-01

    Urine specific gravity is commonly used in clinical practice to measure the renal concentration/dilution ability. Measurement can be performed by three methods: hydrometry, refractometry and reagent strips. The objective was to assess the accuracy of the different methods of measuring urine specific gravity. We analyzed 156 consecutive urine samples of pediatric patients during April and May 2007. Urine specific gravity was measured by hydrometry (UD), refractometry (RE) and reagent strips (TR) simultaneously. Urine osmolarity, measured by freezing point depression, was considered the gold standard. Correlation between the different methods was calculated by simple linear regression. A positive and acceptable correlation with osmolarity was found for both RE and UD (r = 0.81 and r = 0.86, respectively). The reagent strips presented low correlation (r = 0.46). We also found good correlation between measurements obtained by UD and RE (r = 0.89), whereas measurements obtained by TR correlated poorly with UD (r = 0.46). Higher values of specific gravity were observed when measured with RE than with UD. Reagent strips are not reliable for measuring urine specific gravity and should not be used as a routine test. Hydrometry and refractometry, however, are acceptable alternatives for measuring urine specific gravity, as long as the same method is used for follow-up.
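
    The method-agreement analysis described above boils down to simple linear regression between paired readings. A minimal sketch, using made-up specific-gravity values rather than the study's data and assuming SciPy is available:

    import numpy as np
    from scipy import stats

    # Hypothetical paired urine specific-gravity readings (illustrative only).
    refractometer = np.array([1.010, 1.015, 1.020, 1.025, 1.030])
    hydrometer    = np.array([1.009, 1.016, 1.019, 1.026, 1.029])

    # Simple linear regression of one method on the other; rvalue is the correlation coefficient r.
    fit = stats.linregress(hydrometer, refractometer)
    print(f"r = {fit.rvalue:.2f}, slope = {fit.slope:.2f}, intercept = {fit.intercept:.4f}")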

  13. Linear thermal expansion data for tuffs from the unsaturated zone at Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Schwartz, B.M.; Chocas, C.S.

    1992-07-01

    Experimental results are presented for linear thermal expansion measurements on tuffaceous rocks from the unsaturated zone at Yucca Mountain, Nevada. Data were obtained both with and without confining pressure. The accuracy of the unconfined data collected between 50 and 250 degrees C is better than 1.8 percent, with the precision better than 4.5 percent. The accuracy of the unconfined data collected between ambient temperature and 50 degrees C is approximately 11 percent deviation from the true value, with a precision of 12 percent of the mean value. Because of the experiment design and the lack of information related to calibrations, the accuracy and precision of the confined thermal expansion measurements could not be determined.

  14. Geometric accuracy of field alignment in fractionated stereotactic conformal radiotherapy of brain tumors

    International Nuclear Information System (INIS)

    Kortmann, Rolf D.; Becker, Gerd; Perelmouter, Jury; Buchgeister, Markus; Meisner, Christoph; Bamberg, Michael

    1999-01-01

    Purpose: To assess the accuracy of field alignment in patients undergoing three-dimensional (3D) conformal radiotherapy of brain tumors, and to evaluate the impact on the definition of planning target volume and control procedures. Methods and Materials: Geometric accuracy was analyzed in 20 patients undergoing fractionated stereotactic conformal radiotherapy for brain tumors. Rigid head fixation was achieved by using cast material. Transfer of stereotactic coordinates was performed by an external positioning device. The accuracy during treatment planning was quantitatively assessed by using repeated computed tomography (CT) examinations in treatment position (reproducibility of isocenter). Linear discrepancies were measured between treatment plan and CT examination. In addition, for each patient, a series of 20 verifications were taken in orthogonal projections. Linear discrepancies were measured between first and all subsequent verifications (accuracy during treatment delivery). Results: For the total group of patients, the distribution of deviations during treatment setup showed mean values between -0.3 and 1.2 mm, with standard deviations (SD) of 1.3-2.0 mm. During treatment delivery, the distribution of deviations revealed mean values between 0.7 and 0.8 mm, with SDs of 0.5-0.6 mm, respectively. For all patients, deviations for the transition to the treatment machine were similar to deviations during subsequent treatment delivery, with 95% of all absolute deviations less than 2.8 to 4.6 mm. Conclusion: Random fluctuations of field displacements during treatment planning and delivery prevail. Therefore, our quantitative data should be considered when prescribing the safety margins of the planning target volume. Repeated CT examinations are useful to detect operator errors and large random or systematic deviations before start of treatment. Control procedures during treatment delivery appear to be of limited importance. In addition, our findings should help to

  15. Accuracy and Consistency of Respiratory Gating in Abdominal Cancer Patients

    International Nuclear Information System (INIS)

    Ge, Jiajia; Santanam, Lakshmi; Yang, Deshan; Parikh, Parag J.

    2013-01-01

    Purpose: To evaluate respiratory gating accuracy and intrafractional consistency for abdominal cancer patients treated with respiratory gated treatment on a regular linear accelerator system. Methods and Materials: Twelve abdominal patients implanted with fiducials were treated with amplitude-based respiratory-gated radiation therapy. On the basis of daily orthogonal fluoroscopy, the operator readjusted the couch position and gating window such that the fiducial was within a setup margin (fiducial-planning target volume [f-PTV]) when RPM indicated “beam-ON.” Fifty-five pre- and post-treatment fluoroscopic movie pairs with synchronized respiratory gating signal were recorded. Fiducial motion traces were extracted from the fluoroscopic movies using a template matching algorithm and correlated with f-PTV by registering the digitally reconstructed radiographs with the fluoroscopic movies. Treatment was determined to be “accurate” if 50% of the fiducial area stayed within f-PTV while beam-ON. For movie pairs that lost gating accuracy, a MATLAB program was used to assess whether the gating window was optimized, the external-internal correlation (EIC) changed, or the patient moved between movies. A series of safety margins from 0.5 mm to 3 mm was added to f-PTV for reassessing gating accuracy. Results: A decrease in gating accuracy was observed in 44% of movie pairs from daily fluoroscopic movies of 12 abdominal patients. Three main causes for inaccurate gating were identified as change of global EIC over time (∼43%), suboptimal gating setup (∼37%), and imperfect EIC within movie (∼13%). Conclusions: Inconsistent respiratory gating accuracy may occur within 1 treatment session even with a daily adjusted gating window. To improve or maintain gating accuracy during treatment, we suggest using at least a 2.5-mm safety margin to account for gating and setup uncertainties

  16. The Difference between Right and Wrong: Accuracy of Older and Younger Adults’ Story Recall

    Science.gov (United States)

    Davis, Danielle K.; Alea, Nicole; Bluck, Susan

    2015-01-01

    Sharing stories is an important social activity in everyday life. This study used fine-grained content analysis to investigate the accuracy of recall of two central story elements: the gist and detail of socially-relevant stories. Younger (M age = 28.06) and older (M age = 75.03) American men and women (N = 63) recalled fictional stories that were coded for (i) accuracy of overall gist and specific gist categories and (ii) accuracy of overall detail and specific detail categories. Findings showed no age group differences in accuracy of overall gist or detail, but differences emerged for specific categories. Older adults more accurately recalled the gist of when the event occurred whereas younger adults more accurately recalled the gist of why the event occurred. These differences were related to episodic memory ability and education. For accuracy in recalling details, there were some age differences, but gender differences were more robust. Overall, women remembered details of these social stories more accurately than men, particularly time and perceptual details. Women were also more likely to accurately remember the gist of when the event occurred. The discussion focuses on how accurate recall of socially-relevant stories is not clearly age-dependent but is related to person characteristics such as gender and episodic memory ability/education. PMID:26404344

  17. The Difference between Right and Wrong: Accuracy of Older and Younger Adults' Story Recall.

    Science.gov (United States)

    Davis, Danielle K; Alea, Nicole; Bluck, Susan

    2015-09-02

    Sharing stories is an important social activity in everyday life. This study used fine-grained content analysis to investigate the accuracy of recall of two central story elements: the gist and detail of socially-relevant stories. Younger (M age = 28.06) and older (M age = 75.03) American men and women (N = 63) recalled fictional stories that were coded for (i) accuracy of overall gist and specific gist categories and (ii) accuracy of overall detail and specific detail categories. Findings showed no age group differences in accuracy of overall gist or detail, but differences emerged for specific categories. Older adults more accurately recalled the gist of when the event occurred whereas younger adults more accurately recalled the gist of why the event occurred. These differences were related to episodic memory ability and education. For accuracy in recalling details, there were some age differences, but gender differences were more robust. Overall, women remembered details of these social stories more accurately than men, particularly time and perceptual details. Women were also more likely to accurately remember the gist of when the event occurred. The discussion focuses on how accurate recall of socially-relevant stories is not clearly age-dependent but is related to person characteristics such as gender and episodic memory ability/education.

  18. The Difference between Right and Wrong: Accuracy of Older and Younger Adults’ Story Recall

    Directory of Open Access Journals (Sweden)

    Danielle K. Davis

    2015-09-01

    Full Text Available Sharing stories is an important social activity in everyday life. This study used fine-grained content analysis to investigate the accuracy of recall of two central story elements: the gist and detail of socially-relevant stories. Younger (M age = 28.06) and older (M age = 75.03) American men and women (N = 63) recalled fictional stories that were coded for (i) accuracy of overall gist and specific gist categories and (ii) accuracy of overall detail and specific detail categories. Findings showed no age group differences in accuracy of overall gist or detail, but differences emerged for specific categories. Older adults more accurately recalled the gist of when the event occurred whereas younger adults more accurately recalled the gist of why the event occurred. These differences were related to episodic memory ability and education. For accuracy in recalling details, there were some age differences, but gender differences were more robust. Overall, women remembered details of these social stories more accurately than men, particularly time and perceptual details. Women were also more likely to accurately remember the gist of when the event occurred. The discussion focuses on how accurate recall of socially-relevant stories is not clearly age-dependent but is related to person characteristics such as gender and episodic memory ability/education.

  19. Identification of Nonlinear Dynamic Systems Possessing Some Non-linearities

    Directory of Open Access Journals (Sweden)

    Y. N. Pavlov

    2015-01-01

    For a second-order system with a "quadratic friction" nonlinearity combined with a "dry friction" nonlinearity, software was developed to simulate pseudo-experimental data containing random errors and to determine the parameters of the system. A computational experiment enabled an estimate of the accuracy with which the proposed algorithm determines the parameters of the system. The illustrative numerical simulation demonstrated that, with the proposed frequency-hodograph identification algorithm, the accuracy of the determined coefficients of the frequency transfer function of a second-order system with dry and quadratic friction is comparable with the measurement accuracy of the experimental samples of the system hodograph. Well-known publications do not mention this identification method for nonlinear dynamic systems. The identification method described in the article can find application in determining the parameters of various kinds of actuators. The combined use of harmonic linearization and hodograph-based identification of dynamical systems is promising for solving the problem of identifying nonlinear systems with different types of nonlinearities.

  20. A Survey on the Accuracy of Radiovisiography in the Assessment of Interproximal Intrabony Defects

    Directory of Open Access Journals (Sweden)

    A.R. Talaiepour

    2005-03-01

    Full Text Available Statement of problem: Digital measurement with RVG may improve the diagnostic interpretation of radiographs in terms of accuracy, although it has been shown that the validity of linear measurements of interproximal bone loss could not be improved by basic digital manipulations. Purpose: The aim of this study was to evaluate the accuracy of RadioVisioGraphy (RVG) in the linear measurement of interproximal bone loss in intrabony defects. Materials and Methods: Thirty-two radiographs of 56 periodontally diseased teeth exhibiting interproximal intrabony defects were obtained by a standardized RVG technique, and intrabony defect depths were determined by linear measurement analysis of RVG. The following four distances were assessed intrasurgically: the cemento-enamel junction (CEJ) to the alveolar crest (AC), the CEJ to the deepest extension of the bony defect (BD), the occlusal plane (OP) to the BD, and the OP to the AC. Comparisons between RVG measures and intrasurgical estimates were performed using a paired t-test. Results: The radiographic measurements overestimated interproximal bone loss as compared to the intrasurgical measurements: the CEJ-BD measurement by RVG was 6.803±3.589 mm and intrasurgically was 6.492±3.492 mm (P<0.000). No statistically significant difference was seen between the CEJ and occlusal references in RVG measurements (P<0.729). Conclusion: Radiographic assessment by either the CEJ or occlusal references overestimated bone loss as compared to the intrasurgical gold standard.
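
    The comparison of radiographic with intrasurgical measurements above rests on a paired t-test. A minimal sketch with made-up distances (not the study's measurements), assuming SciPy is available:

    import numpy as np
    from scipy import stats

    # Hypothetical paired CEJ-BD distances in mm (illustrative only).
    rvg_mm      = np.array([6.9, 7.4, 5.1, 8.2, 6.0, 7.8])
    surgical_mm = np.array([6.5, 7.0, 4.9, 7.9, 5.8, 7.5])

    # Paired t-test: does the radiographic method systematically over- or under-estimate bone loss?
    t_stat, p_value = stats.ttest_rel(rvg_mm, surgical_mm)
    print(f"mean difference = {np.mean(rvg_mm - surgical_mm):.2f} mm, t = {t_stat:.2f}, p = {p_value:.4f}")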

  1. Diagnostic accuracy of sonoelastography in different diseases

    Directory of Open Access Journals (Sweden)

    Iqra Manzoor

    2018-03-01

    Full Text Available The objective of this study was to evaluate the diagnostic accuracy of sonoelastography in patients of primary and secondary health care settings. Google Scholar, PubMed, Medline, Medscape, Wikipedia and NCBI were searched in October 2017 for all original studies and review articles to identify the relevant material. Two reviewers independently selected articles for evaluation of the diagnostic accuracy of sonoelastography in different diseases based on the titles and abstracts retrieved by the literature search. The accuracy of sonoelastography in different diseases was used as the index test, while B-mode sonography, micro pure imaging, surgery and histological findings were used as reference tests. Superficial lymph nodes, neck nodules, malignancy in thyroid nodules, benign and malignant cervical lymph nodes, thyroid nodules, prostate carcinoma, benign and malignant breast abnormalities, liver diseases, parotid and salivary gland masses, pancreatic masses, musculoskeletal diseases and renal disorders were the target conditions. The data extracted by the two reviewers concerning selected study characteristics and results were presented in tables and figures. In total, 46 studies were found for breast masses, lymph nodes, prostate carcinoma, liver diseases, salivary and parotid gland diseases, pancreatic masses, musculoskeletal diseases and renal diseases; the overall sensitivity of sonoelastography in diagnosing all these diseases was 83.14% while the specificity was 81.41%. This literature review demonstrates that sonoelastography offers high sensitivity and specificity in diagnosing different disorders of the body.

  2. Linear Model for Optimal Distributed Generation Size Predication

    Directory of Open Access Journals (Sweden)

    Ahmed Al Ameri

    2017-01-01

    Full Text Available This article presents a linear model for predicting the optimal size of Distributed Generation (DG) that minimizes power loss. The method is based fundamentally on the strong coupling between active power and voltage angle, as well as between reactive power and voltage magnitude. The paper proposes a simplified method to calculate the total power losses in an electrical grid for different distributed generation sizes and locations. The method has been implemented and tested on several IEEE bus test systems. The results show that the proposed method is capable of predicting the approximate optimal size of DG when compared with precise calculations. Linearizing the complex model gives good results and actually reduces the processing time required. The acceptable accuracy, with less time and memory required, can help the grid operator to assess power systems integrating large-scale distributed generation.

  3. Energy expenditure prediction via a footwear-based physical activity monitor: Accuracy and comparison to other devices

    Science.gov (United States)

    Dannecker, Kathryn

    2011-12-01

    Accurately estimating free-living energy expenditure (EE) is important for monitoring or altering energy balance and quantifying levels of physical activity. The use of accelerometers to monitor physical activity and estimate physical activity EE is common in both research and consumer settings. Recent advances in physical activity monitors include the ability to identify specific activities (e.g. stand vs. walk), which has resulted in improved EE estimation accuracy. Recently, a multi-sensor footwear-based physical activity monitor that is capable of achieving 98% activity identification accuracy has been developed. However, no study has assessed the EE estimation accuracy of this monitor or compared it to other similar devices. Purpose. To determine the accuracy of physical activity EE estimation of a footwear-based physical activity monitor that uses an embedded accelerometer and insole pressure sensors, and to compare this accuracy against a variety of research and consumer physical activity monitors. Methods. Nineteen adults (10 male, 9 female), mass: 75.14 (17.1) kg, BMI: 25.07 (4.6) kg/m2 (mean (SD)), completed a four hour stay in a room calorimeter. Participants wore a footwear-based physical activity monitor, as well as three physical activity monitoring devices used in research: hip-mounted Actical and Actigraph accelerometers and a multi-accelerometer IDEEA device with sensors secured to the limb and chest. In addition, participants wore two consumer devices: Philips DirectLife and Fitbit. Each individual performed a series of randomly assigned and ordered postures/activities including lying, sitting (quietly and using a computer), standing, walking, stepping, cycling, sweeping, as well as a period of self-selected activities. We developed branched (i.e. activity specific) linear regression models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. Results. The shoe
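
    The branched (activity-specific) regression approach mentioned in the Methods amounts to fitting a separate linear model for each identified activity and routing each observation to the model for its activity class. A minimal sketch with synthetic sensor features and labels (not the study's data), assuming scikit-learn is available:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Hypothetical per-epoch sensor features, activity labels and measured EE (illustrative only).
    features = rng.normal(size=(300, 3))
    activity = rng.choice(["sit", "walk", "cycle"], size=300)
    ee = (1.0 + features @ np.array([0.5, 0.2, 0.1])
          + (activity == "walk") * 2.0 + (activity == "cycle") * 3.0)

    # Branched regression: one linear model per identified activity.
    branch = {a: LinearRegression().fit(features[activity == a], ee[activity == a])
              for a in np.unique(activity)}

    def predict_ee(x, act):
        return branch[act].predict(x.reshape(1, -1))[0]

    print(predict_ee(features[0], activity[0]))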

  4. CERN balances linear collider studies

    CERN Multimedia

    ILC Newsline

    2011-01-01

    The forces behind the two most mature proposals for a next-generation collider, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) study, have been steadily coming together, with scientists from both communities sharing ideas and information across the technology divide. In support of cooperation between the two, CERN in Switzerland, where most CLIC research takes place, recently converted the project-specific position of CLIC Study Leader to the concept-based Linear Collider Study Leader.   The scientist who now holds this position, Steinar Stapnes, is charged with making the linear collider a viable option for CERN’s future, one that could include either CLIC or the ILC. The transition to greater involvement with the ILC must be gradual, he said, and the redefinition of his post is a good start. Though not very much involved with superconducting radiofrequency (SRF) technology, where ILC researchers have made significant advances, CERN participates in many aspect...

  5. Three-dimensional technology for linear morphological studies: a re-examination of cranial variation in four southern African indigenous populations.

    Science.gov (United States)

    Franklin, D; Freedman, L; Milne, N

    2005-01-01

    In order to compare linear dimensions made by traditional anthropometric techniques with those obtained from three-dimensional coordinates, samples of four indigenous southern African populations were analysed. Linear measurements were obtained using mathematically transformed, three-dimensional landmark data on 207 male crania of Cape Nguni, Natal Nguni, Sotho and Shangaan. Univariate comparisons for accuracy of the transformed linear data were made with those in a traditional linear study by de Villiers (The Skull of the South African Negro: A Biometrical and Morphological Study. Witwatersrand University Press, Johannesburg) on similar samples and equivalent landmarks. Comparisons were not made with her Penrose (Ann Eugenics 18 (1954) 337) analysis because an apparently anomalous 'shape'-'size' statistic was found. The univariate comparisons demonstrated that accurate linear measurements could be derived from three-dimensional data, showing that it is possible to simultaneously obtain data for three-dimensional geometric 'shape' and linear interlandmark analyses. Using Penrose and canonical variates analyses of the transformed three-dimensional interlandmark measurements, similar population distances were found for the four indigenous southern African populations. The inter-population distance relationships took the form of three separated pairs of distances, with the within-pair distances very similar in size. The cranial features of the four populations were found to be, overall, very similar morphometrically. However, the populations were each shown by CVA to have population-specific features, and using discriminant analyses 50% or more of the individual crania (with the exception of the Sotho) could be referred to their correct populations.
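
    Deriving linear (interlandmark) dimensions from three-dimensional landmark coordinates, as described above, is simply a Euclidean-distance computation. A minimal sketch with hypothetical landmark names and coordinates (illustrative only, not the study's data):

    import numpy as np

    # Hypothetical 3D cranial landmarks (x, y, z) in mm; names and values are placeholders.
    landmarks = {
        "nasion":       np.array([0.0, 95.0, 30.0]),
        "basion":       np.array([0.0, 0.0, 0.0]),
        "euryon_left":  np.array([-70.0, 40.0, 25.0]),
        "euryon_right": np.array([70.0, 40.0, 25.0]),
    }

    def interlandmark_distance(a, b):
        # A linear measurement is the Euclidean norm of the difference between two landmarks.
        return float(np.linalg.norm(landmarks[a] - landmarks[b]))

    print(interlandmark_distance("nasion", "basion"))
    print(interlandmark_distance("euryon_left", "euryon_right"))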

  6. THE ACCURACY OF Hβ CCD PHOTOMETRY

    Directory of Open Access Journals (Sweden)

    C. Kim

    1994-12-01

    Full Text Available We have undertaken CCD observations of field standard stars with the Hβ photometric system to investigate the reliability of Hβ CCD photometry. Flat fielding with dome flats and sky flats for the Hβw and Hβn filters was compared with that for the B filter in the UBV system and, from these, we found no difference. It was confirmed that there is a good linear relationship between our Hβ values observed with the 2.3 m reflector and the standard values. However, Hβ values observed with the 60 cm reflector at Sobaeksan Astronomy Observatory showed a very poor relationship. To investigate the accuracy of Hβ CCD photometry for fainter objects, the open cluster NGC 2437 was observed and reduced with DoPHOT, and the results were compared with those from the photoelectric photometry of Stetson (1981).

  7. Linear study of Kelvin-Helmholtz instability for a viscous compressible fluid

    International Nuclear Information System (INIS)

    Hallo, L.; Gauthier, S.

    1992-01-01

    The linear phase of the process leading to developed turbulence is particularly important for the study of flow stability. A Galerkin spectral method, adapted to the study of the mixing layer of a fluid with a sheared initial velocity profile, is proposed. An algebraic mapping is developed to improve accuracy near high-gradient zones. Validation is obtained by analytic methods for non-viscous flow and by multi-domain spectral methods for viscous and compressible flow. Growth rates are presented for subsonic and slightly supersonic flow. An extension of the method is presented for the study of the linear stability of a mixture with variable concentration and transport properties.

  8. Effective connectivity between superior temporal gyrus and Heschl's gyrus during white noise listening: linear versus non-linear models.

    Science.gov (United States)

    Hamid, Ka; Yusoff, An; Rahman, Mza; Mohamad, M; Hamid, Aia

    2012-04-01

    This fMRI study is about modelling the effective connectivity between Heschl's gyrus (HG) and the superior temporal gyrus (STG) in human primary auditory cortices. MATERIALS & METHODS: Ten healthy male participants were required to listen to white noise stimuli during functional magnetic resonance imaging (fMRI) scans. Statistical parametric mapping (SPM) was used to generate individual and group brain activation maps. For input region determination, two intrinsic connectivity models comprising bilateral HG and STG were constructed using dynamic causal modelling (DCM). The models were estimated and inferred using DCM, while Bayesian Model Selection (BMS) for group studies was used for model comparison and selection. Based on the winning model, six linear and six non-linear causal models were derived and were again estimated, inferred, and compared to obtain a model that best represents the effective connectivity between HG and the STG, balancing accuracy and complexity. Group results indicated significant asymmetrical activation (p(uncorr)). Model comparison results showed strong evidence of STG as the input centre. The winning model is preferred by 6 out of 10 participants. The results were supported by BMS results for group studies with the expected posterior probability, r = 0.7830, and exceedance probability, ϕ = 0.9823. One-sample t-tests performed on connection values obtained from the winning model indicated that the valid connections for the winning model are the unidirectional parallel connections from STG to bilateral HG. A model comparison between linear and non-linear models using BMS prefers the non-linear model (r = 0.9160, ϕ = 1.000), in which the connectivity between STG and the ipsi- and contralateral HG is gated by the activity in STG itself. We are able to demonstrate that the effective connectivity between HG and STG while listening to white noise for the respective participants can be explained by a non-linear dynamic causal model with

  9. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discrimination analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
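
    As a rough modern analogue of the comparison above, the sketch below contrasts a Gaussian Bayes classifier that accepts prior class probabilities with linear discriminant analysis on synthetic two-band "pixels". It is illustrative only: scikit-learn's GaussianNB assumes independent features, unlike the full maximum likelihood parametric classifier, and the data are not Landsat measurements.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    # Synthetic two-band reflectance "pixels" for two land-cover classes (illustrative only).
    X = np.vstack([rng.normal([0.2, 0.4], 0.05, (200, 2)),
                   rng.normal([0.5, 0.3], 0.05, (100, 2))])
    y = np.array([0] * 200 + [1] * 100)

    bayes = GaussianNB(priors=[2 / 3, 1 / 3]).fit(X, y)    # uses a priori class probabilities
    lda = LinearDiscriminantAnalysis().fit(X, y)           # data-based linear discriminant analysis

    print("Bayes training accuracy:", bayes.score(X, y))
    print("LDA   training accuracy:", lda.score(X, y))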

  10. Evidence for a confidence-accuracy relationship in memory for same- and cross-race faces.

    Science.gov (United States)

    Nguyen, Thao B; Pezdek, Kathy; Wixted, John T

    2017-12-01

    Discrimination accuracy is usually higher for same- than for cross-race faces, a phenomenon known as the cross-race effect (CRE). According to prior research, the CRE occurs because memories for same- and cross-race faces rely on qualitatively different processes. However, according to a continuous dual-process model of recognition memory, memories that rely on qualitatively different processes do not differ in recognition accuracy when confidence is equated. Thus, although there are differences in overall same- and cross-race discrimination accuracy, confidence-specific accuracy (i.e., recognition accuracy at a particular level of confidence) may not differ. We analysed datasets from four recognition memory studies on same- and cross-race faces to test this hypothesis. Confidence ratings reliably predicted recognition accuracy when performance was above chance levels (Experiments 1, 2, and 3) but not when performance was at chance levels (Experiment 4). Furthermore, at each level of confidence, confidence-specific accuracy for same- and cross-race faces did not significantly differ when overall performance was above chance levels (Experiments 1, 2, and 3) but significantly differed when overall performance was at chance levels (Experiment 4). Thus, under certain conditions, high-confidence same-race and cross-race identifications may be equally reliable.

  11. Alignment of the Stanford Linear Collider Arcs

    International Nuclear Information System (INIS)

    Pitthan, R.; Bell, B.; Friedsam, H.

    1987-01-01

    The alignment of the Arcs for the Stanford Linear Collider at SLAC has posed problems in accelerator survey and alignment not encountered before. These problems come less from the tight tolerances of 0.1 mm, although reaching such a tight, statistically defined accuracy in a controlled manner is difficult enough, than from the absence of a common reference plane for the Arcs. Traditional circular accelerators, including HERA and LEP, have been designed in one plane referenced to local gravity. For the SLC Arcs no such single plane exists. Methods and concepts developed to solve these and other problems, connected with the unique design of the SLC, range from the first use of satellites for accelerator alignment, use of electronic laser theodolites for placement of components, computer control of the manual adjustment process, complete automation of the data flow incorporating the most advanced concepts of geodesy, strict separation of survey and alignment, to linear principal component analysis for the final statistical smoothing of the mechanical components.

  12. Effects of appraisal purpose and rating format on performance appraisal accuracy

    OpenAIRE

    Carter, Marta L.

    1989-01-01

    The principle of encoding specificity states that effective information retrieval relies upon consistency of encoding and retrieval cues. The present study generalized this principle to a complex social interaction in order to investigate the relation between certain combinations of pre- and post-observational cues and their effects on information categorization, recognition accuracy, and judgment accuracy. It was hypothesized that two experimental factors, appraisal pur...

  13. Lattice Boltzmann methods for global linear instability analysis

    Science.gov (United States)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flows and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of the appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement that would make the proposed methodology competitive with established approaches for global instability analysis are discussed.

  14. Extending the accuracy of the SNAP interatomic potential form

    Science.gov (United States)

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-06-01

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similar to artificial neural network potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting. The quality of this new potential form is measured through a robust cross-validation analysis.
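
    The linear-to-quadratic extension described above can be mimicked, in spirit, by augmenting a linear fit on per-atom descriptors with all pairwise products of those descriptors. The sketch below does this with generic synthetic descriptors and ridge regression; it is only an illustration of the idea, not the actual SNAP bispectrum machinery or training data.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    # Stand-in "bispectrum components" per atom and target energies (synthetic, illustrative only).
    B = rng.normal(size=(500, 5))
    E = (B @ np.array([1.0, -0.5, 0.3, 0.1, 0.05])
         + 0.2 * B[:, 0] * B[:, 1] + rng.normal(0, 0.01, 500))

    linear_fit = Ridge(alpha=1e-6).fit(B, E)

    # Quadratic form: keep the linear terms and add all squares and pairwise products.
    quad = PolynomialFeatures(degree=2, include_bias=False)
    B_quad = quad.fit_transform(B)
    quadratic_fit = Ridge(alpha=1e-6).fit(B_quad, E)

    print("linear    R^2:", linear_fit.score(B, E))
    print("quadratic R^2:", quadratic_fit.score(B_quad, E))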

  15. Accuracy in Optical Information Processing

    Science.gov (United States)

    Timucin, Dogan Aslan

    Low computational accuracy is an important obstacle for optical processors which blocks their way to becoming a practical reality and a serious challenger for classical computing paradigms. This research presents a comprehensive solution approach to the problem of accuracy enhancement in discrete analog optical information processing systems. Statistical analysis of a generic three-plane optical processor is carried out first, taking into account the effects of diffraction, interchannel crosstalk, and background radiation. Noise sources included in the analysis are photon, excitation, and emission fluctuations in the source array, transmission and polarization fluctuations in the modulator, and photoelectron, gain, dark, shot, and thermal noise in the detector array. Means and mutual coherence and probability density functions are derived for both optical and electrical output signals. Next, statistical models for a number of popular optoelectronic devices are studied. Specific devices considered here are light-emitting and laser diode sources, an ideal noiseless modulator and a Gaussian random-amplitude-transmittance modulator, p-i-n and avalanche photodiode detectors followed by electronic postprocessing, and ideal free-space geometrical -optics propagation and single-lens imaging systems. Output signal statistics are determined for various interesting device combinations by inserting these models into the general formalism. Finally, based on these special-case output statistics, results on accuracy limitations and enhancement in optical processors are presented. Here, starting with the formulation of the accuracy enhancement problem as (1) an optimal detection problem and (2) as a parameter estimation problem, the potential accuracy improvements achievable via the classical multiple-hypothesis -testing and maximum likelihood and Bayesian parameter estimation methods are demonstrated. Merits of using proper normalizing transforms which can potentially stabilize

  16. An Improved Isotropic Periodic Sum Method That Uses Linear Combinations of Basis Potentials

    KAUST Repository

    Takahashi, Kazuaki Z.; Narumi, Tetsu; Suh, Donguk; Yasuoka, Kenji

    2012-01-01

    Isotropic periodic sum (IPS) is a technique that calculates long-range interactions differently than conventional lattice sum methods. The difference between IPS and lattice sum methods lies in the shape and distribution of remote images for long-range interaction calculations. The images used in lattice sum calculations are identical to those generated from periodic boundary conditions and are discretely positioned at lattice points in space. The images for IPS calculations are "imaginary", which means they do not explicitly exist in a simulation system and are distributed isotropically and periodically around each particle. Two different versions of the original IPS method exist. The IPSn method is applied to calculations for point charges, whereas the IPSp method calculates polar molecules. However, both IPSn and IPSp have their advantages and disadvantages in simulating bulk water or water-vapor interfacial systems. In bulk water systems, the cutoff radius effect of IPSn strongly affects the configuration, whereas IPSp does not provide adequate estimations of water-vapor interfacial systems unless very long cutoff radii are used. To extend the applicability of the IPS technique, an improved IPS method, which has better accuracy in both homogeneous and heterogeneous systems has been developed and named the linear-combination-based isotropic periodic sum (LIPS) method. This improved IPS method uses linear combinations of basis potentials. We performed molecular dynamics (MD) simulations of bulk water and water-vapor interfacial systems to evaluate the accuracy of the LIPS method. For bulk water systems, the LIPS method has better accuracy than IPSn in estimating thermodynamic and configurational properties without the countercharge assumption, which is used for IPSp. For water-vapor interfacial systems, LIPS has better accuracy than IPSp and properly estimates thermodynamic and configurational properties. In conclusion, the LIPS method can successfully estimate

  17. An Improved Isotropic Periodic Sum Method That Uses Linear Combinations of Basis Potentials

    KAUST Repository

    Takahashi, Kazuaki Z.

    2012-11-13

    Isotropic periodic sum (IPS) is a technique that calculates long-range interactions differently than conventional lattice sum methods. The difference between IPS and lattice sum methods lies in the shape and distribution of remote images for long-range interaction calculations. The images used in lattice sum calculations are identical to those generated from periodic boundary conditions and are discretely positioned at lattice points in space. The images for IPS calculations are "imaginary", which means they do not explicitly exist in a simulation system and are distributed isotropically and periodically around each particle. Two different versions of the original IPS method exist. The IPSn method is applied to calculations for point charges, whereas the IPSp method calculates polar molecules. However, both IPSn and IPSp have their advantages and disadvantages in simulating bulk water or water-vapor interfacial systems. In bulk water systems, the cutoff radius effect of IPSn strongly affects the configuration, whereas IPSp does not provide adequate estimations of water-vapor interfacial systems unless very long cutoff radii are used. To extend the applicability of the IPS technique, an improved IPS method, which has better accuracy in both homogeneous and heterogeneous systems has been developed and named the linear-combination-based isotropic periodic sum (LIPS) method. This improved IPS method uses linear combinations of basis potentials. We performed molecular dynamics (MD) simulations of bulk water and water-vapor interfacial systems to evaluate the accuracy of the LIPS method. For bulk water systems, the LIPS method has better accuracy than IPSn in estimating thermodynamic and configurational properties without the countercharge assumption, which is used for IPSp. For water-vapor interfacial systems, LIPS has better accuracy than IPSp and properly estimates thermodynamic and configurational properties. In conclusion, the LIPS method can successfully estimate

  18. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Science.gov (United States)

    Drzewiecki, Wojciech

    2016-12-01

    In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas both for accuracy of imperviousness coverage evaluation in individual points in time and accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on obtained results Cubist algorithm may be advised for Landsat based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. Heterogeneous model ensembles performed for individual time points assessments at least as well as the best individual models. In case of imperviousness change assessment the ensembles always outperformed single model approaches. It means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
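
    A heterogeneous regression ensemble of the kind evaluated above can be approximated by simply averaging the predictions of differently structured regressors. A minimal sketch on synthetic data (not Landsat-derived imperviousness fractions), assuming scikit-learn is available:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    # Synthetic stand-in for per-pixel spectral features and an imperviousness target.
    X, y = make_regression(n_samples=600, n_features=8, noise=5.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = [RandomForestRegressor(n_estimators=200, random_state=0),
              GradientBoostingRegressor(random_state=0),
              KNeighborsRegressor(n_neighbors=7)]
    preds = [m.fit(X_train, y_train).predict(X_test) for m in models]

    # Heterogeneous ensemble: average the individual (and hopefully weakly correlated) predictions.
    ensemble = np.mean(preds, axis=0)
    for m, p in zip(models, preds):
        print(type(m).__name__, round(mean_absolute_error(y_test, p), 2))
    print("Ensemble", round(mean_absolute_error(y_test, ensemble), 2))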

  19. Accuracy Constraint Determination in Fixed-Point System Design

    Directory of Open Access Journals (Sweden)

    Serizel R

    2008-01-01

    Full Text Available Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution time and accuracy constraints. This accuracy constraint is linked to the application performance, and the determination of this constraint is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite-precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally, the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
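
    The accuracy constraint in such a flow is usually expressed as a bound on the quantization noise power (or the resulting SNR) at the system output. The sketch below merely illustrates how shrinking the fixed-point word length degrades output SNR; the signal, rounding scheme and bit widths are arbitrary and unrelated to the paper's noise model.

    import numpy as np

    def quantize(x, frac_bits):
        # Round to a fixed-point grid with the given number of fractional bits.
        step = 2.0 ** (-frac_bits)
        return np.round(x / step) * step

    rng = np.random.default_rng(0)
    signal = np.sin(2 * np.pi * 0.01 * np.arange(4096)) + 0.1 * rng.normal(size=4096)

    for frac_bits in (4, 8, 12, 16):
        noise = quantize(signal, frac_bits) - signal
        snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
        print(f"{frac_bits:2d} fractional bits -> output SNR ~ {snr_db:5.1f} dB")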

  20. Performance assessment of a non-linear eddy-viscosity turbulence model applied to the anisotropic wake flow of a low-pressure turbine blade

    International Nuclear Information System (INIS)

    Vlahostergios, Z.; Sideridis, A.; Yakinthos, K.; Goulas, A.

    2012-01-01

    Highlights: ► We model the wake flow produced by a LPT blade using a non-linear turbulence model. ► We use two interpolation schemes for the convection terms with different accuracy. ► We investigate the effect of each term of the non-linear constitutive expression. ► The results are compared with available experimental measurements. ► The model predicts with a good accuracy the velocity and stress distributions. - Abstract: The wake flow produced by a low-pressure turbine blade is modeled using a non-linear eddy-viscosity turbulence model. The theoretical benefit of using a non-linear eddy-viscosity model is strongly related to the capability of resolving highly anisotropic flows in contrast to the linear turbulence models, which are unable to correctly predict anisotropy. The main aim of the present work is to practically assess the performance of the model, by examining its ability to capture the anisotropic behavior of the wake-flow, mainly focusing on the measured velocity and Reynolds-stress distributions and to provide accurate results for the turbulent kinetic energy balance terms. Additionally, the contribution of each term of its non-linear constitutive expression for the Reynolds stresses is also investigated, in order to examine their direct effect on the modeling of the wake flow. The assessment is based on the experimental measurements that have been carried-out by the same group in Thessaloniki, Sideridis et al. (2011). The computational results show that the non-linear eddy viscosity model is capable to predict, with a good accuracy, all the flow and turbulence parameters while it is easy to program it in a computer code thus meeting the expectations of its originators.

  1. Cascade Structure of Digital Predistorter for Power Amplifier Linearization

    Directory of Open Access Journals (Sweden)

    E. B. Solovyeva

    2015-12-01

    Full Text Available In this paper, a cascade structure of a nonlinear digital predistorter (DPD) synthesized by a direct learning adaptive algorithm is presented. The DPD is used for linearization of the power amplifier (PA) characteristic, namely for compensation of PA nonlinear distortion. The blocks of the cascade DPD are described by different models: the functional link artificial neural network (FLANN), the polynomial perceptron network (PPN) and the radially pruned Volterra model (RPVM). In synthesizing the cascade DPD, the ill-conditioning problem can be overcome by reducing the dimension of the DPD nonlinear operator approximation. Results of compensating nonlinear distortion in a Wiener–Hammerstein model of a PA driven by a GSM signal with four carriers are shown. The highest accuracy of PA linearization is produced by the cascade DPD containing the PPN and the RPVM.

  2. Linear Algebra and Smarandache Linear Algebra

    OpenAIRE

    Vasantha, Kandasamy

    2003-01-01

    The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...

  3. The number of subjects per variable required in linear regression analyses.

    Science.gov (United States)

    Austin, Peter C; Steyerberg, Ewout W

    2015-06-01

    To determine the number of independent variables that can be included in a linear regression model. We used a series of Monte Carlo simulations to examine the impact of the number of subjects per variable (SPV) on the accuracy of estimated regression coefficients and standard errors, on the empirical coverage of estimated confidence intervals, and on the accuracy of the estimated R² of the fitted model. A minimum of approximately two SPV tended to result in estimation of regression coefficients with relative bias of less than 10%. Furthermore, with this minimum number of SPV, the standard errors of the regression coefficients were accurately estimated and estimated confidence intervals had approximately the advertised coverage rates. A much higher number of SPV was necessary to minimize bias in estimating the model R², although adjusted R² estimates behaved well. The bias in estimating the model R² statistic was inversely proportional to the magnitude of the proportion of variation explained by the population regression model. Linear regression models require only two SPV for adequate estimation of regression coefficients, standard errors, and confidence intervals. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
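
    The kind of simulation described can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it draws repeated data sets for a given SPV and reports the relative bias of the ordinary least-squares coefficients; the number of variables, the true coefficients, and the noise model are assumptions chosen only for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vars, true_beta, n_sims = 5, np.ones(5), 2000

    def relative_bias(spv):
        # Average the estimated coefficients over many simulated data sets
        n = spv * n_vars
        estimates = np.empty((n_sims, n_vars))
        for i in range(n_sims):
            X = rng.normal(size=(n, n_vars))
            y = X @ true_beta + rng.normal(size=n)
            estimates[i] = np.linalg.lstsq(X, y, rcond=None)[0]
        return (estimates.mean(axis=0) - true_beta) / true_beta

    for spv in (2, 10, 50):
        print(spv, np.round(relative_bias(spv) * 100, 2), "% relative bias")
    ```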

  4. Ways for improvement of the LIU-5/5000 linear induction accelerator parameters

    International Nuclear Information System (INIS)

    Bobylev, V.I.; Kapchinskij, I.M.; Lapitskij, Yu.Ya.; Plotnikov, V.K.; Chuvilo, I.V.

    1987-01-01

    The reasons for the limitations on increasing the beam current and improving the beam quality in the LIU-5/5000 electron linear induction accelerator are studied. The necessity of increasing the voltage in the gaps of the electron gun, enlarging the cathode diameter and the drift-tube aperture, and improving the accuracy of the axial symmetry of the gun's current-carrying elements and of the gun fabrication is shown. Stabilization of the beam parameters requires new high-voltage modulators. Different versions of the linac modernization using transformers with cores of 430 and 600 mm are studied. The technical possibilities of several versions of the high-voltage modulators are discussed

  5. Classification of Alzheimer's Disease Based on Eight-Layer Convolutional Neural Network with Leaky Rectified Linear Unit and Max Pooling.

    Science.gov (United States)

    Wang, Shui-Hua; Phillips, Preetha; Sui, Yuxiu; Liu, Bin; Yang, Ming; Cheng, Hong

    2018-03-26

    Alzheimer's disease (AD) is a progressive brain disease. The goal of this study is to provide a new computer-vision-based technique to detect it in an efficient way. The brain-imaging data of 98 AD patients and 98 healthy controls were collected and expanded using a data augmentation method. Then, a convolutional neural network (CNN), the most successful tool in deep learning, was used. An 8-layer CNN was created, with the optimal structure obtained by experience. Three activation functions (AFs) were tested: sigmoid, rectified linear unit (ReLU), and leaky ReLU. Three pooling functions were also tested: average pooling, max pooling, and stochastic pooling. The numerical experiments demonstrated that leaky ReLU and max pooling gave the best performance, achieving a sensitivity of 97.96%, a specificity of 97.35%, and an accuracy of 97.65%. In addition, the proposed approach was compared with eight state-of-the-art approaches and increased the classification accuracy by approximately 5% compared to those methods.
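
    A minimal PyTorch sketch of the leaky-ReLU-plus-max-pooling combination the study found best. This is not the authors' 8-layer architecture (whose layer sizes are not given in the abstract): the input size, channel counts, and number of layers below are assumptions chosen only to show the two operators in place.

    ```python
    import torch
    import torch.nn as nn

    class SmallADNet(nn.Module):
        """Illustrative CNN with leaky ReLU activations and max pooling."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),
                nn.LeakyReLU(negative_slope=0.01),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.LeakyReLU(negative_slope=0.01),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 32 * 32, n_classes)  # assumes 128x128 input

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    logits = SmallADNet()(torch.randn(4, 1, 128, 128))  # batch of 4 grayscale slices
    ```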

  6. The Effect of Changing Focal Trough in a Panoramic Device on the Accuracy of Distance Measurements

    Directory of Open Access Journals (Sweden)

    Mehrdad Abdinian

    2018-01-01

    Full Text Available Introduction: Magnification and distortion are the most important limitations of panoramic radiography. The aim of this study was to determine the effects of changing the focal trough option of the Planmeca SCARA 3 on the accuracy of linear distance measurements. Materials and Methods: In this in-vitro study, 28 pieces of gutta-percha were attached at the assumed position of each lost root of an adult dry skull with average size and normal shape. The actual measurements were obtained with a digital caliper. The panoramic images of the skull were taken with six different sizes and shapes of focal trough. This procedure was repeated ten times with new gutta-percha. A paired t-test was used to compare the actual and radiographic gutta-percha dimensions. Results: The mean difference [standard deviation (SD)] between actual measurement and panoramic radiography in the different groups ranged from 0.37 (1.1) to 0.58 (2.87) mm. The mean (SD) difference of linear measurements between real and radiographic images was 0.52 (0.43) mm in the average-size, V-shaped group, which was statistically and clinically significant (P = 0.00). Conclusion: Changing the focal trough option of the Planmeca SCARA 3 has minimal effects on the accuracy of linear measurements in panoramic radiographs.

  7. Performance test of 100 W linear compressor

    Energy Technology Data Exchange (ETDEWEB)

    Ko, J; Ko, D. Y.; Park, S. J.; Kim, H. B.; Hong, Y. J.; Yeom, H. K. [Korea Institute of Machinery and Materials, Daejeon(Korea, Republic of)

    2013-09-15

    In this paper, we present test results of a developed 100 W class linear compressor for a Stirling-type pulse tube refrigerator. The fabricated linear compressor has a dual-opposed configuration, a free piston and a moving-magnet type linear motor. Power transfer, efficiency and the required pressure waveform are predicted from the designed and measured specifications. In the experiments, a room-temperature test with flow impedance is conducted to evaluate the performance of the developed linear compressor. Flow impedance is loaded onto the compressor with a metering valve for flow resistance, an inertance tube for flow inertance and buffer volumes for flow compliance. Several operating parameters such as input voltage, current, piston displacement and pressure wave are measured for various operating frequencies at a fixed input current level. The dynamic behavior and performance of the linear compressor under varying flow impedance are discussed using the measured experimental results. The developed linear compressor shows 124 W of input power, 86% motor efficiency and 60% compressor efficiency at its resonant operating condition.

  8. Improving serodiagnosis of human and canine leishmaniasis with recombinant Leishmania braziliensis cathepsin l-like protein and a synthetic peptide containing its linear B-cell epitope.

    Directory of Open Access Journals (Sweden)

    Daniel Menezes-Souza

    2015-01-01

    Full Text Available The early and correct diagnosis of human leishmaniasis is essential for disease treatment. Another important step in the control of visceral leishmaniasis is the identification of infected dogs, which are the main domestic reservoir of L. infantum. Recombinant proteins and synthetic peptides based on Leishmania genes have emerged as valuable targets for serodiagnosis due to their increased sensitivity, specificity and potential for standardization. Cathepsin L-like genes encode surface antigens that are secreted by amastigotes and have little similarity to host proteins, factors that make this protein a good target for the serodiagnosis of leishmaniasis. We mapped a linear B-cell epitope within the Cathepsin L-like protein from L. braziliensis. A synthetic peptide containing the epitope and the recombinant protein were evaluated for serodiagnosis of human tegumentary and visceral leishmaniasis, as well as canine visceral leishmaniasis. The recombinant protein performed best for human tegumentary and canine visceral leishmaniasis, with 96.30% and 89.33% accuracy, respectively. The synthetic peptide was best at discriminating human visceral leishmaniasis, with 97.14% specificity, 94.55% sensitivity and 96.00% accuracy. Comparison with T. cruzi-infected humans and dogs suggests that the identified epitope is specific to Leishmania parasites, which minimizes the likelihood of cross-reactions.

  9. Development of morphosyntactic accuracy and grammatical complexity in Dutch school-age children with SLI

    NARCIS (Netherlands)

    Zwitserlood, R.L.M.; Weerdenburg, M.W.C. van; Verhoeven, L.T.W.; Wijnen, F.N.K.

    2015-01-01

    Purpose: The purpose of this study was to identify the development of morphosyntactic accuracy and grammatical complexity in Dutch school-age children with specific language impairment (SLI). Method: Morphosyntactic accuracy, the use of dummy auxiliaries, and complex syntax were assessed using a

  10. Development of morphosyntactic accuracy and grammatical complexity in Dutch school-age children with SLI

    NARCIS (Netherlands)

    Zwitserlood, Rob; van Weerdenburg, Marjolijn; Verhoeven, Ludo; Wijnen, Frank

    2015-01-01

    Purpose: The purpose of this study was to identify the development of morphosyntactic accuracy and grammatical complexity in Dutch school-age children with specific language impairment (SLI). Method: Morphosyntactic accuracy, the use of dummy auxiliaries, and complex syntax were assessed using a

  11. Accuracy of a heart failure diagnosis in administrative registers

    DEFF Research Database (Denmark)

    Kümler, Thomas; Gislason, Gunnar Hilmar; Kirk, Vibeke

    2008-01-01

    BACKGROUND: The incidence of heart failure is frequently reported using hospital discharge diagnoses. The specificity of a diagnosis has been shown to be high but the sensitivity of a reported diagnosis is unknown. PURPOSE: To study the accuracy of a heart failure diagnosis reported to the Danish...... a specificity of 99% and a sensitivity of 29% for all patients. The positive predictive value was 81%, the negative predictive value 90%. CONCLUSION: The diagnosis of Heart Failure in the Danish National Registers is underreported, but very specific....
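
    The reported positive and negative predictive values follow from the sensitivity, specificity, and disease prevalence via Bayes' rule. The sketch below shows the calculation; the prevalence of roughly 13% is an assumption inferred only because it approximately reproduces the published PPV of 81% and NPV of 90%, and is not stated in the abstract.

    ```python
    def predictive_values(sens, spec, prev):
        # Bayes' rule for positive and negative predictive value
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    ppv, npv = predictive_values(sens=0.29, spec=0.99, prev=0.13)  # prevalence assumed
    print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # roughly 0.81 and 0.90
    ```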

  12. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
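
    A small sketch of the two recommended metrics, written with the accuracy ratio Q = predicted/observed. The exact functional forms below (median of log Q for bias; 100·(exp(median |log Q|) − 1) for accuracy) follow the definitions commonly used in the follow-up literature and should be treated as an assumption rather than a quotation of this report.

    ```python
    import numpy as np

    def accuracy_ratio_metrics(predicted, observed):
        # Q_i = predicted_i / observed_i; both assumed strictly positive
        log_q = np.log(np.asarray(predicted) / np.asarray(observed))
        median_log_accuracy_ratio = np.median(log_q)  # bias: sign shows over-/under-prediction
        median_symmetric_accuracy = 100 * (np.exp(np.median(np.abs(log_q))) - 1)  # accuracy, percent
        return median_log_accuracy_ratio, median_symmetric_accuracy

    bias, accuracy = accuracy_ratio_metrics([1.2, 0.8, 2.0], [1.0, 1.0, 1.0])
    ```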

  13. Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned

    Science.gov (United States)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. Resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.

  14. Accuracy improvement of irradiation data by combining ground and satellite measurements

    Energy Technology Data Exchange (ETDEWEB)

    Betcke, J. [Energy and Semiconductor Research Laboratory, Carl von Ossietzky University, Oldenburg (Germany); Beyer, H.G. [Department of Electrical Engineering, University of Applied Science (F.H.) Magdeburg-Stendal, Magdeburg (Germany)

    2004-07-01

    Accurate and site-specific irradiation data are essential input for optimal planning, monitoring and operation of solar energy technologies. A concrete example is the performance check of grid-connected PV systems with the PVSAT-2 procedure. This procedure detects system faults at an early stage by a daily comparison of an individual reference yield with the actual yield. Calculation of the reference yield requires hourly irradiation data with a known accuracy. A field test of the preceding PVSAT-1 procedure showed that the accuracy of the irradiation input is the determining factor for the overall accuracy of the yield calculation. In this paper we will investigate whether it is possible to improve the accuracy of site-specific irradiation data by combining accurate localised pyranometer data with semi-continuous satellite data. We will therefore introduce the "Kriging of Differences" data fusion method. Kriging of Differences also offers the possibility of estimating its own accuracy. The obtainable accuracy gain and the effectiveness of the accuracy prediction will be investigated by validation on monthly and daily irradiation datasets. Results will be compared with the Heliosat method and interpolation of ground data. (orig.)

  15. Non-linear neutron star oscillations viewed as deviations from an equilibrium state

    International Nuclear Information System (INIS)

    Sperhake, U

    2002-01-01

    A numerical technique is presented which facilitates the evolution of non-linear neutron star oscillations with a high accuracy essentially independent of the oscillation amplitude. We apply this technique to radial neutron star oscillations in a Lagrangian formulation and demonstrate the superior performance of the new scheme compared with 'conventional' techniques. The key feature of our approach is to describe the evolution in terms of deviations from an equilibrium configuration. In contrast to standard perturbation analysis we keep all higher order terms in the evolution equations and thus obtain a fully non-linear description. The advantage of our scheme lies in the elimination of background terms from the equations and the associated numerical errors. The improvements thus achieved will be particularly significant in the study of mildly non-linear effects where the amplitude of the dynamic signal is small compared with the equilibrium values but large enough to warrant non-linear effects. We apply the new technique to the study of non-linear coupling of Eigenmodes and non-linear effects in the oscillations of marginally stable neutron stars. We find non-linear effects in low amplitude oscillations to be particularly pronounced in the range of modes with vanishing frequency which typically mark the onset of instability. (author)

  16. Accuracy of abdominal auscultation for bowel obstruction.

    Science.gov (United States)

    Breum, Birger Michael; Rud, Bo; Kirkegaard, Thomas; Nordentoft, Tyge

    2015-09-14

    To investigate the accuracy and inter-observer variation of bowel sound assessment in patients with clinically suspected bowel obstruction. Bowel sounds were recorded in patients with suspected bowel obstruction using a Littmann(®) Electronic Stethoscope. The recordings were processed to yield 25-s sound sequences in random order on PCs. Observers, recruited from doctors within the department, classified the sound sequences as either normal or pathological. The reference tests for bowel obstruction were intraoperative and endoscopic findings and clinical follow up. Sensitivity and specificity were calculated for each observer and compared between junior and senior doctors. Interobserver variation was measured using the Kappa statistic. Bowel sound sequences from 98 patients were assessed by 53 (33 junior and 20 senior) doctors. Laparotomy was performed in 47 patients, 35 of whom had bowel obstruction. Two patients underwent colorectal stenting due to large bowel obstruction. The median sensitivity and specificity was 0.42 (range: 0.19-0.64) and 0.78 (range: 0.35-0.98), respectively. There was no significant difference in accuracy between junior and senior doctors. The median frequency with which doctors classified bowel sounds as abnormal did not differ significantly between patients with and without bowel obstruction (26% vs 23%, P = 0.08). The 53 doctors made up 1378 unique pairs and the median Kappa value was 0.29 (range: -0.15-0.66). Accuracy and inter-observer agreement was generally low. Clinical decisions in patients with possible bowel obstruction should not be based on auscultatory assessment of bowel sounds.
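
    Inter-observer agreement of the kind reported here is usually quantified with Cohen's kappa for each observer pair. The sketch below computes kappa for two observers' binary normal/pathological calls; it is not the authors' code, and the example label vectors are invented.

    ```python
    import numpy as np

    def cohens_kappa(ratings_a, ratings_b):
        """Cohen's kappa for two raters giving binary labels (1 = pathological, 0 = normal)."""
        a, b = np.asarray(ratings_a), np.asarray(ratings_b)
        observed = np.mean(a == b)
        # Expected chance agreement from each rater's marginal rate of calling "pathological"
        p_a, p_b = a.mean(), b.mean()
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - expected) / (1 - expected)

    kappa = cohens_kappa([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])  # invented example labels
    ```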

  17. A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System

    Directory of Open Access Journals (Sweden)

    Guanwu Zhou

    2014-07-01

    Full Text Available Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system’s performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM acquires higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13%FS, 0.15%FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor.
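
    An extreme learning machine is a single-hidden-layer network whose hidden weights are drawn at random and whose output weights are obtained in closed form by least squares. The sketch below illustrates that structure for mapping raw sensor output and temperature to a compensated pressure value; the hidden-layer size, sigmoid activation, and the synthetic data are assumptions, not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    class ELMRegressor:
        """Single-hidden-layer ELM: random hidden weights, least-squares output weights."""
        def __init__(self, n_hidden=20):
            self.n_hidden = n_hidden

        def fit(self, X, y):
            self.W = rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = rng.normal(size=self.n_hidden)
            H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid hidden layer
            self.beta = np.linalg.pinv(H) @ y                  # closed-form output weights
            return self

        def predict(self, X):
            H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
            return H @ self.beta

    # X columns: [raw sensor output, temperature]; y: reference pressure (invented data)
    X = rng.normal(size=(200, 2))
    y = 2.0 * X[:, 0] + 0.3 * X[:, 1] ** 2
    compensated = ELMRegressor().fit(X, y).predict(X)
    ```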

  18. Diagnostic accuracy of fine needle aspiration cytology in hepatic tumours

    International Nuclear Information System (INIS)

    Nazir, R.T.; Sharif, M.A.; Iqbal, M.; Amin, M.S.

    2010-01-01

    To determine the diagnostic accuracy of fine-needle aspiration cytology (FNAC) in liver masses to differentiate malignant from benign tumours and hepatocellular carcinoma (HCC) from metastatic tumours. Study Design: Cross-sectional, observational. Place and Duration of Study: Department of Histopathology, Combined Military Hospital, Peshawar, from June 2004 to June 2005. Methodology: All patients with liver masses confirmed by ultrasonography, irrespective of age and gender, were included. Patients with inflammatory lesions were excluded from the study. Selected patients underwent fine-needle aspiration under ultrasound guidance followed by needle biopsy. The cytological slides were stained with haematoxylin and eosin (H and E) stain, while Papanicolaou's stain was employed in selected cases. Needle biopsy fragments were fixed in formalin followed by paraffin embedding and staining with H and E stain. Sensitivity and specificity of FNAC in the diagnosis of liver masses were determined using histological diagnosis on liver biopsy as the gold standard. Results: There were one hundred subjects. The mean age at presentation was 55 ± 12 years, with a male-to-female ratio of 1.7:1. Cytological diagnosis in 19 cases was benign/non-neoplastic and in 81 was malignant. Of the latter, 49 (60.49%) were HCC and 32 (39.51%) were metastatic tumours on cytology. The overall sensitivity, specificity and accuracy of FNAC in the diagnosis of malignant lesions were 95.2%, 100% and 96%, respectively, using histological diagnosis on liver biopsy as the gold standard. Sensitivity of FNAC to differentiate HCC from metastatic tumours in the liver was 96%, while specificity was 100%, with a diagnostic accuracy of 97.5%. The discrepancy in the cyto-histological comparison was mainly seen in well-differentiated and poorly differentiated HCCs. Conclusion: FNAC of liver masses is a simple, safe, accurate, economical screening test without significant morbidity that can be used to identify the vast majority of

  19. Correlated Levy Noise in Linear Dynamical Systems

    International Nuclear Information System (INIS)

    Srokowski, T.

    2011-01-01

    Linear dynamical systems, driven by a non-white noise which has the Levy distribution, are analysed. Noise is modelled by a specific stochastic process which is defined by the Langevin equation with a linear force and the Levy distributed symmetric white noise. Correlation properties of the process are discussed. The Fokker-Planck equation driven by that noise is solved. Distributions have the Levy shape and their width, for a given time, is smaller than for processes in the white noise limit. Applicability of the adiabatic approximation in the case of the linear force is discussed. (author)

  20. Linear and non-linear Modified Gravity forecasts with future surveys

    Science.gov (United States)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way, that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.

  1. The study of interferometer spectrometer based on DSP and linear CCD

    Science.gov (United States)

    Kang, Hua; Peng, Yuexiang; Xu, Xinchen; Xing, Xiaoqiao

    2010-11-01

    In this paper, the general theory of the Fourier-transform spectrometer and the polarization interferometer is presented. A new design is proposed for a Fourier-transform spectrometer based on a polarization interferometer with Wollaston prisms and a linear CCD. Firstly, the measured light is converted into linearly polarized light by a polarizing plate. Next, the light is split into ordinary and extraordinary rays by passing through one Wollaston prism. Then, after passing through another Wollaston prism and an analyzer, interference fringes are formed on the linear CCD behind the analyzer. The linear CCD is driven by a CPLD to output the amplitude of the interference fringes and the frame and pixel synchronization signals, respectively. A DSP is used to collect the interference pattern signals from the CCD, and the digital fringe data are processed using a 2048-point FFT. Finally, the optical spectrum of the measured light can be displayed on an LCD connected to the DSP via RS232. The spectrometer will possess the features of robustness, portability and real-time analysis capability. The work provides a convenient and significant foundation for the application of higher-accuracy Fourier-transform spectrometers.
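
    The core processing step described, recovering a spectrum from a sampled interferogram with a 2048-point FFT, can be sketched in numpy. The simulated two-line fringe signal and the unit-free wavenumber axis below are assumptions; a real instrument would apply its own optical-path-difference calibration.

    ```python
    import numpy as np

    N = 2048
    x = np.arange(N)  # CCD pixel index along the optical path difference axis
    interferogram = (1.0
                     + 0.5 * np.cos(2 * np.pi * 0.05 * x)
                     + 0.2 * np.cos(2 * np.pi * 0.12 * x))

    # Remove the DC level, then take the 2048-point FFT to recover the spectrum
    spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean(), n=N))
    wavenumber_bins = np.fft.rfftfreq(N)           # cycles per pixel; rescale with calibration data
    peaks = wavenumber_bins[np.argsort(spectrum)[-2:]]  # the two simulated spectral lines
    ```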

  2. Predictive Accuracy of Exercise Stress Testing the Healthy Adult.

    Science.gov (United States)

    Lamont, Linda S.

    1981-01-01

    Exercise stress testing provides information on the aerobic capacity, heart rate, and blood pressure responses to graded exercises of a healthy adult. The reliability of exercise tests as a diagnostic procedure is discussed in relation to sensitivity and specificity and predictive accuracy. (JN)

  3. Linear programming foundations and extensions

    CERN Document Server

    Vanderbei, Robert J

    2001-01-01

    Linear Programming: Foundations and Extensions is an introduction to the field of optimization. The book emphasizes constrained optimization, beginning with a substantial treatment of linear programming, and proceeding to convex analysis, network flows, integer programming, quadratic programming, and convex optimization. The book is carefully written. Specific examples and concrete algorithms precede more abstract topics. Topics are clearly developed with a large number of numerical examples worked out in detail. Moreover, Linear Programming: Foundations and Extensions underscores the purpose of optimization: to solve practical problems on a computer. Accordingly, the book is coordinated with free efficient C programs that implement the major algorithms studied: -The two-phase simplex method; -The primal-dual simplex method; -The path-following interior-point method; -The homogeneous self-dual methods. In addition, there are online JAVA applets that illustrate various pivot rules and variants of the simplex m...

  4. Resource allocation for maximizing prediction accuracy and genetic gain of genomic selection in plant breeding: a simulation experiment.

    Science.gov (United States)

    Lorenz, Aaron J

    2013-03-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation
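
    The contrast between OLS and the ridge-type shrinkage used in RR-BLUP can be illustrated on simulated marker data. The sketch below is not the author's simulation: marker counts, sample sizes, the ridge penalty, and the use of a simple correlation as "prediction accuracy" are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_lines, n_markers = 150, 400
    X = rng.choice([0.0, 1.0], size=(n_lines, n_markers))        # doubled-haploid marker matrix
    beta = rng.normal(scale=0.1, size=n_markers)
    y = X @ beta + rng.normal(scale=1.0, size=n_lines)           # phenotype = genetic value + noise

    train, test = np.arange(100), np.arange(100, 150)
    Xt, yt = X[train], y[train]

    # OLS is ill-posed here (more markers than training lines); lstsq returns the minimum-norm fit
    b_ols = np.linalg.lstsq(Xt, yt, rcond=None)[0]
    # Ridge / RR-BLUP-like shrinkage with an assumed penalty lambda
    lam = 50.0
    b_ridge = np.linalg.solve(Xt.T @ Xt + lam * np.eye(n_markers), Xt.T @ yt)

    for name, b in [("OLS", b_ols), ("ridge", b_ridge)]:
        acc = np.corrcoef(X[test] @ b, y[test])[0, 1]            # accuracy on untested lines
        print(name, round(acc, 2))
    ```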

  5. General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.

    Science.gov (United States)

    Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J

    2017-09-29

    The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.

  6. Patient-specific prediction of functional recovery after stroke.

    Science.gov (United States)

    Douiri, Abdel; Grace, Justin; Sarker, Shah-Jalal; Tilling, Kate; McKevitt, Christopher; Wolfe, Charles DA; Rudd, Anthony G

    2017-07-01

    Background and aims: Clinical predictive models for stroke recovery could offer the opportunity of targeted early intervention and more specific information for patients and carers. In this study, we developed and validated a patient-specific prognostic model for monitoring recovery after stroke and assessed its clinical utility. Methods: Four hundred and ninety-five patients from the population-based South London Stroke Register were included in a substudy between 2002 and 2004. Activities of daily living were assessed using the Barthel Index at one, two, three, four, six, eight, 12, 26, and 52 weeks after stroke. Penalized linear mixed models were developed to predict patients' functional recovery trajectories. An external validation cohort included 1049 newly registered stroke patients between 2005 and 2011. Prediction errors on discrimination and calibration were assessed. The potential clinical utility was evaluated using prognostic accuracy measurements and decision curve analysis. Results: Predictive recovery curves showed good accuracy, with a root mean squared deviation of 3 Barthel Index points and an R² of 83% up to one year after stroke in the external cohort. The negative predictive values of the risk of poor recovery (Barthel Index <8) at three and 12 months were also excellent, 96% (95% CI [93.6-97.4]) and 93% [90.8-95.3], respectively, with a potential clinical utility measured by likelihood ratios (LR+:17 [10.8-26.8] at three months and LR+:11 [6.5-17.2] at 12 months). Decision curve analysis showed an increased clinical benefit, particularly at threshold probabilities of above 5% for predictive risk of poor outcomes. Conclusions: A recovery curves tool seems to accurately predict progression of functional recovery in poststroke patients.

  7. Accuracy comparison among different machine learning techniques for detecting malicious codes

    Science.gov (United States)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code have been compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving a sensitivity of 95% and a specificity of 82.8%.
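
    A minimal scikit-learn sketch of the comparison described, evaluating the three classifier families on opcode-frequency feature vectors. The synthetic data, feature construction, and classifier hyperparameters are assumptions and do not reproduce the study's dataset or figures.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.poisson(3.0, size=(1000, 50)).astype(float)   # stand-in opcode frequency vectors
    y = (X[:, :5].sum(axis=1) > 16).astype(int)           # stand-in benign/malicious labels

    for name, clf in [("Naive Bayes", GaussianNB()),
                      ("SVM", SVC(kernel="rbf")),
                      ("Neural network", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
    ```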

  8. Accuracy and Radiation Dose of CT-Based Attenuation Correction for Small Animal PET: A Monte Carlo Simulation Study

    International Nuclear Information System (INIS)

    Yang, Ching-Ching; Chan, Kai-Chieh

    2013-06-01

    Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by scanning procedures was also discussed to minimize biologic damage generated by radiation exposure due to PET/CT scanning. A small animal PET/CT system was modeled based on Monte Carlo simulation to generate imaging results and dose distribution. Three energy mapping methods, including the bilinear scaling method, the dual-energy method and the hybrid method which combines the kVp conversion and the dual-energy method, were investigated comparatively through assessing the accuracy of estimating the linear attenuation coefficient at 511 keV and the bias introduced into PET quantification results due to CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of the PET quantification results has a similar trend to that for the estimation of linear attenuation coefficients, whereas the differences between the three methods are more obvious in the estimation of linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29-45.58 mGy for the 50-kVp scan and between 6.61-39.28 mGy for the 80-kVp scan. For an 18F radioactivity concentration of 1.86×10⁵ Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but concurrent use of PET in a longitudinal study can increase the risk of biological effects. The
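
    A sketch of the widely used bilinear (two-segment) conversion from CT numbers to 511 keV linear attenuation coefficients, one of the three energy-mapping strategies compared above. The breakpoint at 0 HU and the water and bone attenuation values below are typical published values for illustration, not the ones used in this simulation study.

    ```python
    def hu_to_mu_511(hu, mu_water=0.096, mu_bone=0.172, hu_bone=1000.0):
        """Bilinear scaling: soft-tissue segment below 0 HU, bone segment above (values in 1/cm)."""
        if hu <= 0:
            # Air-to-water segment: scale the water attenuation linearly with HU
            return mu_water * (1.0 + hu / 1000.0)
        # Water-to-bone segment: interpolate between water and cortical bone at 511 keV
        return mu_water + hu * (mu_bone - mu_water) / hu_bone

    print(hu_to_mu_511(-1000.0), hu_to_mu_511(0.0), hu_to_mu_511(1000.0))  # ~air, water, dense bone
    ```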

  9. Effect of Process Parameters on Friction Model in Computer Simulation of Linear Friction Welding

    Directory of Open Access Journals (Sweden)

    A. Yamileva

    2014-07-01

    Full Text Available The friction model is an important part of a numerical model of linear friction welding. Its selection determines the accuracy of the results. Existing models employ the classical Amontons-Coulomb law, where the friction coefficient is either constant or linearly dependent on a single parameter. Determination of the coefficient of friction is a time-consuming process that requires many experiments, so the feasibility of determining a more complex dependence should be assessed by analyzing the effect of the approximating friction law on the simulation results.

  10. Consensus-based reporting standards for diagnostic test accuracy studies for paratuberculosis in ruminants

    DEFF Research Database (Denmark)

    Gardner, Ian A.; Nielsen, Søren Saxmose; Whittington, Richard

    2011-01-01

    The Standards for Reporting of Diagnostic Accuracy (STARD) statement (www.stard-statement.org) was developed to encourage complete and transparent reporting of key elements of test accuracy studies in human medicine. The statement was motivated by widespread evidence of bias in test accuracy...... studies and the finding that incomplete or absent reporting of items in the STARD checklist was associated with overly optimistic estimates of test performance characteristics. Although STARD principles apply broadly, specific guidelines do not exist to account for unique considerations in livestock...... for Reporting of Animal Diagnostic Accuracy Studies for paratuberculosis), should facilitate improved quality of reporting of the design, conduct and results of paratuberculosis test accuracy studies which were identified as “poor” in a review published in 2008 in Veterinary Microbiology...

  11. Fluoroscopically guided automated gun biopsy of chest lesions: diagnostic accuracy and complications

    International Nuclear Information System (INIS)

    Oh, Hee Sul; Han, Young Min; Choi, Ki Chul and others

    1998-01-01

    To determine the diagnostic accuracy and frequency of complications of fluoroscopy-guided transthoracic needle biopsy, using an automated gun biopsy system. We retrospectively reviewed 86 patients who underwent automated gun biopsy between October 1995 and October 1996. An 18-gauge cutting needle was used in all cases. All biopsies were performed under fluoroscopic guidance by one interventional radiologist. Tissue sufficient for histologic diagnosis was obtained in 73 of 86 biopsies (84.9%). Fifty-six lesions were malignant and 30 were benign. Sensitivity and diagnostic accuracy for malignant lesions were 87.5% and 87.5%, respectively, while cell type specificity in malignant diagnosis was 91.7% (11/12). Sensitivity and diagnostic accuracy for benign lesions were 80.0% and 73.3%, respectively. Post-biopsy pneumothorax occurred in three of 86 biopsies (3.5%), one of which required placement of a chest tube. Automated gun biopsy is a simple, safe method for the diagnosis of focal chest lesions. An automated biopsy device offers high diagnostic accuracy in cases involving malignant and benign lesions of the chest, and is particularly useful for determining malignant cell type and specific diagnosis of benign lesions.

  12. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.

  13. Thread sign in biliary intraductal papillary mucinous neoplasm: a novel specific finding for MRI

    International Nuclear Information System (INIS)

    Hong, Gil-Sun; Byun, Jae Ho; Kim, Jin Hee; Kim, Hyoung Jung; Lee, Seung Soo; Lee, Moon-Gyu; Hong, Seung-Mo

    2016-01-01

    To evaluate thread sign of biliary intraductal papillary mucinous neoplasm (B-IPMN) on magnetic resonance imaging (MRI). Thread sign was defined as intraductal linear or curvilinear hypointense striations. Two radiologists independently evaluated the presence and location of thread sign on MR cholangiography (thin-slice, thick-slab and 3D MRC) and axial MR images (T2 TSE, T2 HASTE and DWI) in patients with B-IPMN (n = 38) and in matched control groups with benign (n = 36) or malignant (n = 35) biliary diseases. Sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) of thread sign for diagnosing B-IPMN were evaluated. Thread sign was observed only in patients with B-IPMN on MRC (44.7-52.6 % [17/38-20/38], P < 0.001) and axial MR images (31.6 % [12/38], P < 0.001), except in one patient with recurrent pyogenic cholangitis on MRC (2.8 %, 1/36). The sensitivity, specificity, accuracy, PPV and NPV of thread sign for diagnosing B-IPMN on MRC were 0.53, 0.99, 0.83, 0.95 and 0.80, respectively (reader 1) and 0.45, 1.0, 0.81, 1.0 and 0.77, respectively (reader 2). Thread sign was detected mainly at the extrahepatic bile duct (52.6 %, 20/38). B-IPMN can manifest thread sign, a novel specific MR finding, mainly at the extrahepatic bile duct on MRI, especially on MRC. (orig.)

  14. Cut elimination in multifocused linear logic

    DEFF Research Database (Denmark)

    Guenot, Nicolas; Brock-Nannestad, Taus

    2015-01-01

    We study cut elimination for a multifocused variant of full linear logic in the sequent calculus. The multifocused normal form of proofs yields problems that do not appear in a standard focused system, related to the constraints in grouping rule instances in focusing phases. We show that cut...... elimination can be performed in a sensible way even though the proof requires some specific lemmas to deal with multifocusing phases, and discuss the difficulties arising with cut elimination when considering normal forms of proofs in linear logic....

  15. Linearly Refined Session Types

    Directory of Open Access Journals (Sweden)

    Pedro Baltazar

    2012-11-01

    Full Text Available Session types capture precise protocol structure in concurrent programming, but do not specify properties of the exchanged values beyond their basic type. Refinement types are a form of dependent types that can address this limitation, combining types with logical formulae that may refer to program values and can constrain types using arbitrary predicates. We present a pi calculus with assume and assert operations, typed using a session discipline that incorporates refinement formulae written in a fragment of Multiplicative Linear Logic. Our original combination of session and refinement types, together with the well established benefits of linearity, allows very fine-grained specifications of communication protocols in which refinement formulae are treated as logical resources rather than persistent truths.

  16. Machine learning improves the accuracy of myocardial perfusion scintigraphy results

    International Nuclear Information System (INIS)

    Groselj, C.; Kukar, M.

    2002-01-01

    Objective: Machine learning (ML), an artificial intelligence method, has in the last decade proved to be a useful tool in many fields of decision making, including some fields of medicine. According to reports, its decision accuracy usually exceeds that of humans. Aim: To assess the applicability of ML in the interpretation of stress myocardial perfusion scintigraphy results in the coronary artery disease diagnostic process. Patients and methods: The data of 327 patients who underwent planar stress myocardial perfusion scintigraphy were re-evaluated in the usual way. By comparing them with the results of coronary angiography, the sensitivity, specificity and accuracy of the investigation were computed. The data were digitized and the decision procedure was repeated by the ML program 'Naive Bayesian classifier'. As ML is able to handle any number of variables simultaneously, all available disease-related data (regarding history, habitus, risk factors, stress results) were added. The sensitivity, specificity and accuracy of scintigraphy were expressed in this way. The results of both decision procedures were compared. Conclusion: Using the ML method, 19 more patients out of 327 (5.8%) were correctly diagnosed by stress myocardial perfusion scintigraphy.

  17. Diagnostic accuracy of fine needle aspiration cytology in patients ...

    African Journals Online (AJOL)

    Objective: To estimate the sensitivity and specificity of FNAC in detecting malignancy for thyroid disease using histopathology ... thyroid carcinoma being the most predominant type and colloid goiter was the most predominant benign thyroid disease. The sensitivity ..... tic accuracy of fine needle aspiration cytology in thyroid.

  18. Diagnostic Accuracy of the Slump Test for Identifying Neuropathic Pain in the Lower Limb.

    Science.gov (United States)

    Urban, Lawrence M; MacNeil, Brian J

    2015-08-01

    Diagnostic accuracy study with nonconsecutive enrollment. To assess the diagnostic accuracy of the slump test for neuropathic pain (NeP) in those with low to moderate levels of chronic low back pain (LBP), and to determine whether accuracy of the slump test improves by adding anatomical or qualitative pain descriptors. Neuropathic pain has been linked with poor outcomes, likely due to inadequate diagnosis, which precludes treatment specific for NeP. Current diagnostic approaches are time consuming or lack accuracy. A convenience sample of 21 individuals with LBP, with or without radiating leg pain, was recruited. A standardized neurosensory examination was used to determine the reference diagnosis for NeP. Afterward, the slump test was administered to all participants. Reports of pain location and quality produced during the slump test were recorded. The neurosensory examination designated 11 of the 21 participants with LBP/sciatica as having NeP. The slump test displayed high sensitivity (0.91), moderate specificity (0.70), a positive likelihood ratio of 3.03, and a negative likelihood ratio of 0.13. Adding the criterion of pain below the knee significantly increased specificity to 1.00 (positive likelihood ratio = 11.9). Pain-quality descriptors did not improve diagnostic accuracy. The slump test was highly sensitive in identifying NeP within the study sample. Adding a pain-location criterion improved specificity. Combining the diagnostic outcomes was very effective in identifying all those without NeP and half of those with NeP. Limitations arising from the small and narrow spectrum of participants with LBP/sciatica sampled within the study prevent application of the findings to a wider population. Diagnosis, level 4-.
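
    The reported likelihood ratios can be checked directly from the stated sensitivity and specificity; a one-line arithmetic sketch:

    ```python
    sens, spec = 0.91, 0.70
    lr_pos = sens / (1 - spec)   # 0.91 / 0.30 ≈ 3.03 (positive likelihood ratio)
    lr_neg = (1 - sens) / spec   # 0.09 / 0.70 ≈ 0.13 (negative likelihood ratio)
    ```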

  19. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    Science.gov (United States)

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.

  20. FIELD ACCURACY TEST OF RPAS PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    P. Barry

    2013-08-01

    Full Text Available Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 Ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS-coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was, with 95% reliability, within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude of 90 m above ground level. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 Ha. This
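
    The accuracy check described amounts to differencing the photogrammetric and RTK GPS coordinates at the common check points and summarising the horizontal and vertical errors. A minimal sketch follows; the arrays are placeholders standing in for the 45 surveyed markers, not the company's data.

    ```python
    import numpy as np

    # uav_xyz and gps_xyz: (45, 3) arrays of [easting, northing, height] at the check points
    uav_xyz = np.random.default_rng(4).normal(size=(45, 3))                     # placeholder
    gps_xyz = uav_xyz + np.random.default_rng(5).normal(scale=0.03, size=(45, 3))  # placeholder

    diff = uav_xyz - gps_xyz
    horizontal_error = np.hypot(diff[:, 0], diff[:, 1])
    vertical_error = np.abs(diff[:, 2])

    print("95th percentile horizontal error:", np.percentile(horizontal_error, 95))
    print("95th percentile vertical error:  ", np.percentile(vertical_error, 95))
    ```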

  1. Prediction of linear B-cell epitopes of hepatitis C virus for vaccine development

    Science.gov (United States)

    2015-01-01

    Background: High genetic heterogeneity in the hepatitis C virus (HCV) is the major challenge for the development of an effective vaccine. Existing studies for developing HCV vaccines have mainly focused on the T-cell immune response. However, identification of linear B-cell epitopes that can stimulate a B-cell response is one of the major tasks of peptide-based vaccine development. Owing to the variability in B-cell epitope length, the prediction of B-cell epitopes is much more complex than that of T-cell epitopes. Furthermore, the motifs of linear B-cell epitopes in different pathogens are quite different (e.g., HCV and hepatitis B virus). To cope with this challenge, this work aims to propose an HCV-customized sequence-based prediction method to identify B-cell epitopes of HCV. Results: This work establishes an experimentally verified dataset of the B-cell response to HCV, consisting of 774 linear B-cell epitopes and 774 non-B-cell epitopes from the Immune Epitope Database. An interpretable rule mining system of B-cell epitopes (IRMS-BE) is proposed to select informative physicochemical properties (PCPs) and then extract if-then rule-based knowledge for identifying B-cell epitopes. A web server, Bcell-HCV, was implemented using an SVM with the 34 informative PCPs, which achieved a training accuracy of 79.7% and a test accuracy of 70.7%, better than the SVM-based methods for identifying B-cell epitopes of HCV and the two general-purpose methods. This work performs advanced analysis of the 34 informative properties, and the results indicate that the most effective property is the alpha-helix structure of epitopes, which influences the connection between host cells and the E2 proteins of HCV. Furthermore, 12 interpretable rules are acquired from the top five PCPs and achieve a sensitivity of 75.6% and a specificity of 71.3%. Finally, a conserved promising vaccine candidate, PDREMVLYQE, is identified for inclusion in a vaccine against HCV. Conclusions: This work

  2. Evaluation of Linear Regression Simultaneous Myoelectric Control Using Intramuscular EMG.

    Science.gov (United States)

    Smith, Lauren H; Kuiken, Todd A; Hargrove, Levi J

    2016-04-01

    The objective of this study was to evaluate the ability of linear regression models to decode patterns of muscle coactivation from intramuscular electromyogram (EMG) and provide simultaneous myoelectric control of a virtual 3-DOF wrist/hand system. Performance was compared to the simultaneous control of conventional myoelectric prosthesis methods using intramuscular EMG (parallel dual-site control)-an approach that requires users to independently modulate individual muscles in the residual limb, which can be challenging for amputees. Linear regression control was evaluated in eight able-bodied subjects during a virtual Fitts' law task and was compared to performance of eight subjects using parallel dual-site control. An offline analysis also evaluated how different types of training data affected prediction accuracy of linear regression control. The two control systems demonstrated similar overall performance; however, the linear regression method demonstrated improved performance for targets requiring use of all three DOFs, whereas parallel dual-site control demonstrated improved performance for targets that required use of only one DOF. Subjects using linear regression control could more easily activate multiple DOFs simultaneously, but often experienced unintended movements when trying to isolate individual DOFs. Offline analyses also suggested that the method used to train linear regression systems may influence controllability. Linear regression myoelectric control using intramuscular EMG provided an alternative to parallel dual-site control for 3-DOF simultaneous control at the wrist and hand. The two methods demonstrated different strengths in controllability, highlighting the tradeoff between providing simultaneous control and the ability to isolate individual DOFs when desired.
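
    The decoding approach described fits a linear map from intramuscular EMG features to simultaneous velocity commands for the three DOFs. The sketch below illustrates that idea with a least-squares fit; the feature choice (mean absolute value per channel), channel count, and training data are assumptions, not the authors' protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_samples, n_channels, n_dofs = 500, 8, 3

    # Training data: EMG features per window and the intended DOF velocities (placeholders)
    emg_features = np.abs(rng.normal(size=(n_samples, n_channels)))  # e.g., mean absolute value
    dof_targets = rng.normal(size=(n_samples, n_dofs))

    # Least-squares fit of one weight vector per DOF (plus intercept)
    X = np.hstack([emg_features, np.ones((n_samples, 1))])
    W = np.linalg.lstsq(X, dof_targets, rcond=None)[0]               # shape: (n_channels + 1, n_dofs)

    def decode(window_features):
        """Map one window of EMG features to simultaneous wrist/hand DOF velocities."""
        return np.append(window_features, 1.0) @ W

    velocities = decode(np.abs(rng.normal(size=n_channels)))
    ```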

  3. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    Science.gov (United States)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics and the linear subpixel feature; the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  4. Impact of selective genotyping in the training population on accuracy and bias of genomic selection.

    Science.gov (United States)

    Zhao, Yusheng; Gowda, Manje; Longin, Friedrich H; Würschum, Tobias; Ranc, Nicolas; Reif, Jochen C

    2012-08-01

    Estimating marker effects based on routinely generated phenotypic data of breeding programs is a cost-effective strategy to implement genomic selection. Truncation selection in breeding populations, however, could have a strong impact on the accuracy to predict genomic breeding values. The main objective of our study was to investigate the influence of phenotypic selection on the accuracy and bias of genomic selection. We used experimental data of 788 testcross progenies from an elite maize breeding program. The testcross progenies were evaluated in unreplicated field trials in ten environments and fingerprinted with 857 SNP markers. Random regression best linear unbiased prediction method was used in combination with fivefold cross-validation based on genotypic sampling. We observed a substantial loss in the accuracy to predict genomic breeding values in unidirectional selected populations. In contrast, estimating marker effects based on bidirectional selected populations led to only a marginal decrease in the prediction accuracy of genomic breeding values. We concluded that bidirectional selection is a valuable approach to efficiently implement genomic selection in applied plant breeding programs.

  5. Comparison of linear intrascan and interscan dynamic ranges of Orbitrap and ion-mobility time-of-flight mass spectrometers.

    Science.gov (United States)

    Kaufmann, Anton; Walker, Stephan

    2017-11-30

    The linear intrascan and interscan dynamic ranges of mass spectrometers are important in metabolome and residue analysis. A large linear dynamic range is mandatory if both low- and high-abundance ions have to be detected and quantitated in heavy matrix samples. These performance criteria, as provided by modern high-resolution mass spectrometry (HRMS), were systematically investigated. The comparison included two generations of Orbitraps and an ion-mobility quadrupole time-of-flight (QTOF) system. In addition, different scan modes, as provided by the utilized instruments, were investigated. Calibration curves of different compounds covering a concentration range of five orders of magnitude were measured to evaluate the linear interscan dynamic range. The linear intrascan dynamic range and the resulting mass accuracy were evaluated by repeating these measurements in the presence of a very intense background. Modern HRMS instruments can show linear dynamic ranges of five orders of magnitude. Often, however, the linear dynamic range is limited by the detection capability (sensitivity and selectivity) and by the electrospray ionization. Orbitraps, as opposed to TOF instruments, show a reduced intrascan dynamic range. This is due to the limited C-trap and Orbitrap capacity. The tested TOF instrument shows poorer mass accuracies than the Orbitraps. In contrast, hyphenation with an ion-mobility device seems not to affect the linear dynamic range. The linear dynamic range of modern HRMS instrumentation has been significantly improved. This also refers to the virtual absence of systematic mass shifts at high ion abundances. The intrascan dynamic range of the current Orbitrap technology may still be a limitation when analyzing complex matrix extracts. On the other hand, the linear dynamic range is not only limited by the detector technology, but can also be shortened by peripheral devices, where the ionization and transfer of ions take place. Copyright © 2017 John Wiley

  6. Parametric Linear Dynamic Logic

    Directory of Open Access Journals (Sweden)

    Peter Faymonville

    2014-08-01

    Full Text Available We introduce Parametric Linear Dynamic Logic (PLDL, which extends Linear Dynamic Logic (LDL by temporal operators equipped with parameters that bound their scope. LDL was proposed as an extension of Linear Temporal Logic (LTL that is able to express all ω-regular specifications while still maintaining many of LTL's desirable properties like an intuitive syntax and a translation into non-deterministic Büchi automata of exponential size. But LDL lacks capabilities to express timing constraints. By adding parameterized operators to LDL, we obtain a logic that is able to express all ω-regular properties and that subsumes parameterized extensions of LTL like Parametric LTL and PROMPT-LTL. Our main technical contribution is a translation of PLDL formulas into non-deterministic Büchi word automata of exponential size via alternating automata. This yields a PSPACE model checking algorithm and a realizability algorithm with doubly-exponential running time. Furthermore, we give tight upper and lower bounds on optimal parameter values for both problems. These results show that PLDL model checking and realizability are not harder than LTL model checking and realizability.

  7. On the accuracy of mode-superposition analysis of linear systems under stochastic agencies

    International Nuclear Information System (INIS)

    Bellomo, M.; Di Paola, M.; La Mendola, L.; Muscolino, G.

    1987-01-01

    This paper deals with the response of linear structures using modal reduction. The MAM (mode acceleration method) correction is extended to stochastic analysis in the stationary case. In this framework the response of the given structure must be described in a probabilistic sense and the spectral moments of the nodal response must be computed in order to obtain a full description of the vibratory stochastic phenomenon. In the deterministic analysis the response is substantially made up of two terms, one of which accounts for the dynamic response due to the lower modes while the second accounts for the contribution due to the higher modes. In stochastic analysis the nodal spectral moments are made up of three terms; the first accounts for the spectral moments of the dynamic response due to the lower modes, the second accounts for the spectral moments of the input and the third accounts for the cross-spectral moments between the input and the nodal output. The analysis is applied to a 35-storey building subjected to a multivariate wind environment. (orig./HP)

  8. A Dew Point Meter Comprising a Nanoporous Thin Film Alumina Humidity Sensor with a Linearizing Capacitance Measuring Electronics

    Directory of Open Access Journals (Sweden)

    Dilip Kumar Ghara

    2008-02-01

    Full Text Available A novel trace moisture analyzer is presented, comprising a capacitive nanoporous metal-oxide thin-film sensor and measuring electronics. The change in capacitance of the sensor is due to absorption of water vapor by the pores. Simple capacitance-measuring electronics are developed which can detect any change in capacitance and correlate it to ambient humidity. The circuit can minimize the parasitic earth capacitance. The non-linear response of the sensor is linearized with a micro-controller-based linearizing circuit. The experimental results show a resolution of -4°C DP and accuracy within 2%.

  9. Commissioning and quality assurance of the x-ray volume imaging system of an image-guided radiotherapy capable linear accelerator

    International Nuclear Information System (INIS)

    Muralidhar, K.R.; Narayana Murthy, P.; Kumar, Rajneesh

    2008-01-01

    An Image-Guided Radiotherapy-capable linear accelerator (Elekta Synergy) was installed at our hospital, which is equipped with a kV x-ray volume imaging (XVI) system and electronic portal imaging device (iViewGT). The objective of this presentation is to describe the results of commissioning measurements carried out on the XVI facility to verify the manufacturer's specifications and also to evolve a QA schedule which can be used to test its performance routinely. The QA program consists of a series of tests (safety features, geometric accuracy, and image quality). These tests were found to be useful to assess the performance of the XVI system and also proved that XVI system is very suitable for image-guided high-precision radiation therapy. (author)

  10. Commissioning and quality assurance of the X-ray volume Imaging system of an image-guided radiotherapy capable linear accelerator

    Directory of Open Access Journals (Sweden)

    Muralidhar K

    2008-01-01

    Full Text Available An Image-Guided Radiotherapy-capable linear accelerator (Elekta Synergy) was installed at our hospital, which is equipped with a kV x-ray volume imaging (XVI) system and electronic portal imaging device (iViewGT). The objective of this presentation is to describe the results of commissioning measurements carried out on the XVI facility to verify the manufacturer's specifications and also to evolve a QA schedule which can be used to test its performance routinely. The QA program consists of a series of tests (safety features, geometric accuracy, and image quality). These tests were found to be useful to assess the performance of the XVI system and also proved that XVI system is very suitable for image-guided high-precision radiation therapy.

  11. Clinical accuracy of point-of-care urine culture in general practice

    DEFF Research Database (Denmark)

    Holm, Anne; Cordoba, Gloria; Sørensen, Tina Møller

    2017-01-01

    OBJECTIVE: To assess the clinical accuracy (sensitivity (SEN), specificity (SPE), positive predictive value and negative predictive value) of two point-of-care (POC) urine culture tests for the identification of urinary tract infection (UTI) in general practice. DESIGN: Prospective diagnostic...... uncomplicated, symptomatic UTI. MAIN OUTCOME MEASURES: (1) Overall accuracy of POC urine culture in general practice. (2) Individual accuracy of each of the two POC tests in this study. (3) Accuracy of POC urine culture in general practice with enterococci excluded, since enterococci are known to multiply...... general practices recruited 341 patients with suspected uncomplicated UTI. The overall agreement between index test and reference was 0.76 (CI: 0.71-0.80), SEN 0.88 (CI: 0.83-0.92) and SPE 0.55 (CI: 0.46-0.64). The two POC tests produced similar results individually. Overall agreement with enterococci...

  12. The accuracy of prediction of genomic selection in elite hybrid rye populations surpasses the accuracy of marker-assisted selection and is equally augmented by multiple field evaluation locations and test years.

    Science.gov (United States)

    Wang, Yu; Mette, Michael Florian; Miedaner, Thomas; Gottwald, Marlen; Wilde, Peer; Reif, Jochen C; Zhao, Yusheng

    2014-07-04

    Marker-assisted selection (MAS) and genomic selection (GS) based on genome-wide marker data provide powerful tools to predict the genotypic value of selection material in plant breeding. However, case-to-case optimization of these approaches is required to achieve maximum accuracy of prediction with reasonable input. Based on extended field evaluation data for grain yield, plant height, starch content and total pentosan content of elite hybrid rye derived from testcrosses involving two bi-parental populations that were genotyped with 1048 molecular markers, we compared the accuracy of prediction of MAS and GS in a cross-validation approach. MAS delivered generally lower and in addition potentially over-estimated accuracies of prediction than GS by ridge regression best linear unbiased prediction (RR-BLUP). The grade of relatedness of the plant material included in the estimation and test sets clearly affected the accuracy of prediction of GS. Within each of the two bi-parental populations, accuracies differed depending on the relatedness of the respective parental lines. Across populations, accuracy increased when both populations contributed to estimation and test set. In contrast, accuracy of prediction based on an estimation set from one population to a test set from the other population was low despite that the two bi-parental segregating populations under scrutiny shared one parental line. Limiting the number of locations or years in field testing reduced the accuracy of prediction of GS equally, supporting the view that to establish robust GS calibration models a sufficient number of test locations is of similar importance as extended testing for more than one year. In hybrid rye, genomic selection is superior to marker-assisted selection. However, it achieves high accuracies of prediction only for selection candidates closely related to the plant material evaluated in field trials, resulting in a rather pessimistic prognosis for distantly related material

  13. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical model

    Energy Technology Data Exchange (ETDEWEB)

    Um, Ki Doo; Lee, Byung Do [Wonkwang University College of Medicine, Iksan (Korea, Republic of)

    2004-03-15

    This study aimed to evaluate the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were taken from a multi-detector spiral CT. Slice thicknesses were 1, 2, 3 and 4 mm, respectively. Three-dimensional image model reconstruction using 3-D visualization medical software (V-works 3.0) and RP model fabrication followed. The two RP models were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithographic apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image models, and the two RP models were made and compared according to slice thickness and RP model type. The relative error percentages in absolute value were 0.97, 1.98, and 3.83 between linear measurements of the dry skull and the image models of 1, 2, and 3 mm slice thickness, respectively. The relative error percentage in absolute value was 0.79 between linear measurements of the dry skull and the SLA model, and 2.52 between linear measurements of the dry skull and the 3D printing model. These results indicated that 3-dimensional image models with thin slice thickness and the stereolithographic RP model showed relatively high accuracy.

  14. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical model

    International Nuclear Information System (INIS)

    Um, Ki Doo; Lee, Byung Do

    2004-01-01

    This study aimed to evaluate the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were taken from a multi-detector spiral CT. Slice thicknesses were 1, 2, 3 and 4 mm, respectively. Three-dimensional image model reconstruction using 3-D visualization medical software (V-works 3.0) and RP model fabrication followed. The two RP models were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithographic apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image models, and the two RP models were made and compared according to slice thickness and RP model type. The relative error percentages in absolute value were 0.97, 1.98, and 3.83 between linear measurements of the dry skull and the image models of 1, 2, and 3 mm slice thickness, respectively. The relative error percentage in absolute value was 0.79 between linear measurements of the dry skull and the SLA model, and 2.52 between linear measurements of the dry skull and the 3D printing model. These results indicated that 3-dimensional image models with thin slice thickness and the stereolithographic RP model showed relatively high accuracy.

  15. Accuracy of Two Three-Term and Three Two-Term Recurrences for Krylov Space Solvers

    Czech Academy of Sciences Publication Activity Database

    Gutknecht, M. H.; Strakoš, Zdeněk

    2000-01-01

    Roč. 22, č. 1 (2000), s. 213-229 ISSN 0895-4798 R&D Projects: GA ČR GA205/96/0921; GA AV ČR IAA2030706 Institutional research plan: AV0Z1030915 Keywords: linear system of equations * iterative method * Krylov space method * conjugate gradient method * three-term recurrence * accuracy * roundoff Subject RIV: BA - General Mathematics Impact factor: 1.182, year: 2000

  16. Improving sub-pixel imperviousness change prediction by ensembling heterogeneous non-linear regression models

    Directory of Open Access Journals (Sweden)

    Drzewiecki Wojciech

    2016-12-01

    Full Text Available In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using the particular techniques.
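
    A sketch of the ensembling idea, assuming placeholder per-pixel features and imperviousness fractions; the member models are examples of the heterogeneous learners listed above, not the study's tuned configurations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.random((300, 6))     # e.g. spectral band values per pixel (placeholder)
y = rng.random(300)          # sub-pixel impervious surface fraction (placeholder)

models = [RandomForestRegressor(n_estimators=200, random_state=2),
          KNeighborsRegressor(n_neighbors=7),
          SVR(kernel="rbf", C=10.0)]
for m in models:
    m.fit(X, y)

X_new = rng.random((5, 6))
# Heterogeneous ensemble: average the member predictions (weights could also
# be tuned on a validation set built from the best model per technique).
ensemble_pred = np.mean([m.predict(X_new) for m in models], axis=0)
print(ensemble_pred)
```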

  17. A second-order virtual node algorithm for nearly incompressible linear elasticity in irregular domains

    Science.gov (United States)

    Zhu, Yongning; Wang, Yuting; Hellrung, Jeffrey; Cantarero, Alejandro; Sifakis, Eftychios; Teran, Joseph M.

    2012-08-01

    We present a cut cell method in R2 for enforcing Dirichlet and Neumann boundary conditions with nearly incompressible linear elastic materials in irregular domains. Virtual nodes on cut uniform grid cells are used to provide geometric flexibility in the domain boundary shape without sacrificing accuracy. We use a mixed formulation utilizing a MAC-type staggered grid with piecewise bilinear displacements centered at cell faces and piecewise constant pressures at cell centers. These discretization choices provide the necessary stability in the incompressible limit and the necessary accuracy in cut cells. Numerical experiments suggest second order accuracy in L∞. We target high-resolution problems and present a class of geometric multigrid methods for solving the discrete equations for displacements and pressures that achieves nearly optimal convergence rates independent of grid resolution.

  18. A minimax technique for time-domain design of preset digital equalizers using linear programming

    Science.gov (United States)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
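
    The minimax criterion can be cast as a linear program, as sketched below with a placeholder channel impulse response and desired waveform; this illustrates the general formulation (minimize the largest absolute deviation over the equalizer taps), not the paper's transversal or frequency-sampling designs.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

# Placeholder channel impulse response and desired (ISI-free) target waveform.
h = np.array([0.1, 0.8, 0.3, -0.1])
n_taps = 9
d = np.zeros(len(h) + n_taps - 1)
d[5] = 1.0

# Convolution matrix: H @ c is the equalized waveform for tap vector c.
H = toeplitz(np.concatenate([h, np.zeros(n_taps - 1)]),
             np.concatenate([[h[0]], np.zeros(n_taps - 1)]))

# Minimax as an LP: variables x = [c, t]; minimize t subject to |H c - d| <= t.
obj = np.concatenate([np.zeros(n_taps), [1.0]])
A_ub = np.vstack([np.hstack([H, -np.ones((len(d), 1))]),
                  np.hstack([-H, -np.ones((len(d), 1))])])
b_ub = np.concatenate([d, -d])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_taps + [(0, None)])
taps, max_abs_error = res.x[:n_taps], res.x[-1]
print(max_abs_error)
```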

  19. Realization of preconditioned Lanczos and conjugate gradient algorithms on optical linear algebra processors.

    Science.gov (United States)

    Ghosh, A

    1988-08-01

    Lanczos and conjugate gradient algorithms are important in computational linear algebra. In this paper, a parallel pipelined realization of these algorithms on a ring of optical linear algebra processors is described. The flow of data is designed to minimize the idle times of the optical multiprocessor and the redundancy of computations. The effects of optical round-off errors on the solutions obtained by the optical Lanczos and conjugate gradient algorithms are analyzed, and it is shown that optical preconditioning can improve the accuracy of these algorithms substantially. Algorithms for optical preconditioning and results of numerical experiments on solving linear systems of equations arising from partial differential equations are discussed. Since the Lanczos algorithm is used mostly with sparse matrices, a folded storage scheme to represent sparse matrices on spatial light modulators is also described.
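
    A compact sketch of the preconditioned conjugate gradient iteration referred to above, using a simple Jacobi (diagonal) preconditioner on a small symmetric positive definite test system; it shows the textbook algorithm, not its optical realization.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(3)
Q = rng.random((50, 50))
A = Q @ Q.T + 50 * np.eye(50)          # SPD test matrix
b = rng.random(50)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))   # Jacobi preconditioner
print(np.linalg.norm(A @ x - b))
```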

  20. The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators

    International Nuclear Information System (INIS)

    Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.

    2011-01-01

    Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements for high-energy linear accelerators. We present a new program LIAR ('LInear Accelerator Research code') that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. The program is available for UNIX workstations and Windows PCs. It can be applied to a broad range of accelerators. We present examples of simulations for SLC and NLC.

  1. Two-point paraxial traveltime formula for inhomogeneous isotropic and anisotropic media: Tests of accuracy

    KAUST Repository

    Waheed, Umair bin; Psencik, Ivan; Cerveny, Vlastislav; Iversen, Einar; Alkhalifah, Tariq Ali

    2013-01-01

    On several simple models of isotropic and anisotropic media, we have studied the accuracy of the two-point paraxial traveltime formula designed for the approximate calculation of the traveltime between points S' and R' located in the vicinity of points S and R on a reference ray. The reference ray may be situated in a 3D inhomogeneous isotropic or anisotropic medium with or without smooth curved interfaces. The two-point paraxial traveltime formula has the form of the Taylor expansion of the two-point traveltime with respect to spatial Cartesian coordinates up to quadratic terms at points S and R on the reference ray. The constant term and the coefficients of the linear and quadratic terms are determined from quantities obtained from ray tracing and linear dynamic ray tracing along the reference ray. The use of linear dynamic ray tracing allows the evaluation of the quadratic terms in arbitrarily inhomogeneous media and, as shown by examples, it extends the region of accurate results around the reference ray between S and R (and even outside this interval) obtained with the linear terms only. Although the formula may be used for very general 3D models, we concentrated on simple 2D models of smoothly inhomogeneous isotropic and anisotropic (~8% and ~20% anisotropy) media only. On tests, in which we estimated two-point traveltimes between a shifted source and a system of shifted receivers, we found that the formula may yield more accurate results than the numerical solution of an eikonal-based differential equation. The tests also indicated that the accuracy of the formula depends primarily on the length and the curvature of the reference ray and only weakly depends on anisotropy. The greater the curvature of the reference ray, the narrower the vicinity in which the formula yields accurate results.
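
    The structure of the expansion can be illustrated with a generic second-order Taylor evaluation; the gradient vectors and second-derivative blocks below are placeholders standing in for the quantities obtained from ray tracing and linear dynamic ray tracing, not values from the study.

```python
import numpy as np

def paraxial_traveltime(T0, gS, gR, WSS, WSR, WRR, dS, dR):
    """Second-order (paraxial) Taylor estimate of the traveltime between shifted
    points S' = S + dS and R' = R + dR, given the value, gradients and
    second-derivative blocks of the two-point traveltime at S and R."""
    return (T0
            + gS @ dS + gR @ dR
            + 0.5 * dS @ WSS @ dS
            + dS @ WSR @ dR
            + 0.5 * dR @ WRR @ dR)

# Placeholder coefficients (in practice from ray tracing / dynamic ray tracing).
T0 = 1.25
gS, gR = np.array([-0.3, 0.0, 0.1]), np.array([0.28, 0.0, -0.05])
WSS = np.diag([0.02, 0.03, 0.01])
WRR = np.diag([0.02, 0.02, 0.015])
WSR = 0.005 * np.ones((3, 3))

print(paraxial_traveltime(T0, gS, gR, WSS, WSR, WRR,
                          dS=np.array([0.1, 0.0, 0.05]),
                          dR=np.array([-0.05, 0.02, 0.0])))
```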

  2. Two-point paraxial traveltime formula for inhomogeneous isotropic and anisotropic media: Tests of accuracy

    KAUST Repository

    Waheed, Umair bin

    2013-09-01

    On several simple models of isotropic and anisotropic media, we have studied the accuracy of the two-point paraxial traveltime formula designed for the approximate calculation of the traveltime between points S' and R' located in the vicinity of points S and R on a reference ray. The reference ray may be situated in a 3D inhomogeneous isotropic or anisotropic medium with or without smooth curved interfaces. The two-point paraxial traveltime formula has the form of the Taylor expansion of the two-point traveltime with respect to spatial Cartesian coordinates up to quadratic terms at points S and R on the reference ray. The constant term and the coefficients of the linear and quadratic terms are determined from quantities obtained from ray tracing and linear dynamic ray tracing along the reference ray. The use of linear dynamic ray tracing allows the evaluation of the quadratic terms in arbitrarily inhomogeneous media and, as shown by examples, it extends the region of accurate results around the reference ray between S and R (and even outside this interval) obtained with the linear terms only. Although the formula may be used for very general 3D models, we concentrated on simple 2D models of smoothly inhomogeneous isotropic and anisotropic (~8% and ~20% anisotropy) media only. On tests, in which we estimated two-point traveltimes between a shifted source and a system of shifted receivers, we found that the formula may yield more accurate results than the numerical solution of an eikonal-based differential equation. The tests also indicated that the accuracy of the formula depends primarily on the length and the curvature of the reference ray and only weakly depends on anisotropy. The greater the curvature of the reference ray, the narrower the vicinity in which the formula yields accurate results.

  3. An automated method for the evaluation of the pointing accuracy of Sun-tracking devices

    Science.gov (United States)

    Baumgartner, Dietmar J.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz; Veronig, Astrid M.; Rieder, Harald E.

    2017-03-01

    The accuracy of solar radiation measurements, for direct (DIR) and diffuse (DIF) radiation, depends significantly on the precision of the operational Sun-tracking device. Thus, rigid targets for instrument performance and operation have been specified for international monitoring networks, e.g., the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices that fulfill these accuracy requirements are available from various instrument manufacturers; however, none of the commercially available systems comprise an automatic accuracy control system allowing platform operators to independently validate the pointing accuracy of Sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent, and cost-effective system for evaluating the pointing accuracy of Sun-tracking devices. We detail the monitoring system setup, its design and specifications, and the results from its application to the Sun-tracking system operated at the Kanzelhöhe Observatory (KSO) Austrian radiation monitoring network (ARAD) site. The results from an evaluation campaign from March to June 2015 show that the tracking accuracy of the device operated at KSO lies within BSRN specifications (i.e., 0.1° tracking accuracy) for the vast majority of observations (99.8 %). The evaluation of manufacturer-specified active-tracking accuracies (0.02°), during periods with direct solar radiation exceeding 300 W m-2, shows that these are satisfied in 72.9 % of observations. Tracking accuracies are highest during clear-sky conditions and on days where prevailing clear-sky conditions are interrupted by frontal movement; in these cases, we obtain the complete fulfillment of BSRN requirements and 76.4 % of observations within manufacturer-specified active-tracking accuracies. Limitations to tracking surveillance arise during overcast conditions and

  4. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    Science.gov (United States)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high resolution spatial data, such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation with repeated images in environmental management studies, such as those of climate-related changes, as well as increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. About 98% overall accuracy and a 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
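
    A minimal numpy sketch of the SOM training loop (competitive learning with a shrinking Gaussian neighborhood); the input is a random placeholder for the image-derived feature vectors used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((1000, 3))                  # placeholder pixel feature vectors

rows, cols, dim = 10, 10, X.shape[1]
W = rng.random((rows, cols, dim))          # SOM codebook (weight grid)
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_iter = 5000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # Best matching unit: node whose weight vector is closest to the sample.
    d = np.linalg.norm(W - x, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Learning rate and neighborhood radius decay over time.
    lr = 0.5 * np.exp(-t / n_iter)
    sigma = max(rows, cols) / 2 * np.exp(-t / n_iter)
    # Gaussian neighborhood around the BMU pulls nearby nodes toward the sample.
    dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
    h = np.exp(-dist2 / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)

# After training, each pixel is assigned to its BMU, and BMU clusters can be
# labelled (e.g. "linear bedform" versus background).
```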

  5. Linear Prediction Using Refined Autocorrelation Function

    Directory of Open Access Journals (Sweden)

    M. Shahidur Rahman

    2007-07-01

    Full Text Available This paper proposes a new technique for improving the performance of linear prediction analysis by utilizing a refined version of the autocorrelation function. Problems in analyzing voiced speech using linear prediction often occur due to the harmonic structure of the excitation source, which causes the autocorrelation function to be an aliased version of that of the vocal tract impulse response. To estimate the vocal tract characteristics accurately, however, the effect of aliasing must be eliminated. In this paper, we employ a homomorphic deconvolution technique in the autocorrelation domain to eliminate the aliasing effect caused by periodicity. The resulting autocorrelation function of the vocal tract impulse response is found to produce significant improvement in estimating formant frequencies. The accuracy of formant estimation is verified on synthetic vowels for a wide range of pitch frequencies typical for male and female speakers. The validity of the proposed method is also illustrated by inspecting the spectral envelopes of natural speech spoken by a high-pitched female speaker. The synthesis filter obtained by the current method is guaranteed to be stable, which makes the method superior to many of its alternatives.
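
    For reference, the conventional autocorrelation-method linear prediction step that the refinement builds on can be sketched as below; the refined, aliasing-reduced autocorrelation itself is not reproduced, and the frame is a synthetic stand-in for voiced speech.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def lpc_from_autocorr(frame, order=12):
    """Conventional autocorrelation-method LPC: solve the Yule-Walker equations.
    The refined method would substitute an aliasing-reduced autocorrelation here."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = toeplitz(r[:order])            # autocorrelation (Toeplitz) matrix
    a = solve(R, r[1:order + 1])       # predictor coefficients
    return a

# Synthetic voiced-like frame: two formant-like resonances plus weak noise.
fs = 8000
t = np.arange(0, 0.03, 1 / fs)
frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1220 * t)
frame += 0.01 * np.random.default_rng(5).normal(size=t.size)
frame *= np.hamming(t.size)

print(lpc_from_autocorr(frame))
```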

  6. Assessing accuracy of an electronic provincial medication repository

    Directory of Open Access Journals (Sweden)

    Price Morgan

    2012-05-01

    Full Text Available Abstract Background: Jurisdictional drug information systems are being implemented in many regions around the world. British Columbia, Canada has had a provincial medication dispensing record system, PharmaNet, since 1995. Little is known about how accurately PharmaNet reflects actual medication usage. Methods: This prospective, multi-centre study compared pharmacist-collected Best Possible Medication Histories (BPMH) to PharmaNet profiles to assess the accuracy of the PharmaNet profiles for patients receiving a BPMH as part of clinical care. A review panel examined the anonymized BPMHs and discrepancies to estimate the clinical significance of discrepancies. Results: 16% of medication profiles were accurate, with 48% of the discrepant profiles considered potentially clinically significant by the clinical review panel. Cardiac medications tended to be more accurate (e.g. ramipril was accurate >90% of the time), while insulin, warfarin, salbutamol and pain relief medications were often inaccurate (80–85% of the time). 1215 sequential BPMHs were collected and reviewed for this study. Conclusions: The PharmaNet medication repository has low accuracy and should be used in conjunction with other sources for medication histories for clinical or research purposes. This finding is consistent with other, smaller medication repository accuracy studies in other jurisdictions. Our study highlights specific medications that tend to be lower in accuracy.

  7. An implicit meshless scheme for the solution of transient non-linear Poisson-type equations

    KAUST Repository

    Bourantas, Georgios

    2013-07-01

    A meshfree point collocation method is used for the numerical simulation of both transient and steady state non-linear Poisson-type partial differential equations. Particular emphasis is placed on the application of the linearization method with special attention to the lagging of coefficients method and the Newton linearization method. The localized form of the Moving Least Squares (MLS) approximation is employed for the construction of the shape functions, in conjunction with the general framework of the point collocation method. Computations are performed for regular nodal distributions, stressing the positivity conditions that make the resulting system stable and convergent. The accuracy and the stability of the proposed scheme are demonstrated through representative and well-established benchmark problems. © 2013 Elsevier Ltd.

  8. An implicit meshless scheme for the solution of transient non-linear Poisson-type equations

    KAUST Repository

    Bourantas, Georgios; Burganos, Vasilis N.

    2013-01-01

    A meshfree point collocation method is used for the numerical simulation of both transient and steady state non-linear Poisson-type partial differential equations. Particular emphasis is placed on the application of the linearization method with special attention to the lagging of coefficients method and the Newton linearization method. The localized form of the Moving Least Squares (MLS) approximation is employed for the construction of the shape functions, in conjunction with the general framework of the point collocation method. Computations are performed for regular nodal distributions, stressing the positivity conditions that make the resulting system stable and convergent. The accuracy and the stability of the proposed scheme are demonstrated through representative and well-established benchmark problems. © 2013 Elsevier Ltd.

  9. Visuo-manual tracking: does intermittent control with aperiodic sampling explain linear power and non-linear remnant without sensorimotor noise?

    Science.gov (United States)

    Gollee, Henrik; Gawthrop, Peter J; Lakie, Martin; Loram, Ian D

    2017-11-01

    A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial, refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking? Twelve participants (instructed to act in three prescribed

  10. Diagnostic accuracy of ultrasound for rotator cuff tears in adults: A systematic review and meta-analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, T.O., E-mail: toby.smith@uea.ac.uk [Department of Physiotherapy, University of East Anglia (United Kingdom); Back, T. [Department of Physiotherapy, Norfolk and Norwich University Hospital, London (United Kingdom); Toms, A.P. [Department of Radiology, Norfolk and Norwich University Hospital, London (United Kingdom); Hing, C.B. [Department of Orthopaedic Surgery, St George's Hospital, London (United Kingdom)

    2011-11-15

    Aim: To determine the diagnostic accuracy of ultrasound to detect partial and complete thickness rotator cuff tears based on all available clinical trials. Materials and methods: An electronic search of databases registering published and unpublished literature was conducted. All diagnostic accuracy studies that directly compared the accuracy of ultrasound (the index test) to either arthroscopic or open surgical findings (the reference test) for rotator cuff tear were included. The methodological quality of each included study was assessed using the QUADAS form. When appropriate, pooled sensitivity and specificity analysis was conducted, with an assessment of the summary receiver operating characteristic (ROC) curve for each analysis. Results: Sixty-two studies assessing 6007 patients and 6066 shoulders were included. Ultrasonography had good sensitivity and specificity for the assessment of partial thickness (sensitivity 0.84; specificity 0.89), and full-thickness rotator cuff tears (sensitivity 0.96; specificity 0.93). However, the literature poorly described population characteristics, assessor blinding, and was based on limited sample sizes. The literature assessing transducer frequency was particularly small in size. Conclusion: Ultrasonography is an appropriate radiological technique for the assessment of rotator cuff tears with an acceptable sensitivity and specificity. The diagnostic test accuracy of ultrasound is superior for the detection of full-thickness compared to partial-thickness cuff tears. Further study assessing the effect of transducer frequency is warranted.
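
    The reported accuracy measures follow directly from a 2x2 comparison of the index test against the surgical reference standard; the counts in the sketch below are illustrative only, not data from the review.

```python
# Illustrative 2x2 counts for an index test (ultrasound) against the surgical
# reference standard; the numbers are made up, not taken from the review.
tp, fp, fn, tn = 96, 7, 4, 93

sensitivity = tp / (tp + fn)     # tear present and detected
specificity = tn / (tn + fp)     # tear absent and correctly ruled out
ppv = tp / (tp + fp)             # positive predictive value
npv = tn / (tn + fn)             # negative predictive value

print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")
```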

  11. Diagnostic accuracy of ultrasound for rotator cuff tears in adults: A systematic review and meta-analysis

    International Nuclear Information System (INIS)

    Smith, T.O.; Back, T.; Toms, A.P.; Hing, C.B.

    2011-01-01

    Aim: To determine the diagnostic accuracy of ultrasound to detect partial and complete thickness rotator cuff tears based on all available clinical trials. Materials and methods: An electronic search of databases registering published and unpublished literature was conducted. All diagnostic accuracy studies that directly compared the accuracy of ultrasound (the index test) to either arthroscopic or open surgical findings (the reference test) for rotator cuff tear were included. The methodological quality of each included study was assessed using the QUADAS form. When appropriate, pooled sensitivity and specificity analysis was conducted, with an assessment of the summary receiver operating characteristic (ROC) curve for each analysis. Results: Sixty-two studies assessing 6007 patients and 6066 shoulders were included. Ultrasonography had good sensitivity and specificity for the assessment of partial thickness (sensitivity 0.84; specificity 0.89), and full-thickness rotator cuff tears (sensitivity 0.96; specificity 0.93). However, the literature poorly described population characteristics, assessor blinding, and was based on limited sample sizes. The literature assessing transducer frequency was particularly small in size. Conclusion: Ultrasonography is an appropriate radiological technique for the assessment of rotator cuff tears with an acceptable sensitivity and specificity. The diagnostic test accuracy of ultrasound is superior for the detection of full-thickness compared to partial-thickness cuff tears. Further study assessing the effect of transducer frequency is warranted.

  12. A generic approach for expanding homolog-targeted residue screening of sulfonamides using a fast matrix separation and class-specific fragmentation-dependent acquisition with a hybrid quadrupole-linear ion trap mass spectrometer

    International Nuclear Information System (INIS)

    Huang Chunlin; Guo Bin; Wang Xiaoying; Li Jie; Zhu Weitao; Chen Bo; Ouyang Shan; Yao Shouzhuo

    2012-01-01

    Highlights: ► Generic homolog-targeted screening approach for multi-residual sulfonamide analogs. ► Single-tube extraction/partitioning-multifunction adsorption cleanup for direct injection. ► Class-specific fragmentation for expanding coverage of N4-acetyl and N-OH metabolites. ► PreS–IDA–EPI in LC–QqLIT for simultaneous screening and confirmation of real samples. - Abstract: A generic and efficient homolog-targeted approach was used to expand screening and detection of a target class of sulfonamides and structural analogs, based on a fast single-tube extraction/partitioning-multifunction adsorption cleanup (SEP/MAC) for class-specific fragmentation-dependent acquisition with a liquid chromatography–hybrid triple-quadrupole linear ion trap mass spectrometer (LC–QqLIT). By combining the two-stage process conducted in a single tube as a one-pot protocol, the straightforward SEP/MAC procedure was optimized to offer clean extracts with reasonable recovery (71–109%), expanding coverage to N4-acetyl and hydroxylamine metabolites plus their possible dimers. Moreover, the PreS-triggered automatically enhanced product ion spectral acquisition enabled simultaneous screening, profiling and confirmation of an unlimited number of analytes belonging to the sulfonamide class within a single analysis. The validation and application results of the generic SEP/MAC-based LC–QqLIT strategy consistently demonstrated favorable performance, with acceptable accuracy (67–116%) and precision meeting the acceptance criteria for all the sulfonamide–tissue combinations. Thus, the integration of the matrix-independent SEP/MAC procedure and the multiparameter matching algorithm with the unit-resolution LC–QqLIT instrument can serve as a valuable semi-targeted discovery strategy for rapid screening and reliable quantitative/confirmatory analysis of real samples.

  13. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    International Nuclear Information System (INIS)

    Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo

    2014-01-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. CRM involvement prediction and N staging were also evaluated; the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.

  14. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)

    2014-07-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. CRM involvement prediction and N staging were also evaluated; the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.

  15. Evolution of perturbed dynamical systems: analytical computation with time independent accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Gurzadyan, A.V. [Russian-Armenian (Slavonic) University, Department of Mathematics and Mathematical Modelling, Yerevan (Armenia); Kocharyan, A.A. [Monash University, School of Physics and Astronomy, Clayton (Australia)

    2016-12-15

    An analytical method with time-independent accuracy is developed for investigating the evolution of perturbed Hamiltonian dynamical systems. Error-free estimation using computer algebra enables the application of the method to complex multi-dimensional Hamiltonian and dissipative systems. It also opens up principal opportunities for the qualitative study of chaotic trajectories. The performance of the method is demonstrated on perturbed two-oscillator systems. It can be applied to various non-linear physical and astrophysical systems, e.g. to long-term planetary dynamics. (orig.)

  16. Accuracy of genomic breeding value prediction for intramuscular fat using different genomic relationship matrices in Hanwoo (Korean cattle).

    Science.gov (United States)

    Choi, Taejeong; Lim, Dajeong; Park, Byoungho; Sharma, Aditi; Kim, Jong-Joo; Kim, Sidong; Lee, Seung Hwan

    2017-07-01

    Intramuscular fat is one of the meat quality traits considered in the selection strategies for Hanwoo (Korean cattle). Different methods are used to estimate the breeding value of selection candidates. In the present work we focused on the accuracy of different genomic relationship matrices, as described by Forni et al., and of the pedigree-based relationship matrix. The data set included a total of 778 animals that were genotyped with the BovineSNP50 BeadChip. Among these 778 animals, 72 animals were sires of the 706 reference animals and were used as a validation dataset. A single-trait animal model (best linear unbiased prediction and genomic best linear unbiased prediction) was used to estimate the breeding values from pedigree and genomic information. The diagonal elements of the pedigree-based coefficients were slightly higher than the genomic relationship matrix (GRM) based coefficients, while the off-diagonal elements were considerably lower for the GRM-based coefficients. The accuracy of breeding values for the pedigree-based relationship matrix (A) was 0.13, while for the GRMs (GOF, G05, and Yang) it was 0.37, 0.45, and 0.38, respectively. The accuracy of the GRMs was 1.5 times higher than that of A in this study. Therefore, genomic information will be more beneficial than pedigree information in the Hanwoo breeding program.

  17. Response moderation models for conditional dependence between response time and response accuracy.

    Science.gov (United States)

    Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan

    2017-05-01

    It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.

  18. Financial Analysts' Forecast Accuracy : Before and After the Introduction of AIFRS

    Directory of Open Access Journals (Sweden)

    Chee Seng Cheong

    2010-09-01

    Full Text Available We examine whether financial analysts' forecast accuracy differs between the pre- and post-adoption of Australian Equivalents to the International Financial Reporting Standards (AIFRS). We find that forecast accuracy has improved after Australia adopted AIFRS. As a secondary objective, this paper also investigates the role of financial analysts in reducing information asymmetry in today's Australian capital market. We find weak evidence that more analysts following a stock do not help to improve forecast accuracy by bringing more firm-specific information to the market.

  19. Method for linearizing the potentiometric curves of precipitation titration in nonaqueous and aqueous-organic solutions

    International Nuclear Information System (INIS)

    Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.

    1995-01-01

    The method for linearizing the potentiometric curves of precipitation titration is studied for its application in the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the linearization method permits the determination of the titrant volume at the end point of titration to high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 x 10⁻⁵ M). 3 refs., 2 figs., 3 tabs

  20. Assessment of the accuracy of ABC/2 variations in traumatic epidural hematoma volume estimation: a retrospective study

    Directory of Open Access Journals (Sweden)

    Pengfei Yan

    2016-04-01

    Full Text Available Background. The traumatic epidural hematoma (tEDH) volume is often used to assist in tEDH treatment planning and outcome prediction. ABC/2 is a well-accepted volume estimation method that can be used for tEDH volume estimation. Previous studies have proposed different variations of ABC/2; however, it is unclear which variation will provide a higher accuracy. Given the promising clinical contribution of accurate tEDH volume estimations, we sought to assess the accuracy of several ABC/2 variations in tEDH volume estimation. Methods. The study group comprised 53 patients with tEDH who had undergone non-contrast head computed tomography scans. For each patient, the tEDH volume was automatically estimated by eight ABC/2 variations (four traditional and four newly derived) with an in-house program, and results were compared to those from manual planimetry. Linear regression, the closest value, percentage deviation, and Bland-Altman plot were adopted to comprehensively assess accuracy. Results. Among all ABC/2 variations assessed, the traditional variations y = 0.5 × A1B1C1 (or A2B2C1) and the newly derived variations y = 0.65 × A1B1C1 (or A2B2C1) achieved higher accuracy than the other variations. No significant differences were observed between the estimated volume values generated by these variations and those of planimetry (p > 0.05). Comparatively, the former performed better than the latter in general, with smaller mean percentage deviations (7.28 ± 5.90% and 6.42 ± 5.74% versus 19.12 ± 6.33% and 21.28 ± 6.80%, respectively) and more values closest to planimetry (18/53 and 18/53 versus 2/53 and 0/53, respectively). In addition, deviations of most cases in the former fell within the range of 20% (90.57% and 96.23%, respectively). Discussion. In the current study, we adopted an automatic approach to assess the accuracy of several ABC/2 variations for tEDH volume estimation. Our initial results showed that the variations y = 0.5 × A1B1C1 (or A2B2C1)
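
    The two best-performing variations reported above amount to applying different coefficients to the same A·B·C product; the measurements in the sketch below are placeholders.

```python
# ABC/2-style volume estimates for an epidural hematoma on CT.
# A, B: largest hematoma diameters (cm) on the reference axial slice,
# C: craniocaudal extent (cm) = number of slices x slice thickness.
A, B, C = 4.2, 1.8, 3.5   # placeholder measurements

v_traditional = 0.5 * A * B * C    # classic ABC/2
v_derived = 0.65 * A * B * C       # newly derived coefficient reported to do well

print(f"ABC/2: {v_traditional:.1f} mL, 0.65*ABC: {v_derived:.1f} mL")
```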

  1. Linear and nonlinear dynamic systems in financial time series prediction

    Directory of Open Access Journals (Sweden)

    Salim Lahmiri

    2012-10-01

    Full Text Available Autoregressive moving average (ARMA) processes and dynamic neural networks, namely the nonlinear autoregressive moving average with exogenous inputs (NARX), are compared by evaluating their ability to predict financial time series, for instance the S&P500 returns. Two classes of ARMA are considered. The first one is the standard ARMA model which is a linear static system. The second one uses a Kalman filter (KF) to estimate and predict ARMA coefficients. This model is a linear dynamic system. The forecasting ability of each system is evaluated by means of mean absolute error (MAE) and mean absolute deviation (MAD) statistics. Simulation results indicate that the ARMA-KF system performs better than the standard ARMA alone. Thus, introducing dynamics into the ARMA process improves the forecasting accuracy. In addition, the ARMA-KF outperformed the NARX. This result may suggest that the linear component found in the S&P500 return series is more dominant than the nonlinear part. In sum, we conclude that introducing dynamics into the ARMA process provides an effective system for S&P500 time series prediction.

  2. Shape Accuracy of Iron Precision Castings in Terms of Ceramic Moulds Physical Properties Anisotropy

    Directory of Open Access Journals (Sweden)

    Biernacki R.

    2014-03-01

    Full Text Available While analyzing the shape accuracy of ferroalloy precision castings in terms of the physical anisotropy of ceramic moulds, low-alloy steel castings ("cover") and cast iron castings ("plate") were included. The basic parameters, in addition to the product's linear shape accuracy, are flatness deviations, especially for the expanded flat surface of the cast plate. Surface micro-geometry analysis was also carried out for these castings, focusing on the surface load capacity tp50 at Rmax = 50%. The surface load capacity tp50 obtained for the cast cover was compared with that of a machined product, and the cast plate surface was compared with the wear part of a conveyor belt. The results were related to the anisotropy of the physical properties of the ceramic moulds, which was evaluated by studying ceramic mould samples with Metrotom 800 computed tomography equipment.

  3. Algorithm 589. SICEDR: a FORTRAN subroutine for improving the accuracy of computed matrix eigenvalues

    International Nuclear Information System (INIS)

    Dongarra, J.J.

    1982-01-01

    SICEDR is a FORTRAN subroutine for improving the accuracy of a computed real eigenvalue and improving or computing the associated eigenvector. It is first used to generate information during the determination of the eigenvalues by the Schur decomposition technique. In particular, the Schur decomposition technique results in an orthogonal matrix Q and an upper quasi-triangular matrix T, such that A = QTQ^T. Matrices A, Q, and T and the approximate eigenvalue, say lambda, are then used in the improvement phase. SICEDR uses an iterative method similar to iterative improvement for linear systems to improve the accuracy of lambda and improve or compute the eigenvector x in O(n²) work, where n is the order of the matrix A.
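    The SICEDR Fortran source is not reproduced here; as a rough illustration of the same kind of eigenpair refinement, the following numpy sketch uses Rayleigh quotient iteration (a related but different technique) to polish an approximate real eigenvalue and eigenvector of a symmetric matrix. The matrix and perturbations are made-up test data.

```python
# A minimal sketch (not SICEDR itself): refining an approximate real
# eigenpair (lam, x) of a symmetric A with Rayleigh quotient iteration.
import numpy as np

def refine_eigenpair(A, lam, x, iters=5, tol=1e-12):
    """Improve an approximate eigenvalue lam and eigenvector x of A."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        if np.linalg.norm(A @ x - lam * x) < tol:
            break                          # converged; avoid a near-singular solve
        # Each step solves one shifted linear system; with a precomputed
        # factorization (as in the Schur-based setting) this costs O(n^2).
        y = np.linalg.solve(A - lam * np.eye(A.shape[0]), x)
        x = y / np.linalg.norm(y)
        lam = x @ A @ x                    # Rayleigh quotient update
    return lam, x

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A + A.T                                # symmetric, so eigenvalues are real
w, V = np.linalg.eigh(A)
lam0 = w[0] + 1e-3                         # perturbed eigenvalue
x0 = V[:, 0] + 1e-3 * rng.standard_normal(6)
lam, x = refine_eigenpair(A, lam0, x0)
print(abs(lam - w[0]))                     # error after refinement
```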

  4. Local correlation detection with linearity enhancement in streaming data

    KAUST Repository

    Xie, Qing

    2013-01-01

    This paper addresses the challenges in detecting the potential correlation between numerical data streams, which facilitates the research of data stream mining and pattern discovery. We focus on local correlation with delay, which may occur in bursts at different times in different streams, and last for a limited period. The uncertainty on the correlation occurrence and the time delay make it difficult to monitor the correlation online. Furthermore, the conventional correlation measure lacks the ability of reflecting visual linearity, which is more desirable in reality. This paper proposes effective methods to continuously detect the correlation between data streams. Our approach is based on the Discrete Fourier Transform to make rapid cross-correlation calculation with time delay allowed. In addition, we introduce a shape-based similarity measure into the framework, which refines the results by representative trend patterns to enhance the significance of linearity. The similarity of proposed linear representations can quickly estimate the correlation, and the window sliding strategy in segment level improves the efficiency for online detection. The empirical study demonstrates the accuracy of our detection approach, as well as more than 30% improvement of efficiency. Copyright 2013 ACM.
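    As a rough illustration of the FFT-based part of such an approach (not the authors' full framework with shape-based refinement and sliding windows), the sketch below estimates the delay that maximizes the cross-correlation between two windows of synthetic streams; all signal parameters are made up.

```python
# A minimal sketch: FFT-based cross-correlation with an allowed time delay.
import numpy as np

def best_lag(x, y, max_lag):
    """Return (lag, score) maximizing sum_t x[t]*y[t+lag]; a positive lag
    means y is delayed relative to x by that many samples."""
    n = len(x)
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    size = 2 * n                                   # zero-padding avoids circular wrap-around
    cc = np.fft.irfft(np.conj(np.fft.rfft(x, size)) * np.fft.rfft(y, size), size)
    lags = np.arange(-max_lag, max_lag + 1)
    vals = cc[lags] / n                            # negative lags wrap to the array's tail
    k = np.argmax(vals)
    return lags[k], vals[k]

t = np.arange(500)
a = np.sin(0.07 * t) + 0.1 * np.random.randn(500)
b = np.roll(a, 12)                                 # b lags a by 12 samples
print(best_lag(a, b, 50))                          # expected lag: 12
```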

  5. Privacy-Preserving Distributed Linear Regression on High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Gascón Adrià

    2017-10-01

    Full Text Available We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. Our main contribution is a hybrid multi-party computation protocol that combines Yao’s garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD) algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.’s method for privacy-preserving ridge regression (S&P 2013), and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time.
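    The paper's solver runs inside secure multi-party computation on fixed-point numbers; purely as an illustration of the underlying iteration, the following plain floating-point sketch applies conjugate gradient to the ridge-regression normal equations on synthetic data (it is not the authors' protocol).

```python
# A plain (non-private) sketch of conjugate gradient on the ridge normal
# equations (X^T X + lam*I) w = X^T y.
import numpy as np

def cg_ridge(X, y, lam=1e-3, iters=50, tol=1e-10):
    A = X.T @ X + lam * np.eye(X.shape[1])     # symmetric positive definite
    b = X.T @ y
    w = np.zeros(X.shape[1])
    r = b - A @ w
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        w += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.01 * rng.standard_normal(200)
print(np.linalg.norm(cg_ridge(X, y) - w_true))  # should be small
```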

  6. Administrative database code accuracy did not vary notably with changes in disease prevalence.

    Science.gov (United States)

    van Walraven, Carl; English, Shane; Austin, Peter C

    2016-11-01

    Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly by disease prevalence. This study determined if the accuracy of diagnostic codes varied by disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples from the two previous studies having varying disease prevalence, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive value. Diagnostic code sensitivity and specificity did not change notably within clinically sensible disease prevalence. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
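    The arithmetic behind this finding is Bayes' rule: with sensitivity and specificity held fixed, predictive values still shift with prevalence. The sketch below uses made-up sensitivity, specificity, and prevalence values purely to show the effect.

```python
# Predictive values as a function of prevalence, for fixed sensitivity/specificity.
def predictive_values(sens, spec, prevalence):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

for prev in (0.01, 0.05, 0.20, 0.50):          # hypothetical disease prevalences
    ppv, npv = predictive_values(sens=0.85, spec=0.95, prevalence=prev)
    print(f"prevalence={prev:.2f}  PPV={ppv:.2f}  NPV={npv:.2f}")
```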

  7. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    Science.gov (United States)

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
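    The updating algorithm itself is not reproduced here; for reference, the following numpy sketch solves the same equality-constrained least squares problem, min ||Ax − b|| subject to Cx = d, by a direct null-space method built on a QR factorization of C^T. All matrices are synthetic test data.

```python
# A reference sketch (direct null-space method, not the paper's QR-updating
# algorithm): solve min ||A x - b|| subject to C x = d.
import numpy as np

def lse_nullspace(A, b, C, d):
    p, n = C.shape
    Q, R = np.linalg.qr(C.T, mode='complete')      # C^T = Q [R1; 0]
    R1 = R[:p, :p]
    y1 = np.linalg.solve(R1.T, d)                  # C x = d  =>  R1^T y1 = d
    Q1, Q2 = Q[:, :p], Q[:, p:]
    # Minimize over the free part y2 of x = Q1 y1 + Q2 y2.
    y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ Q1 @ y1, rcond=None)
    return Q1 @ y1 + Q2 @ y2

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 6))
b = rng.standard_normal(30)
C = rng.standard_normal((2, 6))
d = rng.standard_normal(2)
x = lse_nullspace(A, b, C, d)
print(np.linalg.norm(C @ x - d))                   # ~0: constraints satisfied
```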

  8. A non-linear algorithm for current signal filtering and peak detection in SiPM

    International Nuclear Information System (INIS)

    Putignano, M; Intermite, A; Welsch, C P

    2012-01-01

    Read-out of Silicon Photomultipliers is commonly achieved by means of charge integration, a method particularly susceptible to after-pulsing noise and not efficient for low level light signals. Current signal monitoring, characterized by easier electronic implementation and intrinsically faster than charge integration, is also more suitable for low level light signals and can potentially result in much decreased after-pulsing noise effects. However, its use is to date limited by the need of developing a suitable read-out algorithm for signal analysis and filtering, able to achieve current peak detection and measurement with the needed precision and accuracy. In this paper we present an original algorithm, based on a piecewise linear-fitting approach, to filter the noise of the current signal and hence efficiently identify and measure current peaks. The proposed algorithm is then compared with the optimal linear filtering algorithm for time-encoded peak detection, based on a moving average routine, and assessed in terms of accuracy, precision, and peak detection efficiency, demonstrating improvements of 1–2 orders of magnitude in all these quality factors.

  9. The impact of improved sparse linear solvers on industrial engineering applications

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, M. [Cray Research, Inc., Eagan, MN (United States); Baddourah, M.; Poole, E.L.; Yang, Chao Wu

    1996-12-31

    There are usually many factors that ultimately determine the quality of computer simulation for engineering applications. Some of the most important are the quality of the analytical model and approximation scheme, the accuracy of the input data and the capability of the computing resources. However, in many engineering applications the characteristics of the sparse linear solver are the key factors in determining how complex a problem a given application code can solve. Therefore, the advent of a dramatically improved solver often brings with it dramatic improvements in our ability to do accurate and cost effective computer simulations. In this presentation we discuss the current status of sparse iterative and direct solvers in several key industrial CFD and structures codes, and show the impact that recent advances in linear solvers have made on both our ability to perform challenging simulations and the cost of those simulations. We also present some of the current challenges we have and the constraints we face in trying to improve these solvers. Finally, we discuss future requirements for sparse linear solvers on high performance architectures and try to indicate the opportunities that exist if we can develop even more improvements in linear solver capabilities.

  10. A linear multiple balance method for discrete ordinates neutron transport equations

    International Nuclear Information System (INIS)

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴) whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the solution method of linear multiple balance is effective and sufficiently efficient.

  11. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were: KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
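    A minimal sketch of the comparison protocol, using synthetic six-class data and scikit-learn in place of the study's multispectral features and histopathological ground truth; only a subset of the listed algorithms is shown, and all data here are artificial.

```python
# 10-fold cross-validated comparison of a few of the listed classifiers on
# synthetic six-class data (illustration only, not the study's dataset).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)

X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```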

  12. High Precision Survey and Alignment of Large Linear Accelerators

    CERN Document Server

    Prenting, J

    2004-01-01

    For the future linear accelerator TESLA the demanded accuracy for the alignment of the components is 0.5 mm horizontal and 0.2 mm vertical, both on each 600 m section. Other accelerators require similar accuracies. These demands cannot be fulfilled with open-air geodetic methods, mainly because of refraction. Therefore the RTRS (Rapid Tunnel Reference Surveyor), a measurement train performing overlapping multipoint alignment on a reference network, is being developed. Two refraction-free realizations of this concept are being developed at the moment: the first one (GeLiS) measures the horizontal co-ordinates using stretched wires, combined with photogrammetric split-image sensors in a distance measurement configuration. In areas of the tunnel where the accelerator is following the earth curvature GeLiS measures the height using a new hydrostatic leveling system. The second concept (LiCAS) is based on laser straightness monitors (LSM) combined with frequency scanning interferometry (FSI) in an evacuated system...

  13. Accuracy of computer-assisted orthognathic surgery.

    Science.gov (United States)

    De Riu, Giacomo; Virdis, Paola Ilaria; Meloni, Silvio Mario; Lumbau, Aurea; Vaira, Luigi Angelo

    2018-02-01

    The purpose of this study was to retrospectively evaluate the difference between the planned and the actual movements of the jaws, using three-dimensional (3D) software for PC-assisted orthognathic surgery, to establish the accuracy of the procedure. A retrospective study was performed with 49 patients who had undergone PC-guided bimaxillary surgery. The accuracy of the protocol was determined by comparing planned movements of the jaws with the actual surgical movements, analysing frontal and lateral cephalometries. The overall results were deemed accurate, and differences among 12 of the 15 parameters were considered nonsignificant. Significant differences were reported for SNA (p = 0.008), SNB (p = 0.006), and anterior facial height (p = 0.033). The latter was significantly different in patients who had undergone genioplasty when compared with patients who had not. Virtual surgical planning presented a good degree of accuracy for most of the parameters assessed, with an average error of 1.98 mm for linear measures and 1.19° for angular measures. In general, a tendency towards under-projection in jaws was detected, probably due to imperfect condylar seating. A slight overcorrection of SNA and SNB during virtual planning (approximately 2°) could be beneficial. Further progress is required in the development of 3D simulation of the soft tissue, which currently does not allow an accurate management of the facial height and the chin position. Virtual planning cannot replace the need for constant intraoperative monitoring of the jaws' movements and real-time comparisons between planned and actual outcomes. It is therefore appropriate to leave some margin for correction of inaccuracies in the virtual planning. In this sense, it may be appropriate to use only the intermediate splint, and then use the planned occlusion and clinical measurements to guide repositioning of the second jaw and chin, respectively. Copyright © 2017 European Association for Cranio

  14. Theoretical studies of MHD plasma molecules. I. Potential energy curves and dipole moments of linear KOH

    International Nuclear Information System (INIS)

    England, W.B.

    1978-01-01

    Uncorrelated and correlated potential energy curves and dipole moments are reported for linear KOH. The compound is found to be ionic, K⁺OH⁻. Minimum energy bond lengths are R_KO = 4.2913 au and R_OH = 1.7688 au, with an estimated accuracy of 2%. The corresponding dipole moment is 3.3 au (8.46 D) with a similar accuracy estimate. This is to our knowledge the first value ever reported for the KOH dipole moment, and the large value suggests that KOH will be an effective electron scatterer in MHD plasmas.

  15. Accuracy of Referring Provider and Endoscopist Impressions of Colonoscopy Indication.

    Science.gov (United States)

    Naveed, Mariam; Clary, Meredith; Ahn, Chul; Kubiliun, Nisa; Agrawal, Deepak; Cryer, Byron; Murphy, Caitlin; Singal, Amit G

    2017-07-01

    Background: Referring provider and endoscopist impressions of colonoscopy indication are used for clinical care, reimbursement, and quality reporting decisions; however, the accuracy of these impressions is unknown. This study assessed the sensitivity, specificity, positive and negative predictive value, and overall accuracy of methods to classify colonoscopy indication, including referring provider impression, endoscopist impression, and administrative algorithm compared with gold standard chart review. Methods: We randomly sampled 400 patients undergoing a colonoscopy at a Veterans Affairs health system between January 2010 and December 2010. Referring provider and endoscopist impressions of colonoscopy indication were compared with gold-standard chart review. Indications were classified into 4 mutually exclusive categories: diagnostic, surveillance, high-risk screening, or average-risk screening. Results: Of 400 colonoscopies, 26% were performed for average-risk screening, 7% for high-risk screening, 26% for surveillance, and 41% for diagnostic indications. Accuracy of referring provider and endoscopist impressions of colonoscopy indication were 87% and 84%, respectively, which were significantly higher than that of the administrative algorithm (45%; P 90%) for determining screening (vs nonscreening) indication, but specificity of the administrative algorithm was lower (40.3%) compared with referring provider (93.7%) and endoscopist (84.0%) impressions. Accuracy of endoscopist, but not referring provider, impression was lower in patients with a family history of colon cancer than in those without (65% vs 84%; P =.001). Conclusions: Referring provider and endoscopist impressions of colonoscopy indication are both accurate and may be useful data to incorporate into algorithms classifying colonoscopy indication. Copyright © 2017 by the National Comprehensive Cancer Network.

  16. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  17. The ripple electromagnetic calculation: accuracy demand and possible responses

    International Nuclear Information System (INIS)

    Cocilovo, V.; Ramogida, G.; Formisano, A.; Martone, R.; Portone, A.; Roccella, M.; Roccella, R.

    2006-01-01

    Due to a number of causes (the finite number of toroidal field coils or the presence of concentrated blocks of magnetic materials, such as the neutral beam shielding), the actual magnetic configuration in a Tokamak differs from the desired one. For example, a ripple is added to the ideal axisymmetric toroidal field, impacting the equilibrium and stability of the plasma column; as a further example, the magnetic field outside the plasma affects the operation of a number of critical components, including the diagnostic system and the neutral beam. Therefore the actual magnetic field has to be suitably calculated and its shape controlled within the required limits. Due to the complexity of its design, the problem is quite critical for the ITER project. In this paper the problem is discussed from both the mathematical and the numerical point of view. In particular, a complete formulation is proposed, taking into account both the presence of non-linear magnetic materials and the fully 3D geometry. Then the quality-level requirements are discussed, including the accuracy of the calculations and the spatial resolution. As a consequence, numerical tools able to fulfil the quality needs while requiring a reasonable computational burden are considered. In particular, possible tools based on numerical FEM schemes are considered; in addition, in spite of the presence of non-linear materials, the practical possibility of using Biot-Savart based approaches as cross-check tools is also discussed. The paper also analyses possible simplifications of the geometry able to make the actual calculation feasible while guaranteeing the required accuracy. Finally, the characteristics required for a correction system able to effectively counteract the magnetic field degradation are presented. Of course a number of examples are also reported and commented on. (author)

  18. Validation and diagnostic accuracy of predictive curves for age-associated longitudinal cognitive decline in older adults

    Science.gov (United States)

    Bernier, Patrick J.; Gourdeau, Christian; Carmichael, Pierre-Hugues; Beauchemin, Jean-Pierre; Verreault, René; Bouchard, Rémi W.; Kröger, Edeltraut; Laforce, Robert

    2017-01-01

    BACKGROUND: The Mini-Mental State Examination continues to be used frequently to screen for cognitive impairment in older adults, but it remains unclear how to interpret changes in its score over time to distinguish age-associated cognitive decline from an early degenerative process. We aimed to generate cognitive charts for use in clinical practice for longitudinal evaluation of age-associated cognitive decline. METHODS: We used data from the Canadian Study of Health and Aging from 7569 participants aged 65 years or older who completed a Mini-Mental State Examination at baseline, and at 5 and 10 years later to develop a linear regression model for the Mini-Mental State Examination score as a function of age and education. Based on this model, we generated cognitive charts designed to optimize accuracy for distinguishing participants with dementia from healthy controls. We validated our model using a separate data set of 6501 participants from the National Alzheimer’s Coordinating Center’s Uniform Data Set. RESULTS: For baseline measurement, the cognitive charts had a sensitivity of 80% (95% confidence interval [CI] 75% to 84%) and a specificity of 89% (95% CI 88% to 90%) for distinguishing healthy controls from participants with dementia. Similar sensitivities and specificities were observed for a decline over time greater than 1 percentile zone from the first measurement. Results in the validation sample were comparable, albeit with lower sensitivities. Negative predictive value was 99%. INTERPRETATION: Our innovative model, which factors in age and education, showed validity and diagnostic accuracy for determining whether older patients show abnormal performance on serial Mini-Mental State Examination measurements. Similar to growth curves used in pediatrics, cognitive charts allow longitudinal cognitive evaluation and enable prompt initiation of investigation and treatment when appropriate. PMID:29203616

  19. Measurement and reproduction accuracy of computer-controlled grand pianos

    Science.gov (United States)

    Goebl, Werner; Bresin, Roberto

    2003-10-01

    The recording and reproducing capabilities of a Yamaha Disklavier grand piano and a Bösendorfer SE290 computer-controlled grand piano were tested, with the goal of examining their reliability for performance research. An experimental setup consisting of accelerometers and a calibrated microphone was used to capture key and hammer movements, as well as the acoustic signal. Five selected keys were played by pianists with two types of touch ("staccato" and "legato"). Timing and dynamic differences between the original performance, the corresponding MIDI file recorded by the computer-controlled pianos, and its reproduction were analyzed. The two devices performed quite differently with respect to timing and dynamic accuracy. The Disklavier's onset capturing was slightly more precise (±10 ms) than its reproduction (-20 to +30 ms); the Bösendorfer performed generally better, but its timing accuracy was slightly less precise for recording (-10 to 3 ms) than for reproduction (±2 ms). Both devices exhibited a systematic (linear) error in recording over time. In the dynamic dimension, the Bösendorfer showed higher consistency over the whole dynamic range, while the Disklavier performed well only in a wide middle range. Neither device was able to capture or reproduce different types of touch.

  20. Estimating thermal diffusivity and specific heat from needle probe thermal conductivity data

    Science.gov (United States)

    Waite, W.F.; Gilbert, L.Y.; Winters, W.J.; Mason, D.H.

    2006-01-01

    Thermal diffusivity and specific heat can be estimated from thermal conductivity measurements made using a standard needle probe and a suitably high data acquisition rate. Thermal properties are calculated from the measured temperature change in a sample subjected to heating by a needle probe. Accurate thermal conductivity measurements are obtained from a linear fit to many tens or hundreds of temperature change data points. In contrast, thermal diffusivity calculations require a nonlinear fit to the measured temperature change occurring in the first few tenths of a second of the measurement, resulting in a lower accuracy than that obtained for thermal conductivity. Specific heat is calculated from the ratio of thermal conductivity to diffusivity, and thus can have an uncertainty no better than that of the diffusivity estimate. Our thermal conductivity measurements of ice Ih and of tetrahydrofuran (THF) hydrate, made using a 1.6 mm outer diameter needle probe and a data acquisition rate of 18.2 points/s, agree with published results. Our thermal diffusivity and specific heat results reproduce published results within 25% for ice Ih and 3% for THF hydrate. © 2006 American Institute of Physics.
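    A hedged sketch of the two fits described above, using the idealized line-source model ΔT(t) = (q/4πk)·E1(r²/4αt); the heat input q, probe radius r, noise level, and true material values below are made up for illustration and are not the study's data.

```python
# Linear late-time fit for conductivity k, nonlinear fit for diffusivity a,
# on synthetic needle-probe data generated from the line-source model.
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

q, r = 2.0, 0.8e-3                 # heat input per length (W/m), probe radius (m)
k_true, a_true = 0.6, 1.4e-7       # conductivity (W/m/K), diffusivity (m^2/s)

t = np.linspace(0.05, 30.0, 600)
dT = q / (4 * np.pi * k_true) * exp1(r**2 / (4 * a_true * t))
dT += np.random.default_rng(3).normal(0, 2e-3, t.size)

# Late-time linear fit: dT ~ (q / 4 pi k) ln(t) + C  ->  conductivity k.
late = t > 5.0
slope, _ = np.polyfit(np.log(t[late]), dT[late], 1)
k_fit = q / (4 * np.pi * slope)

# Nonlinear fit of the full model -> diffusivity (relies mostly on early data,
# hence noisier, as the abstract notes).
model = lambda tt, k, a: q / (4 * np.pi * k) * exp1(r**2 / (4 * a * tt))
(k_nl, a_nl), _ = curve_fit(model, t, dT, p0=(0.5, 1e-7))
print(k_fit, k_nl, a_nl)
```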

  1. Alignment and vibration issues in TeV linear collider design

    International Nuclear Information System (INIS)

    Fischer, G.E.

    1989-07-01

    The next generation of linear colliders will require alignment accuracies and stabilities of component placement at least one, perhaps two, orders of magnitude better than can be achieved by the conventional methods and procedures in practice today. The magnitudes of these component-placement tolerances for current designs of various linear collider subsystems are tabulated. In the micron range, long-term ground motion is sufficiently rapid that on-line reference and mechanical correction systems are called for. Some recent experiences with the upgraded SLAC laser alignment systems and examples of some conceivable solutions for the future are described. The so called "girder" problem is discussed in the light of ambient and vibratory disturbances. The importance of the quality of the underlying geology is stressed. The necessity and limitations of particle-beam-derived placement information are mentioned. 40 refs., 4 figs., 1 tab

  2. Alignment and Vibration Issues in TeV Linear Collider Design

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, G.E.; /SLAC

    2005-08-12

    The next generation of linear colliders will require alignment accuracies and stabilities of component placement at least one, perhaps two, orders of magnitude better than can be achieved by the conventional methods and procedures in practice today. The magnitudes of these component-placement tolerances for current designs of various linear collider subsystems are tabulated. In the micron range, long-term ground motion is sufficiently rapid that on-line reference and mechanical correction systems are called for. Some recent experiences with the upgraded SLAC laser alignment systems and examples of some conceivable solutions for the future are described. The so called "girder" problem is discussed in the light of ambient and vibratory disturbances. The importance of the quality of the underlying geology is stressed. The necessity and limitations of particle-beam-derived placement information are mentioned.

  3. Estimation of lung motion fields in 4D CT data by variational non-linear intensity-based registration: A comparison and evaluation study

    International Nuclear Information System (INIS)

    Werner, René; Schmidt-Richberg, Alexander; Handels, Heinz; Ehrhardt, Jan

    2014-01-01

    Accurate and robust estimation of motion fields in respiration-correlated CT (4D CT) images, usually performed by non-linear registration of the temporal CT frames, is a precondition for the analysis of patient-specific breathing dynamics and subsequent image-supported diagnostics and treatment planning. In this work, we present a comprehensive comparison and evaluation study of non-linear registration variants applied to the task of lung motion estimation in thoracic 4D CT data. In contrast to existing multi-institutional comparison studies (e.g. MIDRAS and EMPIRE10), we focus on the specific but common class of variational intensity-based non-parametric registration and analyze the impact of the different main building blocks of the underlying optimization problem: the distance measure to be minimized, the regularization approach and the transformation space considered during optimization. In total, 90 different combinations of building block instances are compared. Evaluated on proprietary and publicly accessible 4D CT images, landmark-based registration errors (TRE) between 1.14 and 1.20 mm for the most accurate registration variants demonstrate competitive performance of the applied general registration framework compared to other state-of-the-art approaches for lung CT registration. Although some specific trends can be observed, effects of interchanging individual instances of the building blocks on the TRE are in general rather small (no single outstanding registration variant existing); the same level of accuracy is, however, associated with significantly different degrees of motion field smoothness and computational demands. Consequently, the building block combination of choice will depend on application-specific requirements on motion field characteristics. (paper)

  4. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
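    For orientation only, the block below implements one well-known third-order Runge-Kutta scheme (the Shu-Osher SSP RK3), not one of the report's derived methods, and checks its third-order convergence on the test problem u' = -u.

```python
# A generic three-stage, third-order Runge-Kutta sketch (Shu-Osher SSP RK3).
import numpy as np

def ssprk3_step(f, t, u, dt):
    """Advance u by one step of the SSP RK3 method for u' = f(t, u)."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(t + 0.5 * dt, u2))

# Convergence check on u' = -u, u(0) = 1: halving dt cuts the error ~8x.
for n in (20, 40, 80):
    dt, u = 1.0 / n, 1.0
    for i in range(n):
        u = ssprk3_step(lambda t, y: -y, i * dt, u, dt)
    print(n, abs(u - np.exp(-1.0)))
```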

  5. Linearization of the Principal Component Analysis method for radiative transfer acceleration: Application to retrieval algorithms and sensitivity studies

    International Nuclear Information System (INIS)

    Spurr, R.; Natraj, V.; Lerot, C.; Van Roozendael, M.; Loyola, D.

    2013-01-01

    Principal Component Analysis (PCA) is a promising tool for enhancing radiative transfer (RT) performance. When applied to binned optical property data sets, PCA exploits redundancy in the optical data, and restricts the number of full multiple-scatter calculations to those optical states corresponding to the most important principal components, yet still maintaining high accuracy in the radiance approximations. We show that the entire PCA RT enhancement process is analytically differentiable with respect to any atmospheric or surface parameter, thus allowing for accurate and fast approximations of Jacobian matrices, in addition to radiances. This linearization greatly extends the power and scope of the PCA method to many remote sensing retrieval applications and sensitivity studies. In the first example, we examine accuracy for PCA-derived UV-backscatter radiance and Jacobian fields over a 290–340 nm window. In a second application, we show that performance for UV-based total ozone column retrieval is considerably improved without compromising the accuracy. -- Highlights: •Principal Component Analysis (PCA) of spectrally-binned atmospheric optical properties. •PCA-based accelerated radiative transfer with 2-stream model for fast multiple-scatter. •Atmospheric and surface property linearization of this PCA performance enhancement. •Accuracy of PCA enhancement for radiances and bulk-property Jacobians, 290–340 nm. •Application of PCA speed enhancement to UV backscatter total ozone retrievals

  6. Linear versus non-linear supersymmetry, in general

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, Sergio [Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati,Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, UniversityC.L.A.,Los Angeles, CA 90095-1547 (United States); Kallosh, Renata [SITP and Department of Physics, Stanford University,Stanford, California 94305 (United States); Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven,Celestijnenlaan 200D, B-3001 Leuven (Belgium); Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)

    2016-04-12

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here, in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM’s: chiral superfields, linear superfields, general complex superfields, some of them are multiplets with a spin.

  7. Linear versus non-linear supersymmetry, in general

    International Nuclear Information System (INIS)

    Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm

    2016-01-01

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here, in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM’s: chiral superfields, linear superfields, general complex superfields, some of them are multiplets with a spin.

  8. Analytical Ballistic Trajectories with Approximately Linear Drag

    Directory of Open Access Journals (Sweden)

    Giliam J. P. de Carpentier

    2014-01-01

    Full Text Available This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
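    The paper's own approximation is not reproduced here; as background, the sketch below evaluates the exact closed-form trajectory of the underlying linear drag model, dv/dt = (v∞ - v)/τ, where v∞ is the terminal velocity set by gravity and wind and τ a time constant. The launch, wind, and gravity values are made up.

```python
# Closed-form position under an exact linear drag model:
# v(t) = v_inf + (v0 - v_inf) * exp(-t/tau)
# p(t) = p0 + v_inf*t + tau*(v0 - v_inf)*(1 - exp(-t/tau))
import numpy as np

def position(t, p0, v0, v_inf, tau):
    """Analytic position at time t under the linear drag model."""
    decay = 1.0 - np.exp(-t / tau)
    return p0 + v_inf * t + tau * (v0 - v_inf) * decay

p0 = np.array([0.0, 0.0, 1.5])           # start position (m)
v0 = np.array([12.0, 0.0, 9.0])          # launch velocity (m/s)
gravity = np.array([0.0, 0.0, -9.81])
wind = np.array([2.0, 0.0, 0.0])
tau = 1.8                                 # time constant (s)
v_inf = tau * gravity + wind              # terminal velocity of the drag model

for t in (0.0, 0.5, 1.0, 2.0):
    print(t, position(t, p0, v0, v_inf, tau))
```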

  9. Increasing Free Throw Accuracy through Behavior Modeling and Goal Setting.

    Science.gov (United States)

    Erffmeyer, Elizabeth S.

    A two-year behavior-modeling training program focusing on attention processes, retention processes, motor reproduction, and motivation processes was implemented to increase the accuracy of free throw shooting for a varsity intercollegiate women's basketball team. The training included specific learning keys, progressive relaxation, mental…

  10. Early clinical esophageal adenocarcinoma (cT1): Utility of CT in regional nodal metastasis detection and can the clinical accuracy be improved?

    Energy Technology Data Exchange (ETDEWEB)

    Betancourt Cuellar, Sonia L., E-mail: slbetancourt@mdanderson.org; Sabloff, Bradley, E-mail: bsabloff@mdanderson.org; Carter, Brett W., E-mail: bcarter2@mdanderson.org; Benveniste, Marcelo F., E-mail: mfbenveniste@mdanderson.org; Correa, Arlene M., E-mail: amcorrea@mdanderson.org; Maru, Dipen M., E-mail: dmaru@mdanderson.org; Ajani, Jaffer A., E-mail: jajani@mdanderson.org; Erasmus, Jeremy J., E-mail: jerasmus@mdanderson.org; Hofstetter, Wayne L., E-mail: whofstetter@mdanderson.org

    2017-03-15

    Introduction: Treatment of early esophageal cancer depends on the extent of the primary tumor and the presence of regional lymph node metastasis (RNM). A short axis diameter >10 mm is typically used to detect RNM. However, clinical determination of RNM is inaccurate and can result in inappropriate treatment. The purpose of this study is to evaluate the accuracy of a single linear measurement (short axis > 10 mm) of regional nodes on CT in predicting nodal metastasis in patients with early esophageal cancer, and whether using a mean diameter value ((short axis + long axis)/2) as well as nodal shape improves cN designation. Methods: CTs of 49 patients with cT1 adenocarcinoma treated with surgical resection alone were reviewed retrospectively. Regional nodes were considered positive for malignancy when round or ovoid and mean size >5 mm adjacent to the primary tumor and >7 mm when not adjacent. Results were compared with pN status after esophagectomy. Results: 18/49 patients had pN+ at resection. Using a single short axis diameter >10 mm on CT, nodal metastasis (cN) was positive in 7/49. Only 1 of these patients was pN+ at resection (sensitivity 5%, specificity 80%, accuracy 53%). Using mean size and morphologic criteria, cN was positive in 28/49. 11 of these patients were pN+ at resection (sensitivity 61%, specificity 45%, accuracy 51%). EUS with limited FNA of regional nodes resulted in 16/49 patients with pN+ being inappropriately designated as cN0. Conclusions: Evaluation of size, shape and location of regional lymph nodes on CT improves the sensitivity of cN determination compared with a short axis measurement alone in patients with cT1 esophageal cancer, although clinical utility is limited.

  11. Predictive inference for best linear combination of biomarkers subject to limits of detection.

    Science.gov (United States)

    Coolen-Maturi, Tahani

    2017-08-15

    Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning and credit scoring. The receiver operating characteristic (ROC) curve is a useful tool to assess the ability of a diagnostic test to discriminate between two classes or groups. In practice, multiple diagnostic tests or biomarkers are combined to improve diagnostic accuracy. Often, biomarker measurements are undetectable either below or above the so-called limits of detection (LoD). In this paper, nonparametric predictive inference (NPI) for best linear combination of two or more biomarkers subject to limits of detection is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modelling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. The NPI lower and upper bounds for the ROC curve subject to limits of detection are derived, where the objective function to maximize is the area under the ROC curve. In addition, the paper discusses the effect of restriction on the linear combination's coefficients on the analysis. Examples are provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Local indicators of geocoding accuracy (LIGA: theory and application

    Directory of Open Access Journals (Sweden)

    Jacquez Geoffrey M

    2009-10-01

    Full Text Available Abstract Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density

  13. Short-memory linear processes and econometric applications

    CERN Document Server

    Mynbaev, Kairat T

    2011-01-01

    This book serves as a comprehensive source of asymptotic results for econometric models with deterministic exogenous regressors. Such regressors include linear (more generally, piece-wise polynomial) trends, seasonally oscillating functions, and slowly varying functions including logarithmic trends, as well as some specifications of spatial matrices in the theory of spatial models. The book begins with central limit theorems (CLTs) for weighted sums of short memory linear processes. This part contains the analysis of certain operators in Lp spaces and their employment in the derivation of CLTs

  14. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.

  15. Contrast-enhanced spectral mammography improves diagnostic accuracy in the symptomatic setting.

    Science.gov (United States)

    Tennant, S L; James, J J; Cornford, E J; Chen, Y; Burrell, H C; Hamilton, L J; Girio-Fragkoulakis, C

    2016-11-01

    To assess the diagnostic accuracy of contrast-enhanced spectral mammography (CESM), and gauge its "added value" in the symptomatic setting. A retrospective multi-reader review of 100 consecutive CESM examinations was performed. Anonymised low-energy (LE) images were reviewed and given a score for malignancy. At least 3 weeks later, the entire examination (LE and recombined images) was reviewed. Histopathology data were obtained for all cases. Differences in performance were assessed using receiver operator characteristic (ROC) analysis. Sensitivity, specificity, and lesion size (versus MRI or histopathology) differences were calculated. Seventy-three percent of cases were malignant at final histology, 27% were benign following standard triple assessment. ROC analysis showed improved overall performance of CESM over LE alone, with area under the curve of 0.93 versus 0.83 (p<0.025). CESM showed increased sensitivity (95% versus 84%, p<0.025) and specificity (81% versus 63%, p<0.025) compared to LE alone, with all five readers showing improved accuracy. Tumour size estimation at CESM was significantly more accurate than LE alone, the latter tending to undersize lesions. In 75% of cases, CESM was deemed a useful or significant aid to diagnosis. CESM provides immediately available, clinically useful information in the symptomatic clinic in patients with suspicious palpable abnormalities. Radiologist sensitivity, specificity, and size accuracy for breast cancer detection and staging are all improved using CESM as the primary mammographic investigation. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  16. The accuracy of serum galactomannan assay in diagnosing invasive pulmonary aspergillosis.

    Science.gov (United States)

    Sarrafzadeh, Shokooh Azam; Hoseinpoor Rafati, Ali; Ardalan, Maryam; Mansouri, Davood; Tabarsi, Payam; Pourpak, Zahra

    2010-09-01

    Galactomannan (GM) antigen is an Aspergillus-specific antigen that is released during the growth phase of invasive aspergillosis. We aimed to find the optimum cutoff and accuracy of the serum galactomannan assay in immunocompromised patients. Immunocompromised patients diagnosed with invasive pulmonary aspergillosis (IPA) based on the European Organization for Research and Treatment of Cancer/Invasive Mycosis Study Group (EORTC/MSG) criteria, with three levels of certainty (proven, probable and possible), who were referred for GM antigen measurement at the Immunology, Asthma and Allergy Research Institute (IAARI) from 2006 to 2009, were enrolled in this study if they met the criteria. In total, 49 patients with IPA were enrolled in our study. According to EORTC/MSG, patients were categorized into three levels of certainty: 'proven' invasive pulmonary aspergillosis 16 (32.7%), 'probable' 18 (36.7%) and 'possible' 15 (30.6%). The most common host risk factor was solid tumors, 17 (34.7%). The accuracy of the galactomannan assay increased as the cutoff rose from 0.5 to 2. The optimum sensitivity and specificity were obtained at an index cutoff of ≥1.5 for diagnosis of 'proven' IPA; these were 69.2% and 72.2%, respectively. Other cutoffs showed a large gap between sensitivity and specificity for diagnosis of IPA. The cutoff obtained by receiver operating characteristic (ROC) analysis for detecting proven IPA was 1.5. The intermediate accuracy of the serum GM test, in conjunction with clinical findings, would help early IPA detection among immunocompromised patients.

  17. Suicide news reporting accuracy and stereotyping in Hong Kong.

    Science.gov (United States)

    Cheng, Qijin; Yip, Paul S F

    2012-12-10

    The mass media are often criticized for oversimplifying the causes of suicide and overlooking some of the risk factors. However, there is a lack of empirical evidence to support this assertion. The study aims to examine the accuracy of news reports in Hong Kong (HK) and in particular whether or not such reports stereotype victims according to gender and suicide method. Data from a case-control psychological autopsy (PA) study of 150 HK suicides were utilized. The reports of the PA cases from five major HK newspapers were collected and reviewed to identify whether or not there was a match in terms of the cases' profile and risk factors. The age, gender, and method of the suicides were largely reported correctly (>70%), but accounts of risk factors were seldom accurate. Media stereotyping of gender-specific suicide and charcoal-burning suicide was identified. The study was based on a HK sample so the findings are not necessarily applicable elsewhere. The HK mass media generally demonstrated poor accuracy in reporting suicide risk factors. Their reporting was also problematic in terms of stereotyping gender- and method-specific suicides. Clinical practitioners should be alerted to these findings when working with the media. They can also adopt this novel usage of PA data to extract further information from other PA studies and thereby broaden the investigation of reporting accuracy and stereotyping of suicide to more diverse social contexts. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Accuracy of a Computer-Aided Surgical Simulation (CASS) Protocol for Orthognathic Surgery: A Prospective Multicenter Study

    Science.gov (United States)

    Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.

    2012-01-01

    Purpose The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0 mm and 1.5° for the maxilla, and 1.1 mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy with the largest positional RMSD of 1.0 mm and the largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in anteroposterior and superoinferior directions, as in
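    For readers unfamiliar with the two agreement summaries used here, the following sketch computes an RMSD and Bland-Altman 95% limits of agreement on made-up paired planned-versus-postoperative values; it is not the study's data or analysis code.

```python
# RMSD and Bland-Altman limits of agreement on hypothetical paired data (mm).
import numpy as np

planned = np.array([1.2, -0.4, 2.3, 0.8, -1.1, 3.0, 0.2, 1.7])
actual  = np.array([1.5, -0.1, 2.0, 1.3, -0.9, 3.4, 0.6, 1.2])

diff = actual - planned
rmsd = np.sqrt(np.mean(diff ** 2))                 # root mean square deviation
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1),             # Bland-Altman 95% limits
       bias + 1.96 * diff.std(ddof=1))             # of agreement
print(f"RMSD = {rmsd:.2f} mm, bias = {bias:.2f} mm, "
      f"LoA = ({loa[0]:.2f}, {loa[1]:.2f}) mm")
```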

  19. Commissioning of a linear accelerator to execute volumetric modulated arc therapy; Comissionamento de um acelerador linear para realizacao da radioterapia em arco modulada volumetricamente

    Energy Technology Data Exchange (ETDEWEB)

    Goncalves, Leandro R.; Santos, Gabriela R.; Menegussi, Gisela; Silva, Marco A.; Passaro, Anderson M.; Rodrigues, Laura N., E-mail: leandrorg11@hotmail.com [Instituto do Cancer do Estado de Sao Paulo (ICESP), Sao Paulo, SP (Brazil)

    2013-08-15

    Radiotherapy techniques like VMAT allow complex dose distributions by modulating the beam intensity within the irradiation field through the handling of multi-leaf collimators, variations in dose rate, and different speeds of rotation of the gantry and collimator angle, allowing greater conformation of the dose to the tumor volume and a lower dose to healthy tissues. To ensure proper dose delivery, the linear accelerator must be able to monitor and perform all the variations in these parameters simultaneously. In this work, dosimetric tests taken from the literature that aim to commission, implement and ensure the quality of VMAT treatments were performed at the Institute of Cancer of Sao Paulo State (ICESP). From the results obtained, a quality control program was established for the linear accelerator studied. The linearity and stability of the monitor ionization chamber response, the accuracy of leaf positioning, and the flatness and symmetry of the beam under VMAT irradiations were evaluated. The obtained results are in agreement with the literature. It can be concluded that the accelerator studied is able to satisfactorily control the variation of all parameters necessary to perform VMAT treatments. (author)

  20. Aliasing in the Complex Cepstrum of Linear-Phase Signals

    DEFF Research Database (Denmark)

    Bysted, Tommy Kristensen

    1997-01-01

    Assuming linear phase of the associated time signal, this paper presents an approximated analytical description of the unavoidable aliasing in practical use of complex cepstrums. The linear-phase assumption covers two major applications of complex cepstrums, which are linear- to minimum-phase FIR-filter transformation and minimum-phase estimation from amplitude specifications. The description is made in the cepstrum domain, the Fourier transform of the complex cepstrum, and in the frequency domain. Two examples are given, one for verification of the derived equations and one using the description to reduce aliasing in minimum-phase estimation.
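    As a concrete illustration of the second application (minimum-phase estimation from an amplitude specification), the sketch below uses the standard homomorphic real-cepstrum construction in numpy, not necessarily the paper's formulation; the FFT length controls exactly the kind of aliasing the paper analyses, so increasing it reduces the error. The target magnitude response is an arbitrary smooth example.

```python
# Minimum-phase impulse response from a sampled magnitude response,
# via the real cepstrum (homomorphic method).
import numpy as np

def minimum_phase_from_magnitude(mag, n_fft):
    """mag: desired magnitude response on n_fft uniformly spaced frequencies."""
    log_mag = np.log(np.maximum(mag, 1e-12))
    cepstrum = np.fft.ifft(log_mag).real              # real, even cepstrum
    fold = np.zeros(n_fft)
    fold[0] = cepstrum[0]                              # fold the negative-time part
    fold[1:n_fft // 2] = 2 * cepstrum[1:n_fft // 2]    # onto positive time
    fold[n_fft // 2] = cepstrum[n_fft // 2]
    h = np.fft.ifft(np.exp(np.fft.fft(fold))).real
    return h                                           # minimum-phase impulse response

n_fft = 1024
w = np.linspace(0, 2 * np.pi, n_fft, endpoint=False)
mag = 1.0 / np.sqrt(1.0 + 9.0 * np.cos(w) ** 2)        # arbitrary smooth target
h = minimum_phase_from_magnitude(mag, n_fft)
print(np.max(np.abs(np.abs(np.fft.fft(h)) - mag)))     # magnitude reproduction error
```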