WorldWideScience

Sample records for linearity specificity accuracy

  1. Accuracy Limitations in Optical Linear Algebra Processors

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  2. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Santana Isabel

    2011-08-01

    Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but presently has limited value in predicting progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p. Conclusions: When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.
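Press' Q, used above to test whether each classifier beats chance, is simple to compute from the number of correctly classified cases. A minimal sketch (the counts below are hypothetical, not from the study):

```python
def press_q(n_total: int, n_correct: int, k_groups: int) -> float:
    """Press's Q statistic: (N - n*K)^2 / (N * (K - 1)), where N is the
    total number of cases, n the number classified correctly, and K the
    number of groups. Compared against chi-square with 1 df (3.84 at .05)."""
    return (n_total - n_correct * k_groups) ** 2 / (n_total * (k_groups - 1))

# Hypothetical: 100 cases, 70 classified correctly, 2 diagnostic groups.
q = press_q(100, 70, 2)  # (100 - 140)^2 / 100 = 16.0, well above 3.84
```

Since 16.0 exceeds the 3.84 critical value, such a classifier would be judged better than chance, which is the form of the result the abstract reports for all ten classifiers.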

  3. RF system specifications for a linear accelerator

    Young, A.; Eaton, L.E.

    1992-01-01

    A linear accelerator contains many systems; however, the most complex and costly is the RF system. The goal of an RF system is usually simply stated as maintaining the phase and amplitude of the RF signal within a given tolerance to accelerate the charged particle beam. An RF system that drives a linear accelerator needs a complete system specification, which should contain specifications for all the subsystems (i.e., high-power RF, low-level RF, RF generation/distribution, and automation control). This paper defines a format for the specifications of these subsystems and discusses each RF subsystem independently to provide a comprehensive understanding of the function of each subsystem. This paper concludes with an example of a specification spreadsheet allowing one to input the specifications of a subsystem. Thus, some fundamental parameters (e.g., the cost and size) of the RF system can be determined.

  4. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Downie, John D.; Goodman, Joseph W.

    1989-10-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  5. Accuracy enhancement of point triangulation probes for linear displacement measurement

    Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun

    2000-03-01

    Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, several factors must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named EASDF (expanded average square difference function), for a newly designed PTB composed of an incoherent source (LED), a line scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, a modified correlation function, can effectively calculate the displacement between the probe and the object surface even in the presence of inclinations, power fluctuations, and noise.

  6. 3D-Printed Linear Positioner with Micrometer Accuracy

    Kuo Yin-Yen

    2017-01-01

    This article presents a positioner whose flexure main body is made by a commercial 3D printer. With this method, manufacturing a positioner is cost efficient and much easier to customize. Integrating a laser displacement sensor, an electromagnetic actuator, and a feedback controller, the positioning system achieves a 100-micron translational stroke with 1-micron resolution. Experiments also demonstrate sinusoidal motions at different frequencies. Using the method developed in this article, micro-positioners with customized specifications can be implemented rapidly, iteratively, and cost-effectively.

  7. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Clarissa Ann Thompson

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effects of age and the ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.
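The logarithmic-to-linear shift described above is commonly diagnosed by comparing how well a linear versus a logarithmic model fits a child's number-line estimates. A rough sketch under that assumption (the estimates below are invented for illustration, not data from the study):

```python
import math

def r_squared(xs, ys):
    """R^2 of a simple least-squares line fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical estimates on a 0-100 number line, compressed near the top,
# as a logarithmic responder would produce.
targets = [2, 5, 10, 25, 50, 75, 100]
estimates = [10, 25, 40, 60, 75, 85, 90]
lin_fit = r_squared(targets, estimates)
log_fit = r_squared([math.log(t) for t in targets], estimates)
# For this hypothetical child, the log model fits better than the linear one.
```

A child whose `lin_fit` exceeds `log_fit` would be classified as holding a linear representation; the studies above relate that status to recall accuracy.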

  8. The maximally achievable accuracy of linear optimal regulators and linear optimal filters

    Kwakernaak, H.; Sivan, Raphael

    1972-01-01

    A linear system with a quadratic cost function, which is a weighted sum of the integral square regulation error and the integral square input, is considered. What happens to the integral square regulation error as the relative weight of the integral square input reduces to zero is investigated. In

  9. Prostate-specific antigen testing accuracy in community practice

    Adams-Cameron Meg

    2002-10-01

    Background: Most data on prostate-specific antigen (PSA) testing come from urologic cohorts comprised of volunteers for screening programs. We evaluated the diagnostic accuracy of PSA testing for detecting prostate cancer in community practice. Methods: PSA testing results were compared with a reference standard of prostate biopsy. Subjects were 2,620 men 40 years and older undergoing PSA testing and biopsy from 1/1/95 through 12/31/98 in the Albuquerque, New Mexico metropolitan area. Diagnostic measures included the area under the receiver-operating characteristic (ROC) curve, sensitivity, specificity, and likelihood ratios. Results: Cancer was detected in 930 subjects (35%). The area under the ROC curve was 0.67, and the PSA cutpoint of 4 ng/ml had a sensitivity of 86% and a specificity of 33%. The likelihood ratio was 1.28 for a positive test (LR+) and 0.42 for a negative test (LR-). PSA testing was most sensitive (90%) but least specific (27%) in older men. Age-specific reference ranges improved specificity in older men (49%) but decreased sensitivity (70%), with an LR+ of 1.38. Lowering the PSA cutpoint to 2 ng/ml resulted in a sensitivity of 95%, a specificity of 20%, and an LR+ of 1.19. Conclusions: PSA testing had fair discriminating power for detecting prostate cancer in community practice. The PSA cutpoint of 4 ng/ml was sensitive but relatively non-specific, and the associated likelihood ratios only moderately revised probabilities for cancer. Using age-specific reference ranges and a PSA cutpoint below 4 ng/ml improved test specificity and sensitivity, respectively, but did not improve the overall accuracy of PSA testing.
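The likelihood ratios quoted above follow directly from sensitivity and specificity: LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity. A quick check of the reported figures for the 4 ng/ml cutpoint:

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Sensitivity 86% and specificity 33%, as reported in the abstract:
lr_pos, lr_neg = likelihood_ratios(0.86, 0.33)  # ≈ 1.28 and ≈ 0.42
```

These reproduce the abstract's LR+ of 1.28 and LR− of 0.42, and show why such ratios (close to 1) only modestly revise the pre-test probability of cancer.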

  10. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material

    Manoj, Smita Sara; Cherian, K. P.; Chitre, Vidya; Aras, Meena

    2013-01-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregu...

  11. Improving ASTER GDEM Accuracy Using Land Use-Based Linear Regression Methods: A Case Study of Lianyungang, East China

    Xiaoyan Yang

    2018-04-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) is important to a wide range of geographical and environmental studies. Its accuracy, to some extent associated with land-use types reflecting topography, vegetation coverage, and human activities, impacts the results and conclusions of these studies. In order to improve the accuracy of ASTER GDEM prior to its application, we investigated ASTER GDEM errors based on individual land-use types and proposed two linear regression calibration methods, one considering only land use-specific errors and the other considering the impact of both land use and topography. Our calibration methods were tested on the coastal prefectural city of Lianyungang in eastern China. Results indicate that (1) ASTER GDEM is highly accurate for rice, wheat, grass and mining lands but less accurate for scenic, garden, wood and bare lands; (2) despite improvements in ASTER GDEM2 accuracy, multiple linear regression calibration requires more data (topography) and a relatively complex calibration process; and (3) simple linear regression calibration proves a practicable and simplified means to systematically investigate and improve the impact of land use on ASTER GDEM accuracy. Our method is applicable to areas with detailed land-use data based on highly accurate field-based point-elevation measurements.
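One way to read the simple linear regression calibration is: per land-use class, regress the GDEM error (GDEM minus reference elevation) on GDEM elevation at field points, then subtract the predicted error everywhere in that class. This is an illustrative sketch of that reading, not the authors' exact formulation (their predictors and error model may differ):

```python
def fit_line(xs, ys):
    """Least-squares intercept a and slope b for ys = a + b * xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def calibrate(gdem_by_landuse, reference_points):
    """For each land-use class, fit GDEM error = a + b * elevation on
    (gdem, reference) pairs, then subtract the predicted error from the
    GDEM elevations of that class. Hypothetical helper for illustration."""
    corrected = {}
    for landuse, pairs in reference_points.items():
        gdem_vals = [g for g, _ in pairs]
        errors = [g - ref for g, ref in pairs]
        a, b = fit_line(gdem_vals, errors)
        corrected[landuse] = [g - (a + b * g) for g in gdem_by_landuse[landuse]]
    return corrected
```

With synthetic points whose error grows linearly with elevation, the calibration recovers the reference surface exactly, which is the idealized behavior the per-class regression targets.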

  12. A Linearized Relaxing Algorithm for the Specific Nonlinear Optimization Problem

    Mio Horai

    2016-01-01

    We propose a new method for a specific nonlinear and nonconvex global optimization problem using a linear relaxation technique. To simplify the problem, we transform it into a lower linear relaxation form, and we solve the linear relaxation optimization problem with the branch and bound algorithm. Under some reasonable assumptions, the global convergence of the algorithm is proved for the problem. Numerical results show that this method is more efficient than previous methods.

  13. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables such as pollutant loads and concentrations. However, whether ADD is an important variable in predicting stormwater discharge characteristics has been controversial across studies. In this study, we examined the accuracy of general linear regression models in predicting the discharge characteristics of roadway runoff. A total of 17 storm events were monitored in two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was run for 55 storm events, and the resulting total suspended solids (TSS) discharge loads and event mean concentrations (EMCs) were extracted. From these data, linear regression models were developed. R² and p-values of the regression on ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not capture the true effect of site-specific characteristics, due to uncertainty in the data.

  14. Bounds on achievable accuracy in analog optical linear-algebra processors

    Batsell, Stephen G.; Walkup, John F.; Krile, Thomas F.

    1990-07-01

    Upper and lower bounds on the number of bits of accuracy achievable are determined by applying a second-order statistical model to the linear algebra processor. The use of bounds was found necessary due to the strong signal dependence of the noise at the output of the optical linear algebra processor (OLAP). 1. ACCURACY BOUNDS One of the limiting factors in applying OLAPs to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, spatial variations across arrays, and crosstalk. We have previously examined these noise sources and determined a general model for the output noise mean and variance. The model demonstrates a strong signal dependency in the noise at the output of the processor, which has been confirmed by our experiments. We define accuracy similarly to its definition for an analog signal input to an analog-to-digital (A/D) converter. The number of bits of accuracy achievable is related to the log (base 2) of the number of separable levels at the A/D converter output. The number of separable levels is found by dividing the dynamic range by m times the standard deviation of the signal σ. Here m determines the error rate in the A/D conversion. The dynamic range can be expressed as the
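The accuracy definition above reduces to bits = log₂(dynamic range / (m·σ)). A minimal numeric sketch (the values below are hypothetical, not from the paper):

```python
import math

def achievable_bits(dynamic_range: float, sigma: float, m: float) -> float:
    """Bits of accuracy: log2 of the number of separable levels, where a
    level is m standard deviations (sigma) of the output noise wide."""
    levels = dynamic_range / (m * sigma)
    return math.log2(levels)

# Hypothetical example: a dynamic range of 100, noise sigma of 0.05,
# and m = 6 for a low conversion error rate.
bits = achievable_bits(100.0, 0.05, 6.0)  # ≈ 8.4 bits
```

The formula makes the trade-off explicit: halving the noise sigma, or halving m by tolerating a higher error rate, each buys exactly one additional bit.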

  15. Improvement on the accuracy of beam bugs in linear induction accelerator

    Xie Yutong; Dai Zhiyong; Han Qing

    2002-01-01

    In linear induction accelerators, the resistive wall monitors known as "beam bugs" have been used as essential diagnostics of beam current and location. The authors present a new method that can improve the accuracy of these beam bugs for beam position measurements. With a fine beam simulation set, this method locates the beam position with an accuracy of 0.02 mm and thus can scale the beam bugs very well. Experimental results prove that the precision of beam position measurements can reach the submillimeter level.

  16. A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation

    Jinsong Hu

    2013-01-01

    We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme with theoretical accuracy of O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.

  17. A high-accuracy optical linear algebra processor for finite element applications

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.

  18. Accuracy of panoramic radiography and linear tomography in mandibular canal localization

    Bashizadeh Fakhar H.

    2008-10-01

    Background and Aim: Accurate bone measurements are essential to determine the optimal size and length of dental implants. The magnification factor of radiographic images may vary with the imaging technique used. The purpose of this study was to compare the accuracy of linear tomography and panoramic radiography in vertical measurements, as well as the accuracy of linear tomography in mandibular width estimation. Materials and Methods: In this test evaluation study, the vertical distance between the crest and the superior border of the inferior alveolar canal, marked with a metal ball, was measured by linear tomography and panoramic radiography at 23 sites in four dry mandibles. The mandibular width was also measured at the same sites. Then, the bones were sectioned through the marked spots and the radiographic measurements were compared with the actual values. Results: The vertical magnification factor in tomograms and panoramic radiographs was 1.79 (SD=0.17) and 1.69 (SD=0.23), respectively. The horizontal magnification of tomograms was 1.47 (SD=0.17). A significant correlation was found between the linear tomographic and actual values, regarding both vertical dimensions (p<0.001, r=0.968) and width (p<0.001, r=0.813). The correlation was significant but lower in panoramic radiographs (p<0.001, r=0.795). Applying the magnification values suggested by the manufacturer, the mean difference of vertical measurements in the tomographic sections was 2.5 mm (SD=3.4), but 3.8 mm (SD=1.65) in panoramic radiographs. The mean absolute difference in mandibular width between the tomographic sections and reality was 0.3 mm (SD=1.13). In the linear tomograms, 4.3% of the vertical and 56.5% of the width measurements were within the ±1 mm error limit. Only 4.3% of the vertical measurements were within this range in the panoramic radiographs. The linear regression equation between the actual values and those obtained by radiography in vertical dimensions showed that 87.5% of
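A magnification factor is applied by dividing the radiographic measurement by it. Using the mean vertical tomographic magnification of 1.79 reported above (the 17.9 mm sample measurement is invented for illustration):

```python
def actual_length(radiographic_mm: float, magnification: float) -> float:
    """Recover the true anatomic length from a radiographic measurement,
    given the magnification factor of the imaging technique."""
    return radiographic_mm / magnification

# A 17.9 mm distance on a tomogram with magnification 1.79
# corresponds to 10.0 mm of actual bone height.
height = actual_length(17.9, 1.79)
```

Because the study found magnifications of 1.79 (tomography, vertical), 1.69 (panoramic), and 1.47 (tomography, horizontal), using the wrong factor for a given technique and direction directly biases the recovered bone dimension.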

  19. Diagnostic accuracy of full-body linear X-ray scanning in multiple trauma patients in comparison to computed tomography

    Joeres, A.P.W.; Heverhagen, J.T.; Bonel, H. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Exadaktylos, A. [Inselspital - University Hospital Bern (Switzerland). Dept. of Emergency Medicine; Klink, T. [Inselspital - University Hospital Bern (Switzerland). Univ. Inst. of Diagnostic, Interventional and Pediatric Radiology; Wuerzburg Univ. (Germany). Inst. of Diagnostic and Interventional Radiology

    2016-02-15

    The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple trauma patients in comparison to 128-multislice computed tomography (MSCT). 106 multiple trauma patients (33 female, 73 male) were retrospectively included in this study. All patients underwent LS of the whole body, including the extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated in comparison to MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2%, the specificity was 93.3%, the positive predictive value was 91%, and the negative predictive value was 57.5%. The sensitivity for vertebral fractures was 16.7%, with a specificity of 100%; for all other fractures, the sensitivity was 48.7% and the specificity 98.2%. Pneumothoraces were detected in 12 patients by CT, but not by LS. Forty extremity fractures were detected by LS, of which 4 were dislocated and 2 were fully covered by MSCT. The diagnostic accuracy of LS is limited in the evaluation of acute trauma of the truncal skeleton. LS allows fast whole-body X-ray imaging and may be valuable for detecting extremity fractures in trauma patients in addition to MSCT.
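The four accuracy figures above all come from a 2×2 table against the MSCT reference standard. A sketch with hypothetical counts chosen to roughly reproduce the reported rates (the abstract gives only the rates, not the underlying counts):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table,
    with the reference standard (here, MSCT) defining true status."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration only:
stats = diagnostic_accuracy(tp=59, fp=6, fn=61, tn=84)
# ≈ 49.2% sensitivity, 93.3% specificity, 91% PPV, ~58% NPV
```

The pattern of high specificity/PPV but low sensitivity/NPV is exactly what makes LS a rule-in rather than rule-out tool for truncal fractures.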

  20. The Accuracy and Reproducibility of Linear Measurements Made on CBCT-derived Digital Models.

    Maroua, Ahmad L; Ajaj, Mowaffak; Hajeer, Mohammad Y

    2016-04-01

    To evaluate the accuracy and reproducibility of linear measurements made on cone-beam computed tomography (CBCT)-derived digital models. A total of 25 patients (44% female, 18.7 ± 4 years) who had CBCT images for diagnostic purposes were included. Plaster models were obtained, and digital models were extracted from the CBCT scans. Seven linear measurements from predetermined landmarks were measured and analyzed on the plaster models and the corresponding digital models. The measurements included arch length and width at different sites. A paired t-test and Bland-Altman analysis were used to evaluate the accuracy of measurements on the digital models compared to the plaster models. Intraclass correlation coefficients (ICCs) were also used to evaluate the reproducibility of the measurements in order to assess intraobserver reliability. The statistical analysis showed significant differences in 5 out of 14 variables, with mean differences ranging from -0.48 to 0.51 mm. The Bland-Altman analysis revealed that the mean differences between variables were (0.14 ± 0.56) and (0.05 ± 0.96) mm, with limits of agreement between the two methods ranging from -1.2 to 0.96 and from -1.8 to 1.9 mm in the maxilla and the mandible, respectively. Intraobserver reliability values were determined for all 14 variables on the two types of models separately. The mean ICC value for the plaster models was 0.984 (range, 0.924-0.999), while it was 0.946 for the CBCT models (range, 0.850-0.985). Linear measurements obtained from the CBCT-derived models appeared to have a high level of accuracy and reproducibility.
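Bland-Altman limits of agreement are conventionally the mean difference ± 1.96 times its standard deviation. Under that assumption, the mandibular figures above can be checked directly:

```python
def limits_of_agreement(mean_diff: float, sd_diff: float):
    """95% Bland-Altman limits of agreement: mean difference +/- 1.96 * SD."""
    return mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Mandible: mean difference 0.05 mm, SD 0.96 mm, as reported above.
lo, hi = limits_of_agreement(0.05, 0.96)  # ≈ (-1.83, 1.93) mm
```

This reproduces the reported mandibular limits of about -1.8 to 1.9 mm; clinically, the question is whether differences of that size between plaster and CBCT models matter for the intended measurement.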

  1. Cross-beam energy transfer: On the accuracy of linear stationary models in the linear kinetic regime

    Debayle, A.; Masson-Laborde, P.-E.; Ruyer, C.; Casanova, M.; Loiseau, P.

    2018-05-01

    We present an extensive numerical study by means of particle-in-cell simulations of the energy transfer that occurs during the crossing of two laser beams. In the linear regime, when ions are not trapped in the potential well induced by the laser interference pattern, a very good agreement is obtained with a simple linear stationary model, provided the laser intensity is sufficiently smooth. These comparisons include different plasma compositions to cover the strong and weak Landau damping regimes as well as the multispecies case. The correct evaluation of the linear Landau damping at the phase velocity imposed by the laser interference pattern is essential to estimate the energy transfer rate between the laser beams, once the stationary regime is reached. The transient evolution obtained in kinetic simulations is also analysed by means of a full analytical formula that includes 3D beam energy exchange coupled with the ion acoustic wave response. Specific attention is paid to the energy transfer when the laser presents small-scale inhomogeneities. In particular, the energy transfer is reduced when the laser inhomogeneities are comparable with the Landau damping characteristic length of the ion acoustic wave.

  2. Travel-time source-specific station correction improves location accuracy

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events in the context of CTBT verification are particularly critical for triggering a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran, and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.

  3. Accuracy of linear measurement using cone-beam computed tomography at different reconstruction angles

    Nikneshan, Sima; Aval, Shadi Hamidi; Bakhshalian, Neema; Shahab, Shahriyar; Mohammadpour, Mahdis; Sarikhani, Soodeh

    2014-01-01

    This study was performed to evaluate the effect of changing the orientation of a reconstructed image on the accuracy of linear measurements using cone-beam computed tomography (CBCT). Forty-two titanium pins were inserted in seven dry sheep mandibles. The length of these pins was measured using a digital caliper with a readability of 0.01 mm. The mandibles were radiographed using a CBCT device. When the CBCT images were reconstructed, the orientation of the slices was adjusted to parallel (i.e., 0 degrees), +10 degrees, +12 degrees, -12 degrees, and -10 degrees with respect to the occlusal plane. The length of the pins was measured by three radiologists, and the accuracy of these measurements was reported using descriptive statistics and one-way analysis of variance (ANOVA); p<0.05 was considered statistically significant. The differences in radiographic measurements ranged from -0.64 to +0.06 mm at the orientation of -12 degrees, -0.66 to -0.11 mm at -10 degrees, -0.51 to +0.19 mm at 0 degrees, -0.64 to +0.08 mm at +10 degrees, and -0.64 to +0.1 mm at +12 degrees. The mean absolute values of the errors were greater at negative orientations than at the parallel position or at positive orientations. The observers underestimated most of the variables by 0.1-0.5 mm (83.6%). In the second set of observations, the reproducibility at all orientations was greater than 0.9. Changing the slice orientation in the range of -12 degrees to +12 degrees reduced the accuracy of linear measurements obtained using CBCT. However, the error value was smaller than 0.5 mm and was, therefore, clinically acceptable.

  4. Accuracy of Linear Measurements in Stitched Versus Non-Stitched Cone Beam Computed Tomography Images

    Srimawong, P.; Krisanachinda, A.; Chindasombatjaroen, J.

    2012-01-01

    Cone beam computed tomography (CBCT) images are useful in clinical dentistry, and linear measurements are necessary for accurate treatment planning. Therefore, the accuracy of linear measurements on CBCT images needs to be verified. A current program, called the stitching program, in Kodak 9000C 3D systems automatically combines up to three localized volumes to construct larger images with small voxel size. The purpose of this study was to assess the accuracy of linear measurements from stitched and non-stitched CBCT images in comparison to direct measurements. This study was performed on 10 dry human mandibles. Gutta-percha rods were placed at reference points to obtain 10 vertical and horizontal distances. Direct measurements with a digital caliper served as the gold standard. All distances on CBCT images obtained with and without the stitching program were measured and compared with the direct measurements. Intraclass correlation coefficients (ICCs) were calculated. The ICCs of the direct measurements were 0.998 to 1.000. The intraobserver ICCs of both non-stitched and stitched CBCT images were 1.000, indicating strong agreement for a single observer. The intermethod ICCs between direct measurements and non-stitched CBCT images, and between direct measurements and stitched CBCT images, ranged from 0.972 to 1.000 and from 0.967 to 0.998, respectively. There were no statistically significant differences between direct measurements and stitched or non-stitched CBCT images (P > 0.05). The results showed that linear measurements on non-stitched and stitched CBCT images were highly accurate, with no statistical difference compared to direct measurements. The ICC values of vertical distances were slightly higher than those of horizontal distances, indicating that measurements in the vertical orientation were more accurate than those in the horizontal orientation. However, the differences were not statistically significant.
Stitching
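The intermethod agreement reported above is quantified with intraclass correlation coefficients. As a minimal sketch (with invented measurements, not the study's data), a two-way random-effects, absolute-agreement ICC(2,1) can be computed from paired caliper and CBCT distances like this:

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement, single-measurement ICC(2,1).

    x: (n_subjects, k_methods) array of measurements.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # one mean per measured distance
    col_means = x.mean(axis=0)   # one mean per method/observer
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-method MS
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented example: five distances (mm) measured by caliper and on CBCT
caliper = [10.0, 15.0, 20.0, 25.0, 30.0]
cbct = [10.1, 14.9, 20.2, 25.1, 29.9]
icc = icc_2_1(np.column_stack([caliper, cbct]))  # close to 1.0 for near-identical methods
```

With near-identical paired values, the between-subject variance dominates the residual and the ICC approaches 1, matching the pattern of the values reported above.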

  5. Evaluation of the accuracy of linear and angular measurements on panoramic radiographs taken at different positions

    Nikneshan, Sima; Emadi, Naghmeh [Dept. of Oral and Maxillofacial Radiology, Dental School, Shahid Beheshti University of Medical Sciences, Tehran (Iran, Islamic Republic of); Sharafi, Mohamad [Dept. of Oral and Maxillofacial Radiology, Dental School, Ilam University of Medical Sciences, Ilam (Iran, Islamic Republic of)

    2013-09-15

    This study assessed the accuracy of linear and angular measurements on panoramic radiographs taken at different positions in vitro. Two acrylic models were fabricated from a cast with normal occlusion. Straight and 75-degree mesially and lingually angulated pins were placed, and standardized panoramic radiographs were taken at the standard position, at an 8-degree downward tilt of the occlusal plane compared to the standard position, at an 8-degree upward tilt of the anterior occlusal plane, and at a 10-degree downward tilt of the right and left sides of the model. On the radiographs, the length of the pins above (crown) and below (root) the occlusal plane, the total pin length, the crown-to-root ratio, and the angulation of the pins relative to the occlusal plane were calculated. The data were subjected to repeated measures ANOVA and LSD multiple comparisons tests. Significant differences were noted between the radiographic measurements and the true values in different positions on both models, that with straight pins (P<0.001) and that with angulated pins (P<0.005). No statistically significant differences were observed between the angular measurements at different positions and those at the baseline natural head posture for either the straight or the angulated pins. Angular measurements on panoramic radiographs were sufficiently accurate, and changes in the position of the occlusal plane of 10 degrees or less had no significant effect on them. Some variation in pin positioning (head positioning) could exist and was tolerable while taking panoramic radiographs. Linear measurements showed the smallest errors in the standard position and at the 8-degree upward tilt of the anterior part of the occlusal plane compared to other positions.

  6. Accuracy evaluation of the optical surface monitoring system on EDGE linear accelerator in a phantom study.

    Mancosu, Pietro; Fogliata, Antonella; Stravato, Antonella; Tomatis, Stefano; Cozzi, Luca; Scorsetti, Marta

    2016-01-01

    Frameless stereotactic radiosurgery (SRS) requires dedicated systems to monitor the patient position during treatment to avoid target underdosage due to involuntary shifts. The optical surface monitoring system (OSMS) is evaluated here in a phantom-based study. The new EDGE linear accelerator from Varian (Varian, Palo Alto, CA) integrates, for cranial lesions, the common cone beam computed tomography (CBCT) and kV-MV portal imaging with the optical surface monitoring system (OSMS), a device able to detect the patient's face movements in real time along all 6 couch axes (vertical, longitudinal, lateral, rotation along the vertical axis, pitch, and roll). We have evaluated the OSMS imaging capability in checking the phantom's position and monitoring its motion. With this aim, a home-made cranial phantom was developed to evaluate the OSMS accuracy in 4 different experiments: (1) comparison with CBCT in isocenter location, (2) capability to recognize predefined shifts up to 2° or 3 cm, (3) evaluation at different couch angles, and (4) ability to properly reconstruct the surface when the linac gantry visually blocks one of the cameras. With a phantom, the OSMS proved accurate for positioning with respect to the CBCT imaging system, with differences of 0.6 ± 0.3 mm for the linear displacement vector and a maximum rotational inaccuracy of 0.3°. OSMS presented an accuracy of 0.3 mm for displacements up to 1 cm and 1°, and 0.5 mm for larger displacements. Different couch angles (45° and 90°) induced a mean vector uncertainty < 0.4 mm. Coverage of 1 camera produced an uncertainty < 0.5 mm. Translations and rotations of a phantom can thus be accurately detected with the optical surface monitoring system. Copyright © 2016 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  7. Evaluation of the accuracy of panoramic radiography in linear measurements of the jaws

    Hoseini, S. H.; Bagherpour, A.; Javadian Langaroodi, A.; Ahmadian Yazdi, A.; Safaei, A.

    2011-01-01

    Panoramic radiography holds an important place among imaging techniques because of its considerable advantages. One of the characteristics of an ideal imaging technique is that it supplies precise measurements. The purpose of the current study was to evaluate the accuracy of linear measurements of the jaws on panoramic radiographs. Patients and Methods: In this study, the vertical distances between metal markers were measured by panoramic radiography at seven sites on two skulls in various head positions. The radiographic measurements were then compared with the actual values. Results: Eighty-three percent of the measurements were underestimated on panoramic radiography, 8.5% were overestimated, and 8.5% showed no difference from the real measurements. Overestimation was not greater than 1 mm. The difference between actual and radiographic measurements was smaller in the posterior areas and in the mandible. In all head positions, the greatest difference between actual and radiographic measurements occurred in the anterior area. Conclusion: Based on the results of this study, linear measurements on panoramic radiography are more reliable in the posterior areas and may be used in early clinical measurements.

  8. The linear interplay of intrinsic and extrinsic noises ensures a high accuracy of cell fate selection in budding yeast

    Li, Yongkai; Yi, Ming; Zou, Xiufen

    2014-01-01

    To gain insight into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single-cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indicator of cell fate, and a strong dependence between the impulse of Cln1/2 and cell fate is demonstrated. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes a parallel shift of the separation ratio of Whi5P, whereas increasing extrinsic fluctuations leads to a mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximately linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292

  9. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material.

    Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena

    2013-12-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over another using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™ (3M ESPE), and the four impression techniques were: (1) a monophase technique using medium-body impression material; (2) a one-step double-mix technique using heavy-body and light-body impression materials simultaneously; (3) a two-step double-mix technique using a cellophane spacer (heavy-body material used for a preliminary impression with a cellophane spacer to create a wash space, followed by the use of light-body material); and (4) a matrix impression using a matrix of polyether occlusal registration material, in which the matrix is loaded with heavy-body material followed by a pick-up impression in medium-body material. For each technique, thirty impressions were made of a stainless steel master model containing three complete crown abutment preparations, which served as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual, and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance of the differences in distances between the master model and the stone models. One-way analysis of variance (ANOVA) was used for multiple-group comparison, followed by the Bonferroni test for pairwise comparison. Accuracy was tested at α = 0.05. In general, the polyether impression material produced stone dies that were smaller, except for the dies produced by the one-step double-mix technique. The ANOVA revealed a highly

  10. Analysis on Accuracy of Bias, Linearity and Stability of Measurement System in Ball screw Processes by Simulation

    Fan-Yun Pai

    2015-11-01

    To consistently produce high-quality products, a quality management system such as ISO 9001:2000 or TS 16949 must be practically implemented. One core instrument of the TS 16949 MSA (Measurement System Analysis) is to rank the capability of a measurement system and ensure that the quality characteristics of the product are likely to be transferred through the whole manufacturing process. It is important to reduce the risk of Type I errors (acceptable goods misjudged as defective parts) and Type II errors (defective parts misjudged as good parts). An ideal measuring system would have the statistical characteristic of zero error, but such a system can hardly exist. Hence, to maintain better control of the variance that might occur in the manufacturing process, MSA is necessary for better quality control. Ball screws, which are a key component in precision machines, have significant attributes with respect to positioning and transmitting. Failures of lead accuracy and of the axial gap of a ball screw can cause negative and expensive effects on machine positioning accuracy. Consequently, a functional measurement system can yield great savings by detecting Type I and Type II errors. If the measurement system fails with respect to the specification of the product, it will likely misjudge Type I and Type II errors. Inspectors normally follow the MSA regulations for accuracy measurement, but the choice of measuring system does not merely depend on a few simple indices. In this paper, we examine the stability of a measuring system by using a Monte Carlo simulation to establish the bias and linearity variance of the normal distribution and the probability density function, and we forecast the possible area distribution in the real case. After the simulation, the measurement capability is improved, which helps the user classify the measurement system and establish measurement regulations for better performance and monitoring of the precision of the ball screw.
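A minimal sketch of the kind of Monte Carlo bias and linearity check described above (the reference values, bias model, and noise level are invented for illustration, not taken from the paper): repeated measurements of reference parts are simulated, the bias per part is estimated, and linearity is the slope of bias versus reference value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reference (true) values of master gauges, in mm
reference = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
trials = 200  # repeated measurements per reference part

# Hypothetical measurement system: constant bias (0.02 mm) plus a
# size-dependent linearity error (0.001 per mm) and Gaussian repeatability noise
measurements = (reference + 0.02 + 0.001 * reference
                + rng.normal(0.0, 0.01, size=(trials, reference.size)))

bias = measurements.mean(axis=0) - reference           # estimated bias per part
# Linearity: least-squares slope of bias vs. reference value
slope, intercept = np.polyfit(reference, bias, 1)
```

With enough simulated trials, the estimated slope and intercept recover the bias model, which is how a simulation can be used to judge whether a measurement system's bias and linearity are acceptable.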

  11. Accuracy of linear drilling in temporal bone using drill press system for minimally invasive cochlear implantation.

    Dillon, Neal P; Balachandran, Ramya; Labadie, Robert F

    2016-03-01

    A minimally invasive approach for cochlear implantation involves drilling a narrow linear path through the temporal bone from the skull surface directly to the cochlea for insertion of the electrode array without the need for an invasive mastoidectomy. Potential drill positioning errors must be accounted for to predict the effectiveness and safety of the procedure. The drilling accuracy of a system used for this procedure was evaluated in bone surrogate material under a range of clinically relevant parameters. Additional experiments were performed to isolate the error at various points along the path to better understand why deflections occur. An experimental setup to precisely position the drill press over a target was used. Custom bone surrogate test blocks were manufactured to resemble the mastoid region of the temporal bone. The drilling error was measured by creating divots in plastic sheets before and after drilling and using a microscope to localize the divots. The drilling error was within the tolerance needed to avoid vital structures and ensure accurate placement of the electrode; however, some parameter sets yielded errors that may impact the effectiveness of the procedure when combined with other error sources. The error increases when the lateral stage of the path terminates in an air cell and when the guide bushings are positioned further from the skull surface. At contact points due to air cells along the trajectory, higher errors were found for impact angles of [Formula: see text] and higher as well as longer cantilevered drill lengths. The results of these experiments can be used to define more accurate and safe drill trajectories for this minimally invasive surgical procedure.

  12. Accuracy and Reliability of Cone-Beam Computed Tomography for Linear and Volumetric Mandibular Condyle Measurements. A Human Cadaver Study.

    García-Sanz, Verónica; Bellot-Arcís, Carlos; Hernández, Virginia; Serrano-Sánchez, Pedro; Guarinos, Juan; Paredes-Gallardo, Vanessa

    2017-09-20

    The accuracy of Cone-Beam Computed Tomography (CBCT) on linear and volumetric measurements on condyles has only been assessed on dry skulls. The aim of this study was to evaluate the reliability and accuracy of linear and volumetric measurements of mandibular condyles in the presence of soft tissues using CBCT. Six embalmed cadaver heads were used. CBCT scans were taken, followed by the extraction of the condyles. The water displacement technique was used to calculate the volumes of the condyles and three linear measurements were made using a digital caliper, these measurements serving as the gold standard. Surface models of the condyles were obtained using a 3D scanner, and superimposed onto the CBCT images. Condyles were isolated on the CBCT render volume using the surface models as reference and volumes were measured. Linear measurements were made on CBCT slices. The CBCT method was found to be reliable for both volumetric and linear measurements (CV  0.90). Highly accurate values were obtained for the three linear measurements and volume. CBCT is a reliable and accurate method for taking volumetric and linear measurements on mandibular condyles in the presence of soft tissue, and so a valid tool for clinical diagnosis.

  13. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
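The bias-variance tradeoff between factored and unfactored dose-response models can be illustrated with a toy simulation. This is not the authors' 37,500-model Poisson regression study: the dose levels, rates, and sample size are invented, and for simplicity an ordinary least-squares line stands in for the unfactored model. When the true dose-response is nonlinear, the factored per-level estimator pays in variance but avoids the bias of a misspecified straight line.

```python
import numpy as np

rng = np.random.default_rng(0)

dose = np.repeat([0, 1, 2, 3], 25)           # 4 exposure levels, 25 subjects each
true_rate = np.exp(0.2 * dose ** 2)          # nonlinear dose-response
counts = rng.poisson(true_rate)              # observed event counts

# Factored estimator: one rate per dose level (indicator variables)
factored = np.array([counts[dose == d].mean() for d in range(4)])

# Unfactored estimator: a single straight line fitted to the counts
slope, intercept = np.polyfit(dose, counts, 1)
linear = intercept + slope * np.arange(4)

levels = np.exp(0.2 * np.arange(4) ** 2)     # true rates at each level
mse_factored = np.mean((factored - levels) ** 2)
mse_linear = np.mean((linear - levels) ** 2)
```

Here the straight line is badly biased at the extreme dose levels, so the factored model wins; with a nearly linear true dose-response the comparison can reverse, which is the paper's point.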

  14. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    WIDAD Elmahboub

    2005-02-01

    Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A linear mixture simulated model experiment was generated to simulate real-world data with known endmember spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized so that the sum of the class proportions equals 1.0, using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real-world data set. The atmospheric scattering error was computed for each pixel value for three generated images with SPOT data. Each pixel can then be either correctly classified or misclassified. The results showed a large improvement in classification accuracy: for example, in image 1, 41% of pixels were misclassified due to atmospheric noise; subsequent to the degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that classification accuracy can be improved by degradation of atmospheric noise.
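The linear mixture setup described above can be sketched as follows (the endmember spectra and noise level are invented for illustration; the normalization of random class cover proportions to sum to 1.0 mirrors the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical endmember spectra: 3 land-cover classes x 4 spectral bands
endmembers = np.array([[0.10, 0.20, 0.60, 0.80],
                       [0.50, 0.40, 0.30, 0.20],
                       [0.90, 0.70, 0.20, 0.10]])

n_pixels = 1000
# Random class cover proportions, normalized so each pixel's proportions sum to 1.0
proportions = rng.random((n_pixels, 3))
proportions /= proportions.sum(axis=1, keepdims=True)

# Linear mixture model: each pixel spectrum is the proportion-weighted
# sum of the endmember spectra
clean = proportions @ endmembers
# Intentionally added random noise, simulating a real-world data set
noisy = clean + rng.normal(0.0, 0.02, size=clean.shape)
```

Classification accuracy can then be scored against the known proportions before and after a simulated atmospheric correction, as in the experiment above.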

  15. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Downie, John D.

    1990-01-01

    A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance, since the atmosphere continues to change during that time. Because an optical processor could perform these control computations with very low latency, it may be well-suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.

  16. Structural linear measurements in the newborn brain: accuracy of cranial ultrasound compared to MRI

    Leijser, Lara M.; Srinivasan, Latha; Cowan, Frances M.; Rutherford, Mary A.; Counsell, Serena J.; Allsop, Joanna M.

    2007-01-01

    Structural size in the neonatal brain is of clinical importance. Cranial ultrasonography (cUS) is the primary method used for evaluating the neonatal brain, and it is important to know whether linear measurements made using this technique are accurate. The aim was to compare linear measurements of different cerebral structures made from neonatal cUS and contemporaneous MRI. Preterm and term infants scanned with cUS and MRI on the same day were studied. Linear measurements of many cerebral structures made using both techniques were compared using a paired t-test. A total of 44 sets of scans from 26 preterm and 8 term infants were assessed. Small but significant differences between the cUS and MRI measurements (P<0.05) were found for the ventricular index, the posterior horn depth of the lateral ventricle, the extracerebral space and interhemispheric fissure, and the cortex of the cingulate gyrus. No significant differences were found for any other measurements. Linear measurements from cUS are accurate for most neonatal cerebral structures. Significant differences compared to MRI were found for a few structures, but only for the cortex were the absolute differences marked and possibly of clinical importance. (orig.)
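The paired t-test used to compare the two modalities can be sketched as follows (the measurement values are invented for illustration; the resulting statistic is compared to a t distribution with n-1 degrees of freedom):

```python
import math

# Paired measurements of one hypothetical structure by the two
# modalities (values in mm, invented for illustration)
cus = [11.2, 10.8, 12.1, 11.5, 10.9, 11.8, 12.0, 11.1]
mri = [11.5, 11.0, 12.6, 11.9, 11.2, 12.1, 12.4, 11.3]

diffs = [a - b for a, b in zip(cus, mri)]
n = len(diffs)
mean_d = sum(diffs) / n                                 # mean paired difference
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)                   # paired t statistic
```

A |t| exceeding the critical value for n-1 degrees of freedom corresponds to the small but significant cUS-MRI differences reported above.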

  17. The JPL Hg(sup +) Extended Linear Ion Trap Frequency Standard: Status, Stability, and Accuracy Prospects

    Tjoelker, R. L.; Prestage, J. D.; Maleki, L.

    1996-01-01

    Microwave frequency standards based on room-temperature (sup 199)Hg(sup +) ions in a Linear Ion Trap (LITS) presently achieve a short-term frequency stability inferred from the signal-to-noise ratio and line Q. Long-term stability has been measured for averaging intervals up to 5 months, with apparent sensitivity to variations in ion number and temperature limiting the flicker floor.

  18. Improved Haptic Linear Lines for Better Movement Accuracy in Upper Limb Rehabilitation

    Joan De Boeck

    2012-01-01

    Force feedback has proven to be beneficial in the domain of robot-assisted rehabilitation. According to the patients' personal needs, the generated forces may either assist, support, or oppose their movements. In our current research project, we focus on upper limb training for MS (multiple sclerosis) and CVA (cerebrovascular accident) patients, for which a basic building block to implement many rehabilitation exercises was identified. This building block is a haptic linear path: a second-order continuous path, defined by a list of points in space. Earlier, different approaches to realizing haptic linear paths were investigated. In order to have good training quality, it is important that the haptic simulation be continuous up to the second derivative while the patient is forced to follow the path tightly, even when low or no guiding forces are provided. In this paper, we describe our best solution to these haptic linear paths, discuss the weaknesses found in practice, and propose and validate an improvement.

  19. Intra- and inter-observer variability and accuracy in the determination of linear and angular measurements in computed tomography

    Christiansen, E.L.; Thompson, J.R.; Kopp, S.

    1986-01-01

    The observer variability and accuracy of linear and angular computed tomography (CT) software measurements in the transaxial plane were investigated for the temporomandibular joint with the General Electric 8800 CT/N Scanner. A dried and measured human mandible was embedded in plastic and scanned in vitro. Sixteen observers participated in the study. The following measurements were tested: inter- and extra-condylar distances, transverse condylar dimension, condylar angulation, and the plastic base of the specimen. Three frozen cadaveric heads were similarly scanned and measured in situ. Intra- and inter-observer variabilities were lowest for the specimen base and highest for condylar angulation. Neuroradiologists had the lowest variability as a group, and the radiology residents and paramedical personnel had the highest, but the differences were small. No significant difference was found between CT and macroscopic measurement of the mandible. In situ measurement by CT of condyles with structural changes in the transaxial plane was, however, subject to substantial error. It was concluded that transaxial linear measurements of condylar processes free of significant structural changes had an error and an accuracy well within acceptable limits. The error for angular measurements was significantly greater than the error for linear measurements.

  20. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression

    Lemieux Sébastien

    2006-08-01

    Background: The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs from the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. Results: On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. Conclusion: The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
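A minimal sketch of the probe-level linear-model idea (simulated data; this omits the Gaussian-mixture classification step and is not the PL-LM implementation itself): probe-level intensities are modeled as a probe-affinity term plus a condition (treatment) effect, and the treatment coefficient is estimated directly by least squares.

```python
import numpy as np

rng = np.random.default_rng(7)

n_probes, n_arrays = 11, 6                 # one probe set, 3 arrays per condition
condition = np.array([0, 0, 0, 1, 1, 1])   # treatment indicator per array

# Simulated log-intensities: probe affinity + treatment effect + noise
probe_affinity = rng.normal(8.0, 1.0, n_probes)
treatment_effect = 1.5                     # true effect to be recovered
y = (probe_affinity[:, None] + treatment_effect * condition[None, :]
     + rng.normal(0.0, 0.3, (n_probes, n_arrays)))

# Probe-level linear model: one indicator column per probe plus a
# treatment column; solved by ordinary least squares
probe_idx = np.repeat(np.arange(n_probes), n_arrays)
X = np.zeros((n_probes * n_arrays, n_probes + 1))
X[np.arange(probe_idx.size), probe_idx] = 1.0
X[:, -1] = np.tile(condition, n_probes)
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
estimated_effect = beta[-1]                # recovered treatment effect
```

Estimating the effect at the probe level, rather than from pre-summarized expression values, is what lets this style of model work with a single replicate per condition.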

  1. Precision, accuracy and linearity of radiometer EML 105 whole blood metabolite biosensors.

    Cobbaert, C; Morales, C; van Fessem, M; Kemperman, H

    1999-11-01

    The analytical performance of a new whole blood glucose and lactate electrode system (EML 105 analyser, Radiometer Medical A/S, Copenhagen, Denmark) was evaluated. Between-day coefficients of variation were determined for glucose and lactate. Recoveries of glucose were 100 +/- 10% using either aqueous or protein-based standards. Recoveries of lactate depended on the matrix, being underestimated in aqueous standards (approximately -10%) and 95-100% in standards containing 40 g/L albumin at lactate concentrations of 15 and 30 mmol/L. However, recoveries were high (up to 180%) at low lactate concentrations in protein-based standards. Carry-over, investigated according to National Committee for Clinical Laboratory Standards (NCCLS) guideline EP10-T2, was negligible (alpha = 0.01). Glucose and lactate biosensors equipped with new membranes were linear up to 60 and 30 mmol/L, respectively. However, linearity fell with daily use as membrane lifetime increased. We conclude that the Radiometer metabolite biosensor results are reproducible and do not suffer from specimen-related carry-over. However, lactate recovery depends on the protein content and the lactate concentration.

  2. On the accuracy of mode-superposition analysis of linear systems under stochastic agencies

    Bellomo, M.; Di Paola, M.; La Mendola, L.; Muscolino, G.

    1987-01-01

    This paper deals with the response of linear structures using modal reduction. The MAM (mode acceleration method) correction is extended to stochastic analysis in the stationary case. In this framework the response of the given structure must be described in a probabilistic sense, and the spectral moments of the nodal response must be computed in order to obtain a full description of the vibratory stochastic phenomenon. In deterministic analysis the response is essentially made up of two terms, one accounting for the dynamic response due to the lower modes and the other for the contribution of the higher modes. In stochastic analysis the nodal spectral moments are made up of three terms: the first accounts for the spectral moments of the dynamic response due to the lower modes, the second for the spectral moments of the input, and the third for the cross-spectral moments between the input and the nodal output. The analysis is applied to a 35-storey building subjected to a multivariate wind environment. (orig./HP)

  3. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from the measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change the initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
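The quadratic correction can be sketched as follows (the calibrant masses and the deviation model are invented for illustration): fit a parabola to Δm vs. mass for the calibrants, then subtract the parabola's prediction from measured masses.

```python
import numpy as np

# Hypothetical calibrant peaks: theoretical masses (Da) and the masses
# produced by the initial linear sqrt(m/z)-vs-time-of-flight calibration,
# which here carry an invented smooth quadratic deviation
theoretical = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0])
measured = theoretical + (2e-7 * theoretical**2 - 4e-4 * theoretical + 0.1)

# Fit a parabola to the deviation Delta-m = measured - theoretical
dm = measured - theoretical
a, b, c = np.polyfit(measured, dm, 2)

def correct(m):
    """Subtract the deviation predicted by the quadratic fit."""
    return m - (a * m**2 + b * m + c)
```

Applying `correct` to the masses of unknown peaks removes the systematic part of the calibration error, which is how residuals can shrink by an order of magnitude or more.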

  4. Accuracy and reliability of linear cephalometric measurements from cone-beam computed tomography scans of a dry human skull.

    Berco, Mauricio; Rigali, Paul H; Miner, R Matthew; DeLuca, Stephelynn; Anderson, Nina K; Will, Leslie A

    2009-07-01

    The purpose of this study was to determine the accuracy and reliability of 3-dimensional craniofacial measurements obtained from cone-beam computed tomography (CBCT) scans of a dry human skull. Seventeen landmarks were identified on the skull. CBCT scans were then obtained, with 2 skull orientations during scanning. Twenty-nine interlandmark linear measurements were made directly on the skull and compared with the same measurements made on the CBCT scans. All measurements were made by 2 operators on 4 separate occasions. The method errors were 0.19, 0.21, and 0.19 mm in the x-, y- and z-axes, respectively. Repeated measures analysis of variance (ANOVA) showed no significant intraoperator or interoperator differences. The mean measurement error was -0.01 mm (SD, 0.129 mm). Five measurement errors were found to be statistically significantly different; however, all measurement errors were below the known voxel size and clinically insignificant. No differences were found in the measurements from the 2 CBCT scan orientations of the skull. CBCT allows for clinically accurate and reliable 3-dimensional linear measurements of the craniofacial complex. Moreover, skull orientation during CBCT scanning does not affect the accuracy or the reliability of these measurements.

  5. Accuracy of specific BIVA for the assessment of body composition in the United States population.

    Buffa, Roberto; Saragat, Bruno; Cabras, Stefano; Rinaldi, Andrea C; Marini, Elisabetta

    2013-01-01

    Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. The objective was to evaluate the accuracy of specific BIVA in the adult U.S. population, compared to the 'classic' BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21-49 years old) derived from the NHANES 2003-2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by a probit multiple regression. Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84-0.92 and 0.49-0.61, respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and in FMDXA% and ECW/ICW (p = 0.869). Specific BIVA proved to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population.
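The ROC areas above summarize sensitivity and specificity across all cutoffs. As a minimal sketch (with simulated scores, not NHANES data), the area under the ROC curve can be computed from ranks via the Mann-Whitney relationship:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: a BIVA-style score for subjects below (0) and
# above (1) a reference fat-mass cutoff
label = np.concatenate([np.zeros(100), np.ones(100)])
score = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])

def roc_auc(labels, scores):
    """Rank-based AUC (equivalent to the Mann-Whitney U statistic)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.5 means the score carries no information about the cutoff, while 1.0 means perfect separation, which is the scale on which the classic (0.49-0.61) and specific (0.84-0.92) BIVA areas above are compared.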

  7. THE CONTRIBUTION OF COMPLEXITY, ACCURACY AND FLUENCY TO LANGUAGE FOR SPECIFIC PURPOSES

    Anthony Rausch

    2016-03-01

    This paper will outline an instructional approach that proposes a Complexity, Accuracy, Fluency (CAF) paradigm as a means of providing learners with the CAF-based communication consciousness and CAF-oriented manipulative skills that are increasingly important in language use in Language for Specific Purposes. Given the complex combinations of communicative tasks, communicative formats and communicative circumstances that accompany the wide-ranging and various contexts of contemporary professional communication, communicative competence demands a combinative consciousness and informed application of Complexity, Accuracy and Fluency as a communication paradigm. Viewed as a combination of its three components, a CAF paradigm constitutes a fundamental ‘information, language and communication’ triad that can guide professional language use in any communicative circumstance. Viewed as a communicative skill set, the CAF triad implies the capability to adjust specific elements and aspects of information, language and communication as needed for a communicative task, whether in oral or print communication and regardless of task category. Adjusting complexity in this context refers to both content and language complexity. Adjusting accuracy refers to the conventions that dictate appropriate or acceptable language in a given context. Finally, adjusting fluency refers to a sense of communicative fluency, that which yields either smooth and persuasive language as in a native-speaker normative view or explicit and clearly explanatory language as necessary in some communicative encounters. The need to manipulate these three components depends on circumstance variables such as objective, available time, audience characteristics and the degree of detail desired. This paper will outline this combinative CAF notion as background to a materials development project being undertaken in a Japanese university, introducing the specifics of an Extended Reading Aloud

  8. Specificity of the Linear Array HPV Genotyping Test for detecting human papillomavirus genotype 52 (HPV-52)

    Kocjan, Boštjan; Poljak, Mario; Oštrbenk, Anja

    2015-01-01

    Introduction: HPV-52 is one of the most frequent human papillomavirus (HPV) genotypes causing significant cervical pathology. The most widely used HPV genotyping assay, the Roche Linear Array HPV Genotyping Test (Linear Array), is unable to identify HPV-52 status in samples containing HPV-33, HPV-35, and/or HPV-58. Methods: Linear Array HPV-52 analytical specificity was established by testing 100 specimens reactive with the Linear Array HPV-33/35/52/58 cross-reactive probe, but not with the...

  9. Acute appendicitis: sensitivity, specificity and diagnostic accuracy of thin-section contrast-enhanced CT findings

    Lee, Ji Yon; Choi, Dong Il; Park, Hae Won; Lee, Young Rae; Kook, Shin Ho; Kwang, Hyon Joo; Kim, Seung Kwon; Chung, Eun Chul

    2002-01-01

    To assess the sensitivity, specificity, and diagnostic accuracy of individual contrast-enhanced helical CT findings of acute appendicitis. We retrospectively reviewed the appendiceal helical CT scans, obtained after intravenous contrast administration (abdomen: 7-mm collimation; abdominopelvic junction: 5-mm collimation), of 50 patients with surgically proven acute appendicitis and 112 with alternative diagnoses. The following parameters were analysed by three radiologists: enlarged appendix (>6 mm in diameter), appendiceal wall thickening, appendiceal wall enhancement, no identification of the appendix, appendicolith(s), (appendiceal) intraluminal air, abscess, lymphadenopathy, terminal ileal wall thickening, focal cecal apical thickening, focal colonic wall thickening, and segmental colonic wall thickening. The CT findings of acute appendicitis that statistically distinguished it from alternative diagnoses were an enlarged appendix (sensitivity 92%, specificity 93%, diagnostic accuracy 93%), appendiceal wall thickening (68%, 96%, and 88%, respectively), periappendiceal fat stranding (90%, 79%, 82%), appendiceal wall enhancement (72%, 86%, 82%), appendicolith (16%, 100%, 74%), and focal cecal apical thickening (14%, 100%, 74%) (for each, p<0.05). On thin-section contrast-enhanced helical CT, an enlarged appendix and periappendiceal fat stranding were found in 90% or more of patients with acute appendicitis. Appendiceal wall thickening and enhancement were clearly demonstrated and were significant findings for diagnosis. Less common but specific findings, including appendicolith, focal cecal apical thickening, and intramural air, can also help establish the diagnosis of acute appendicitis
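    The reported sensitivity/specificity/accuracy triples follow directly from 2x2 contingency counts; a minimal sketch (the counts below are reconstructed to match the reported percentages for the enlarged-appendix finding, so they are illustrative only):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + fp + tn)
    return sens, spec, acc

# 50 appendicitis cases and 112 alternative diagnoses, as in the study;
# cell counts chosen to reproduce 92% sensitivity / 93% specificity:
sens, spec, acc = diagnostic_metrics(tp=46, fn=4, fp=8, tn=104)
print(f"sens={sens:.2f} spec={spec:.2f} acc={acc:.2f}")
```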

  10. The robustness and accuracy of in vivo linear wear measurements for knee prostheses based on model-based RSA.

    van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L

    2011-10-13

    Accurate in vivo methods for measuring wear in total knee arthroplasty are required for timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. Seventy-two RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee), and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between the measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is small and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Interlaboratory diagnostic accuracy of a Salmonella specific PCR-based method

    Malorny, B.; Hoorfar, Jeffrey; Hugas, M.

    2003-01-01

    A collaborative study involving four European laboratories was conducted to investigate the diagnostic accuracy of a Salmonella specific PCR-based method, which was evaluated within the European FOOD-PCR project (http://www.pcr.dk). Each laboratory analysed by the PCR a set of independently obtained...... presumably naturally contaminated samples and compared the results with the microbiological culture method. The PCR-based method comprised a preenrichment step in buffered peptone water followed by a thermal cell lysis using a closed tube resin-based method. Artificially contaminated minced beef and whole......-based diagnostic methods and is currently proposed as an international standard document....

  12. Linear Discriminant Analysis achieves high classification accuracy for the BOLD fMRI response to naturalistic movie stimuli.

    Hendrik eMandelkow

    2016-03-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbour (NN), Gaussian Naïve Bayes (GNB), and (regularised) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularised by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
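    A PCA-regularised LDA classifier of the kind described can be sketched with scikit-learn; the data here are synthetic stand-ins for fMRI response patterns, and all dimensions are illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_classes, n_per_class, n_voxels = 10, 20, 500

# Synthetic "response patterns": one mean pattern per stimulus class plus noise.
means = rng.normal(0, 1, (n_classes, n_voxels))
X = np.vstack([m + 0.5 * rng.normal(0, 1, (n_per_class, n_voxels)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

# PCA reduces the voxel dimension before LDA, regularising the
# within-class covariance estimate (the PCA+LDA idea from the study).
clf = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```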

  13. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Comparative evaluation of the accuracy of linear measurements between cone beam computed tomography and 3D microtomography

    Francesca Mangione

    2013-09-01

    OBJECTIVE: The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements estimated with a common cone beam computed tomography (CBCT) system used in dental clinical practice, by comparing it with a microCT system as the standard reference. MATERIALS AND METHODS: Ten bovine bone cylindrical samples, each containing one implant, able to provide both points of reference and image quality degradation, were scanned by the CBCT and microCT systems. Using the software of the two systems, two diameters were measured for each cylindrical sample at different levels, using distinct implant points as references. Results were analyzed by ANOVA, and a statistically significant difference was found. RESULTS AND DISCUSSION: The results indicate that measurements made with the two instruments are not yet statistically comparable overall, although similar performance, with no statistically significant difference, was obtained in some samples. CONCLUSION: With the improvement of the hardware and software of CBCT systems, in the near future the two instruments will be able to provide similar performance.
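    A one-way ANOVA comparing measurements of the same samples from two systems can be run with SciPy; the measurement values below are hypothetical:

```python
from scipy import stats

# Hypothetical diameter measurements (mm) of the same ten samples
# made on two imaging systems:
cbct  = [9.95, 10.10, 10.02, 9.88, 10.20, 10.05, 9.90, 10.15, 9.97, 10.08]
micro = [10.00, 10.04, 10.01, 9.99, 10.03, 10.02, 9.98, 10.05, 10.00, 10.02]

# One-way ANOVA across the two groups (equivalent to a two-sample t-test
# for two groups); a small p-value would indicate a systematic difference.
f, p = stats.f_oneway(cbct, micro)
print(f"F={f:.3f} p={p:.3f}")
```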

  15. Accuracy increase of the coordinate measurement based on the model production of geometrical parts specifications

    Zlatkina, O. Yu

    2018-04-01

    There is a relationship between the service properties of component parts and their geometry; therefore, to predict and control the operational characteristics of parts and machines, it is necessary to measure their geometrical specifications. In modern production, the coordinate measuring machine is the most advanced instrument for measuring the geometrical specifications of products. The analysis of publications has shown that the problems of choosing the locating chart of parts and of coordination during coordinate measurement have not been sufficiently studied. A special role in the coordination of the part is played by the informational content of the coordinate axes. Informational content is the sum of the degrees of freedom constrained by an elementary item of a part. The coordinate planes of a rectangular coordinate system have different informational content (three, two, and one); the coordinate axes have informational content of four, two, and zero. The higher the informational content of a coordinate plane or axis, the higher its priority for reading angular and linear coordinates. Producing a geometrical model of the object of coordinate measurement that takes into account the informational content of the coordinate planes and axes clearly reveals the interrelationship between the coordinates of location deviations, the sizes, and the form deviations of its surfaces. The geometrical model helps to select the optimal locating chart of parts for bringing the machine coordinate system into agreement with the part coordinate system. The article presents an algorithm for producing such a model of geometrical specifications, using the example of a compressor piston rod.

  16. Linearized method: A new approach for kinetic analysis of central dopamine D2 receptor specific binding

    Watabe, Hiroshi; Hatazawa, Jun; Ishiwata, Kiichi; Ido, Tatsuo; Itoh, Masatoshi; Iwata, Ren; Nakamura, Takashi; Takahashi, Toshihiro; Hatano, Kentaro

    1995-01-01

    The authors proposed a new method (Linearized method) to analyze neuroleptic ligand-receptor specific binding in the human brain using positron emission tomography (PET). They derived a linear equation to solve for four rate constants, k3, k4, k5, and k6, from PET data. This method does not require the plasma radioactivity curve as an input function to the brain, and allows fast calculation of the rate constants. They also tested the Nonlinearized method, a conventional analysis based on nonlinear equations that uses plasma radioactivity corrected for ligand metabolites as the input function. The authors applied these methods to evaluate dopamine D2 receptor specific binding of [11C]YM-09151-2. The value of Bmax/Kd = k3/k4 obtained by the Linearized method was 5.72 ± 3.1, consistent with the value of 5.78 ± 3.4 obtained by the Nonlinearized method

  17. Targeting Accuracy of Image-Guided Radiosurgery for Intracranial Lesions: A Comparison Across Multiple Linear Accelerator Platforms.

    Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Brown, Stephen; Gordon, James; Wen, Ning

    2016-04-01

    To evaluate the overall positioning accuracy of image-guided intracranial radiosurgery across multiple linear accelerator platforms. A computed tomography scan with a slice thickness of 1.0 mm was acquired of an anthropomorphic head phantom in a BrainLAB U-frame mask. The phantom was embedded with three 5-mm diameter tungsten ball bearings, simulating a central, a left, and an anterior cranial lesion. The ball bearings were positioned to radiation isocenter under ExacTrac X-ray or cone-beam computed tomography image guidance on 3 Linacs: (1) ExacTrac X-ray localization on a Novalis Tx; (2) cone-beam computed tomography localization on the Novalis Tx; (3) cone-beam computed tomography localization on a TrueBeam; and (4) cone-beam computed tomography localization on an Edge. Each ball bearing was positioned 5 times to the radiation isocenter with different initial setup error following the 4 image guidance procedures on the 3 Linacs, and the mean (µ) and one standard deviation (σ) of the residual error were compared. Averaged over all 3 ball-bearing locations, the vector length of the residual setup error in mm (µ ± σ) was 0.6 ± 0.2, 1.0 ± 0.5, 0.2 ± 0.1, and 0.3 ± 0.1 for ExacTrac X-ray localization on a Novalis Tx, cone-beam computed tomography localization on the Novalis Tx, cone-beam computed tomography localization on a TrueBeam, and cone-beam computed tomography localization on an Edge, with their range in mm being 0.4 to 1.1, 0.4 to 1.9, 0.1 to 0.5, and 0.2 to 0.6, respectively. The congruence between imaging and radiation isocenters in mm was 0.6 ± 0.1, 0.7 ± 0.1, 0.3 ± 0.1, and 0.2 ± 0.1, for the 4 systems, respectively. Targeting accuracy comparable to frame-based stereotactic radiosurgery can be achieved with image-guided intracranial stereotactic radiosurgery treatment. © The Author(s) 2015.
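    The vector-length summary used above (mean ± SD and range of the 3-D residual setup error) can be reproduced from per-axis residuals; the residual values below are hypothetical:

```python
import numpy as np

def vector_lengths(residuals):
    """Euclidean length of each 3-D residual setup error
    (e.g. lateral, longitudinal, vertical components in mm)."""
    r = np.asarray(residuals, dtype=float)
    return np.linalg.norm(r, axis=1)

# Hypothetical residual errors (mm) after image-guided positioning:
res = [(0.2, 0.1, 0.1), (0.3, 0.2, 0.1), (0.1, 0.1, 0.2),
       (0.2, 0.2, 0.2), (0.3, 0.1, 0.2)]
L = vector_lengths(res)
print(f"{L.mean():.2f} +/- {L.std(ddof=1):.2f} mm, "
      f"range {L.min():.2f}-{L.max():.2f}")
```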

  18. Reproducibility and accuracy of linear measurements on dental models derived from cone-beam computed tomography compared with digital dental casts

    Waard, O. de; Rangel, F.A.; Fudalej, P.S.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Breuning, K.H.

    2014-01-01

    INTRODUCTION: The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models

  19. Modeling of Geometric Error in Linear Guide Way to Improve the vertical three-axis CNC Milling machine’s accuracy

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines with a general approach using mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and which determine whether machines can be built with high accuracy. The accuracy of the three-axis vertical milling machine is improved by establishing the geometric errors and identifying the error position parameters in the machine tool through mathematical modeling. The geometric error in the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters, and three perpendicularity error parameters. The mathematical model of geometric error captures the calculated alignment and angular errors in the components supporting the machine motion, namely the linear guide way and the linear motion elements. The purpose of this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position, and angle on the linear guide way of three-axis vertical milling machines.
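    A first-order (small-angle) homogeneous transformation matrix is a standard way to model the six error parameters of a single axis; a minimal numpy sketch with illustrative error values:

```python
import numpy as np

def error_htm(dx, dy, dz, ex, ey, ez):
    """Small-angle homogeneous transformation matrix for one axis:
    three translational errors (dx, dy, dz) and three angular errors
    (ex, ey, ez, in rad), to first order in the angles."""
    return np.array([
        [1.0, -ez,  ey, dx],
        [ ez, 1.0, -ex, dy],
        [-ey,  ex, 1.0, dz],
        [0.0, 0.0, 0.0, 1.0],
    ])

# Map a nominal tool point to its actual position under one axis's errors
# (illustrative magnitudes: millimetres and radians):
T = error_htm(5e-3, 2e-3, -3e-3, 1e-5, -2e-5, 1.5e-5)
p_nominal = np.array([100.0, 50.0, 20.0, 1.0])  # mm, homogeneous coordinates
p_actual = T @ p_nominal
print(p_actual - p_nominal)
```

Per-axis matrices of this form can be chained (multiplied) to accumulate the volumetric error of the whole kinematic chain.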

  20. Linear accuracy and reliability of volume data sets acquired by two CBCT-devices and an MSCT using virtual models : A comparative in-vitro study

    Wikner, Johannes; Hanken, Henning; Eulenburg, Christine; Heiland, Max; Groebe, Alexander; Assaf, Alexandre Thomas; Riecke, Bjoern; Friedrich, Reinhard E.

    2016-01-01

    Objective. To discriminate clinically relevant aberrance, the accuracy of linear measurements in three-dimensional (3D) reconstructed datasets was investigated. Materials and methods. Three partly edentulous human skulls were examined. Landmarks were defined prior to acquisition. Two CBCT-scanners

  1. Best linear unbiased prediction of genomic breeding values using a trait-specific marker-derived relationship matrix.

    Zhe Zhang

    2010-09-01

    With the availability of high density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome, genetic merit of genotyped individuals can be predicted directly within the framework of mixed model equations, by using a matrix of relationships among individuals that is derived from the markers. Here we extend that approach by deriving a marker-based relationship matrix specifically for the trait of interest. In the framework of mixed model equations, a new best linear unbiased prediction (BLUP) method including a trait-specific relationship matrix (TA) was presented and termed TABLUP. The TA matrix was constructed on the basis of marker genotypes and their weights in relation to the trait of interest. A simulation study with 1,000 individuals as the training population and five successive generations as the candidate population was carried out to validate the proposed method. The proposed TABLUP method outperformed the ridge regression BLUP (RRBLUP) and BLUP with a realized relationship matrix (GBLUP). It performed slightly worse than BayesB, with an accuracy of 0.79 in the standard scenario. The proposed TABLUP method is an improvement over the RRBLUP and GBLUP methods. It might be equivalent to the BayesB method, but it has additional benefits such as the calculation of accuracies for individual breeding values. The results also showed that the TA-matrix performs better in predictive ability than the classical numerator relationship matrix and the realized relationship matrix, which are derived solely from pedigree or markers without regard to the trait. This is because the TA-matrix not only accounts for the Mendelian sampling term, but also puts greater emphasis on those markers that explain more of the genetic variance in the trait.
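    The idea of a trait-specific relationship matrix built from genotypes and per-marker weights can be sketched as follows; the weighting scheme shown (weights normalised to sum to one) is a simplification of the published method, and all data are synthetic:

```python
import numpy as np

def trait_specific_G(Z, w):
    """Marker-derived relationship matrix with per-marker weights w
    (e.g. each marker's estimated share of the genetic variance).
    Z: centred genotype matrix (individuals x markers)."""
    D = np.diag(w / w.sum())  # normalised marker weights on the diagonal
    return Z @ D @ Z.T

rng = np.random.default_rng(1)
n_ind, n_mrk = 6, 50
Z = rng.integers(0, 3, (n_ind, n_mrk)).astype(float)  # genotype codes 0/1/2
Z -= Z.mean(axis=0)                                   # centre genotypes
w = rng.random(n_mrk)                                 # hypothetical weights
G = trait_specific_G(Z, w)
print(G.shape, np.allclose(G, G.T))
```

With uniform weights this reduces to an (unscaled) realized relationship matrix; trait-specific weights shift emphasis toward markers explaining more genetic variance.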

  2. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19
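    The cubic regression spline basis underlying such models can be built with a truncated-power construction; this sketch fits only the population curve by ordinary least squares, omitting the random effects of the full mixed model, with synthetic growth data:

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power cubic regression spline basis:
    [1, x, x^2, x^3, (x - k1)_+^3, ..., (x - kq)_+^3]."""
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

# Synthetic height-for-age data over the first four years of life:
age = np.linspace(0, 4, 60)  # years
height = 50 + 20 * np.sqrt(age) + np.random.default_rng(2).normal(0, 1, 60)

X = cubic_spline_basis(age, knots=[1.0, 2.0, 3.0])
beta, *_ = np.linalg.lstsq(X, height, rcond=None)
fitted = X @ beta
print(round(float(np.corrcoef(fitted, height)[0, 1]), 3))
```

In the mixed-model setting, subject-specific random intercepts and slopes are added on top of this fixed-effect spline.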

  3. Synthesizing Configurable Biochemical Implementation of Linear Systems from Their Transfer Function Specifications.

    Tai-Yin Chiu

    The ability to engineer synthetic systems in the biochemical context is constantly being improved and has a profound societal impact. Linear system design is one of the most pervasive methods applied in control tasks, and its biochemical realization has been proposed by Oishi and Klavins and advanced further in recent years. However, several technical issues remain unsolved. Specifically, the design process is not fully automated from specification at the transfer function level, systems once designed often lack dynamic adaptivity to environmental changes, matching rate constants of reactions is not always possible, and implementation may be approximative and greatly deviate from the specifications. Building upon the work of Oishi and Klavins, this paper overcomes these issues by introducing a design flow that transforms a transfer-function specification of a linear system into a set of chemical reactions, whose input-output response precisely conforms to the specification. This system is implementable using the DNA strand displacement technique. The underlying configurability is embedded into primitive components and template modules, and thus the entire system is adaptive. Simulation of DNA strand displacement implementation confirmed the feasibility and superiority of the proposed synthesis flow.

  4. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    Wen, N., E-mail: nwen1@hfhs.org; Snyder, K. C.; Qin, Y.; Li, H.; Siddiqui, M. S.; Chetty, I. J. [Department of Radiation Oncology, Henry Ford Health System, 2799 West Brand Boulevard, Detroit, Michigan 48202 (United States); Scheib, S. G.; Schmelzer, P. [Varian Medical System, Täfernstrasse 7, Dättwil AG 5405 (Switzerland)

    2016-05-15

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, 0.6 ± 0.4 mm for multitarget treatment with shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed greater than 90% pass rate for all cases using a gamma criteria of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.

  6. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model (GLMM) analysis indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable for exploring age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.

  7. The Relationship between a Linear Combination of Intelligence, Musical Background, Rhythm Ability and Tapping Ability to Typewriting Speed and Accuracy.

    Fante, Cheryl H.

    This study was conducted in an attempt to identify any predictor or combination of predictors of a beginning typewriting student's success. Variables of intelligence, rhythmic ability, musical background, and tapping ability were combined to study their relationship to typewriting speed and accuracy. A sample of 109 high school students was…

  8. Accuracy assessment of the linear Poisson-Boltzmann equation and reparametrization of the OBC generalized Born model for nucleic acids and nucleic acid-protein complexes.

    Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro

    2015-04-05

    The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
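
The linearization in question replaces sinh(φ) with φ in the reduced Poisson-Boltzmann equation. For the 1-D planar (Gouy-Chapman) geometry both the full and the linearized (Debye-Hückel) equations have closed-form solutions, so the effect of linearization on the potential can be shown directly; the surface potentials below are hypothetical, not the paper's nucleic-acid calculations:

```python
import numpy as np

def phi_linear(x, phi0):
    """Debye-Hueckel (linearized PB) decay of the reduced potential."""
    return phi0 * np.exp(-x)

def phi_nonlinear(x, phi0):
    """Gouy-Chapman solution of the full 1-D PB equation phi'' = sinh(phi)."""
    g = np.tanh(phi0 / 4.0)
    return 2.0 * np.log((1 + g * np.exp(-x)) / (1 - g * np.exp(-x)))

x = np.linspace(0.0, 5.0, 501)        # distance in Debye lengths
for phi0 in (0.5, 2.0, 4.0):          # reduced surface potential e*psi0/kT
    err = np.max(np.abs(phi_linear(x, phi0) - phi_nonlinear(x, phi0)))
    print(f"phi0 = {phi0}: max |linear - nonlinear| = {err:.3f}")
```

For reduced potentials well below 1 the two curves are nearly indistinguishable; for the strongly charged case the linearized potential deviates markedly, which is the concern raised in point (1) above.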

  9. A method for estimation of accuracy of dose delivery with dynamic slit windows in medical linear accelerators

    Ravichandran, R.; Binukumar, J.P.; Sivakumar, S.S.; Krishnamurthy, K.; Davis, C.A.

    2008-01-01

    Intensity-modulated radiotherapy (IMRT) clinical dose delivery is based on computer-controlled multileaf movements at different velocities. To test the accuracy of modulation of the beam periodically, quality assurance (QA) methods are necessary. Using a cylindrical phantom, dose delivery was checked at a constant geometry for sweeping fields. Repeated measurements with an in-house designed methodology over a period of 1 year indicate that the method is very sensitive for checking the proper functioning of such dose delivery in medical linacs. A cylindrical perspex phantom with a facility to accurately position a 0.6-cc (FC 65) ion chamber at constant depth at the isocenter (SA 24 constancy check tool phantom for MU check, Scanditronix Wellhofer) was used. Dosimeter readings were integrated for 4-mm, 10-mm, and 20-mm sweeping fields and for 3 angular positions of the gantry periodically. Consistency of the standard sweeping field output (10-mm slit width) and the ratios of outputs against other slit widths over a long period were reported. The 10-mm sweeping field output was found reproducible within an accuracy of 0.03% (n = 25) over 1 year. The 4-mm and 20-mm outputs, expressed as ratios with respect to the 10-mm sweep output, remained within mean deviations of 0.2% and 0.03%, respectively. Outputs at 3 gantry angles remained within 0.5%, showing that the effect of dynamic movements of the multileaf collimator (MLC) on the output is minimal for angular positions of the gantry. This method of QA is very simple and is recommended in addition to individual patient QA measurements, which reflect the accuracy of the dose planning system. In addition to standard output and energy checks of linacs, the above measurements can be complemented so as to check proper functioning of the multileaf collimator for dynamic field dose delivery. (author)

  10. Effect of conductance linearity and multi-level cell characteristics of TaOx-based synapse device on pattern recognition accuracy of neuromorphic system

    Sung, Changhyuck; Lim, Seokjae; Kim, Hyungjun; Kim, Taesu; Moon, Kibong; Song, Jeonghwan; Kim, Jae-Joon; Hwang, Hyunsang

    2018-03-01

    To improve the classification accuracy of an image data set (CIFAR-10) by using analog input voltage, synapse devices with excellent conductance linearity (CL) and multi-level cell (MLC) characteristics are required. We analyze the CL and MLC characteristics of TaOx-based filamentary resistive random access memory (RRAM) to implement the synapse device in neural network hardware. Our findings show that the number of oxygen vacancies in the filament constriction region of the RRAM directly controls the CL and MLC characteristics. By adopting a Ta electrode (instead of Ti) and the hot-forming step, we could form a dense conductive filament. As a result, a wide range of conductance levels with CL is achieved and significantly improved image classification accuracy is confirmed.

  11. Third trimester ultrasound for fetal macrosomia: optimal timing and institutional specific accuracy.

    Parikh, Laura I; Iqbal, Sara N; Jelin, Angie C; Overcash, Rachael T; Tefera, Eshetu; Fries, Melissa H

    2017-11-28

    To determine the performance of third trimester ultrasound in women with suspected fetal macrosomia. We performed a retrospective cohort study of fetal ultrasounds from January 2004 to December 2014 with estimated fetal weight (EFW) between 4000 and 5000 g. We determined the accuracy of birth weight prediction for ultrasound performed before and after 38 weeks, accounting for diabetic status and time between ultrasound and delivery. There were 405 ultrasounds evaluated. One hundred and twelve (27.7%) were performed at less than 38 weeks, 293 (72.3%) at greater than 38 weeks, and 91 (22.5%) were performed in diabetics. Sonographic identification of EFW over 4000 g at less than 38 weeks was associated with closer agreement between EFW and birth weight than ultrasound performed after 38 weeks (mean difference 71.5 versus 259.4 g, p < .024). The EFW was within 1.7% of birth weight for ultrasound performed at less than 38 weeks and within 6.5% of birth weight for ultrasound performed at greater than 38 weeks. Identification of EFW with ultrasound performed at less than 38 weeks predicts fetal macrosomia at birth more reliably than measurements performed later in gestation. The EFW to birth weight agreement was more accurate than in previous reports.
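
The gram figures in this record compare the average gap between estimated fetal weight (EFW) and actual birth weight. A minimal sketch of how such accuracy metrics are computed, with invented weight pairs:

```python
# Hypothetical EFW / birth-weight pairs in grams (not the study's data).
pairs = [(4100, 4180), (4250, 4050), (4020, 4300), (4400, 4350)]

abs_errors = [abs(efw - bw) for efw, bw in pairs]
pct_errors = [abs(efw - bw) / bw * 100 for efw, bw in pairs]

mean_abs = sum(abs_errors) / len(abs_errors)
mean_pct = sum(pct_errors) / len(pct_errors)
print(f"mean |EFW - BW| = {mean_abs:.0f} g, mean error = {mean_pct:.1f}%")
```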

  12. Characterization and specificity of the linear epitope of the enterovirus 71 VP2 protein

    Kiener Tanja K

    2012-02-01

    Background: Enterovirus 71 (EV71) has emerged as a major causative agent of hand, foot and mouth disease in the Asia-Pacific region over the last decade. Hand, foot and mouth disease can be caused by different etiological agents from the enterovirus family, mainly EV71 and coxsackieviruses, which are genetically closely related. Nevertheless, infection with EV71 may occasionally lead to high fever, neurologic complications and the emergence of a rapidly fatal syndrome of pulmonary edema associated with brainstem encephalitis. The rapid progression and high mortality of severe EV71 infection have highlighted the need for EV71-specific diagnostic and therapeutic tools. Monoclonal antibodies are urgently needed to specifically detect EV71 antigens from patient specimens early in the infection process. Furthermore, the elucidation of viral epitopes will contribute to the development of targeted therapeutics and vaccines. Results: We have identified the monoclonal antibody 7C7 from a screen of hybridoma cells derived from mice immunized with the EV71-B5 strain. The linear epitope of 7C7 was mapped to amino acids 142-146 (EDSHP) of the VP2 capsid protein and was characterized in detail. Mutational analysis of the epitope showed that the aspartic acid to asparagine mutation of the EV71 subgenogroup A (BrCr) strain did not interfere with antibody recognition. In contrast, the serine to threonine mutation at position 144 of VP2, present in recently emerged EV71-C4 China strains, abolished antigenicity. Mice injected with this virus strain did not produce any antibodies against the VP2 protein. Immunofluorescence and Western blotting confirmed that 7C7 specifically recognized EV71 subgenogroups and did not cross-react with Coxsackieviruses 4, 6, 10, and 16. 7C7 was successfully used as a detection antibody in an antigen-capture ELISA assay. Conclusions: Detailed mapping showed that the VP2 protein of Enterovirus 71 contains a single, linear, non

  13. Inclusion of Population-specific Reference Panel from India to the 1000 Genomes Phase 3 Panel Improves Imputation Accuracy.

    Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R

    2017-07-27

    Imputation is a computational method based on the principle of haplotype sharing allowing enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared the imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using the genome-wide data from 1880 individuals, we demonstrate that the WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves the imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive job. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.
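
Imputation accuracy against re-sequencing data is typically summarized as genotype concordance. A toy sketch with invented dosage calls (real evaluations also stratify by minor allele frequency and use dosage r-squared):

```python
# Hypothetical genotype calls coded as minor-allele dosages 0/1/2.
sequenced = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]   # ground truth (re-sequencing)
imputed   = [0, 1, 2, 0, 0, 2, 1, 2, 0, 2]   # calls imputed from a panel

def concordance(truth, calls):
    """Fraction of variant sites where the imputed genotype matches."""
    return sum(t == c for t, c in zip(truth, calls)) / len(truth)

print(f"concordance = {concordance(sequenced, imputed):.0%}")
```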

  14. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study.

    Sallent, A; Vicente, M; Reverté, M M; Lopez, A; Rodríguez-Baeza, A; Pérez-Domínguez, M; Velez, R

    2017-10-01

    To assess the accuracy of patient-specific instruments (PSIs) versus the standard manual technique and the precision of computer-assisted planning and PSI-guided osteotomies in pelvic tumour resection. CT scans were obtained from five female cadaveric pelvises. Five osteotomies were designed using Mimics software: sacroiliac, biplanar supra-acetabular, two parallel iliopubic and ischial. For cases of the left hemipelvis, PSIs were designed to guide standard oscillating saw osteotomies and later manufactured using 3D printing. Osteotomies were performed using the standard manual technique in cases of the right hemipelvis. Post-resection CT scans were quantitatively analysed. Student's t-test and Mann-Whitney U test were used. Compared with the manual technique, PSI-guided osteotomies improved accuracy by a mean of 9.6 mm (p < …). In the manual cases, … of deviations were > 5 mm and 27% (n = 8) were > 10 mm; in the PSI cases, these figures were 10% (n = 3) and 0% (n = 0), respectively. For angular deviation from the pre-operative plans, we observed a mean improvement of 7.06° (p < …). Cite this article: A. Sallent, M. Vicente, M. M. Reverté, A. Lopez, A. Rodríguez-Baeza, M. Pérez-Domínguez, R. Velez. How 3D patient-specific instruments improve accuracy of pelvic bone tumour resection in a cadaveric study. Bone Joint Res 2017;6:577-583. DOI: 10.1302/2046-3758.610.BJR-2017-0094.R1. © 2017 Sallent et al.

  15. Towards sophisticated learning from EHRs: increasing prediction specificity and accuracy using clinically meaningful risk criteria.

    Vasiljeva, Ieva; Arandjelovic, Ognjen

    2016-08-01

    Computer based analysis of Electronic Health Records (EHRs) has the potential to provide major novel insights of benefit both to specific individuals in the context of personalized medicine, as well as on the level of population-wide health care and policy. The present paper introduces a novel algorithm that uses machine learning for the discovery of longitudinal patterns in the diagnoses of diseases. Two key technical novelties are introduced: one in the form of a novel learning paradigm which enables greater learning specificity, and another in the form of a risk driven identification of confounding diagnoses. We present a series of experiments which demonstrate the effectiveness of the proposed techniques, and which reveal novel insights regarding the most promising future research directions.

  16. Quantification of Cooperativity in Heterodimer-DNA Binding Improves the Accuracy of Binding Specificity Models*

    Isakova, Alina; Berset, Yves; Hatzimanikatis, Vassily; Deplancke, Bart

    2016-01-01

    Many transcription factors (TFs) have the ability to cooperate on DNA elements as heterodimers. Despite the significance of TF heterodimerization for gene regulation, a quantitative understanding of cooperativity between various TF dimer partners and its impact on heterodimer DNA binding specificity models is still lacking. Here, we used a novel integrative approach, combining microfluidics-steered measurements of dimer-DNA assembly with mechanistic modeling of the implicated protein-protein-DNA interactions to quantitatively interrogate the cooperative DNA binding behavior of the adipogenic peroxisome proliferator-activated receptor γ (PPARγ):retinoid X receptor α (RXRα) heterodimer. Using the high throughput MITOMI (mechanically induced trapping of molecular interactions) platform, we derived equilibrium DNA binding data for PPARγ, RXRα, as well as the PPARγ:RXRα heterodimer to more than 300 target DNA sites and variants thereof. We then quantified cooperativity underlying heterodimer-DNA binding and derived an integrative heterodimer DNA binding constant. Using this cooperativity-inclusive constant, we were able to build a heterodimer-DNA binding specificity model that has superior predictive power than the one based on a regular one-site equilibrium. Our data further revealed that individual nucleotide substitutions within the target site affect the extent of cooperativity in PPARγ:RXRα-DNA binding. Our study therefore emphasizes the importance of assessing cooperativity when generating DNA binding specificity models for heterodimers. PMID:26912662

  17. Quantification of Cooperativity in Heterodimer-DNA Binding Improves the Accuracy of Binding Specificity Models.

    Isakova, Alina; Berset, Yves; Hatzimanikatis, Vassily; Deplancke, Bart

    2016-05-06

    Many transcription factors (TFs) have the ability to cooperate on DNA elements as heterodimers. Despite the significance of TF heterodimerization for gene regulation, a quantitative understanding of cooperativity between various TF dimer partners and its impact on heterodimer DNA binding specificity models is still lacking. Here, we used a novel integrative approach, combining microfluidics-steered measurements of dimer-DNA assembly with mechanistic modeling of the implicated protein-protein-DNA interactions to quantitatively interrogate the cooperative DNA binding behavior of the adipogenic peroxisome proliferator-activated receptor γ (PPARγ):retinoid X receptor α (RXRα) heterodimer. Using the high throughput MITOMI (mechanically induced trapping of molecular interactions) platform, we derived equilibrium DNA binding data for PPARγ, RXRα, as well as the PPARγ:RXRα heterodimer to more than 300 target DNA sites and variants thereof. We then quantified cooperativity underlying heterodimer-DNA binding and derived an integrative heterodimer DNA binding constant. Using this cooperativity-inclusive constant, we were able to build a heterodimer-DNA binding specificity model that has superior predictive power than the one based on a regular one-site equilibrium. Our data further revealed that individual nucleotide substitutions within the target site affect the extent of cooperativity in PPARγ:RXRα-DNA binding. Our study therefore emphasizes the importance of assessing cooperativity when generating DNA binding specificity models for heterodimers. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
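
The "cooperativity-inclusive" constant described in this record can be illustrated with a four-state equilibrium: free DNA, each monomer bound alone, and the heterodimer, whose statistical weight carries a cooperativity factor omega. The constants below are hypothetical, not the MITOMI-derived values:

```python
def heterodimer_occupancy(A, B, Ka, Kb, omega):
    """Equilibrium fraction of DNA bound by the A:B heterodimer.

    Partition function over four states: free DNA, A bound, B bound,
    and the A:B dimer, whose weight is scaled by the cooperativity
    factor omega (omega = 1 means the two sites bind independently)."""
    Z = 1.0 + Ka * A + Kb * B + omega * Ka * Kb * A * B
    return (omega * Ka * Kb * A * B) / Z

A = B = 1e-7          # e.g. 100 nM each of the two partner TFs
Ka = Kb = 1e7         # single-site association constants, 1/M
for omega in (1.0, 10.0, 100.0):
    occ = heterodimer_occupancy(A, B, Ka, Kb, omega)
    print(f"omega = {omega:>5}: dimer occupancy = {occ:.3f}")
```

With these numbers each monomer alone half-saturates its site; raising omega shifts the equilibrium sharply toward the dimer-bound state, which is why a one-site model underestimates heterodimer binding.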

  18. A comparative evaluation of linear dimensional accuracy of the dies obtained using three conceptually different die systems in the fabrication of implant prosthesis: An in vitro study

    Manawar Ahmad

    2014-01-01

    Purpose: The purpose of the study was to evaluate the linear dimensional accuracy between the implant master die and three conceptually different die systems: the Pindex system, the Accu-trac precision die system, and the conventional brass dowel pin system. Materials and Methods: Thirty impressions of the implant master die were made with polyether impression material. Ten experimental implant casts were fabricated for each of the three die systems tested: the Accu-trac precision die tray system, the Pindex system, and the conventional brass dowel pin system. The solid experimental casts were sectioned and then removed from the die system 30 times. All six possible linear distances were measured from the centre of one transfer coping to the other, using a co-ordinate measuring machine reading in millimeters with an accuracy of 0.5 microns. Data were tabulated and statistically analyzed by the binomial nonparametric test using SPSS version 15. Results: Significant differences were found for distances A-B (P = 0.002), A-C (P = 0.002), A-D (P = 0.002), and B-D (P = 0.021) in the conventional dowel pin system; for the Accu-trac precision die tray system the difference was significant only for distance A-D (P = 0.002), while for the Pindex system it was non-significant for all distances measured. Conclusion: Within the limitations of this study, use of the Pindex system is recommended when sectioned dies are needed for a multi-implant retained prosthesis.

  19. Experimental study of sector and linear array ultrasound accuracy and the influence of navigated 3D-reconstruction as compared to MRI in a brain tumor model.

    Siekmann, Max; Lothes, Thomas; König, Ralph; Wirtz, Christian Rainer; Coburger, Jan

    2018-03-01

    Currently, intraoperative ultrasound in brain tumor surgery is a rapidly expanding imaging option. We examined the accuracy and resolution limits of different ultrasound probes and the influence of 3D-reconstruction in a phantom, and compared these results to MRI in an intraoperative setting (iMRI). An agarose gel phantom with predefined gel targets was examined with iMRI, a sector (SUS) and a linear (LUS) array probe using two-dimensional images. Additionally, 3D-reconstructed sweeps in perpendicular directions were made of every target with both probes, resulting in 392 measurements. Statistical calculations were performed, and comparative boxplots were generated. Every measurement with iMRI and LUS was more precise than with SUS, while there was no apparent difference in boxplot height between iMRI and 3D-reconstructed LUS. Measurements with 3D-reconstructed LUS were always more accurate than with 2D-LUS, while 3D-reconstruction of SUS showed nearly no differences from 2D-SUS in some measurements. We found correlations of 3D-reconstructed SUS and LUS length and width measurements with 2D results in the same image orientation. LUS provides an accuracy and resolution comparable to iMRI, while SUS is less exact than LUS and iMRI. 3D-reconstruction showed the potential to distinctly improve the accuracy and resolution of ultrasound images, although there is a strong correlation with the sweep direction during data acquisition.

  20. [Diagnostic Accuracy of the LiSe-DaZ for Children with Specific Language Impairment].

    Stephan, T; Keilmann, A

    2015-12-01

    Currently, only a few tests of speech and language development exist for bi- or multilingual children in Germany. One of them, the LiSe-DaZ (Linguistic performance measurement - German as a second language), was examined in a prospective study regarding its practicability and its sensitivity in detecting specific language impairment in a group of children aged 5 to 7 who suffered from a severe language impairment according to clinical tests. 74 children (mean age: 60 months; 46% monolingual German-speaking; 54% bi- or multilingual) with severe specific language impairment were examined with the LiSe-DaZ in addition to the clinically established diagnostics during their in-patient stay in the hospital. On average, the children showed significantly (p < …) better results in the LiSe-DaZ for receptive language abilities (LiSe-DaZ vs. TROG-D), expressive vocabulary (LiSe-DaZ vs. AWST-R or WWT) and the use of prepositions (LiSe-DaZ vs. Ravensburger Dysgrammatical clinical trial): children were diagnosed as language impaired by the clinically established tests, whereas the LiSe-DaZ considered the children's language development to be normal. This difference was consistently more prominent for children with German as a second language. Compared with the clinically established tests, the informative value of the LiSe-DaZ turned out to be insufficient: the LiSe-DaZ does not detect children in need of language therapy. Nevertheless, a norming of the established speech tests for bi- or multilingual children would be desirable to avoid unfounded judgements. © Georg Thieme Verlag KG Stuttgart · New York.

  1. Monitoring the injured brain: registered, patient specific atlas models to improve accuracy of recovered brain saturation values

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid

    2015-07-01

    The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject specific reconstruction models. This study assesses the use of registered atlas models for situations where subject specific models are not available. Data simulated from subject specific models were reconstructed using the 8 registered atlas models implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values which were accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer thickness mismatch was propagated through the reconstruction process decreasing the parameter accuracy.

  2. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors and the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved
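
Landmark-based registration as described in this record is usually a least-squares rigid fit, with accuracy reported as target registration error (TRE) at a ground-truth point. A self-contained numpy sketch of the standard Kabsch/Procrustes solution on synthetic data (all coordinates and the 1 mm noise level are invented):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping landmark set src
    onto dst, via the Kabsch/Procrustes SVD construction."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

rng = np.random.default_rng(0)
landmarks = rng.uniform(-40, 40, size=(6, 3))     # model-space fiducials, mm
target = np.array([5.0, -10.0, 20.0])             # ground-truth point (a "clip")

# Hypothetical true mapping plus 1 mm localization noise on the landmarks.
a = np.deg2rad(12)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([3.0, -2.0, 5.0])
measured = landmarks @ R_true.T + t_true + rng.normal(0.0, 1.0, landmarks.shape)

R, t = rigid_register(landmarks, measured)
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(f"target registration error = {tre:.2f} mm")
```

Increasing the variance on the landmark locations, as in the Monte Carlo studies above, directly inflates the TRE.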

  3. Poster - Thur Eve - 50: Planning and delivery accuracy of stereotactic radiosurgery with Tomotherapy as compared to linear-accelerator and robotic based radiosurgery.

    Thakur, V; Soisson, E; Ruo, R; Doucet, R; Parker, W; Seuntjens, J

    2012-07-01

    This study includes a planning and delivery comparison of three stereotactic radiosurgery techniques: Helical Tomotherapy (HT), circular collimator-based linear-accelerator radiosurgery, and robotic radiosurgery. Plans were generated for two spherical targets of diameter 6 mm and 10 mm contoured at the center of a Lucite phantom, using similar planning constraints. The planning comparison showed that average conformality (1 is best) for the linear-accelerator, robotic radiosurgery and HT was 1.43, 1.24, and 1.77, and the gradient index (less is better) was 2.72, 4.50 and 13.56, respectively. For the delivery comparison, plans were delivered to radiochromic film and the measured dose was compared with the planned dose. For the linear-accelerator and robotic radiosurgery, more than 99% of pixels passed a gamma criterion of 3% dose difference and 1 mm distance-to-agreement, whereas for HT this value was as low as 40% for off-axis targets. Further investigation of the delivery accuracy as a function of the location of the target within the bore was initiated using small-volume A1SL (0.057 cm³) and MicroLion liquid ion chambers (0.0017 cm³). Point dose measurements for targets located at the center and 10 cm away from the center of the bore showed that the delivered dose varied by more than 15% for targets placed away from the center of the bore as opposed to at the center. In conclusion, the linear-accelerator and robotic radiosurgery techniques showed preferable gradient and conformality. For HT, point dose measurements were significantly lower than predicted by the TPS when the target was positioned away from the isocenter, while they were found to be higher at the isocenter. © 2012 American Association of Physicists in Medicine.
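
The two plan-quality metrics compared in this record have simple volume-ratio definitions. A sketch with hypothetical volumes (not the study's data):

```python
# Hypothetical plan volumes in cm^3 for a 10 mm spherical target.
target_volume = 0.52   # volume of the target
v100 = 0.65            # volume enclosed by the prescription isodose
v50 = 1.90             # volume enclosed by the half-prescription isodose

conformality = v100 / target_volume   # 1.0 is ideal; > 1 means spill
gradient_index = v50 / v100           # smaller means steeper dose falloff
print(f"conformality = {conformality:.2f}, gradient index = {gradient_index:.2f}")
```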

  4. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4T.

    Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine

    2017-12-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since 2HG is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4T. Different phantoms, comprising a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p < …) and with the J-difference editing method (adjusted R-squared: 0.908, p < …). The regression model with the J-difference editing method, however, had a significantly higher explanatory value than the regression model with the PRESS method (p < …). With J-difference editing, 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and showed higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG by linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentrations and high concentrations of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex postprocessing. Copyright © 2017. Published by Elsevier GmbH.
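
Quantification "by linear combination of metabolite signals" amounts to least-squares fitting the measured spectrum against a basis of metabolite spectra; overlap between the 2HG resonances and neighbouring Glu/NAA peaks is what drives the systematic error. A toy numpy sketch with single Lorentzian lines standing in for real basis spectra (shifts, linewidths, and concentrations are invented):

```python
import numpy as np

ppm = np.linspace(1.0, 4.5, 700)

def peak(center, width=0.06):
    """Lorentzian line as a stand-in for a metabolite basis spectrum."""
    return 1.0 / (1.0 + ((ppm - center) / width) ** 2)

# Toy basis: one resonance each near approximately realistic shifts;
# the 2HG line (~2.25 ppm) overlaps Glu (~2.35 ppm), near NAA (~2.01 ppm).
basis = np.column_stack([peak(2.25), peak(2.35), peak(2.01)])  # 2HG, Glu, NAA
true_conc = np.array([1.0, 8.0, 10.0])     # 2HG is the minor component

rng = np.random.default_rng(1)
spectrum = basis @ true_conc + rng.normal(0.0, 0.05, ppm.size)

fit, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
for name, t, f in zip(["2HG", "Glu", "NAA"], true_conc, fit):
    print(f"{name}: true {t:4.1f}, fitted {f:5.2f}")
```

Because the 2HG and Glu columns are strongly correlated, noise moves the fitted 2HG weight much more, in relative terms, than the weights of the larger, better-separated peaks.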

  5. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C. [Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Hine, N. D. M. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Haynes, P. D. [Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making it necessary to treat large systems, which are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
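    The preconditioned conjugate gradient step mentioned above can be illustrated on a small symmetric positive-definite system. The solver and the Jacobi (diagonal) preconditioner below are a generic sketch of the iterative technique, not the paper's implementation:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    system A x = b; M_inv applies the (approximate) inverse preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    z = M_inv(r)                   # preconditioned residual
    p = z.copy()                   # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate update of the search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))  # Jacobi preconditioner
```

    A good preconditioner reduces the effective condition number of the system, which is what yields the improved convergence rates reported above.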

  6. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4 T

    Neuberger, Ulf; Fischer, Manuel; Heiland, Sabine [Univ. of Heidelberg Medical Center, Heidelberg (Germany). Dept. of Neuroradiology; Univ. of Heidelberg Medical Center, Heidelberg (Germany). Div. of Experimental Radiology; Kickingereder, Philipp; Bendszus, Martin [Univ. of Heidelberg Medical Center, Heidelberg (Germany). Dept. of Neuroradiology; Helluy, Xavier [Bochum Univ. (Germany). Dept. of Biopsychology

    2017-07-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4 T. Different phantoms, composed of a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved-spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p < 0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p < 0.001). The regression model with the J-difference editing method, however, had a significantly higher explanatory value than the regression model with the PRESS method (p < 0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and with higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG with linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentration and high concentration of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers a more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex postprocessing.
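    The adjusted R-squared figures reported above come from ordinary least-squares regression of measured against true concentrations. A minimal sketch, with invented concentration values rather than the study's phantom series:

```python
import numpy as np

def adjusted_r_squared(true_conc, measured_conc):
    """Fit measured = slope * true + intercept and return adjusted R^2."""
    x = np.asarray(true_conc, dtype=float)
    y = np.asarray(measured_conc, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)       # simple linear regression
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)              # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)         # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    n, p = len(x), 1                             # n samples, 1 predictor
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Illustrative data: true 2HG concentrations (mM) vs. quantified values.
print(adjusted_r_squared([0, 1, 2, 4, 6, 8], [0.1, 0.8, 2.3, 4.1, 5.7, 8.2]))
```

    With a single predictor the adjustment is small, but it is the appropriate statistic for comparing regression models, as done between the PRESS and J-difference editing fits.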

  7. Sensitivity, specificity, predictive value and accuracy of ultrasonography in pregnancy rate prediction in Sahelian goats after progesterone impregnated sponge synchronization

    Justin Kouamo

    2014-09-01

    Aim: This study aimed to evaluate the sensitivity, specificity, predictive value and accuracy of ultrasonography in pregnancy rate (PR) prediction in Sahelian goats after progesterone impregnated sponge synchronization within the framework of a caprine artificial insemination (AI) program in Fatick (Senegal). Materials and Methods: Of 193 candidate goats in the AI program, 167 were selected (day −50) in six villages. Estrus was synchronized by progesterone impregnated sponges installed for 11 days. Two days before the time of sponge removal (day −4), each goat was treated with 500 IU of equine chorionic gonadotropin and 50 μg of d-cloprostenol. All goats were inseminated (day 0) with Alpine goat semen from France at 45±3 h after sponge removal (day −2). Real-time B-mode ultrasonography was performed at day −50, day −13, day 0, day 40 and day 60 post-AI. Results: Selection rate, estrus response rate, AI rate, and PR at days 40 and 60 were 86.53%, 71.85%, 83.34%, 51% and 68% (p<0.05), respectively. Values of sensitivity, specificity, positive and negative predictive value, accuracy, total conformity, conformity of correct positive, conformity of correct negative and discordance of pregnancy diagnosis by trans-abdominal ultrasonography (TU) were 98.03%, 63.26%, 73.52%, 3.12%, 81%, 81%, 50%, 31% and 19%, respectively. Conclusion: These results indicate that TU can be performed in goats under traditional conditions and emphasize the importance of re-examination of goats with negative or doubtful TU diagnoses performed at day 40 post-AI.
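    The metrics reported above all derive from a 2x2 confusion matrix of ultrasound diagnosis against true pregnancy status. The counts below are hypothetical (the study does not report the raw matrix); the formulas are the standard definitions:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Diagnostic test metrics from a 2x2 confusion matrix
    (tp/fp/tn/fn = true/false positive/negative counts)."""
    sensitivity = tp / (tp + fn)                 # positives detected among pregnant
    specificity = tn / (tn + fp)                 # negatives detected among non-pregnant
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall agreement
    return sensitivity, specificity, ppv, npv, accuracy

# Invented counts for illustration only.
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=50, fp=18, tn=31, fn=1)
```

    Note that predictive values, unlike sensitivity and specificity, depend on the prevalence of pregnancy in the herd, which is why they should not be compared across populations.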

  8. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy.

    Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A

    2013-02-01

    Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight parameters were determined.
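    The residual distance error used above as the anatomical accuracy measure is simply the Euclidean distance between each lipiodol marker's position after registration and its observed position on the repeat CT. A minimal sketch, with made-up 3D marker coordinates:

```python
import numpy as np

def residual_distance_errors(registered_pts, target_pts):
    """Per-marker Euclidean distances between registered and target positions."""
    a = np.asarray(registered_pts, dtype=float)
    b = np.asarray(target_pts, dtype=float)
    return np.linalg.norm(a - b, axis=1)  # one distance (e.g. in mm) per marker

# Illustrative coordinates (mm); not patient data.
errors = residual_distance_errors([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]],
                                  [[3.0, 4.0, 0.0], [10.0, 0.0, 1.0]])
mean_rde = errors.mean()  # summary of anatomical accuracy
```

    Summarising the per-marker distances (mean or maximum RDE) gives a single anatomical accuracy figure per registration, which can then be compared across algorithm variants.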

  9. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-01-01

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors’ unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight

  10. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A. [Department of Radiotherapy, Academic Medical Center, Meiberdreef 9, 1105 AZ Amsterdam (Netherlands); Bondar, L.; Zolnay, A. G.; Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2013-02-15

    Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight

  11. The Use of Graphs in Specific Situations of the Initial Conditions of Linear Differential Equations

    Buendía, Gabriela; Cordero, Francisco

    2013-01-01

    In this article, we present a discussion on the role of graphs and its significance in the relation between the number of initial conditions and the order of a linear differential equation, which is known as the initial value problem. We propose to make a functional framework for the use of graphs that intends to broaden the explanations of the…

  12. Clinical accuracy of a patient-specific femoral osteotomy guide in minimally-invasive posterior hip arthroplasty.

    Schneider, Adrian K; Pierrepont, Jim W; Hawdon, Gabrielle; McMahon, Stephen

    2018-04-01

    Patient-specific guides can be a valuable tool in improving the precision of planned femoral neck osteotomies, especially in minimally invasive hip surgery, where bony landmarks are often inaccessible. The aim of our study was to validate the accuracy of a novel patient-specific femoral osteotomy guide for THR through a minimally invasive posterior approach, the direct superior approach (DSA). As part of our routine preoperative planning, 30 patients underwent low dose CT scans of their arthritic hip. 3D printed patient-specific femoral neck osteotomy guides were then produced. Intraoperatively, having cleared all soft tissue from the posterolateral neck of the enlocated hip, the guide was placed and pinned onto the posterolateral femoral neck. The osteotomy was performed using an oscillating saw and the uncemented hip components were implanted as per routine. Postoperatively, the achieved level of the osteotomy at the medial calcar was compared with the planned level of resection using a 3D/2D matching analysis (Mimics X-ray module, Materialise, Belgium). A total of 30 patients undergoing uncemented Trinity™ acetabular and TriFit TS™ femoral component arthroplasty (Corin, UK) were included in our analysis. All but one of the analysed osteotomies were found to be within 3 mm of the planned height of osteotomy. In one patient the level of osteotomy deviated 5 mm below the planned level of resection. Preoperative planning and the use of patient-specific osteotomy guides provide an accurate method of performing femoral neck osteotomies in minimally invasive hip arthroplasty using the direct superior approach. Level of evidence: IV (case series).

  13. The Accuracy of 3D Optical Reconstruction and Additive Manufacturing Processes in Reproducing Detailed Subject-Specific Anatomy

    Paolo Ferraiuoli

    2017-10-01

    3D reconstruction and 3D printing of subject-specific anatomy is a promising technology for supporting clinicians in the visualisation of disease progression and planning for surgical intervention. In this context, the 3D model is typically obtained from segmentation of magnetic resonance imaging (MRI), computed tomography (CT) or echocardiography images. Although these modalities allow imaging of the tissues in vivo, assessment of the quality of the reconstruction is limited by the lack of a reference geometry, as the subject-specific anatomy is unknown prior to image acquisition. In this work, an optical method based on 3D digital image correlation (3D-DIC) techniques is used to reconstruct the shape of the surface of an ex vivo porcine heart. This technique requires two digital charge-coupled device (CCD) cameras to provide full-field shape measurements and to generate a standard tessellation language (STL) file of the sample surface. The aim of this work was to quantify the error of 3D-DIC shape measurements using the additive manufacturing process. The limitations of 3D printed object resolution and the discrepancy between the reconstructed surface of the cardiac soft tissue and a 3D printed model of the same surface were evaluated. The results obtained demonstrated the ability of the 3D-DIC technique to reconstruct localised and detailed features on the cardiac surface with sub-millimeter accuracy.

  14. Methodological quality of diagnostic accuracy studies on non-invasive coronary CT angiography: influence of QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) items on sensitivity and specificity

    Schueler, Sabine; Walther, Stefan; Schuetz, Georg M. [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite Medical School, Department of Radiology, Berlin (Germany); Schlattmann, Peter [University Hospital of Friedrich Schiller University Jena, Department of Medical Statistics, Informatics, and Documentation, Jena (Germany); Dewey, Marc [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite Medical School, Department of Radiology, Berlin (Germany); Charite, Institut fuer Radiologie, Berlin (Germany)

    2013-06-15

    To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75% of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy, with "no fulfilment" increasing specificity from 86% to 90%. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. (orig.)

  15. Methodological quality of diagnostic accuracy studies on non-invasive coronary CT angiography: influence of QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) items on sensitivity and specificity

    Schueler, Sabine; Walther, Stefan; Schuetz, Georg M.; Schlattmann, Peter; Dewey, Marc

    2013-01-01

    To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75% of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy, with "no fulfilment" increasing specificity from 86% to 90%. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. (orig.)

  16. Patient-specific guides do not improve accuracy in total knee arthroplasty: a prospective randomized controlled trial.

    Victor, Jan; Dujardin, Jan; Vandenneucker, Hilde; Arnout, Nele; Bellemans, Johan

    2014-01-01

    Recently, patient-specific guides (PSGs) have been introduced, claiming a significant improvement in accuracy and reproducibility of component positioning in TKA. Despite intensive marketing by the manufacturers, this claim has not yet been confirmed in a controlled prospective trial. We (1) compared three-planar component alignment and overall coronal mechanical alignment between PSG and conventional instrumentation and (2) logged the need for applying changes in the suggested position of the PSG. In this randomized controlled trial, we enrolled 128 patients. In the PSG cohort, surgical navigation was used as an intraoperative control. When the suggested cut deviated more than 3° from target, the use of PSG was abandoned and marked as an outlier. When cranial-caudal position or size was adapted, the PSG was marked as modified. All patients underwent long-leg standing radiography and CT scan. Deviation of more than 3° from the target in any plane was defined as an outlier. The PSG and conventional cohorts showed similar numbers of outliers in overall coronal alignment (25% versus 28%; p = 0.69), femoral coronal alignment (7% versus 14%; p = 0.24), and femoral axial alignment (23% versus 17%; p = 0.50). There were more outliers in tibial coronal (15% versus 3%; p = 0.03) and sagittal (21% versus 3%; p = 0.002) alignment in the PSG group than in the conventional group. PSGs were abandoned in 14 patients (22%) and modified in 18 (28%). PSGs do not improve accuracy in TKA and, in our experience, were somewhat impractical in that the procedure needed to be either modified or abandoned with some frequency.
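    The trial's outlier criterion (a deviation of more than 3° from target in any measured plane) is straightforward to make concrete. The per-patient angles below are invented for illustration:

```python
def count_outliers(patients, threshold=3.0):
    """Count patients whose alignment deviates more than `threshold` degrees
    from target in at least one plane.

    patients: list of dicts mapping plane name -> deviation in degrees."""
    return sum(any(abs(dev) > threshold for dev in planes.values())
               for planes in patients)

# Hypothetical deviations (degrees) per patient and plane.
patients = [
    {"coronal": 1.2, "sagittal": 0.8, "axial": 2.9},   # within tolerance
    {"coronal": 3.5, "sagittal": 1.1, "axial": 0.4},   # coronal outlier
    {"coronal": 2.0, "sagittal": 4.2, "axial": 1.0},   # sagittal outlier
]
n_outliers = count_outliers(patients)  # -> 2 with these invented angles
```

    Comparing such outlier counts between the PSG and conventional cohorts is what yields the proportions and p-values quoted in the abstract.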

  17. Linear expansion, specific heat and thermodynamic properties of the CdTl2Te4 compound

    Karimov, S.K.

    1979-01-01

    Presented are the results of studying the temperature dependence of the thermal expansion coefficient for poly- and monocrystalline samples of CdTl2Te4. The coefficient of linear expansion perpendicular to the axis of growth (0010) is shown to be larger than the coefficient of linear expansion along the axis. The temperature dependence of the heat capacity C_p(T) has been obtained and used to tabulate adjusted values of C_p(T); the values of entropy and enthalpy are calculated. Standard values of these parameters are as follows: C_p = 42.90 cal/(mol·K); S°(298.15 K) = 78.95 ± 0.32 cal/(mol·K); and ΔH°(298.15 K) = 10629 ± 31 cal/mol. The lattice contribution and the thermal expansion contribution to the heat capacity are calculated. The Debye characteristic temperature, the isothermal coefficient of compressibility, and the Grueneisen constant are determined. The calculations testify to the prevalence of the repulsive force along the axis (0010) over the attractive force
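    The entropy and enthalpy values above follow from the measured heat-capacity curve via the standard integrals S(T2) − S(T1) = ∫ C_p/T dT and H(T2) − H(T1) = ∫ C_p dT. A sketch using trapezoidal integration over an illustrative (not the published) C_p table:

```python
import numpy as np

# Illustrative heat-capacity table; not the measured data for CdTl2Te4.
T = np.array([100.0, 150.0, 200.0, 250.0, 298.15])   # temperature, K
Cp = np.array([30.0, 35.0, 38.5, 41.0, 42.9])        # heat capacity, cal/(mol*K)

def trapezoid(y, x):
    """Trapezoidal-rule integral of y(x) over the tabulated points."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

entropy_gain = trapezoid(Cp / T, T)   # cal/(mol*K): S(298.15 K) - S(100 K)
enthalpy_gain = trapezoid(Cp, T)      # cal/mol:     H(298.15 K) - H(100 K)
```

    In practice the integration must start near 0 K (with a Debye extrapolation below the lowest measured temperature) to obtain absolute standard entropies like the S°(298.15 K) value quoted above.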

  18. Single shot trajectory design for region-specific imaging using linear and nonlinear magnetic encoding fields.

    Layton, Kelvin J; Gallichan, Daniel; Testud, Frederik; Cocosco, Chris A; Welz, Anna M; Barmet, Christoph; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2013-09-01

    It has recently been demonstrated that nonlinear encoding fields result in a spatially varying resolution. This work develops an automated procedure to design single-shot trajectories that create a local resolution improvement in a region of interest. The technique is based on the design of optimized local k-space trajectories and can be applied to arbitrary hardware configurations that employ any number of linear and nonlinear encoding fields. The trajectories designed in this work are tested with the currently available hardware setup consisting of three standard linear gradients and two quadrupolar encoding fields generated from a custom-built gradient insert. A field camera is used to measure the actual encoding trajectories up to third-order terms, enabling accurate reconstructions of these demanding single-shot trajectories, although the eddy current and concomitant field terms of the gradient insert have not been completely characterized. The local resolution improvement is demonstrated in phantom and in vivo experiments. Copyright © 2012 Wiley Periodicals, Inc.

  19. Region specific optimization of continuous linear attenuation coefficients based on UTE (RESOLUTE)

    Ladefoged, Claes N; Benoit, Didier; Law, Ian

    2015-01-01

    The reconstruction of PET brain data in a PET/MR hybrid scanner is challenging in the absence of transmission sources, where MR images are used for MR-based attenuation correction (MR-AC). The main challenge of MR-AC is to separate bone and air, as neither has a signal in traditional MR images......-valued linear attenuation coefficients in bone that provides accurate reconstructed PET image data. A total of 164 [(18)F]FDG PET/MR patients were included in this study, of which 10 were used for training. MR-AC was based on either standard CT (reference), UTE or our method (RESOLUTE). The reconstructed PET...... on the reconstructed PET images, as well as limiting the number and extent of the outliers....

  20. Verification of Accuracy of CyberKnife Tumor-tracking Radiation Therapy Using Patient-specific Lung Phantoms

    Jung, Jinhong; Song, Si Yeol; Yoon, Sang Min; Kwak, Jungwon; Yoon, KyoungJun; Choi, Wonsik; Jeong, Seong-Yun; Choi, Eun Kyung; Cho, Byungchul

    2015-01-01

    Purpose: To investigate the accuracy of the CyberKnife Xsight Lung Tracking System (XLTS) compared with that of a fiducial-based target tracking system (FTTS) using patient-specific lung phantoms. Methods and Materials: Three-dimensional printing technology was used to make individualized lung phantoms that closely mimicked the lung anatomy of actual patients. Based on planning computed tomographic data from 6 lung cancer patients who underwent stereotactic ablative radiation therapy using the CyberKnife, the volume above a certain Hounsfield unit (HU) was assigned as the structure to be filled uniformly with polylactic acid material by a 3-dimensional printer (3D Edison, Lokit, Korea). We evaluated the discrepancies between the measured and modeled target positions, representing the total tracking error, using 3 log files that were generated during each treatment for both the FTTS and the XLTS. We also analyzed the γ index between the film dose measured under the FTTS and XLTS. Results: The overall mean values and standard deviations of total tracking errors for the FTTS were 0.36 ± 0.39 mm, 0.15 ± 0.64 mm, and 0.15 ± 0.62 mm for the craniocaudal (CC), left–right (LR), and anteroposterior (AP) components, respectively. Those for the XLTS were 0.38 ± 0.54 mm, 0.13 ± 0.18 mm, and 0.14 ± 0.37 mm for the CC, LR, and AP components, respectively. The average of γ passing rates was 100% for the criteria of 3%, 3 mm; 99.6% for the criteria of 2%, 2 mm; and 86.8% for the criteria of 1%, 1 mm. Conclusions: The XLTS has segmentation accuracy comparable with that of the FTTS and small total tracking errors.

  1. Verification of Accuracy of CyberKnife Tumor-tracking Radiation Therapy Using Patient-specific Lung Phantoms

    Jung, Jinhong [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Department of Radiation Oncology, Kyung Hee University Medical Center, Kyung Hee University School of Medicine, Seoul (Korea, Republic of); Song, Si Yeol, E-mail: coocoori@gmail.com [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Yoon, Sang Min; Kwak, Jungwon; Yoon, KyoungJun [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Wonsik [Department of Radiation Oncology, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung (Korea, Republic of); Jeong, Seong-Yun [Asan Institute for Life Science, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of); Choi, Eun Kyung; Cho, Byungchul [Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of)

    2015-07-15

    Purpose: To investigate the accuracy of the CyberKnife Xsight Lung Tracking System (XLTS) compared with that of a fiducial-based target tracking system (FTTS) using patient-specific lung phantoms. Methods and Materials: Three-dimensional printing technology was used to make individualized lung phantoms that closely mimicked the lung anatomy of actual patients. Based on planning computed tomographic data from 6 lung cancer patients who underwent stereotactic ablative radiation therapy using the CyberKnife, the volume above a certain Hounsfield unit (HU) was assigned as the structure to be filled uniformly with polylactic acid material by a 3-dimensional printer (3D Edison, Lokit, Korea). We evaluated the discrepancies between the measured and modeled target positions, representing the total tracking error, using 3 log files that were generated during each treatment for both the FTTS and the XLTS. We also analyzed the γ index between the film dose measured under the FTTS and XLTS. Results: The overall mean values and standard deviations of total tracking errors for the FTTS were 0.36 ± 0.39 mm, 0.15 ± 0.64 mm, and 0.15 ± 0.62 mm for the craniocaudal (CC), left–right (LR), and anteroposterior (AP) components, respectively. Those for the XLTS were 0.38 ± 0.54 mm, 0.13 ± 0.18 mm, and 0.14 ± 0.37 mm for the CC, LR, and AP components, respectively. The average of γ passing rates was 100% for the criteria of 3%, 3 mm; 99.6% for the criteria of 2%, 2 mm; and 86.8% for the criteria of 1%, 1 mm. Conclusions: The XLTS has segmentation accuracy comparable with that of the FTTS and small total tracking errors.

  2. Specific heat of the Ising linear chain in a Random field

    Silva, P.R.; Sa Barreto, F.C. de

    1984-01-01

    Starting from correlation identities for the Ising model, the effect of a random field on the one-dimensional version of the model is studied. Explicit results for the magnetization, the two-particle correlation function and the specific heat are obtained for an uncorrelated distribution of the random fields. (Author) [pt

  3. Accuracy of fibula reconstruction using patient-specific CAD/CAM reconstruction plates and dental implants : A new modality for functional reconstruction of mandibular defects

    Schepers, Rutger H.; Raghoebar, Gerry M.; Vissink, Arjan; Stenekes, Martin W.; Kraeima, Joep; Roodenburg, Jan L.; Reintsema, Harry; Witjes, Max J.

    Background: The purpose of this study was to analyze the accuracy of mandibular reconstruction using patient-specific computer-aided designed and computer-aided manufactured (CAD/CAM) reconstruction plates as a guide to place fibula grafts and dental implants in a one-stage procedure using

  4. The accuracy of chest radiographs in the detection of congenital heart disease and in the diagnosis of specific congenital cardiac lesions

    Laya, Bernard F.; Goske, Marilyn J.; Morrison, Stuart; Reid, Janet R.; Swischuck, Leonard; Ey, Elizabeth H.; Murphy, Daniel J.; Lieber, Michael; Obuchowski, Nancy

    2006-01-01

    Congenital heart disease (CHD) is a significant cause of morbidity and mortality in pediatric patients. Traditional teaching holds that specific types of CHD can be diagnosed on the chest radiograph (CXR) through pattern recognition. The aim of this study was to determine the accuracy of radiologists in detecting CHD on the CXR. This study was a blinded retrospective review of chest radiographs from 281 patients (<12 years) by five pediatric radiologists from three institutions. Thirteen groups were evaluated, comprising 12 categories of CHD and a control group of patients without heart disease. Radiographs were assessed for heart size, heart and mediastinal shape, and vascularity. Clinical information, angiography, echocardiograms and surgery served as the gold standard for definitive diagnosis. The average accuracy of the five readers in distinguishing normal from CHD patients was 78% (range 72% to 82%). The overall accuracy in distinguishing specific congenital cardiac lesions among the 13 groups was 71% (range 63% to 79%). CXR alone is therefore not diagnostic of specific cardiac lesions, with an accuracy of only 71%. We believe that less emphasis should be placed on the use of radiographs alone in diagnosing specific congenital cardiac lesions. (orig.)

  5. A comparative study of accuracy of linear measurements using cone beam and multi-slice computed tomographies for evaluation of mandibular canal location in dry mandibles.

    Naser, Asieh Zamani; Mehr, Bahar Behdad

    2013-01-01

    Cross-sectional tomograms have been used for optimal pre-operative planning of dental implant placement. The aim of the present study was to assess the accuracy of Cone Beam Computed Tomography (CBCT) measurements of specific distances around the mandibular canal by comparing them to those obtained from Multi-Slice Computed Tomography (MSCT) images. Ten hemi-mandible specimens were examined using CBCT and MSCT. Before imaging, wires were placed at 7 locations between the anterior margin of the third molar and the anterior margin of the second premolar as reference points. The following distances were measured by two observers on each cross-sectional CBCT and MSCT image: Mandibular Width (W), Length (L), Upper Distance (UD), Lower Distance (LD), Buccal Distance (BD), and Lingual Distance (LID). The obtained data were evaluated using SPSS software, applying the paired t-test and the intra-class correlation coefficient (ICC). There was a significant difference between the values obtained by MSCT and CBCT measurement for all distances (W, L, UD, LD, BD, and LID; P < 0.001), although the difference was less than 1 mm. The ICC for all distances by both techniques, measured by a single observer at a one-week interval and between 2 observers, was 99% and 98%, respectively. Comparing the data from both techniques indicates that the difference between the two techniques is 2.17% relative to MSCT. The results of this study showed that there is a statistically significant difference between measurements obtained by CBCT and MSCT. However, the difference is not clinically significant.
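
    The paired comparison used above (a paired t-test on per-site CBCT vs MSCT distances) can be sketched as follows, with made-up measurement pairs rather than the study's data; SciPy's `ttest_rel` is assumed available:

```python
from scipy import stats

# Hypothetical paired linear measurements (mm) of the same sites
cbct = [10.2, 11.5, 9.8, 12.1, 10.9, 11.2]
msct = [10.6, 11.9, 10.3, 12.4, 11.4, 11.7]  # systematically ~0.4 mm larger

t, p = stats.ttest_rel(cbct, msct)           # paired (dependent-samples) t-test
mean_diff = sum(m - c for c, m in zip(cbct, msct)) / len(cbct)
print(f"mean difference {mean_diff:.2f} mm, p = {p:.4f}")
```

    A small but consistent bias like this can be highly statistically significant (p < 0.001 here) while remaining clinically irrelevant, which is exactly the distinction the abstract draws.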

  6. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5%≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations with that of a population-specific high-coverage (30×) whole-genome sequencing (WGS) based reference panel, comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants was significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  7. Diagnostic accuracy of repetition tasks for the identification of specific language impairment (SLI) in bilingual children: evidence from Russian and Hebrew.

    Armon-Lotem, Sharon; Meir, Natalia

    2016-11-01

    Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have addressed the diagnostic accuracy of repetition tasks in bilingual children, and most available research focuses on English-Spanish sequential bilinguals. To evaluate the efficacy of three repetition tasks (forward digit span (FWD), NWR and SRep) in order to distinguish mono- and bilingual children with and without SLI in Russian and Hebrew. A total of 230 mono- and bilingual children aged 5;5-6;8 participated in the study: 144 bilingual Russian-Hebrew-speaking children (27 with SLI); and 52 monolingual Hebrew-speaking children (14 with SLI) and 34 monolingual Russian-speaking children (14 with SLI). Parallel repetition tasks were designed in both Russian and Hebrew. Bilingual children were tested in both languages. The findings confirmed that NWR and SRep are valuable tools in distinguishing monolingual children with and without SLI in Russian and Hebrew, while the results for FWD were mixed. Yet, testing of bilingual children with the same tools using monolingual cut-off points resulted in inadequate diagnostic accuracy. We demonstrate, however, that the use of bilingual cut-off points yielded acceptable levels of diagnostic accuracy. The combination of SRep tasks in L1/Russian and L2/Hebrew yielded the highest overall accuracy (i.e., 94%), but even SRep alone in L2/Hebrew showed excellent levels of sensitivity (i.e., 100%) and specificity (i.e., 89%), reaching 91% of total diagnostic accuracy. The results are very promising for identifying SLI in bilingual children and for showing that testing in the majority language with bilingual cut-off points can provide an accurate classification. © 2016 Royal College of Speech and Language

  8. Non-linear partial least square regression increases the estimation accuracy of grass nitrogen and phosphorus using in situ hyperspectral and environmental data

    Ramoelo, Abel

    2013-06-01

    In situ hyperspectral and environmental variables yielded the highest grass N and P estimation accuracy (R2 = 0.81, root mean square error (RMSE) = 0.08, and R2 = 0.80, RMSE = 0.03, respectively) as compared to using remote sensing variables only...

  9. Impact of obesity on the predictive accuracy of prostate-specific antigen density and prostate-specific antigen in native Korean men undergoing prostate biopsy.

    Kim, Jae Heon; Doo, Seung Whan; Yang, Won Jae; Lee, Kwang Woo; Lee, Chang Ho; Song, Yun Seob; Jeon, Yoon Su; Kim, Min Eui; Kwon, Soon-Sun

    2014-10-01

    To evaluate the impact of obesity on the biopsy detection of prostate cancer. We retrospectively reviewed data of 1182 consecutive Korean patients (≥50 years) with serum prostate-specific antigen levels of 3-10 ng/mL who underwent initial extended 12-core biopsy from September 2009 to March 2013. Patients who took medications that were likely to influence the prostate-specific antigen level were excluded. Receiver operating characteristic curves were plotted for prostate-specific antigen and prostate-specific antigen density predicting cancer status among non-obese and obese men. A total of 1062 patients (mean age 67.1 years) were enrolled in the analysis. A total of 230 men (21.7%) had a positive biopsy. In the overall study sample, the areas under the receiver operating characteristic curve of serum prostate-specific antigen for predicting prostate cancer on biopsy were 0.584 and 0.633 for non-obese and obese men, respectively (P = 0.234). However, the area under the curve for prostate-specific antigen density in predicting cancer status showed a significant difference (non-obese 0.696, obese 0.784; P = 0.017). There seems to be a significant difference in the ability of prostate-specific antigen density to predict biopsy results between non-obese and obese men. Obesity positively influenced the overall ability of prostate-specific antigen density to predict prostate cancer. © 2014 The Japanese Urological Association.
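
    The areas under the ROC curve reported above have a simple rank interpretation: the probability that a randomly chosen cancer case has a higher marker value than a randomly chosen non-case. A minimal sketch with hypothetical PSA density values (not the study's data):

```python
def auc(cases, controls):
    """AUC via its rank-statistic (Mann-Whitney) definition: the fraction of
    case/control pairs in which the case scores higher (ties count half)."""
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical PSA density values (ng/mL/cc) for biopsy-positive and -negative men
psad_cancer = [0.32, 0.41, 0.28, 0.55, 0.38]
psad_benign = [0.18, 0.25, 0.30, 0.22, 0.35]
print(round(auc(psad_cancer, psad_benign), 2))
```

    An AUC of 0.5 means the marker is no better than chance; values such as the 0.784 reported for PSA density in obese men indicate useful discrimination.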

  10. Sensitivity, Specificity, Predictive Values, and Accuracy of Three Diagnostic Tests to Predict Inferior Alveolar Nerve Blockade Failure in Symptomatic Irreversible Pulpitis

    Daniel Chavarría-Bolaños

    2017-01-01

    Introduction. The inferior alveolar nerve block (IANB) is the most common anesthetic technique used on mandibular teeth during root canal treatment. Its success in the presence of preoperative inflammation is still controversial. The aim of this study was to evaluate the sensitivity, specificity, predictive values, and accuracy of three diagnostic tests used to predict IANB failure in symptomatic irreversible pulpitis (SIP). Methodology. A cross-sectional study was carried out on the mandibular molars of 53 patients with SIP. All patients received a single cartridge of mepivacaine 2% with 1:100000 epinephrine using the IANB technique. Three diagnostic clinical tests were performed to detect anesthetic failure, which was defined as a positive painful response to any of the three tests. Sensitivity, specificity, predictive values, accuracy, and ROC curves were calculated and compared, and significant differences were analyzed. Results. IANB failure was determined in 71.7% of the patients. The sensitivity scores for the three tests (lip numbness, the cold stimuli test, and responsiveness during endodontic access) were 0.03, 0.35, and 0.55, respectively, and the specificity score was 1 for all of the tests. Clinically, none of the evaluated tests demonstrated a high enough accuracy (0.30, 0.53, and 0.68 for lip numbness, the cold stimuli test, and responsiveness during endodontic access, respectively). A comparison of the areas under the curve in the ROC analyses showed statistically significant differences between the three tests (p<0.05). Conclusion. None of the analyzed tests demonstrated a high enough accuracy to be considered a reliable diagnostic tool for the prediction of anesthetic failure.

  11. The accuracy of linear measurements of maxillary and mandibular edentulous sites in cone-beam computed tomography images with different fields of view and voxel sizes under simulated clinical conditions

    Ganguly, Rumpa; Ramesh, Aruna; Pagni, Sarah [Tufts University School of Dental Medicine, Boston (United States)

    2016-06-15

    The objective of this study was to investigate the effect of varying resolutions of cone-beam computed tomography images on the accuracy of linear measurements of edentulous areas in human cadaver heads. Intact cadaver heads were used to simulate a clinical situation. Fiduciary markers were placed in the edentulous areas of 4 intact embalmed cadaver heads. The heads were scanned with two different CBCT units using a large field of view (13 cm×16 cm) and small field of view (5 cm×8 cm) at varying voxel sizes (0.3 mm, 0.2 mm, and 0.16 mm). The ground truth was established with digital caliper measurements. The imaging measurements were then compared with caliper measurements to determine accuracy. The Wilcoxon signed rank test revealed no statistically significant difference between the medians of the physical measurements obtained with calipers and the medians of the CBCT measurements. A comparison of accuracy among the different imaging protocols revealed no significant differences as determined by the Friedman test. The intraclass correlation coefficient was 0.961, indicating excellent reproducibility. Inter-observer variability was determined graphically with a Bland-Altman plot and by calculating the intraclass correlation coefficient. The Bland-Altman plot indicated very good reproducibility for smaller measurements but larger discrepancies with larger measurements. The CBCT-based linear measurements in the edentulous sites using different voxel sizes and FOVs are accurate compared with the direct caliper measurements of these sites. Higher resolution CBCT images with smaller voxel size did not result in greater accuracy of the linear measurements.
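
    The Bland-Altman assessment of inter-observer agreement used above reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement. A minimal sketch with hypothetical paired observer measurements (not the study's data):

```python
import statistics as st

# Hypothetical linear measurements (mm) of the same sites by two observers
obs1 = [4.1, 6.3, 8.0, 10.2, 12.5, 15.1]
obs2 = [4.0, 6.5, 7.8, 10.5, 12.1, 15.8]

diffs = [a - b for a, b in zip(obs1, obs2)]
bias = st.mean(diffs)                      # systematic difference between observers
sd = st.stdev(diffs)                       # scatter of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
print(f"bias {bias:.2f} mm, LoA {loa[0]:.2f} to {loa[1]:.2f} mm")
```

    Plotting each pair's difference against its mean (the Bland-Altman plot proper) additionally reveals whether disagreement grows with measurement size, as the abstract reports for larger measurements.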

  12. Accuracy of specimen-specific nonlinear finite element analysis for evaluation of distal radius strength in cadaver material.

    Matsuura, Yusuke; Kuniyoshi, Kazuki; Suzuki, Takane; Ogawa, Yasufumi; Sukegawa, Koji; Rokkaku, Tomoyuki; Takahashi, Kazuhisa

    2014-11-01

    Distal radius fracture, which often occurs in the setting of osteoporosis, can lead to permanent deformity and disability. Great effort has been directed toward developing noninvasive methods for evaluating distal radius strength, with the goal of assessing fracture risk. The aim of this study was to evaluate distal radius strength using a finite element model and to gauge the accuracy of the finite element measurement using cadaver material. Ten wrists were obtained from cadavers with a mean age of 89.5 years at death. CT images of each wrist in an extended position were obtained. CT-based finite element models were prepared with Mechanical Finder software. Fracture on the models was simulated by applying a mechanical load to the palm in a direction parallel to the forearm axis, after which the fracture load and the site at which the fracture began were identified. For comparison, the wrists were fractured using a universal testing machine and the fracture load and the site of fracture were identified. The fracture load was 970.9 N in the finite element model group and 990.0 N in the actual measurement group. The site of the initial fracture was extra-articular to the distal radius in both groups. The finite element model was predictive of distal radius fracture when compared to the actual measurement. In this study, a finite element model for evaluation of distal radius strength was validated and can be used to predict fracture risk. We conclude that a finite element model is useful for the evaluation of distal radius strength. Knowledge of distal radius strength might help prevent distal radius fractures, because appropriate antiosteoporotic treatment can be initiated.

  13. The Accuracy of 3D Optical Reconstruction and Additive Manufacturing Processes in Reproducing Detailed Subject-Specific Anatomy

    Paolo Ferraiuoli; Jonathan C. Taylor; Emily Martin; John W. Fenner; Andrew J. Narracott

    2017-01-01

    3D reconstruction and 3D printing of subject-specific anatomy is a promising technology for supporting clinicians in the visualisation of disease progression and planning for surgical intervention. In this context, the 3D model is typically obtained from segmentation of magnetic resonance imaging (MRI), computed tomography (CT) or echocardiography images. Although these modalities allow imaging of the tissues in vivo, assessment of quality of the reconstruction is limited by the lack of a ref...

  14. The application in detection the position accuracy of the multi-leaf collimator of Varian linear accelerator with dynamic therapy log files

    Li Changhu; Xu Liming; Teng Jianjian; Ge Wei; Zhang Jun; Ma Guangdong

    2010-01-01

    Objective: To explore the use of dynamic treatment log files for verifying the positional accuracy of the multileaf collimator (MLC) of Varian accelerators. Methods: A pre-designed MLC format file named PMLC was delivered on two Varian accelerators, and the dynamic treatment log files were recorded 10 times on different dates and converted into MLC format files named DMLC. These were compared with the original PMLC plan to analyze the position deviation of each leaf. In addition, we analyzed the repeatability of MLC leaf position accuracy across the 10 dynalog files of the two accelerators. Results: There was no statistically significant difference between the average leaf positions over the 10 measurements on the two accelerators, which were 0.29 ± 0.29 and 0.29 ± 0.30 (z = -0.77, P = 0.442). About 40%, 30%, 20% and 10% of the leaf position deviations were ≤0.2 mm, 0.3 mm, 0.5 mm and 0.4 mm, respectively; the maximum value was 0.5 mm. More than 86% of the leaf positions were completely coincident between the 10 dynamic treatment files of the two accelerators. The rate of position deviations no more than 0.05 mm was 96.6% and 97.3%, respectively, and the maximum value was 0.09 mm. Conclusions: The dynamic treatment log file is an excellent tool for verifying the actual positions of the multileaf collimator. The multileaf collimators of the two accelerators tested are precise and stable. (authors)

  15. Accuracy and reproducibility of simple cross-sectional linear and area measurements of brain structures and their comparison with volume measurements

    Whalley, H.C.; Wardlaw, J.M.

    2001-01-01

    Volumetric measurement of brain structure on brain images is regarded as a gold standard, yet is very time consuming. We wondered whether simple linear and area measurements might be as accurate and reproducible. Two observers independently measured the cross-sectional area of the corpus callosum, lentiform and caudate nuclei, thalamus, amygdalas, hippocampi, lateral and third ventricles, and the width of the sylvian and frontal interhemispheric fissures and brain stem on brain MRI of 55 patients using a program written in-house; one observer also measured the volumes of the basal ganglia, amygdalo-hippocampal complex and ventricular system using Analyze, and performed qualitative assessment of four regions (lateral and third ventricles, cortex, and medial temporal lobe) using the Lieberman score. All measures were performed blinded to all other information. Test objects of known size were also imaged with MRI and measured by the two observers using the in-house program. The true sizes of the test objects were measured using engineering calipers by two observers blind to the MRI results. Differences between the two observers using the same measurement method, and one observer using different methods, were calculated. The simple linear and cross-sectional area measurements were rapid (20 min versus 5 h for volumetric); were highly accurate for test-object measurement versus true size; had excellent intraobserver reliability; and, for most brain structures, the simple measures correlated highly significantly with volumetric measures. The simple measures were in general highly reproducible, the difference (as a percentage of the area or width of a region) between the two raters being around 10 %, range 0.1 %- 14.1 %, (similar to inter-rater variability in previous studies of volume measurements). The simple linear and area measures are reproducible and correlate well with the measured volumes, and there is a considerable time saving with the former. 
In circumstances

  16. Determining the Cost-Savings Threshold and Alignment Accuracy of Patient-Specific Instrumentation in Total Ankle Replacements.

    Hamid, Kamran S; Matson, Andrew P; Nwachukwu, Benedict U; Scott, Daniel J; Mather, Richard C; DeOrio, James K

    2017-01-01

    Traditional intraoperative referencing for total ankle replacements (TARs) involves multiple steps and fluoroscopic guidance to determine mechanical alignment. Recent adoption of patient-specific instrumentation (PSI) allows for referencing to be determined preoperatively, resulting in less steps and potentially decreased operative time. We hypothesized that usage of PSI would result in decreased operating room time that would offset the additional cost of PSI compared with standard referencing (SR). In addition, we aimed to compare postoperative radiographic alignment between PSI and SR. Between August 2014 and September 2015, 87 patients undergoing TAR were enrolled in a prospectively collected TAR database. Patients were divided into cohorts based on PSI vs SR, and operative times were reviewed. Radiographic alignment parameters were retrospectively measured at 6 weeks postoperatively. Time-driven activity-based costing (TDABC) was used to derive direct costs. Cost vs operative time-savings were examined via 2-way sensitivity analysis to determine cost-saving thresholds for PSI applicable to a range of institution types. Cost-saving thresholds defined the price of PSI below which PSI would be cost-saving. A total of 35 PSI and 52 SR cases were evaluated with no significant differences identified in patient characteristics. Operative time from incision to completion of casting in cases without adjunct procedures was 127 minutes with PSI and 161 minutes with SR ( P cost-savings threshold range at our institution of $863 below which PSI pricing would provide net cost-savings. Two-way sensitivity analysis generated a globally applicable cost-savings threshold model based on institution-specific costs and surgeon-specific time-savings. This study demonstrated equivalent postoperative TAR alignment with PSI and SR referencing systems but with a significant decrease in operative time with PSI. 
Based on TDABC and associated sensitivity analysis, a cost-savings threshold

  17. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P Box-Cox transformation) on all samples regardless of their distribution or adjusting, the significance level of normality tests depending on sample size would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
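
    The simulation design described above can be sketched in a few lines. This is an illustrative re-creation, not the paper's code: it estimates how often the Shapiro-Wilk test (α = 0.05) rejects normality for small Gaussian samples (false rejections) versus lognormal samples (correct rejections); the trial counts and distribution parameters are assumptions.

```python
import random
from scipy import stats

random.seed(0)

def rejection_rate(sampler, n=30, trials=200, alpha=0.05):
    """Fraction of samples of size n for which Shapiro-Wilk rejects normality."""
    rejected = 0
    for _ in range(trials):
        sample = [sampler() for _ in range(n)]
        w, p = stats.shapiro(sample)
        rejected += p < alpha
    return rejected / trials

fp = rejection_rate(lambda: random.gauss(0, 1))           # Gaussian wrongly rejected
tp = rejection_rate(lambda: random.lognormvariate(0, 1))  # lognormal correctly rejected
print(f"Gaussian rejected {fp:.0%}, lognormal rejected {tp:.0%}")
```

    The false-rejection rate tracks α by construction; the interesting quantity at small n is the power against non-Gaussian parent populations, which is what limits normality-test specificity in RI construction.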

  18. Imaging modalities in the diagnosis of pancreatic adenocarcinoma: A systematic review and meta-analysis of sensitivity, specificity and diagnostic accuracy.

    Toft, James; Hadden, William J; Laurence, Jerome M; Lam, Vincent; Yuen, Lawrence; Janssen, Anna; Pleass, Henry

    2017-07-01

    Pancreatic cancer, primarily pancreatic ductal adenocarcinoma (PDAC), accounts for 2.4% of cancer diagnoses and 5.8% of cancer deaths annually. Early diagnosis can improve 5-year survival in PDAC. The aim of this systematic review was to determine the sensitivity, specificity and diagnostic accuracy values for MRI, CT, PET & PET/CT, EUS and transabdominal ultrasound (TAUS) in the diagnosis of PDAC. A systematic review was undertaken to identify studies reporting sensitivity, specificity and/or diagnostic accuracy for the diagnosis of PDAC with MRI, CT, PET, EUS or TAUS. Proportional meta-analysis was performed for each modality. A total of 5399 patients, 3567 with PDAC, from 52 studies were included. The sensitivity, specificity and diagnostic accuracy were 93% (95% CI=88-96), 89% (95% CI=82-94) and 90% (95% CI=86-94) for MRI; 90% (95% CI=87-93), 87% (95% CI=79-93) and 89% (95% CI=85-93) for CT; 89% (95% CI=85-93), 70% (95% CI=54-84) and 84% (95% CI=79-89) for PET; 91% (95% CI=87-94), 86% (95% CI=81-91) and 89% (95% CI=87-92) for EUS; and 88% (95% CI=86-90), 94% (95% CI=87-98) and 91% (95% CI=87-93) for TAUS. This review concludes that all modalities, except for PET, are equivalent within 95% confidence intervals for the diagnosis of PDAC. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. 68Ga-PSMA PET/CT in Patients with Rising Prostatic-Specific Antigen After Definitive Treatment of Prostate Cancer: Detection Efficacy and Diagnostic accuracy.

    Hamed, Maged Abdel Galil; Basha, Mohammad Abd Alkhalik; Ahmed, Hussien; Obaya, Ahmed Ali; Afifi, Amira Hamed Mohamed; Abdelbary, Eman H

    2018-06-20

    68Ga-prostate-specific membrane antigen-11 (68Ga-PSMA-11) is a recently developed positron emission tomography (PET) tracer that can detect prostate cancer (PC) relapses and metastases with high contrast resolution. The aim of this study was to assess the detection efficacy and diagnostic accuracy of 68Ga-PSMA PET/CT imaging in patients with rising prostatic-specific antigen (PSA) after treatment of PC. The present prospective study included 188 patients who exhibited a rising PSA level on a routine follow-up examination after definitive treatment of PC. All patients underwent a 68Ga-PSMA PET/CT examination. For each patient, we determined the disease stage, the Gleason score, and the maximum standardized uptake value of the local recurrence and extraprostatic metastases. The detection efficacy and diagnostic accuracy of 68Ga-PSMA PET/CT were established using histopathology and clinical and imaging follow-up as the reference standards. 68Ga-PSMA PET/CT detected tumour relapse in 165 patients (35 patients had local recurrence, 106 patients had extraprostatic metastases, and 24 patients had combined lesions). The sensitivity, specificity, and accuracy values of the 68Ga-PSMA PET/CT examination in the detection of PC recurrence were 98.8%, 100%, and 98.8%, respectively. 68Ga-PSMA PET/CT revealed an overall detection rate of 87.8% (165/188) in patients with rising PSA (median of 2.2 ng/mL, range of 0.01-70 ng/mL). 68Ga-PSMA PET/CT is a valuable tool for the detection of PC local recurrence or extraprostatic metastases following rising PSA levels after primary definitive therapy and should be incorporated during routine work-up. Copyright © 2018. Published by Elsevier Inc.

  20. The sensitivity and specificity of four questions (HARK) to identify intimate partner violence: a diagnostic accuracy study in general practice

    Feder Gene

    2007-08-01

    Abstract. Background: Intimate partner violence (IPV), including physical, sexual and emotional violence, causes short- and long-term ill-health. Brief questions that reliably identify women experiencing IPV who present in clinical settings are a pre-requisite for an appropriate response from health services to this substantial public health problem. We estimated the sensitivity and specificity of four questions (HARK), developed from the Abuse Assessment Screen, compared to a 30-item abuse questionnaire, the Composite Abuse Scale (CAS). Methods: We administered the four HARK questions and the CAS to women approached by two researchers in general practice waiting rooms in Newham, east London. Inclusions: women aged more than 17 years waiting to see a doctor or nurse, who had been in an intimate relationship in the last year. Exclusions: women who were accompanied by children over four years of age or another adult, too unwell to complete the questionnaires, unable to understand English or unable to give informed consent. Results: Two hundred and thirty-two women were recruited. The response rate was 54%. The prevalence of current intimate partner violence, within the last 12 months, using the CAS cut-off score of ≥3, was 23% (95% C.I. 17% to 28%), with pre-test odds of 0.3 (95% C.I. 0.2 to 0.4). The receiver operating characteristic curve demonstrated that a HARK cut-off score of ≥1 maximises the true positives whilst minimising the false positives. The sensitivity of the optimal HARK cut-off score of ≥1 was 81% (95% C.I. 69% to 90%), specificity 95% (95% C.I. 91% to 98%), positive predictive value 83% (95% C.I. 70% to 91%), negative predictive value 94% (95% C.I. 90% to 97%), likelihood ratio 16 (95% C.I. 8 to 31) and post-test odds 5. Conclusion: The four HARK questions accurately identify women experiencing IPV in the past year and may help women disclose abuse in general practice. The HARK questions could be incorporated into the electronic medical record.
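
    The reported sensitivity, specificity, predictive values and likelihood ratio all derive from a single 2×2 table. A minimal sketch, using approximate counts implied by the reported prevalence and percentages (not the study's raw data):

```python
# Approximate 2x2 counts implied by ~23% prevalence among 232 women,
# 81% sensitivity and 95% specificity; illustrative only.
tp, fn = 43, 10   # women with IPV (CAS >= 3): HARK positive / HARK negative
fp, tn = 9, 170   # women without IPV: HARK positive / HARK negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                    # positive predictive value
npv = tn / (tn + fn)                    # negative predictive value
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio

print(f"sens {sensitivity:.0%}, spec {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}, LR+ {lr_pos:.1f}")
```

    Note that unlike sensitivity and specificity, the predictive values depend on prevalence: the same test applied in a lower-prevalence population would yield a lower PPV.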

  1. Confirming nasogastric tube placement: Is the colorimeter as sensitive and specific as X-ray? A diagnostic accuracy study.

    Mordiffi, Siti Zubaidah; Goh, Mien Li; Phua, Jason; Chan, Yiong-Huak

    2016-09-01

    The effect of delivering enteral nutrition or medications via a nasogastric tube that is inadvertently located in the tracheobronchial tract can cause respiratory complications. Although radiographic examination is accepted as the gold standard for confirming the position of patients' enteral tubes, it is costly, involves risks of radiation, and is not failsafe. Studies using carbon dioxide sensors to detect inadvertent nasogastric tube placements have been conducted in intensive care settings. However, none involved patients in general wards. The objective of this study was to ascertain the diagnostic accuracy of the colorimeter, with radiographic examination as the reference standard, for confirming the location of nasogastric tubes in patients. A prospective observational study of a diagnostic test. This study was conducted in the general wards of an approximately 1100-bed acute care tertiary hospital of an Academic Medical Center in Singapore. Adult patients with nasogastric tubes admitted to the general wards were recruited into the study. The colorimeter was attached to the nasogastric tube to detect the presence of carbon dioxide, suggestive of a tracheobronchial placement. The exact location of the nasogastric tube was subsequently confirmed by a radiographic examination. A total of 192 tests were undertaken. The colorimeter detected carbon dioxide in 29 tested nasogastric tubes, of which radiographic examination confirmed that four tubes were located in the tracheobronchial tract. The colorimeter failed to detect carbon dioxide in one nasogastric tube that was located in the tracheobronchial tract, thus demonstrating a sensitivity of 0.80 [95% CI (0.376, 0.964)]. The colorimeter detected absence of carbon dioxide in 163 tested nasogastric tubes, in which radiographic examination confirmed 160 gastrointestinal and one tracheobronchial placements, demonstrating a specificity of 0.865 [95% CI (0.808, 0.907)]. The colorimeter detected one tracheobronchial

  2. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject-specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements, the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
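    The non-uniqueness the authors describe can be shown with a deliberately simple toy example (the Gaussian force-angle shape and all parameter values below are illustrative assumptions, not the paper's model): swapping the individual strengths of two muscles that share the same shape leaves the summed strength curve unchanged, so the net curve cannot identify the individual parameters.

```python
import math

def muscle_force(f_max, opt_angle, width, angle):
    """Gaussian-shaped active force-angle curve (illustrative shape only)."""
    return f_max * math.exp(-((angle - opt_angle) / width) ** 2)

def total_strength(muscles, angles):
    """Net isometric strength: sum of lumped-muscle contributions."""
    return [sum(muscle_force(f, a0, w, ang) for f, a0, w in muscles)
            for ang in angles]

angles = list(range(30, 121, 10))
# Two different parameter sets: the individual strengths of the two muscles
# are swapped, yet the summed strength curve is identical.
set_a = [(900.0, 70.0, 40.0), (300.0, 70.0, 40.0)]
set_b = [(300.0, 70.0, 40.0), (900.0, 70.0, 40.0)]
curve_a = total_strength(set_a, angles)
curve_b = total_strength(set_b, angles)
print(max(abs(x - y) for x, y in zip(curve_a, curve_b)))  # 0.0
```

With measurement noise on top of this degeneracy, an optimizer matching only the net curve has no basis for preferring one parameter set over the other.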

  3. Structure of human Rad51 protein filament from molecular modeling and site-specific linear dichroism spectroscopy

    Reymer, A.; Frykholm, K.; Morimatsu, K.; Takahashi, M.; Norden, B.

    2009-01-01

    for central and N-terminal parts of pure (uncomplexed) Rad51 protein by aid of linear dichroism spectroscopy, providing angular orientations of substituted tyrosine residues of Rad51-dsDNA filaments in solution. The structure, validated by comparison

  4. Improving sensitivity of linear regression-based cell type-specific differential expression deconvolution with per-gene vs. global significance threshold.

    Glass, Edmund R; Dozmorov, Mikhail G

    2016-10-06

    The goal of many human disease-oriented studies is to detect molecular mechanisms different between healthy controls and patients. Yet, commonly used gene expression measurements from blood samples suffer from variability of cell composition. This variability hinders the detection of differentially expressed genes and is often ignored. Combined with cell counts, heterogeneous gene expression may provide deeper insights into gene expression differences at the cell type-specific level. Published computational methods use linear regression to estimate cell type-specific differential expression, and a global cutoff to judge significance, such as False Discovery Rate (FDR). Yet, they do not consider many artifacts hidden in high-dimensional gene expression data that may negatively affect linear regression. In this paper we quantify the parameter space affecting the performance of linear regression (sensitivity of cell type-specific differential expression detection) on a per-gene basis. We evaluated the effect of sample sizes, cell type-specific proportion variability, and mean squared error on sensitivity of cell type-specific differential expression detection using linear regression. Each parameter affected variability of cell type-specific expression estimates and, subsequently, the sensitivity of differential expression detection. We provide the R package, LRCDE, which performs linear regression-based cell type-specific differential expression (deconvolution) detection on a gene-by-gene basis. Accounting for variability around cell type-specific gene expression estimates, it computes per-gene t-statistics of differential detection, p-values, t-statistic-based sensitivity, group-specific mean squared error, and several gene-specific diagnostic metrics. 
The sensitivity of linear regression-based cell type-specific differential expression detection differed for each gene as a function of mean squared error, per group sample sizes, and variability of the proportions
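    The per-gene idea can be sketched in a few lines (a simplification, not the LRCDE implementation: one cell type, simulated data, hypothetical effect sizes): regress bulk expression on the cell-type proportion within each group, then form a t-statistic from the difference in slopes, which estimate the cell type-specific expression.

```python
import random
from math import sqrt

def slope_with_se(x, y):
    """OLS slope and its standard error for simple regression y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    mse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b, sqrt(mse / sxx)

random.seed(0)
n = 40
# Simulated proportion of one cell type per sample, and bulk expression whose
# cell type-specific component (the slope) differs between the two groups.
p_ctrl = [random.uniform(0.2, 0.6) for _ in range(n)]
p_case = [random.uniform(0.2, 0.6) for _ in range(n)]
e_ctrl = [5 + 10 * p + random.gauss(0, 0.5) for p in p_ctrl]
e_case = [5 + 14 * p + random.gauss(0, 0.5) for p in p_case]

b_ctrl, se_ctrl = slope_with_se(p_ctrl, e_ctrl)
b_case, se_case = slope_with_se(p_case, e_case)
# Per-gene t-statistic of cell type-specific differential expression
t_stat = (b_case - b_ctrl) / sqrt(se_ctrl ** 2 + se_case ** 2)
print(round(b_ctrl, 2), round(b_case, 2), round(t_stat, 2))
```

The standard errors here depend directly on the residual mean squared error, the sample sizes, and the spread of the proportions, which is exactly the parameter space the abstract says drives per-gene sensitivity.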

  5. Best Linear Unbiased Prediction of Genomic Breeding Values Using a Trait-Specific Marker-Derived Relationship Matrix

    Zhang, Z.; Liu, J.F.; Ding, Z.; Bijma, P.; de Koning, D.J.

    2010-01-01

    With the availability of high density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome,
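    The trait-specific marker-derived relationship matrix named in the title can be sketched as G = Z·diag(w)·Z', where Z holds centered genotype codes and w holds trait-specific marker weights (e.g. derived from estimated marker effects). The toy genotypes, weights, and the absence of any scaling below are illustrative assumptions; the paper's exact weighting scheme may differ.

```python
def trait_weighted_grm(Z, w):
    """G = Z diag(w) Z' with trait-specific marker weights w; unscaled,
    for illustration only."""
    n, m = len(Z), len(w)
    return [[sum(Z[i][k] * w[k] * Z[j][k] for k in range(m))
             for j in range(n)] for i in range(n)]

# Toy centered genotype codes for 3 individuals at 4 markers; the weights
# give markers 0 and 2 more influence on this trait (illustrative values).
Z = [[1, -1, 0, 1], [0, 1, 1, -1], [-1, 0, -1, 0]]
w = [0.6, 0.1, 0.25, 0.05]
G = trait_weighted_grm(Z, w)
print([round(g, 2) for g in G[0]])
```

Plugging such a G into the standard mixed-model (BLUP) equations in place of the unweighted genomic relationship matrix is what makes the prediction trait-specific.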

  6. Specific recognition of linear polyubiquitin by A20 zinc finger 7 is involved in NF-κB regulation

    Tokunaga, Fuminori; Nishimasu, Hiroshi; Ishitani, Ryuichiro; Goto, Eiji; Noguchi, Takuya; Mio, Kazuhiro; Kamei, Kiyoko; Ma, Averil; Iwai, Kazuhiro; Nureki, Osamu

    2012-01-01

    LUBAC (linear ubiquitin chain assembly complex) activates the canonical NF-κB pathway through linear polyubiquitination of NEMO (NF-κB essential modulator, also known as IKKγ) and RIP1. However, the regulatory mechanism of LUBAC-mediated NF-κB activation remains elusive. Here, we show that A20 suppresses LUBAC-mediated NF-κB activation by binding linear polyubiquitin via the C-terminal seventh zinc finger (ZF7), whereas CYLD suppresses it through deubiquitinase (DUB) activity. We determined the crystal structures of A20 ZF7 in complex with linear diubiquitin at 1.70–1.98 Å resolutions. The crystal structures revealed that A20 ZF7 simultaneously recognizes the Met1-linked proximal and distal ubiquitins, and that genetic mutations associated with B cell lymphomas map to the ubiquitin-binding sites. Our functional analysis indicated that the binding of A20 ZF7 to linear polyubiquitin contributes to the recruitment of A20 into a TNF receptor (TNFR) signalling complex containing LUBAC and IκB kinase (IKK), which results in NF-κB suppression. These findings provide new insight into the regulation of immune and inflammatory responses. PMID:23032187

  7. Interspecies hybridization on DNA resequencing microarrays: efficiency of sequence recovery and accuracy of SNP detection in human, ape, and codfish mitochondrial DNA genomes sequenced on a human-specific MitoChip

    Carr Steven M

    2007-09-01

    Full Text Available Abstract Background Iterative DNA "resequencing" on oligonucleotide microarrays offers a high-throughput method to measure intraspecific biodiversity, one that is especially suited to SNP-dense gene regions such as vertebrate mitochondrial (mtDNA) genomes. However, costs of single-species design and microarray fabrication are prohibitive. A cost-effective, multi-species strategy is to hybridize experimental DNAs from diverse species to a common microarray that is tiled with oligonucleotide sets from multiple, homologous reference genomes. Such a strategy requires that cross-hybridization between the experimental DNAs and reference oligos from the different species not interfere with the accurate recovery of species-specific data. To determine the pattern and limits of such interspecific hybridization, we compared the efficiency of sequence recovery and accuracy of SNP identification by a 15,452-base human-specific microarray challenged with human, chimpanzee, gorilla, and codfish mtDNA genomes. Results In the human genome, 99.67% of the sequence was recovered with 100.0% accuracy. Accuracy of SNP identification declines log-linearly with sequence divergence from the reference, from 0.067 to 0.247 errors per SNP in the chimpanzee and gorilla genomes, respectively. Efficiency of sequence recovery declines with the increase of the number of interspecific SNPs in the 25b interval tiled by the reference oligonucleotides. In the gorilla genome, which differs from the human reference by 10%, and in which 46% of these 25b regions contain 3 or more SNP differences from the reference, only 88% of the sequence is recoverable. In the codfish genome, which differs from the reference by > 30%, less than 4% of the sequence is recoverable, in short islands ≥ 12b that are conserved between primates and fish. Conclusion Experimental DNAs bind inefficiently to homologous reference oligonucleotide sets on a re-sequencing microarray when their sequences differ by

  8. Are computer numerical control (CNC)-manufactured patient-specific metal templates available for posterior thoracic pedicle screw insertion? Feasibility and accuracy evaluation.

    Kong, Xiangxue; Tang, Lei; Ye, Qiang; Huang, Wenhua; Li, Jianyi

    2017-11-01

    Accurate and safe posterior thoracic pedicle insertion (PTPI) remains a challenge. Patient-specific drill templates (PDTs) created by rapid prototyping (RP) can assist in posterior thoracic pedicle insertion, but pose biocompatibility risks. The aims of this study were to develop alternative PDTs with computer numerical control (CNC) and assess their feasibility and accuracy in assisting PTPI. Preoperative CT images of 31 cadaveric thoracic vertebrae were obtained and then the optimal pedicle screw trajectories were planned. The PDTs with optimal screw trajectories were randomly assigned to be designed and manufactured by CNC or RP in each vertebra. With the guide of the CNC- or RP-manufactured PDTs, the appropriate screws were inserted into the pedicles. Postoperative CT scans were performed to analyze any deviations at the entry point and midpoint of the pedicles. The CNC group showed significantly shorter manufacturing time and lower cost than the RP group (P < 0.05). The screw positions were grade 0 in 90.3% and grade 1 in 9.7% of the cases in the CNC group, and grade 0 in 93.5% and grade 1 in 6.5% of the cases in the RP group (P = 0.641). CNC-manufactured PDTs are viable for assisting in PTPI with good feasibility and accuracy.

  9. The combination of ovarian volume and outline has better diagnostic accuracy than prostate-specific antigen (PSA) concentrations in women with polycystic ovarian syndrome (PCOs).

    Bili, Eleni; Dampala, Kaliopi; Iakovou, Ioannis; Tsolakidis, Dimitrios; Giannakou, Anastasia; Tarlatzis, Basil C

    2014-08-01

    The aim of this study was to determine the performance of prostate-specific antigen (PSA) and ultrasound parameters, such as ovarian volume and outline, in the diagnosis of polycystic ovary syndrome (PCOS). This prospective, observational, case-control study included 43 women with PCOS and 40 controls. Between day 3 and 5 of the menstrual cycle, fasting serum samples were collected and transvaginal ultrasound was performed. The diagnostic performance of each parameter [total PSA (tPSA), total-to-free PSA ratio (tPSA:fPSA), ovarian volume, ovarian outline] was estimated by means of receiver operating characteristic (ROC) analysis, along with the area under the curve (AUC), threshold, sensitivity, specificity as well as positive (+) and negative (-) likelihood ratios (LRs). Multivariate logistic regression models, using ovarian volume and ovarian outline, were constructed. The tPSA and tPSA:fPSA ratio resulted in AUCs of 0.74 and 0.70, respectively, with moderate specificity/sensitivity and insufficient LR+/- values. In the multivariate logistic regression model, the combination of ovarian volume and outline had a sensitivity of 97.7% and a specificity of 97.5% in the diagnosis of PCOS, with +LR and -LR values of 39.1 and 0.02, respectively. In women with PCOS, tPSA and tPSA:fPSA ratio have similar diagnostic performance. The use of a multivariate logistic regression model, incorporating ovarian volume and outline, offers very good diagnostic accuracy in distinguishing women with PCOS from controls. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
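    The reported likelihood ratios follow directly from sensitivity and specificity (LR+ = sens / (1 − spec), LR− = (1 − sens) / spec), and the abstract's figures check out:

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1 - spec), (1 - sens) / spec

# Reported performance of the combined ovarian volume + outline model.
lr_pos, lr_neg = likelihood_ratios(0.977, 0.975)
print(round(lr_pos, 1), round(lr_neg, 2))  # 39.1 0.02
```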

  10. Linear algebra

    Stoll, R R

    1968-01-01

    Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding

  11. HPV genotype-specific concordance between EuroArray HPV, Anyplex II HPV28 and Linear Array HPV Genotyping test in Australian cervical samples

    Alyssa M. Cornall

    2017-12-01

    Full Text Available Purpose: To compare the human papillomavirus genotype-specific performance of two genotyping assays, Anyplex II HPV28 (Seegene) and EuroArray HPV (EuroImmun), with Linear Array HPV (Roche). Methods: DNA extracted from clinician-collected cervical brush specimens in PreservCyt medium (Hologic), from 403 women undergoing management for detected cytological abnormalities, was tested on the three assays. Genotype-specific agreement was assessed by Cohen's kappa statistic and Fisher's z-test of significance between proportions. Results: Agreement between Linear Array and the other 2 assays was substantial to almost perfect (κ = 0.60 − 1.00) for most genotypes, and was almost perfect (κ = 0.81 − 0.98) for almost all high-risk genotypes. Linear Array overall detected most genotypes more frequently; however, this was only statistically significant for HPV51 (EuroArray; p = 0.0497), HPV52 (Anyplex II; p = 0.039) and HPV61 (Anyplex II; p = 0.047). EuroArray detected significantly more HPV26 (p = 0.002) and Anyplex II detected more HPV42 (p = 0.035) than Linear Array. Each assay performed differently for HPV68 detection: EuroArray and LA were in moderate to substantial agreement with Anyplex II (κ = 0.46 and 0.62, respectively), but in poor agreement with each other (κ = −0.01). Conclusions: EuroArray and Anyplex II had similar sensitivity to Linear Array for most high-risk genotypes, with slightly lower sensitivity for HPV 51 or 52. Keywords: Human papillomavirus, Genotyping, Linear Array, Anyplex II, EuroArray, Cervix

  12. Evaluation of the accuracy of linear measurements on multi-slice and cone beam computed tomography scans to detect the mandibular canal during bilateral sagittal split osteotomy of the mandible.

    Freire-Maia, B; Machado, V deC; Valerio, C S; Custódio, A L N; Manzi, F R; Junqueira, J L C

    2017-03-01

    The aim of this study was to compare the accuracy of linear measurements of the distance between the mandibular cortical bone and the mandibular canal using 64-detector multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT), and to evaluate the reliability of these examinations in detecting the mandibular canal for use in bilateral sagittal split osteotomy (BSSO) planning. Eight dry human mandibles were studied. Three sites, corresponding to the lingula, the angle, and the body of the mandible, were selected. After the CT scans had been obtained, the mandibles were sectioned and the bone segments measured to obtain the actual measurements. On analysis, no statistically significant difference was found between the measurements obtained through MSCT and CBCT, or when comparing the measurements from these scans with the actual measurements. It is concluded that the images obtained by CT scan, both 64-detector multi-slice and cone beam, can be used to obtain accurate linear measurements to locate the mandibular canal for preoperative planning of BSSO. The ability to correctly locate the mandibular canal during BSSO will reduce the occurrence of neurosensory disturbances in the postoperative period. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  13. Target Price Accuracy

    Alexander G. Kerl

    2011-04-01

    Full Text Available This study analyzes the accuracy of forecasted target prices within analysts' reports. We compute a measure for target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. The potential conflicts of interest between an analyst and a covered company do not bias forecast accuracy.
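    One simple proxy for target price accuracy is the absolute deviation of the target from the realized 12-month price, scaled by the realized price (a sketch only; the study's exact measure may differ, and the price pairs below are hypothetical):

```python
def target_price_accuracy(forecasts):
    """Mean absolute target-price error relative to the realized price.
    `forecasts` is a list of (target_price, realized_12m_price) pairs."""
    errs = [abs(target - realized) / realized for target, realized in forecasts]
    return sum(errs) / len(errs)

# Hypothetical (target, realized-after-12-months) pairs.
pairs = [(120.0, 100.0), (95.0, 100.0), (100.0, 80.0)]
print(round(target_price_accuracy(pairs), 3))
```

Lower values indicate more accurate forecasts; such a per-report measure can then be regressed on optimism, risk, and report-detail variables as in the study.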

  14. Sensitivity, specificity and predictive values of linear and nonlinear indices of heart rate variability in stable angina patients

    Pivatelli Flávio

    2012-10-01

    Full Text Available Abstract Background Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated the linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods We studied 77 unselected patients for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD groups. For analysis of HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals with a difference of duration greater than 50 ms] and RMSSD [root-mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF, ≤ 0.003 Hz), very low frequency (VLF, 0.003–0.04 Hz), low frequency (LF, 0.04–0.15 Hz) and high frequency (HF, 0.15–0.40 Hz), as well as the ratio between the LF and HF components (LF/HF). In relation to the nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (−ApEn), α1, α2, Lyapunov Exponent, Hurst Exponent, autocorrelation and correlation dimension. The definition of the cutoff point of the variables for predictive tests was obtained by the Receiver Operating Characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, assuming as relevant areas under the curve ≥ 0.650. Results Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and −ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ −0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion We suggest the use of heart rate variability analysis in linear and nonlinear domains, for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
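    The time-domain indices used in this study (SDNN, RMSSD, NN50) can be computed directly from an RR-interval series; a minimal sketch with made-up intervals:

```python
from math import sqrt
from statistics import pstdev

def hrv_time_domain(rr_ms):
    """SDNN, RMSSD and NN50 from a series of RR intervals in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    sdnn = pstdev(rr_ms)                                  # SD of all intervals
    rmssd = sqrt(sum(d * d for d in diffs) / len(diffs))  # RMS of successive diffs
    nn50 = sum(1 for d in diffs if abs(d) > 50)           # diffs exceeding 50 ms
    return sdnn, rmssd, nn50

rr = [800, 810, 790, 845, 820, 870, 815, 805]  # illustrative RR intervals (ms)
sdnn, rmssd, nn50 = hrv_time_domain(rr)
print(round(sdnn, 1), round(rmssd, 1), nn50)  # 24.4 37.4 2
```

A real recording would supply thousands of intervals from 40 minutes of beat-by-beat data; the frequency-domain and nonlinear indices require further processing (spectral estimation, Poincaré and entropy analysis) not sketched here.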

  15. Protecting effects specifically from low doses of ionizing radiation to mammalian cells challenge the concept of linearity

    Feinendegen, L.E.; Sondhaus, C.A.; Altman, K.I.

    1998-01-01

    This report examines the origin of tissue effects that may follow from different cellular responses to low-dose irradiation, using published data. Two principal categories of cellular responses are considered. One response category relates to the probability of radiation-induced DNA damage. The other category consists of low-dose induced changes in intracellular signaling that induce mechanisms of DNA damage control different from those operating at high levels of exposure. Modeled in this way, tissue is treated as a complex adaptive system. The interaction of the various cellular responses results in a net tissue dose-effect relation that is likely to deviate from linearity in the low-dose region. This suggests that the LNT hypothesis should be reexamined. The aim of this paper is to demonstrate that by use of microdosimetric concepts, the energy deposited in cell mass can be related to the occurrence of cellular responses, both damaging and defensive

  16. Protecting effects specifically from low doses of ionizing radiation to mammalian cells challenge the concept of linearity

    Feinendegen, L.E. [Brookhaven National Lab., Upton, NY (United States). Medical Dept.; Bond, V.P. [Washington State Univ., Richland, WA (United States); Sondhaus, C.A. [Univ. of Arizona, Tucson, AZ (United States). Dept. of Radiology and Radiation Control Office; Altman, K.I. [Univ. of Rochester Medical Center, NY (United States). Dept. of Biochemistry and Biophysics

    1998-12-31

    This report examines the origin of tissue effects that may follow from different cellular responses to low-dose irradiation, using published data. Two principal categories of cellular responses are considered. One response category relates to the probability of radiation-induced DNA damage. The other category consists of low-dose induced changes in intracellular signaling that induce mechanisms of DNA damage control different from those operating at high levels of exposure. Modeled in this way, tissue is treated as a complex adaptive system. The interaction of the various cellular responses results in a net tissue dose-effect relation that is likely to deviate from linearity in the low-dose region. This suggests that the LNT hypothesis should be reexamined. The aim of this paper is to demonstrate that by use of microdosimetric concepts, the energy deposited in cell mass can be related to the occurrence of cellular responses, both damaging and defensive.

  17. HPV genotype-specific concordance between EuroArray HPV, Anyplex II HPV28 and Linear Array HPV Genotyping test in Australian cervical samples.

    Cornall, Alyssa M; Poljak, Marin; Garland, Suzanne M; Phillips, Samuel; Machalek, Dorothy A; Tan, Jeffrey H; Quinn, Michael A; Tabrizi, Sepehr N

    2017-12-01

    To compare the human papillomavirus genotype-specific performance of two genotyping assays, Anyplex II HPV28 (Seegene) and EuroArray HPV (EuroImmun), with Linear Array HPV (Roche). DNA extracted from clinician-collected cervical brush specimens in PreservCyt medium (Hologic), from 403 women undergoing management for detected cytological abnormalities, was tested on the three assays. Genotype-specific agreement was assessed by Cohen's kappa statistic and Fisher's z-test of significance between proportions. Agreement between Linear Array and the other 2 assays was substantial to almost perfect (κ = 0.60 - 1.00) for most genotypes, and was almost perfect (κ = 0.81 - 0.98) for almost all high-risk genotypes. Linear Array overall detected most genotypes more frequently; however, this was only statistically significant for HPV51 (EuroArray; p = 0.0497), HPV52 (Anyplex II; p = 0.039) and HPV61 (Anyplex II; p = 0.047). EuroArray detected significantly more HPV26 (p = 0.002) and Anyplex II detected more HPV42 (p = 0.035) than Linear Array. Each assay performed differently for HPV68 detection: EuroArray and LA were in moderate to substantial agreement with Anyplex II (κ = 0.46 and 0.62, respectively), but in poor agreement with each other (κ = -0.01). EuroArray and Anyplex II had similar sensitivity to Linear Array for most high-risk genotypes, with slightly lower sensitivity for HPV 51 or 52. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
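    The genotype-specific agreement statistic used here is Cohen's kappa over paired detection calls: observed agreement corrected for the agreement expected by chance. A minimal sketch with hypothetical binary calls (1 = genotype detected) from two assays:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two paired binary call sets."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)              # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical detection calls for one HPV genotype across 10 specimens.
assay1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
assay2 = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
print(round(cohens_kappa(assay1, assay2), 2))  # 0.78
```

By the usual convention, κ of 0.61-0.80 is "substantial" and 0.81-1.00 "almost perfect" agreement, matching the labels used in the abstract.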

  18. Construction and accuracy assessment of patient-specific biocompatible drill template for cervical anterior transpedicular screw (ATPS) insertion: an in vitro study.

    Maoqing Fu

    Full Text Available BACKGROUND: With the properties of three-column fixation and an anterior-approach-only procedure, the anterior transpedicular screw (ATPS) is ideal for severe multilevel traumatic cervical instabilities. However, the accurate insertion of ATPS remains challenging. Here we constructed a patient-specific biocompatible drill template and evaluated its accuracy in assisting ATPS insertion. METHODS: After ethical approval, 24 formalin-preserved cervical vertebrae (C2-C7) were CT scanned. 3D reconstruction models of the cervical vertebrae were obtained with 2-mm-diameter virtual pin tracts at the central pedicles. The 3D models were used for rapid prototyping (RP) printing. A 2-mm-diameter Kirschner wire was then inserted into the pin tract of the RP model before polymethylmethacrylate was used to construct the patient-specific biocompatible drill template. After removal of the anterior soft tissue, a 2-mm-diameter Kirschner wire was inserted into the cervical pedicle with the assistance of the drill template. Cadaveric cervical spines with pin tracts were subsequently scanned using the same CT scanner. A 3D reconstruction was performed of the scanned spines to get 3D models of the vertebrae containing the actual pin tracts. The deviations between the 3D models with virtual and actual pin tracts were calculated at the middle point of the cervical pedicle. 3D models of 3.5-mm-diameter screws were used in simulated insertion to grade the screw positions. FINDINGS: The patient-specific biocompatible drill template was constructed to assist ATPS insertion successfully. There were no significant differences between medial/lateral deviations (P = 0.797) or between superior/inferior deviations (P = 0.741). The absolute deviation values were 0.82±0.75 mm and 1.10±0.96 mm in the axial and sagittal planes, respectively. In the simulated insertion, 44/48 (91.7%) of the screws were in non-critical positions. CONCLUSIONS: The patient-specific drill template is biocompatible, easy

  19. Structure of human Rad51 protein filament from molecular modeling and site-specific linear dichroism spectroscopy

    Reymer, A.

    2009-07-08

    To get mechanistic insight into the DNA strand-exchange reaction of homologous recombination, we solved a filament structure of a human Rad51 protein, combining molecular modeling with experimental data. We build our structure on reported structures for central and N-terminal parts of pure (uncomplexed) Rad51 protein by aid of linear dichroism spectroscopy, providing angular orientations of substituted tyrosine residues of Rad51-dsDNA filaments in solution. The structure, validated by comparison with an electron microscopy density map and results from mutation analysis, is proposed to represent an active solution structure of the nucleo-protein complex. An inhomogeneously stretched double-stranded DNA fitted into the filament emphasizes the strategic positioning of 2 putative DNA-binding loops in a way that allows us speculate about their possibly distinct roles in nucleo-protein filament assembly and DNA strand-exchange reaction. The model suggests that the extension of a single-stranded DNA molecule upon binding of Rad51 is ensured by intercalation of Tyr-232 of the L1 loop, which might act as a docking tool, aligning protein monomers along the DNA strand upon filament assembly. Arg-235, also sitting on L1, is in the right position to make electrostatic contact with the phosphate backbone of the other DNA strand. The L2 loop position and its more ordered compact conformation makes us propose that this loop has another role, as a binding site for an incoming double-stranded DNA. Our filament structure and spectroscopic approach open the possibility of analyzing details along the multistep path of the strand-exchange reaction.

  20. MRM screening/biomarker discovery with linear ion trap MS: a library of human cancer-specific peptides

    Yang, Xu; Lazar, Iulia M

    2009-01-01

    The discovery of novel protein biomarkers is essential in the clinical setting to enable early disease diagnosis and increase survivability rates. To facilitate differential expression analysis and biomarker discovery, a variety of tandem mass spectrometry (MS/MS)-based protein profiling techniques have been developed. For achieving sensitive detection and accurate quantitation, targeted MS screening approaches, such as multiple reaction monitoring (MRM), have been implemented. MCF-7 breast cancer protein cellular extracts were analyzed by 2D-strong cation exchange (SCX)/reversed phase liquid chromatography (RPLC) separations interfaced to linear ion trap MS detection. MS data were interpreted with the Sequest-based Bioworks software (Thermo Electron). In-house developed Perl-scripts were used to calculate the spectral counts and the representative fragment ions for each peptide. In this work, we report on the generation of a library of 9,677 peptides (p < 0.001), representing ~1,572 proteins from human breast cancer cells, that can be used for MRM/MS-based biomarker screening studies. For each protein, the library provides the number and sequence of detectable peptides, the charge state, the spectral count, the molecular weight, the parameters that characterize the quality of the tandem mass spectrum (p-value, DeltaM, Xcorr, DeltaCn, Sp, no. of matching a, b, y ions in the spectrum), the retention time, and the top 10 most intense product ions that correspond to a given peptide. Only proteins identified by at least two spectral counts are listed. The experimental distribution of protein frequencies, as a function of molecular weight, closely matched the theoretical distribution of proteins in the human proteome, as provided in the SwissProt database. The amino acid sequence coverage of the identified proteins ranged from 0.04% to 98.3%. The highest-abundance proteins in the cellular extract had a molecular weight (MW)<50,000. Preliminary experiments have
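    Spectral counting, used above as the abundance measure for each library entry, is simply the number of peptide-spectrum matches mapped to each protein. A minimal sketch with made-up protein and peptide identifiers:

```python
from collections import Counter

def spectral_counts(psms):
    """Spectral count per protein: number of peptide-spectrum matches (PSMs)
    assigned to that protein, a simple label-free abundance proxy."""
    return Counter(protein for protein, peptide in psms)

# Hypothetical (protein, peptide) pairs from a search-engine result list.
psms = [("P1", "AAK"), ("P1", "AAK"), ("P1", "LVR"), ("P2", "GGR")]
counts = spectral_counts(psms)
print(counts["P1"], counts["P2"])  # 3 1
```

The library's "at least two spectral counts" filter corresponds to keeping only proteins whose count here is ≥ 2.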

  1. Experimental investigation of the accuracy for absolute quantification of brain creatine concentration using a long echo time point resolved spectroscopy sequence with an external standard and linear combination of model spectra

    Lin Yan; Shen Zhiwei; Xiao Yeyu; Zheng Wenbin; Wu Renhua; Li Hui; Xiao Zhuanwei

    2008-01-01

    Objective: To investigate the accuracy of absolute quantification of brain creatine (Cr) concentration using a long echo time (TE) point resolved spectroscopy (PRESS) sequence performed with an external standard and postprocessed with the linear combination of model spectra (LCModel). Methods: Ten swine (3.1 ± 0.6 kg) and an external standard phantom containing detectable compounds of known concentration were investigated in this study using a 1.5 T GE Signa scanner and a standard head coil. Single-voxel proton magnetic resonance spectroscopy (¹H-MRS) data were acquired from two ROIs (2 cm x 2 cm x 2 cm) placed in the swine brain and the external standard solution using the PRESS sequence with TE 135 ms, TR 1500 ms, and 128 scan averages. The in vivo quantification of Cr was accomplished by LCModel. After the ¹H-MRS examination, each animal was sacrificed immediately. In vitro Cr concentration was analyzed by high performance liquid chromatography (HPLC). Results: In the ¹H-MRS group, the Cr concentration was (9.37±0.14) mmol/kg. In the HPLC group, the Cr concentration was (8.91±0.13) mmol/kg. Good agreement was obtained between these two methods (t=9.038, P=0.491). Conclusion: The long echo time PRESS sequence performed with an external standard and processed with LCModel is proven to be an accurate technique to detect the in vivo brain Cr concentration. (authors)

  2. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

    Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

    2010-12-01

    Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.
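    The explicit dynamic-relaxation idea used in the paper's GPU solver can be illustrated on a toy problem (a 1D fixed-fixed chain of identical springs with unit masses; all values below are illustrative, not the brain model): damped explicit time stepping drives the transient dynamics to the static solution K u = f without ever assembling or inverting a stiffness matrix.

```python
def dynamic_relaxation(k, f, n_steps=20000, dt=0.01, damping=0.5):
    """Explicit damped time stepping of a fixed-fixed chain of identical
    springs (unit masses) until it settles at the static solution K u = f."""
    n = len(f)
    u = [0.0] * n   # nodal displacements
    v = [0.0] * n   # nodal velocities
    for _ in range(n_steps):
        for i in range(n):
            # internal elastic force plus viscous damping, per node
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            acc = f[i] - k * (2.0 * u[i] - left - right) - damping * v[i]
            v[i] += dt * acc
        for i in range(n):
            u[i] += dt * v[i]
    return u

# Static solution of K u = f for k = 10, f = [1, 0, 0] is [0.075, 0.05, 0.025].
u = dynamic_relaxation(k=10.0, f=[1.0, 0.0, 0.0])
print([round(x, 4) for x in u])
```

Because every update touches only a node and its neighbours, the scheme is embarrassingly parallel, which is why this class of method maps so well onto a GPU.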

  3. Site-specific integration of CAR gene into Jurkat T cells with a linear close-ended AAV-based DNA vector for CAR-T engineering.

    Zhang, Yun; Liu, Xiaomei; Zhang, Jinju; Zhang, Chun

    2016-09-01

    To develop a site-specific integration strategy for CAR-T engineering by using a non-viral vector dependent on the adeno-associated viral (AAV) genome, which tends to be integrated into the AAVS1 site with the help of its Rep proteins. AAV-dependent vectors were produced in Sf9 cells. Structural analyses revealed the vector as covalently close-ended, linear duplex molecules, termed "CELiD" DNA. A plasmid CMV-Rep was constructed to express the integrases Rep78 and Rep68. Jurkat cells were co-electroporated with "CELiD" DNA and plasmid CMV-Rep in order to integrate the CAR gene specifically into the AAVS1 site. We examined 71 stably transfected Jurkat clones by nested PCR, sequencing and Southern blotting, of which 30 clones bore the CAR gene within the AAVS1 site, a site-specific integration efficiency of 42.2% (30/71). The AAV-dependent vector preferentially integrated CAR into the AAVS1 site, which could be further used in human T cell modification and enhance the safety of CAR-T therapy.

  4. Using hierarchical linear models to test differences in Swedish results from OECD’s PISA 2003: Integrated and subject-specific science education

    Maria Åström

    2012-06-01

    The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a two-level hierarchical linear model (HLM). The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum: they may work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science), or mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the ways of organising science education displayed statistically significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance and an economic, social and cultural index, as well as the teaching organisation.

  5. Diagnostic accuracy of self-report and subjective history in the diagnosis of low back pain with non-specific lower extremity symptoms: A systematic review.

    Shultz, Susan; Averell, Kristina; Eickelman, Angela; Sanker, Holly; Donaldson, Megan Burrowbridge

    2015-02-01

    Subjective history questions/self-report items are commonly used to triage the patient with low back pain and related leg symptoms. However, the value of the history-taking process for decision-making to identify common classifications/diagnoses in patients presenting with low back related leg pain (LBRLP) has not been established. The purpose of this study was to investigate the diagnostic accuracy of self-report items/history-taking questions used to identify patients with LBRLP. Eligible studies included: 1) subjects with low back pain AND related lower extremity pain; 2) details of subjective examination/self-report items; 3) cohort, prospective/longitudinal studies, and randomized controlled trials; 4) use of statistical reporting; 5) an acceptable reference standard. Quality was evaluated using the Quality Assessment of Diagnostic Accuracy Studies 2. A synthesis of history items that met the threshold for at least a small shift in the likelihood of the condition, with a +LR ≥ 2 or -LR ≤ 0.5, was reported. Conditions commonly reported in the literature were lumbar spinal stenosis, lumbosacral nerve root compression/radiculopathy, disc herniation and neurophysiological low back pain ± leg pain. Eleven studies met the inclusion criteria. This is the first systematic review of diagnostic accuracy studies that examined only history-taking items for their ability to identify LBRLP conditions. Clustering key items may provide the more precise clinical picture necessary to detect and treat a patient's presentation. History questions formed within the interview and their contributing value for decision-making remain understudied. Better designs are needed to determine a more accurate diagnostic power to identify conditions with LBRLP. Copyright © 2014 Elsevier Ltd. All rights reserved.
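The ±LR thresholds used in the synthesis follow directly from sensitivity and specificity. A minimal sketch of those standard definitions (the 80%/70% figures below are hypothetical, not taken from the review):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test characteristics."""
    pos_lr = sensitivity / (1 - specificity)   # +LR: shift toward the diagnosis
    neg_lr = (1 - sensitivity) / specificity   # -LR: shift away from it
    return pos_lr, neg_lr

# e.g. a hypothetical history item with 80% sensitivity and 70% specificity
pos_lr, neg_lr = likelihood_ratios(0.80, 0.70)
# pos_lr ≈ 2.67 (meets the review's +LR >= 2 threshold)
# neg_lr ≈ 0.29 (meets the review's -LR <= 0.5 threshold)
```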

  6. Uniaxial Negative Thermal Expansion, Negative Linear Compressibility, and Negative Poisson's Ratio Induced by Specific Topology in Zn[Au(CN)2]2.

    Wang, Lei; Luo, Hubin; Deng, Shenghua; Sun, Ying; Wang, Cong

    2017-12-18

    The well-known idea of "structure determines properties" can be understood profoundly in the case of hexagonal zinc dicyanometalate. Using density functional theory (DFT) calculations, we show the uniaxial negative thermal expansion (NTE) and negative linear compressibility (NLC) properties of Zn[Au(CN) 2 ] 2 . The temperature dependence of phonon frequencies within the quasi-harmonic approximation (QHA) is investigated. Abnormal phonon hardening (frequency increase on heating) is detected in the ranges of 0-225, 320-345, and 410-430 cm -1 , which can be indicative of the unusual physical properties of Zn[Au(CN) 2 ] 2 . Given the significance of the low-energy phonon frequencies in Zn[Au(CN) 2 ] 2 , in this work the vibrational mode of the lowest-frequency optical phonon at the zone center is analyzed. The specific topology of a springlike framework that behaves as a compressed spring on heating and an extended spring under hydrostatic pressure is identified, and it leads to the coexistence of uniaxial-NTE and NLC behaviors in Zn[Au(CN) 2 ] 2 . The distinct phonon group velocities along the a axis and c axis produce different responses of the two axes under temperature and hydrostatic pressure fields. Through an analysis and visualization of the spatial dependence of the elastic tensors, it is found that a negative Poisson's ratio (NPR) is present in all projection planes due to the specific topology.

  7. A generic approach for expanding homolog-targeted residue screening of sulfonamides using a fast matrix separation and class-specific fragmentation-dependent acquisition with a hybrid quadrupole-linear ion trap mass spectrometer

    Huang Chunlin; Guo Bin; Wang Xiaoying; Li Jie; Zhu Weitao; Chen Bo; Ouyang Shan; Yao Shouzhuo

    2012-01-01

    Highlights: ► Generic homolog-targeted screening approach for multi-residual sulfonamide analogs. ► Single-tube extraction/partitioning-multifunction adsorption cleanup for direct injection. ► Class-specific fragmentation for expanding coverage of N4-acetyl and N-OH metabolites. ► PreS–IDA–EPI in LC–QqLIT for simultaneous screening and confirmation of real samples. - Abstract: A generic and efficient homolog-targeted approach was used to expand screening and detection of a target class of sulfonamides and structural analogs, based on a fast single-tube extraction/partitioning-multifunction adsorption cleanup (SEP/MAC) for class-specific fragmentation-dependent acquisition with a liquid chromatography–hybrid triple-quadrupole linear ion trap mass spectrometer (LC–QqLIT). By combining the two-stage process conducted in a single tube as a one-pot protocol, the straightforward SEP/MAC procedure was optimized to offer clean extracts with reasonable recovery (71–109%, with RSDs < 20%) and decreased matrix interferences (−9 to 19%) for multiresidual sulfonamide extraction from different tissue samples. The novel use of neutral loss scanning of 66 Da (NLS) or precursor ion scanning of m/z 108 (PreS) in positive ion mode achieved more comprehensive coverage of protonated molecular ions of a wide array of sulfonamides, including N4-acetyl and hydroxylamine metabolites plus their possible dimers. Moreover, the PreS-triggered automatically enhanced product ion spectral acquisition enabled simultaneous screening, profiling and confirmation of an unlimited number of analytes belonging to the sulfonamide class within a single analysis. The validation and application results of the generic SEP/MAC-based LC–QqLIT strategy consistently demonstrated favorable performance, with acceptable accuracy (67–116%), precision (RSDs < 25%) and sensitivity (LOQs ≤ 7.5 ng g−1) meeting the acceptance criteria for all the sulfonamide–tissue combinations. Thus, the integration of the matrix-independent SEP/MAC procedure and the multiparameter matching algorithm with the unit-resolution LC–QqLIT instrument can serve as a valuable semi-targeted discovery strategy for rapid screening and reliable quantitative/confirmatory analysis of real samples.

  8. Improving shuffler assay accuracy

    Rinard, P.M.

    1995-01-01

    Drums of uranium waste should be disposed of in an economical and environmentally sound manner. The most accurate possible assays of the uranium masses in the drums are required for proper disposal. The accuracies of assays from a shuffler are affected by the type of matrix material in the drums. Non-hydrogenous matrices have little effect on neutron transport, and accuracies are very good. If self-shielding is known to be a minor problem, good accuracies are also obtained with hydrogenous matrices when a polyethylene sleeve is placed around the drums. But for those cases where self-shielding may be a problem, matrices are hydrogenous, and uranium distributions are non-uniform throughout the drums, the accuracies are degraded. They can be greatly improved by determining the distributions of the uranium and then applying correction factors based on those distributions. This paper describes a technique for determining uranium distributions by using the neutron count rates in detector banks around the waste drum and solving an overdetermined set of linear equations. Other approaches to determining the distributions were studied and are described briefly. Implementation of this correction is anticipated on an existing shuffler next year.
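The distribution-recovery step described above amounts to solving an overdetermined linear system in a least-squares sense. A hedged sketch with an invented 4-bank, 3-region response matrix (a real shuffler's response matrix would come from calibration measurements or transport calculations):

```python
import numpy as np

# Hypothetical illustration: rows = detector banks, columns = coarse source
# regions inside the drum; A[i, j] is the count rate in bank i per unit
# uranium mass in region j. All numbers here are invented.
A = np.array([[0.9, 0.4, 0.1],
              [0.5, 0.8, 0.5],
              [0.1, 0.4, 0.9],
              [0.4, 0.6, 0.4]])      # 4 banks, 3 regions -> overdetermined
true_masses = np.array([10.0, 2.0, 5.0])
rates = A @ true_masses              # noise-free synthetic count rates

# least-squares solution of the overdetermined system A x = rates
est, *_ = np.linalg.lstsq(A, rates, rcond=None)
```

With noisy count rates the same call returns the least-squares estimate; a non-negativity constraint (e.g. a non-negative least-squares solver) would be a natural refinement, since masses cannot be negative.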

  9. linear-quadratic-linear model

    Tanwiwat Jaikuna

    2017-02-01

    Purpose: To develop an in-house software program able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose between CERR and the treatment planning system (TPS) were 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc in Oncentra, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was 0.00% and not statistically significant (p-values 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively). Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
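The EQD2 conversion used for plan comparison can be illustrated with the standard LQ relation EQD2 = BED / (1 + 2/(α/β)). Note that the Isobio software uses the LQL model, which additionally applies a linear correction in the high-dose-per-fraction regime; the sketch below covers only the plain LQ case:

```python
def eqd2(n, d, ab):
    """Equivalent dose in 2 Gy fractions from the standard LQ model.

    n: number of fractions, d: dose per fraction (Gy), ab: alpha/beta (Gy).
    Caveat: the LQL model used by Isobio modifies the high-dose behavior
    above a transition dose; this sketch shows only the plain LQ conversion.
    """
    bed = n * d * (1 + d / ab)       # biologically effective dose
    return bed / (1 + 2.0 / ab)      # divide out the 2 Gy/fraction factor

# sanity check: a 25 x 2 Gy schedule is its own EQD2 reference (50 Gy)
# eqd2(25, 2, 10) -> 50.0
```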

  10. Linear algebra

    Shilov, Georgi E

    1977-01-01

    Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

  11. Enhancement of Human Antigen-Specific Memory T-Cell Responses by Interleukin-7 May Improve Accuracy in Diagnosing Tuberculosis

    Feske, Marsha; Nudelman, Rodolfo J.; Medina, Miguel; Lew, Justin; Singh, Manisha; Couturier, Jacob; Graviss, Edward A.; Lewis, Dorothy E.

    2008-01-01

    Children and immunocompromised adults are at an increased risk of tuberculosis (TB), but diagnosis is more challenging. Recently developed gamma interferon (IFN-γ) release assays provide increased sensitivity and specificity for diagnosis of latent TB, but their use is not FDA approved in immunocompromised or pediatric populations. Both populations have reduced numbers of T cells, which are major producers of IFN-γ. Interleukin 7 (IL-7), a survival cytokine, stabilizes IFN-γ message and increases protein production. IL-7 was added to antigen-stimulated lymphocytes to improve IFN-γ responses as measured by enzyme-linked immunosorbent assay (ELISA) and enzyme-linked immunospot (ELISPOT) assay. Antigens used were tetanus toxoid (n = 10), p24 (from human immunodeficiency virus [HIV], n = 9), and TB peptides (n = 15). Keyhole limpet hemocyanin was used as a negative control, and phytohemagglutinin was the positive control. IL-7 improved antigen-specific responses to all antigens tested including tetanus toxoid, HIV type 1 p24, and TB peptides (ESAT-6 and CFP-10) with up to a 14-fold increase (mean = 3.8), as measured by ELISA. Increased IFN-γ responses from controls, HIV-positive patients, and TB patients were statistically significant, with P values of <0.05, 0.01, and 0.05, respectively. ELISPOT assay results confirmed ELISA findings (P values of <0.01, 0.02, and 0.03, respectively), with a strong correlation between the two tests (R2 = 0.82 to 0.99). Based on average background levels, IL-7 increased detection of IFN-γ by 39% compared to the level with antigen alone. Increased production of IFN-γ induced by IL-7 improves sensitivity of ELISA and ELISPOT assays for all antigens tested. Further enhancement of IFN-γ-based assays might improve TB diagnosis in those populations at highest risk for TB. PMID:18753334

  12. The Use of Linear Programming for Prediction.

    Schnittjer, Carl J.

    The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
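One common way to use linear programming for prediction, consistent with the study's theme though not necessarily the author's exact model, is least-absolute-deviation (LAD) regression cast as an LP. A sketch using SciPy's `linprog` (the data are invented):

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least-absolute-deviation regression cast as a linear program:
    minimize sum(u + v) subject to X @ b + u - v = y, u, v >= 0,
    where u and v are the positive and negative residual parts."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])   # invented data on the line y = 1 + 2x
b = lad_fit(X, y)                     # -> approximately [1.0, 2.0]
```

Unlike least squares, the LAD objective is robust to outliers, which is one reason LP-based prediction was explored as an alternative to curvilinear multiple regression.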

  13. A generic approach for expanding homolog-targeted residue screening of sulfonamides using a fast matrix separation and class-specific fragmentation-dependent acquisition with a hybrid quadrupole-linear ion trap mass spectrometer

    Huang Chunlin [Department of Biochemistry and Molecular Biology, School of Pharmacy and Life Science, University of South China, Hengyang 421001 (China); Guo Bin, E-mail: binnguo@126.com [Key Laboratory of Chemical Biology and Traditional Chinese Medicine Research (Ministry of Education of China), Hunan Normal University, Changsha 410081 (China); Wang Xiaoying [Key Laboratory of Chemical Biology and Traditional Chinese Medicine Research (Ministry of Education of China), Hunan Normal University, Changsha 410081 (China); Li Jie [Department of Biochemistry and Molecular Biology, School of Pharmacy and Life Science, University of South China, Hengyang 421001 (China); Zhu Weitao; Chen Bo [Key Laboratory of Chemical Biology and Traditional Chinese Medicine Research (Ministry of Education of China), Hunan Normal University, Changsha 410081 (China); Ouyang Shan [Food Inspection and Quarantine Center, Shenzhen Entry-Exit Inspection and Quarantine Bureau of the People's Republic of China, Shenzhen 518067 (China); Yao Shouzhuo [Key Laboratory of Chemical Biology and Traditional Chinese Medicine Research (Ministry of Education of China), Hunan Normal University, Changsha 410081 (China)

    2012-08-06

    Highlights: ► Generic homolog-targeted screening approach for multi-residual sulfonamide analogs. ► Single-tube extraction/partitioning-multifunction adsorption cleanup for direct injection. ► Class-specific fragmentation for expanding coverage of N4-acetyl and N-OH metabolites. ► PreS-IDA-EPI in LC-QqLIT for simultaneous screening and confirmation of real samples. - Abstract: A generic and efficient homolog-targeted approach was used to expand screening and detection of a target class of sulfonamides and structural analogs, based on a fast single-tube extraction/partitioning-multifunction adsorption cleanup (SEP/MAC) for class-specific fragmentation-dependent acquisition with a liquid chromatography-hybrid triple-quadrupole linear ion trap mass spectrometer (LC-QqLIT). By combining the two-stage process conducted in a single tube as a one-pot protocol, the straightforward SEP/MAC procedure was optimized to offer clean extracts with reasonable recovery (71-109% with RSDs < 20%) and decreased matrix interferences (-9 to 19%) for multiresidual sulfonamide extraction from different tissue samples. The novel use of neutral loss scanning of 66 Da (NLS) or precursor ion scanning of m/z 108 (PreS) in positive ion mode was found to achieve more comprehensive coverage of protonated molecular ions of a wide array of sulfonamides including N4-acetyl and hydroxylamine metabolites plus their possible dimers. Moreover, the PreS-triggered automatically enhanced product ion spectral acquisition enabled simultaneous screening, profiling and confirmation of an unlimited number of analytes belonging to the sulfonamide class within a single analysis. The validation and application results of the generic SEP/MAC-based LC-QqLIT strategy consistently demonstrated favorable performance, with acceptable accuracy (67-116%), precision (RSDs < 25%) and sensitivity (LOQs ≤ 7.5 ng g−1) meeting the acceptance criteria for all the sulfonamide-tissue combinations. Thus, the integration of the matrix-independent SEP/MAC procedure and the multiparameter matching algorithm with the unit-resolution LC-QqLIT instrument can serve as a valuable semi-targeted discovery strategy for rapid screening and reliable quantitative/confirmatory analysis of real samples.

  14. Measurement of serum isoform [-2]proPSA derivatives shows superior accuracy to magnetic resonance imaging in the diagnosis of prostate cancer in patients with a total prostate-specific antigen level of 2-10 ng/ml.

    Furuya, Kazuhiro; Kawahara, Takashi; Narahara, Masaki; Tokita, Takashi; Fukui, Sachi; Imano, Masashi; Mitome, Taku; Ito, Yusuke; Izumi, Koji; Osaka, Kimito; Yokomizo, Yumiko; Hayashi, Narihiko; Hasumi, Hisashi; Nawata, Shintaro; Kawano, Tsuyoshi; Yao, Masahiro; Uemura, Hiroji

    2017-08-01

    More accurate diagnostic procedures for prostate cancer are needed to avoid unnecessary biopsy due to the low specificity of prostate-specific antigen (PSA). Recent studies showed that the percentage of serum isoform [-2]proPSA (p2PSA) to free PSA (%p2PSA), the Prostate Health Index (PHI) and magnetic resonance imaging (MRI) were more accurate than PSA. The aim of this study was to test the accuracy of %p2PSA, PHI and MRI in discriminating patients with and without prostate cancer. The subjects were 50 consecutive men with a PSA level of 2.0-10.0 ng/ml, who underwent prostate biopsy from October 2012 to July 2014. These patients underwent multiparametric MRI before biopsy, and their serum samples were measured for PSA, free PSA and p2PSA. The sensitivity, specificity and accuracy of PHI, %p2PSA and MRI were compared with PSA in the diagnosis of biopsy-confirmed prostate cancer. In a univariate analysis, %p2PSA [area under the curve (AUC): 0.811] and PHI (AUC 0.795) were more accurate than MRI (AUC: 0.583) and PSA (AUC: 0.554) for prostate cancer detection. At 60% sensitivity, the specificity of PHI (76.5%) was higher than that of MRI (52.9%). For significant cancer detection, %p2PSA (AUC: 0.745), PHI (AUC: 0.791) and MRI (AUC: 0.739) were marginally more accurate than PSA (AUC: 0.696). At 85% sensitivity, the specificity of MRI (62.1%) was higher than that of PHI (34.5%). PHI and %p2PSA can be used for screening the general population and MRI can be used for detection of significant cancer in patients suspected, from screening tests, of having prostate cancer.
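The derived markers in this study combine measured serum values with simple formulas: %p2PSA relates p2PSA (pg/mL) to free PSA (ng/mL), and PHI multiplies the p2PSA/fPSA ratio by √tPSA. A sketch of these standard definitions (the unit conventions follow common assay practice but should be verified against the assay documentation; the example values are invented):

```python
import math

def phi_index(p2psa_pg_ml, free_psa_ng_ml, total_psa_ng_ml):
    """Prostate Health Index: (p2PSA / fPSA) * sqrt(tPSA), with p2PSA in
    pg/mL and the PSA values in ng/mL (common assay convention)."""
    return (p2psa_pg_ml / free_psa_ng_ml) * math.sqrt(total_psa_ng_ml)

def percent_p2psa(p2psa_pg_ml, free_psa_ng_ml):
    """%p2PSA = p2PSA / (fPSA * 1000) * 100, converting pg/mL to ng/mL."""
    return p2psa_pg_ml / (free_psa_ng_ml * 1000.0) * 100.0

# invented example: p2PSA = 15 pg/mL, fPSA = 0.5 ng/mL, tPSA = 4 ng/mL
# phi_index(15.0, 0.5, 4.0) -> 60.0, percent_p2psa(15.0, 0.5) -> 3.0
```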

  15. Specific effect of the linear charge density of the acid polysaccharide on thermal aggregation/ disaggregation processes in complex carrageenan/lysozyme systems

    Antonov, Y.; Zhuravleva, I.; Cardinaels, R.M.; Moldenaers, P.

    2017-01-01

    We study thermal aggregation and disaggregation processes in complex carrageenan/lysozyme systems with different linear charge densities of the sulphated polysaccharide. To this end, we determine the temperature dependency of the turbidity and the intensity size distribution functions in complex

  16. Linear gate

    Suwono.

    1978-01-01

    A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)

  17. Linear Accelerators

    Vretenar, M

    2014-01-01

    The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.

  18. Diagnostic Accuracy of 64Copper Prostate-specific Membrane Antigen Positron Emission Tomography/Computed Tomography for Primary Lymph Node Staging of Intermediate- to High-risk Prostate Cancer: Our Preliminary Experience.

    Cantiello, Francesco; Gangemi, Vincenzo; Cascini, Giuseppe Lucio; Calabria, Ferdinando; Moschini, Marco; Ferro, Matteo; Musi, Gennaro; Butticè, Salvatore; Salonia, Andrea; Briganti, Alberto; Damiano, Rocco

    2017-08-01

    To assess the diagnostic accuracy of 64Cu prostate-specific membrane antigen (64Cu-PSMA) positron emission tomography/computed tomography (PET/CT) in the primary lymph node (LN) staging of a selected cohort of intermediate- to high-risk prostate cancer (PCa) patients. An observational prospective study was performed in 23 patients with intermediate- to high-risk PCa, who underwent 64Cu-PSMA PET/CT for local and lymph nodal staging before laparoscopic radical prostatectomy with an extended pelvic LN dissection. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for LN status of 64Cu-PSMA PET/CT were calculated using the final pathological findings as reference. Furthermore, we evaluated the correlation of intraprostatic tumor extent and grading with 64Cu-PSMA intraprostatic distribution. Pathological analysis of the 413 LNs harvested from our study cohort identified a total of 22 LN metastases (5%) in 8 of the 23 PCa patients (35%). Imaging-based LN staging in a per-patient analysis showed that 64Cu-PSMA PET/CT was positive in 7 of the 8 LN-positive patients, with a sensitivity of 87.5%, specificity of 100%, PPV of 100%, and NPV of 93.7%, taking the maximum standardized uptake value (SUVmax) at 4 hours as our reference. The receiver operating characteristic curve was characterized by an area under the curve of 0.938. A significant positive association was observed between SUVmax at 4 hours and Gleason score, index, and cumulative tumor volume. In our cohort of intermediate- to high-risk PCa patients, we showed the high diagnostic accuracy of 64Cu-PSMA PET/CT for primary LN staging before radical prostatectomy. Copyright © 2017 Elsevier Inc. All rights reserved.
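The per-patient figures reported above follow from a standard 2×2 confusion matrix. A small sketch reproducing them (counts inferred from the abstract: 8 LN-positive and 15 LN-negative patients, 7 true positives, no false positives):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# the study's per-patient results reproduce the reported figures:
m = diagnostic_metrics(tp=7, fp=0, tn=15, fn=1)
# sensitivity 0.875, specificity 1.0, ppv 1.0, npv 0.9375 (93.7%)
```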

  19. Linearization Method and Linear Complexity

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method, and we compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, on the other hand, needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, linear complexity is generally given only as an estimate. Because the linearization method calculates from the algorithm of the PRNG, it can determine the lower bound of linear complexity.
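For contrast with the linearization method, the Berlekamp-Massey algorithm mentioned above computes linear complexity from an observed output sequence in O(N²). A self-contained GF(2) sketch:

```python
def berlekamp_massey(bits):
    """Linear complexity of a binary sequence (Berlekamp-Massey over GF(2)).

    Runs in O(N^2) on the N observed output bits; unlike the linearization
    method discussed above, it works from the output sequence rather than
    the generator's algebraic description."""
    n = len(bits)
    c = [1] + [0] * n     # current connection polynomial
    b = [1] + [0] * n     # connection polynomial before the last length change
    L, m = 0, -1          # current complexity; index of last length change
    for i in range(n):
        # discrepancy: does c correctly predict bit i?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# output of the period-7 LFSR with polynomial x^3 + x + 1 has complexity 3:
# berlekamp_massey([1,0,0,1,0,1,1, 1,0,0,1,0,1,1]) -> 3
```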

  20. Linear algebra

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...

  1. Linear sign in cystic brain lesions ≥5 mm: A suggestive feature of perivascular space.

    Sung, Jinkyeong; Jang, Jinhee; Choi, Hyun Seok; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-Soo

    2017-11-01

    To determine the prevalence of a linear sign within enlarged perivascular spaces (EPVS) and chronic lacunar infarctions (CLI) ≥ 5 mm on T2-weighted imaging (T2WI) and time-of-flight (TOF) magnetic resonance angiography (MRA), and to evaluate the diagnostic value of the linear signs for EPVS over CLI. This study included 101 patients with cystic lesions ≥ 5 mm on brain MRI including TOF MRA. After classification of the cystic lesions into EPVS or CLI, two readers assessed linear signs on T2WI and TOF MRA. We compared the prevalence and the diagnostic performance of the linear signs. Among 46 EPVS and 51 CLI, 84 lesions (86.6%) were in the basal ganglia. The prevalence of the T2 and TOF linear signs was significantly higher in the EPVS than in the CLI. Both linear signs showed high sensitivity (> 80%). The TOF linear sign showed significantly higher specificity (100%) and accuracy (92.8% and 90.7%) than the T2 linear sign. In conclusion, linear signs were more frequently observed in EPVS than in CLI. They showed high sensitivity in differentiating the two, especially in the basal ganglia, and the TOF sign showed higher specificity and accuracy than the T2 sign. • Linear sign is a suggestive feature of EPVS. • Time-of-flight magnetic resonance angiography can reveal the lenticulostriate artery within perivascular spaces. • Linear sign helps differentiation of EPVS and CLI, especially in basal ganglia.

  2. Accuracies Of Optical Processors For Adaptive Optics

    Downie, John D.; Goodman, Joseph W.

    1992-01-01

    Paper presents analysis of accuracies, and requirements concerning accuracies, of optical linear-algebra processors (OLAP's) in adaptive-optics imaging systems. OLAP's are much faster than digital electronic processors and eliminate some residual distortion. Question is whether errors introduced by analog processing of OLAP overcome advantage of greater speed. Paper addresses issue by presenting estimate of accuracy required in general OLAP that yields smaller average residual aberration of wave front than digital electronic processor computing at given speed.

  3. Linear programming

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  4. Linear algebra

    Liesen, Jörg

    2015-01-01

    This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...

  5. Linear algebra

    Berberian, Sterling K

    2014-01-01

    Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.

  6. Linear Models

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  7. LINEAR ACCELERATOR

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  8. ACCURACY IN FOOTBALL: SCORING A GOAL AS THE ULTIMATE OBJECTIVE OF FOOTBALL GAME

    Dejan Milenković

    2013-12-01

    The study included 60 young football players aged 10 and 11 years, with the aim of examining the influence of motor abilities on specific accuracy in football. The following tests were used to assess motor abilities: coordination: jumping over the horizontal rope, envelope test, figure "8" with bending; flexibility: forward bend-standing upper-body rotation-touch, the splits, side-stepping with a baton; balance: standing on one leg along the balance bench, standing on one leg with eyes closed, flamingo test. Tests for specific accuracy in football included: elevational accuracy by foot - vertical target, elevational accuracy by foot - horizontal target, linear accuracy by foot - vertical target, hitting the ball with the foot against the wall after it rebounds from the surface, elevational accuracy by head - vertical target and elevational accuracy by head - horizontal target. Results obtained by processing the data with canonical correlation and regression analysis indicated a clear influence of the motor abilities of young football players on the majority of the specific accuracy tests.

  9. Research of Face Recognition with Fisher Linear Discriminant

    Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.

    2018-01-01

    Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a face recognition system that achieves high accuracy is difficult: human faces have diverse expressions and attribute changes such as eyeglasses, mustache, beard and others. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes, increasing the distance between classes relative to the scatter within classes so as to produce better classification.

  10. Patient-specific 3D models created by 3D imaging system or bi-planar imaging coupled with Moiré-Fringe projections: a comparative study of accuracy and reliability on spinal curvatures and vertebral rotation data.

    Hocquelet, Arnaud; Cornelis, François; Jirot, Anna; Castaings, Laurent; de Sèze, Mathieu; Hauger, Olivier

    2016-10-01

    The aim of this study is to compare the accuracy and reliability of spinal curvatures and vertebral rotation data based on patient-specific 3D models created by a 3D imaging system or by bi-planar imaging coupled with Moiré-Fringe projections. Sixty-two consecutive patients from a single institution were prospectively included. For each patient, frontal and sagittal calibrated low-dose bi-planar X-rays were performed and coupled simultaneously with an optical Moiré back surface-based technology. The 3D reconstructions of spine and pelvis were performed independently by one radiologist and one technician in radiology using two different semi-automatic methods: a 3D radio-imaging system (method 1) or bi-planar imaging coupled with Moiré projections (method 2). Both methods were compared using Bland-Altman analysis, and reliability was assessed using the intraclass correlation coefficient (ICC). ICC showed good to very good agreement. Between the two techniques, the maximum 95% prediction limit was -4.9° for the measurements of spinal coronal curves and less than 5° for other parameters. Inter-rater reliability was excellent for all parameters across both methods, except for axial rotation with method 2, for which ICC was fair. Method 1 had faster reconstruction times than method 2 for both readers (13.4 vs. 20.7 min and 10.6 vs. 13.9 min; p = 0.0001). While a lower accuracy was observed for the evaluation of the axial rotation, bi-planar imaging coupled with Moiré-Fringe projections may be an accurate and reliable tool to perform 3D reconstructions of the spine and pelvis.
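    The Bland-Altman comparison used above reduces to a bias (mean paired difference) and 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch in Python (illustrative only, not the authors' code; `bland_altman` is a hypothetical name):

```python
import statistics

def bland_altman(method1, method2):
    """Bias (mean difference) and 95% limits of agreement
    between two paired measurement series."""
    diffs = [a - b for a, b in zip(method1, method2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)            # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    Two methods agree well when the limits of agreement are narrow relative to the clinically acceptable difference (here, roughly 5° for the angular parameters).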

  11. Linear regression

    Olive, David J

    2017-01-01

    This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...

  12. Linear Colliders

    Alcaraz, J.

    2001-01-01

    After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs

  13. Linear algebra

    Edwards, Harold M

    1995-01-01

    In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.

  14. SU-F-T-284: The Effect of Linear Accelerator Output Variation On the Quality of Patient Specific Rapid Arc Verification Plans

    Sandhu, G; Cao, F; Szpala, S; Kohli, K [Fraser Valley Cancer Centre, BC Cancer Agency, Surrey, British Columbia (Canada)

    2016-06-15

    Purpose: The aim of the current study is to investigate the effect of machine output variation on the delivery of RapidArc verification plans. Methods: Three verification plans were generated using the Eclipse™ treatment planning system (V11.031) with a plan normalization value of 100.0%. These plans were delivered on the linear accelerators using the ArcCHECK device, with a machine output of 1.000 cGy/MU at the calibration point. These planned and delivered dose distributions were used as reference plans. Additional plans were created in Eclipse™ with normalization values ranging from 92.80% to 102% to mimic machine output ranging from 1.072 cGy/MU to 0.980 cGy/MU at the calibration point. These plans were compared against the reference plans using gamma indices (3%, 3 mm) and (2%, 2 mm). Calculated gammas were studied for their dependence on machine output. Plans were considered passed if 90% of the points satisfied the defined gamma criteria. Results: The gamma index (3%, 3 mm) was insensitive to output fluctuation within the output tolerance level (2% of calibration) and showed failures when the machine output deviation reached 3% or more. Gamma (2%, 2 mm) was found to be more sensitive to the output variation than gamma (3%, 3 mm), and showed failures when the output deviation reached 1.7% or more. The variation of the gamma indices with output variability also showed dependence upon the plan parameters (e.g. MLC movement and gantry rotation). The percentage of points passing the gamma criteria decreased non-linearly with output variation beyond the output tolerance level. Conclusion: Data from the limited plans and output conditions showed that gamma (2%, 2 mm) is more sensitive to output fluctuations than gamma (3%, 3 mm). Work in progress, including detailed data from a large number of plans and a wide range of output conditions, may be able to establish the quantitative dependence of the gammas on machine output, and hence the effect on the quality of delivered RapidArc plans.
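    The (3%, 3 mm) and (2%, 2 mm) criteria above are gamma-index criteria, which combine a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D sketch in Python (an illustration of the general metric only, not the clinical ArcCHECK analysis; `gamma_1d` and `pass_rate` are hypothetical names):

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_tol, dist_tol):
    """Gamma index at each reference point: the minimum over evaluated
    points of sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2)."""
    return [min(math.hypot((ed - rd) / dose_tol, (ep - rp) / dist_tol)
                for ep, ed in zip(eval_pos, eval_dose))
            for rp, rd in zip(ref_pos, ref_dose)]

def pass_rate(gammas):
    """Percentage of points with gamma <= 1 (here a plan passes at >= 90%)."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

    For a (3%, 3 mm) criterion, dose_tol is typically 3% of the maximum (or prescription) dose and dist_tol is 3 mm.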

  15. Linear sign in cystic brain lesions ≥5 mm. A suggestive feature of perivascular space

    Sung, Jinkyeong [The Catholic University of Korea, Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Seoul (Korea, Republic of); The Catholic University of Korea, Department of Radiology, St. Vincent's Hospital, College of Medicine, Seoul (Korea, Republic of); Jang, Jinhee; Choi, Hyun Seok; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo [The Catholic University of Korea, Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Seoul (Korea, Republic of)

    2017-11-15

    To determine the prevalence of a linear sign within enlarged perivascular space (EPVS) and chronic lacunar infarction (CLI) ≥ 5 mm on T2-weighted imaging (T2WI) and time-of-flight (TOF) magnetic resonance angiography (MRA), and to evaluate the diagnostic value of the linear signs for EPVS over CLI. This study included 101 patients with cystic lesions ≥ 5 mm on brain MRI including TOF MRA. After classification of cystic lesions into EPVS or CLI, two readers assessed linear signs on T2WI and TOF MRA. We compared the prevalence and the diagnostic performance of the linear signs. Among 46 EPVS and 51 CLI, 84 lesions (86.6%) were in the basal ganglia. The prevalence of T2 and TOF linear signs was significantly higher in the EPVS than in the CLI (P <.001). For the diagnosis of EPVS, T2 and TOF linear signs showed high sensitivity (> 80%). The TOF linear sign showed significantly higher specificity (100%) and accuracy (92.8% and 90.7%) than the T2 linear sign (P <.001). T2 and TOF linear signs were more frequently observed in EPVS than CLI. Both signs showed high sensitivity in differentiating the two, especially in the basal ganglia. The TOF sign showed higher specificity and accuracy than the T2 sign. (orig.)

  16. Linear programming

    Karloff, Howard

    1991-01-01

    To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...

  17. Specific features of design and electric circuit of the pulse modulator of the amplification channel of second-section linear accelerator for a meson factory

    Dyukov, L.V.; Fedorov, A.F.; Terekhov, V.F.

    1977-01-01

    The circuit and design of a modulator with a pulse power of 15 MW and an average power of 300 kW, intended for supplying a klystron generator, are described. The modulator is designed for prolonged operation. The capacitance storage-former is made in the form of three parallel sections of artificial shaping lines in order to lower the specific loads on the elements of the shaping line and thyristors. The thyristor spark gaps are made in the form of a unified water-cooled assembly composed of 12 series TD-320 thyristors. An assessment of the modulator reliability has shown that the average time-to-fault is 800 hr. Modulator tests confirm that the choice of the circuit solution is correct. The total modulator efficiency is approximately 0.9

  18. Cross-Language Transfer of Word Reading Accuracy and Word Reading Fluency in Spanish-English and Chinese-English Bilinguals: Script-Universal and Script-Specific Processes

    Pasquarella, Adrian; Chen, Xi; Gottardo, Alexandra; Geva, Esther

    2015-01-01

    This study examined cross-language transfer of word reading accuracy and word reading fluency in Spanish-English and Chinese-English bilinguals. Participants included 51 Spanish-English and 64 Chinese-English bilinguals. Both groups of children completed parallel measures of phonological awareness, rapid automatized naming, word reading accuracy,…

  19. Linear ubiquitination in immunity.

    Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning

    2015-07-01

    Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.

  20. An Automated Micro-Total Immunoassay System for Measuring Cancer-Associated α2,3-linked Sialyl N-Glycan-Carrying Prostate-Specific Antigen May Improve the Accuracy of Prostate Cancer Diagnosis

    Tomokazu Ishikawa

    2017-02-01

    Full Text Available The low specificity of the prostate-specific antigen (PSA) for early detection of prostate cancer (PCa) is a major issue worldwide. The aim of this study was to examine whether the serum PCa-associated α2,3-linked sialyl N-glycan-carrying PSA (S2,3PSA) ratio measured by an automated micro-total immunoassay system (μTAS system) can be applied as a diagnostic marker of PCa. The μTAS system can utilize affinity-based separation involving noncovalent interaction between the immunocomplex of S2,3PSA and Maackia amurensis lectin to simultaneously determine concentrations of free PSA and S2,3PSA. To validate quantitative performance, both recombinant S2,3PSA and benign-associated α2,6-linked sialyl N-glycan-carrying PSA (S2,6PSA) purified from culture supernatant of PSA cDNA transiently-transfected Chinese hamster ovary (CHO-K1) cells were used as standard proteins. Between 2007 and 2016, fifty patients with biopsy-proven PCa were pair-matched for age and PSA levels with the same number of benign prostatic hyperplasia (BPH) patients to validate the diagnostic performance of the serum S2,3PSA ratio. Recombinant S2,3PSA- and S2,6PSA-spiked samples were clearly discriminated by the μTAS system. The limit of detection of S2,3PSA was 0.05 ng/mL and the coefficient of variation was less than 3.1%. The area under the curve (AUC) for detection of PCa for the S2,3PSA ratio (%S2,3PSA) with cutoff value 43.85% (AUC 0.8340) was much superior to total PSA (AUC 0.5062) using the validation sample set. Although the present results are preliminary, the newly developed μTAS platform for measuring %S2,3PSA can achieve the required assay performance specifications for use in the practical and clinical setting and may improve the accuracy of PCa diagnosis. Additional validation studies are warranted.

  1. Reduction of Linear Programming to Linear Approximation

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.

  2. Linear programming using Matlab

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus. The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...

  3. Accuracy assessment of an industrial actuator

    Dalla Costa, Giuseppe; Genta, Gianfranco; Barbato, Giulio

    2016-01-01

    A commercial linear actuator equipped with a 0.1 μm resolution encoder was used as a contact displacement sensor with adjustable force. The accuracy of the position reading of the actuator was evaluated from experimental data taking into account the uncertainty contributions. The tests consisted ...

  4. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII formats and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. The linear-linear law is not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval-halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional-error thinning algorithm to minimize the size of each cross section table
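    The interval-halving step described for the non-linear laws can be sketched as follows (a simplified illustration in Python, not the actual LINEAR source; `linearize` is a hypothetical name):

```python
def linearize(x1, y1, x2, y2, interp, tol=0.001):
    """Subdivide [x1, x2] until linear-linear interpolation reproduces
    the original interpolation law to within a fractional tolerance."""
    xm = 0.5 * (x1 + x2)
    y_true = interp(xm)            # value under the original (non-linear) law
    y_lin = 0.5 * (y1 + y2)        # linear-linear estimate at the midpoint
    if abs(y_lin - y_true) <= tol * abs(y_true):
        return [(x1, y1), (x2, y2)]
    left = linearize(x1, y1, xm, y_true, interp, tol)
    right = linearize(xm, y_true, x2, y2, interp, tol)
    return left + right[1:]        # drop the duplicated midpoint

# Example: the log-log interpolant through (1, 1) and (4, 16) is exactly y = x**2
pts = linearize(1.0, 1.0, 4.0, 16.0, lambda x: x * x)
```

    Here the log-log law between (1, 1) and (4, 16) is subdivided until every segment's midpoint error is within 0.1%; a thinning pass like the one the abstract mentions would then remove points that linear interpolation already reproduces.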

  5. Cadastral Database Positional Accuracy Improvement

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the refining process of the geometry feature in a geospatial dataset to improve its actual position. This actual position relates to the absolute position in a specific coordinate system and the relation to the neighborhood features. With the growth of spatial-based technology, especially Geographical Information System (GIS) and Global Navigation Satellite System (GNSS), the PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset and a higher-accuracy dataset like GNSS observation is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and fit to the new accurate dataset. The main focus of this study is to describe a method of angular-based Least Square Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset known as the National Digital Cadastral Database (NDCDB) is then used as a benchmark to validate the results. It was found that the proposed technique is well suited to positional accuracy improvement of legacy spatial datasets.

  6. Diagnosing Eyewitness Accuracy

    Russ, Andrew

    2015-01-01

    Eyewitnesses frequently mistake innocent people for the perpetrator of an observed crime. Such misidentifications have led to the wrongful convictions of many people. Despite this, no reliable method yet exists to determine eyewitness accuracy. This thesis explored two new experimental methods for this purpose. Chapter 2 investigated whether repetition priming can measure prior exposure to a target and compared this with observers’ explicit eyewitness accuracy. Across three experiments slower...

  7. Linear estimation of coherent structures in wall-bounded turbulence at Re τ = 2000

    Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.

    2018-04-01

    The estimation problem for a fully-developed turbulent channel flow at Re τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
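    The estimator described is a Kalman filter built on a linear flow model. Its simplest scalar predict/update cycle can be sketched as follows (a generic textbook filter, not the Navier-Stokes-based design of the paper; `kalman_step` is a hypothetical name):

```python
def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of a scalar Kalman filter.
    x, P : prior state estimate and its variance
    z    : new measurement;  A : state transition;  C : measurement map
    Q, R : process and measurement noise variances"""
    # Predict forward one step with the linear model
    x_pred = A * x
    P_pred = A * P * A + Q
    # Correct the prediction with the measurement
    K = P_pred * C / (C * P_pred * C + R)   # Kalman gain
    x_new = x_pred + K * (z - C * x_pred)
    P_new = (1.0 - K * C) * P_pred
    return x_new, P_new
```

    Repeated application of this cycle with noisy measurements drives the estimate toward the true state while the error variance P shrinks; the channel-flow estimator applies the same idea to a high-dimensional state built from the linearized flow equations.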

  8. A Methodology to Assess the Accuracy with which Remote Data Characterize a Specific Surface, as a Function of Full Width at Half Maximum (FWHM): Application to Three Italian Coastal Waters

    Rosa Maria Cavalli

    2014-01-01

    Full Text Available This methodology assesses the accuracy with which remote data characterize a surface, as a function of Full Width at Half Maximum (FWHM). The purpose is to identify the best remote data that improve the characterization of a surface, evaluating the number of bands in the spectral range. The first step creates an accurate dataset of remote simulated data, using in situ hyperspectral reflectances. The second step evaluates the capability of remote simulated data to characterize this surface. The spectral similarity measurements, which are obtained using classifiers, provide this capability. The third step examines the precision of this capability. The assumption is that in situ hyperspectral reflectances are considered the “real” reflectances. They are resized with the same spectral range as the remote data. The spectral similarity measurements obtained from “real” resized reflectances are considered “real” measurements. Therefore, the quantity and magnitude of “errors” (i.e., differences between spectral similarity measurements obtained from “real” resized reflectances and from remote data) provide the accuracy as a function of FWHM. This methodology was applied to evaluate the accuracy with which CHRIS-mode1, CHRIS-mode2, Landsat5-TM, MIVIS and PRISMA data characterize three coastal waters. Their mean values of uncertainty are 1.59%, 3.79%, 7.75%, 3.15% and 1.18%, respectively.
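    As one concrete example of a spectral similarity measure of the kind used in the second step (the abstract does not fix a particular classifier), the spectral angle between two reflectance spectra is a common choice; a minimal sketch:

```python
import math

def spectral_angle(r1, r2):
    """Spectral angle (radians) between two reflectance spectra:
    0 means identical spectral shape, regardless of overall brightness."""
    dot = sum(a * b for a, b in zip(r1, r2))
    n1 = math.sqrt(sum(a * a for a in r1))
    n2 = math.sqrt(sum(b * b for b in r2))
    # Clamp against floating-point drift before acos
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```

    Because the angle ignores a common scale factor, it compares spectral shape rather than magnitude, which is why it is popular for classifying resampled reflectance spectra.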

  9. Accuracy in Optical Information Processing

    Timucin, Dogan Aslan

    Low computational accuracy is an important obstacle for optical processors which blocks their way to becoming a practical reality and a serious challenger to classical computing paradigms. This research presents a comprehensive solution approach to the problem of accuracy enhancement in discrete analog optical information processing systems. Statistical analysis of a generic three-plane optical processor is carried out first, taking into account the effects of diffraction, interchannel crosstalk, and background radiation. Noise sources included in the analysis are photon, excitation, and emission fluctuations in the source array, transmission and polarization fluctuations in the modulator, and photoelectron, gain, dark, shot, and thermal noise in the detector array. Means and mutual coherence and probability density functions are derived for both optical and electrical output signals. Next, statistical models for a number of popular optoelectronic devices are studied. Specific devices considered here are light-emitting and laser diode sources, an ideal noiseless modulator and a Gaussian random-amplitude-transmittance modulator, p-i-n and avalanche photodiode detectors followed by electronic postprocessing, and ideal free-space geometrical-optics propagation and single-lens imaging systems. Output signal statistics are determined for various interesting device combinations by inserting these models into the general formalism. Finally, based on these special-case output statistics, results on accuracy limitations and enhancement in optical processors are presented. Here, starting with the formulation of the accuracy enhancement problem as (1) an optimal detection problem and (2) a parameter estimation problem, the potential accuracy improvements achievable via the classical multiple-hypothesis-testing and maximum likelihood and Bayesian parameter estimation methods are demonstrated. Merits of using proper normalizing transforms which can potentially stabilize

  10. Electrodynamic linear motor

    Munehiro, H

    1980-05-29

    When the carriage of a printer is driven via a rotary motor, there are problems with the limited accuracy of the carriage position due to rotation or contraction and ageing of the cable. In order to solve this problem, a direct drive system was proposed in which the printer carriage is driven by a linear motor. If the motor circuit of such a motor is to be kept compact, then the magnetic flux density in the air gap must be reduced or the motor travel must be reduced. It is the purpose of this invention to create an electrodynamic linear motor which, on the one hand, is compact and light and, on the other hand, has a relatively high constant force over a large travel. The invention is characterised by the fact that magnetic fields of alternating polarity are generated at equal intervals, and that the coil arrangement has two adjacent coils whose size corresponds to half the length of each magnetic pole. A logic circuit is provided to select one of the two coils and to determine the direction of the current depending on the signals of a magnetic field sensor on the coil arrangement.

  11. Linear Algebra and Smarandache Linear Algebra

    Vasantha, Kandasamy

    2003-01-01

    The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...

  12. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  13. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    Dongxu Ren

    2016-04-01

    Full Text Available A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m.

  14. Overlay accuracy fundamentals

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget, however, systematic contributions to accuracy must be considered as well. We compare the accuracy of imaging overlay with that of DBO (1st-order diffraction-based overlay) and demonstrate that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  15. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that the vine copula mixed model can improve on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  16. Linearly Refined Session Types

    Pedro Baltazar

    2012-11-01

    Session types capture precise protocol structure in concurrent programming, but do not specify properties of the exchanged values beyond their basic type. Refinement types are a form of dependent types that can address this limitation, combining types with logical formulae that may refer to program values and can constrain types using arbitrary predicates. We present a pi calculus with assume and assert operations, typed using a session discipline that incorporates refinement formulae written in a fragment of Multiplicative Linear Logic. Our original combination of session and refinement types, together with the well established benefits of linearity, allows very fine-grained specifications of communication protocols in which refinement formulae are treated as logical resources rather than persistent truths.

  17. Linear Temporal Logic-based Mission Planning

    Anil Kumar; Rahul Kala

    2016-01-01

    In this paper, we describe Linear Temporal Logic-based reactive motion planning. We address the problem of motion planning for mobile robots, wherein the goal specification of planning is given in complex environments. The desired task specification may consist of complex behaviors of the robot, including specifications for environment constraints, need of task optimality, obstacle avoidance, rescue specifications, surveillance specifications, safety specifications, etc. We use Linear Temporal Logic to express such complex task specifications.

  18. Application of Nearly Linear Solvers to Electric Power System Computation

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
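A minimal illustration of the kind of iterative solver involved: plain conjugate gradient on a small symmetric, diagonally dominant system. This is a generic sketch, not the chain/spanning-tree preconditioned method developed in the project:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain conjugate gradient for a symmetric, diagonally dominant system."""
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual b - A x, with x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small Laplacian-like (symmetric, diagonally dominant) test system; solution is (1, 1, 1)
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
print([round(v, 6) for v in x])
```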

  19. Multiple sequence alignment accuracy and phylogenetic inference.

    Ogden, T Heath; Rosenberg, Michael S

    2006-04-01

    Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment rather than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.
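One common way to quantify alignment accuracy in such simulation studies is the fraction of truly homologous residue pairs recovered by the inferred alignment. A minimal sketch with hypothetical toy alignments (not the paper's data or its exact metric):

```python
def aligned_pairs(seq1_aln, seq2_aln):
    """Set of (i, j) residue index pairs matched by a pairwise alignment (gap = '-')."""
    pairs, i, j = set(), 0, 0
    for a, b in zip(seq1_aln, seq2_aln):
        if a != '-' and b != '-':
            pairs.add((i, j))
        if a != '-':
            i += 1
        if b != '-':
            j += 1
    return pairs

def alignment_accuracy(true_aln, test_aln):
    """Fraction of truly homologous residue pairs recovered by the test alignment."""
    truth = aligned_pairs(*true_aln)
    test = aligned_pairs(*test_aln)
    return len(truth & test) / len(truth)

true_aln = ("ACGT-A", "AC-TTA")   # hypothetical 'true' simulated alignment
test_aln = ("ACGTA-", "AC-TTA")   # hypothetical inferred alignment
print(alignment_accuracy(true_aln, test_aln))
```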

  20. Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process

    Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas

    2018-05-01

    This note reports the development of a new method for linearizing the Mössbauer spectra recorded with a sine drive velocity signal. Mössbauer spectra linearity is a critical parameter to determine Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and consecutive linearization increases the linearity of spectra in a wider frequency range of a drive signal, as generally harmonic movement is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line width parameters in comparison with those measured using a traditional triangle velocity signal.
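The linearization step can be sketched as resampling counts recorded on a sine velocity axis onto an equidistant velocity axis by interpolation. The drive amplitude, channel count, and synthetic absorption line below are assumptions for illustration:

```python
import math

V_MAX = 10.0   # drive amplitude, mm/s (assumed)
N = 256        # channels over a quarter period (assumed)

# Channel k is recorded at time t_k; its velocity follows the sine drive signal.
channel_velocity = [V_MAX * math.sin(0.5 * math.pi * k / (N - 1)) for k in range(N)]
# Synthetic "spectrum": a single absorption dip at +3 mm/s (hypothetical)
counts = [1000 - 300 * math.exp(-((v - 3.0) / 0.3) ** 2) for v in channel_velocity]

def interp(x, xs, ys):
    """Linear interpolation at x, assuming xs sorted ascending."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Linearization: resample counts onto an equidistant velocity axis
linear_axis = [V_MAX * k / (N - 1) for k in range(N)]
linear_counts = [interp(v, channel_velocity, counts) for v in linear_axis]
line_pos = linear_axis[min(range(N), key=lambda k: linear_counts[k])]
print(round(line_pos, 1))  # recovered line position on the linear axis
```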

  1. Parametric Linear Dynamic Logic

    Peter Faymonville

    2014-08-01

    Full Text Available We introduce Parametric Linear Dynamic Logic (PLDL, which extends Linear Dynamic Logic (LDL by temporal operators equipped with parameters that bound their scope. LDL was proposed as an extension of Linear Temporal Logic (LTL that is able to express all ω-regular specifications while still maintaining many of LTL's desirable properties like an intuitive syntax and a translation into non-deterministic Büchi automata of exponential size. But LDL lacks capabilities to express timing constraints. By adding parameterized operators to LDL, we obtain a logic that is able to express all ω-regular properties and that subsumes parameterized extensions of LTL like Parametric LTL and PROMPT-LTL. Our main technical contribution is a translation of PLDL formulas into non-deterministic Büchi word automata of exponential size via alternating automata. This yields a PSPACE model checking algorithm and a realizability algorithm with doubly-exponential running time. Furthermore, we give tight upper and lower bounds on optimal parameter values for both problems. These results show that PLDL model checking and realizability are not harder than LTL model checking and realizability.

  2. Can Automatic Classification Help to Increase Accuracy in Data Collection?

    Frederique Lang

    2016-09-01

    Purpose: The authors aim at testing the performance of a set of machine learning algorithms that could improve the process of data cleaning when building datasets. Design/methodology/approach: The paper is centered on cleaning datasets gathered from publishers and online resources by the use of specific keywords. In this case, we analyzed data from the Web of Science. The accuracy of various forms of automatic classification was tested here in comparison with manual coding in order to determine their usefulness for data collection and cleaning. We assessed the performance of seven supervised classification algorithms (Support Vector Machine (SVM), Scaled Linear Discriminant Analysis, Lasso and elastic-net regularized generalized linear models, Maximum Entropy, Regression Tree, Boosting, and Random Forest) and analyzed two properties: accuracy and recall. We assessed not only each algorithm individually, but also their combinations through a voting scheme. We also tested the performance of these algorithms with different sizes of training data. When assessing the performance of different combinations, we used an indicator of coverage to account for the agreement and disagreement on classification between algorithms. Findings: We found that the performance of the algorithms used varies with the size of the sample for training. However, for the classification exercise in this paper the best performing algorithms were SVM and Boosting. The combination of these two algorithms achieved a high agreement on coverage and was highly accurate. This combination performs well with a small training dataset (10%), which may reduce the manual work needed for classification tasks. Research limitations: The dataset gathered has significantly more records related to the topic of interest compared to unrelated topics. This may affect the performance of some algorithms, especially in their identification of unrelated papers.
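The voting scheme with a coverage indicator can be sketched as follows; the classifier outputs and labels are hypothetical, and the unanimity rule used here is one possible choice, not necessarily the authors' exact scheme:

```python
def vote(predictions):
    """Combine several classifiers: return the label if all agree, else None
    (records without agreement are left for manual coding)."""
    first = predictions[0]
    return first if all(p == first for p in predictions) else None

# Hypothetical binary labels (1 = relevant record) from three classifiers on six records
clf_a = [1, 1, 0, 0, 1, 0]
clf_b = [1, 1, 0, 1, 1, 0]
clf_c = [1, 0, 0, 1, 1, 0]
truth = [1, 1, 0, 1, 1, 0]

combined = [vote(p) for p in zip(clf_a, clf_b, clf_c)]
covered = [(c, t) for c, t in zip(combined, truth) if c is not None]
coverage = len(covered) / len(truth)          # share of records the ensemble agrees on
accuracy = sum(c == t for c, t in covered) / len(covered)
print(coverage, accuracy)
```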

  3. Geoid undulation accuracy

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and to a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wave-length) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  4. Linearly constrained minimax optimization

    Madsen, Kaj; Schjær-Jacobsen, Hans

    1978-01-01

    We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...

  5. Stability of Linear Equations--Algebraic Approach

    Cherif, Chokri; Goldstein, Avraham; Prado, Lucio M. G.

    2012-01-01

    This article could be of interest to teachers of applied mathematics as well as to people who are interested in applications of linear algebra. We give a comprehensive study of linear systems from an application point of view. Specifically, we give an overview of linear systems and problems that can occur with the computed solution when the…

  6. Bayes linear statistics, theory & methods

    Goldstein, Michael

    2007-01-01

    Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers:The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...
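The core Bayes linear adjustment can be written down compactly: the adjusted expectation is E_D(X) = E(X) + Cov(X,D)/Var(D) * (d - E(D)), with resolved variance Var(X) - Cov(X,D)^2/Var(D). A one-dimensional numeric sketch (all prior judgements hypothetical):

```python
def bayes_linear_adjust(e_x, e_d, var_d, cov_xd, d_obs):
    """Bayes linear adjusted expectation E_D(X) = E(X) + Cov(X,D)/Var(D) * (d - E(D))."""
    return e_x + (cov_xd / var_d) * (d_obs - e_d)

# Hypothetical prior judgements about quantity X and observable D
e_x, e_d = 50.0, 50.0
var_x, var_d, cov_xd = 25.0, 100.0, 40.0

adjusted = bayes_linear_adjust(e_x, e_d, var_d, cov_xd, d_obs=60.0)
adjusted_var = var_x - cov_xd ** 2 / var_d   # variance remaining after adjustment
print(adjusted, adjusted_var)
```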

  7. Accuracy and reliability of coronal and sagittal spinal curvature data based on patient-specific three-dimensional models created by the EOS 2D/3D imaging system.

    Somoskeöy, Szabolcs; Tunyogi-Csapó, Miklós; Bogyó, Csaba; Illés, Tamás

    2012-11-01

    Three-dimensional (3D) deformations of the spine are predominantly characterized by two-dimensional (2D) angulation measurements in coronal and sagittal planes, using anteroposterior and lateral X-ray images. For coronal curves, a method originally described by Cobb and for sagittal curves a modified Cobb method are most widely used in practice, and these methods have been shown to exhibit good-to-excellent reliability and reproducibility, carried out either manually or by computer-based tools. Recently, an ultralow radiation dose-integrated radioimaging solution was introduced with special software for realistic 3D visualization and parametric characterization of the spinal column. Comparison of accuracy, correlation of measurement values, and intraobserver and interrater reliability of methods by conventional manual 2D and sterEOS 3D measurements in a routine clinical setting. Retrospective nonrandomized study of diagnostic X-ray images created as part of a routine clinical protocol of eligible patients examined at our clinic during a 30-month period between July 2007 and December 2009. In total, 201 individuals (170 females, 31 males; mean age, 19.88 years), including 10 healthy athletes with a normal spine and patients with adolescent idiopathic scoliosis (175 cases), adult degenerative scoliosis (11 cases), and Scheuermann hyperkyphosis (5 cases). The overall range of coronal curves was between 2.4° and 117.5°. Analysis of accuracy and reliability of measurements was carried out on a group of all patients and in subgroups based on coronal plane deviation: 0° to 10° (Group 1, n=36), 10° to 25° (Group 2, n=25), 25° to 50° (Group 3, n=69), 50° to 75° (Group 4, n=49), and more than 75° (Group 5, n=22). Coronal and sagittal curvature measurements were determined by three experienced examiners, using either traditional 2D methods or automatic measurements based on sterEOS 3D reconstructions. Manual measurements were performed three times, and sterEOS 3D
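The Cobb method itself reduces to the angle between two endplate direction lines. A minimal 2-D sketch with hypothetical digitised endplate vectors (not the study's measurement software):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Coronal Cobb angle (degrees) between two endplate direction vectors (2-D)."""
    def angle(v):
        return math.atan2(v[1], v[0])
    a = math.degrees(abs(angle(upper_endplate) - angle(lower_endplate)))
    return min(a, 180.0 - a)   # lines, not arrows: take the acute-side angle

# Hypothetical endplate directions digitised from an AP radiograph
upper = (1.0, 0.3)    # superior endplate of the upper end vertebra
lower = (1.0, -0.4)   # inferior endplate of the lower end vertebra
print(round(cobb_angle(upper, lower), 1))
```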

  8. A linear maglev guide for machine tools

    Tieste, K D [Inst. of Mechanics, Univ. of Hannover (Germany)]; Popp, K [Inst. of Mechanics, Univ. of Hannover (Germany)]

    1996-12-31

    Machine tools require linear guides with high slide velocity and very high position accuracy. The three tasks of a linear guide - supporting, guiding and driving - shall be realised by means of active magnetic bearings (AMB). The resulting linear magnetically levitated (maglev) guide has to accomplish the following characteristics: high stiffness, good damping and low noise, as well as low heat production. Initial research on a one degree-of-freedom (DOF) support magnet unit aimed at the development of components and efficient control strategies for the linear maglev guide. The current research is directed at realising a five-DOF linear maglev guide for machine tools without a drive, to answer the question whether the maglev principle can be used for a linear axis in a machine tool. (orig.)

  9. Foundations of linear and generalized linear models

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis. Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models.

  10. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r  =  0.71-0.88, RMSE: 1.11-1.61 METs; p  >  0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN correlations: r  =  0.89, RMSE: 1.07-1.08 METs; linear model correlations: r  =  0.88, RMSE: 1.10-1.11 METs; p  >  0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN correlation: r  =  0.88, RMSE: 1.12 METs; linear model correlations: r  =  0.86, RMSE: 1.18-1.19 METs; p  <  0.05), and the ANN models had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN correlations: r  =  0.82-0.84, RMSE: 1.26-1.32 METs; linear model correlations: r  =  0.71-0.73, RMSE: 1.55-1.61 METs; p  <  0.05). For wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers.
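The accuracy metrics used above (correlation and RMSE between predicted and criterion METs) can be computed directly; the MET values below are hypothetical:

```python
import math

def rmse(pred, meas):
    """Root mean square error between predictions and criterion measurements."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(pred, meas)) / len(pred))

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

measured    = [2.0, 3.5, 5.0, 6.5, 8.0]   # METs from metabolic analyzer (hypothetical)
linear_pred = [2.5, 3.0, 5.5, 6.0, 7.0]   # linear-model predictions (hypothetical)
print(round(rmse(linear_pred, measured), 3), round(pearson_r(linear_pred, measured), 3))
```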

  11. Modelling Loudspeaker Non-Linearities

    Agerkvist, Finn T.

    2007-01-01

    This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of original data, but also for their ability to work reasonably outside that range. It is demonstrated that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the Gaussian are better suited for modelling the Bl-factor and compliance. For the inductance, the sigmoid function is shown to give very good results.

  12. IGS polar motion measurement accuracy

    Jim Ray

    2017-11-01

    We elaborate an error budget for the long-term accuracy of IGS (International Global Navigation Satellite System Service) polar motion estimates, concluding that it is probably about 25–30 μas (1-sigma) overall, although it is not possible to quantify possible contributions (mainly annual) that might transfer directly from aliases of subdaily rotational tide errors. The leading sources are biases arising from the need to align daily, observed terrestrial frames, within which the pole coordinates are expressed and which are continuously deforming, to the secular, linear international reference frame. Such biases are largest over spans longer than about a year. Thanks to the very large number of IGS tracking stations, the formal covariance errors are much smaller, around 5 to 10 μas. Large networks also permit the systematic frame-related errors to be more effectively minimized but not eliminated. A number of periodic errors probably also influence polar motion results, mainly at annual, GPS (Global Positioning System) draconitic, and fortnightly periods, but their impact on the overall error budget is unlikely to be significant except possibly for annual tidal aliases. Nevertheless, caution should be exercised in interpreting geophysical excitations near any of the suspect periods.

  13. Getting it Right: Study protocol to determine the diagnostic accuracy of a culturally-specific measure to screen for depression in Aboriginal and/or Torres Strait Islander people

    Hackett, Maree L.; Hackett, Maree L.; Farnbach, Sara

    2016-01-01

    Introduction: A freely available, culturally valid depression screening tool is required for use by primary care services across Australia to screen for depression in Aboriginal and/or Torres Strait Islander populations. This is the protocol for a study aiming to determine the validity, sensitivity and specificity of the culturally adapted 9-item Patient Health Questionnaire (aPHQ-9). Methods and analysis: Cross-sectional validation study. A total of 500 people who self-identify as Aboriginal and/or Torres Strait Islander and are ≥18 years of age will be recruited. © Published by the BMJ Publishing Group Limited.

  14. Sonography of the chest using linear-array versus sector transducers: Correlation with auscultation, chest radiography, and computed tomography.

    Tasci, Ozlem; Hatipoglu, Osman Nuri; Cagli, Bekir; Ermis, Veli

    2016-07-08

    The primary purpose of our study was to compare the efficacies of two sonographic (US) probes, a high-frequency linear-array probe and a lower-frequency phased-array sector probe in the diagnosis of basic thoracic pathologies. The secondary purpose was to compare the diagnostic performance of thoracic US with auscultation and chest radiography (CXR) using thoracic CT as a gold standard. In total, 55 consecutive patients scheduled for thoracic CT were enrolled in this prospective study. Four pathologic entities were evaluated: pneumothorax, pleural effusion, consolidation, and interstitial syndrome. A portable US scanner was used with a 5-10-MHz linear-array probe and a 1-5-MHz phased-array sector probe. The first probe used was chosen randomly. US, CXR, and auscultation results were compared with the CT results. The linear-array probe had the highest performance in the identification of pneumothorax (83% sensitivity, 100% specificity, and 99% diagnostic accuracy) and pleural effusion (100% sensitivity, 97% specificity, and 98% diagnostic accuracy); the sector probe had the highest performance in the identification of consolidation (89% sensitivity, 100% specificity, and 95% diagnostic accuracy) and interstitial syndrome (94% sensitivity, 93% specificity, and 94% diagnostic accuracy). For all pathologies, the performance of US was superior to those of CXR and auscultation. The linear probe is superior to the sector probe for identifying pleural pathologies, whereas the sector probe is superior to the linear probe for identifying parenchymal pathologies. Thoracic US has better diagnostic performance than CXR and auscultation for the diagnosis of common pathologic conditions of the chest. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:383-389, 2016.
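The diagnostic performance figures quoted above derive from a 2x2 confusion table against the CT reference. A minimal sketch (the counts are hypothetical, chosen only to produce plausible values):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and overall accuracy from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for pleural effusion with the linear-array probe (CT as reference)
sens, spec, acc = diagnostic_metrics(tp=20, fp=1, tn=33, fn=0)
print(round(sens, 2), round(spec, 2), round(acc, 2))
```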

  15. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  16. CERN balances linear collider studies

    ILC Newsline

    2011-01-01

    The forces behind the two most mature proposals for a next-generation collider, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) study, have been steadily coming together, with scientists from both communities sharing ideas and information across the technology divide. In support of cooperation between the two, CERN in Switzerland, where most CLIC research takes place, recently converted the project-specific position of CLIC Study Leader to the concept-based Linear Collider Study Leader.   The scientist who now holds this position, Steinar Stapnes, is charged with making the linear collider a viable option for CERN's future, one that could include either CLIC or the ILC. The transition to greater ILC involvement must be gradual, he said, and the redefinition of his post is a good start. Though not very much involved with superconducting radiofrequency (SRF) technology, where ILC researchers have made significant advances, CERN participates in many aspects...

  17. Order-constrained linear optimization.

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
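Kendall's τ, the ordinal fit criterion that OCLO maximizes, counts concordant versus discordant pairs. A minimal sketch (O(n²), no tie correction; the data are hypothetical):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau: (concordant - discordant) pairs / total pairs (no tie correction)."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

predicted = [1.2, 2.5, 2.9, 4.1, 3.0]   # hypothetical model predictions
observed  = [1.0, 2.0, 3.5, 4.0, 5.0]   # hypothetical outcomes
print(round(kendall_tau(predicted, observed), 1))
```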

  18. Accuracy of the WHO’s body mass index cut-off points to measure gender- and age-specific obesity in middle-aged adults living in the city of Rio de Janeiro, Brazil

    Wollner Materko

    2017-09-01

    Introduction. Obesity is defined by the World Health Organization (WHO) as a disease characterized by the excessive accumulation of body fat. Obesity is considered a public health problem, leading to serious social, psychological and physical problems. However, the appropriate cut-off point of body mass index (BMI) based on body fat percentage (BF%) for classifying an individual as obese in middle-aged adults living in Rio de Janeiro remains unclear. Materials and methods. This was a prospective cross-sectional study comprising 856 adults (413 men and 443 women) living in Rio de Janeiro, Brazil, ranging from 30-59 years of age. The data were collected over a two-year period (2010-2011), and all participants underwent anthropometric evaluation. The gold standard was the percentage of body fat estimated by bioelectrical impedance analysis. The optimal sensitivity and specificity were attained by adjusting BMI cut-off values to predict obesity based on the WHO criteria: BF% >25% in men and >35% in women, according to receiver operating characteristic (ROC) curve analysis adjusted for age and for the whole group. Results. The BMI cut-offs for predicting BF% were 29.9 kg/m2 in men and 24.9 kg/m2 in women. Conclusions. The BMI that corresponded to a BF% previously defining obesity was similar to that of other Western populations for men but not for women. Furthermore, gender- and age-specific cut-off values are recommended in this population.
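The ROC-based cut-off selection can be sketched with Youden's J statistic over candidate BMI cut-offs; the tiny dataset below is hypothetical, and Youden's J is one common optimality criterion, not necessarily the authors' exact rule:

```python
def best_cutoff(values, is_obese):
    """Pick the cut-off maximising Youden's J = sensitivity + specificity - 1."""
    best_j, best_c = -1.0, None
    for c in sorted(set(values)):
        tp = sum(v >= c and o for v, o in zip(values, is_obese))
        fn = sum(v < c and o for v, o in zip(values, is_obese))
        tn = sum(v < c and not o for v, o in zip(values, is_obese))
        fp = sum(v >= c and not o for v, o in zip(values, is_obese))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Hypothetical BMI values with obesity status from the BF% reference criterion
bmi   = [22.0, 24.5, 25.5, 27.0, 28.5, 30.0, 31.5, 33.0]
obese = [False, False, False, True, True, True, True, True]
cutoff, j = best_cutoff(bmi, obese)
print(cutoff, round(j, 2))
```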

  19. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum compared to those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate quantitatively, rapidly, cheaply and easily the BMP. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable compared to those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performances of the NIR prediction models. The accuracy of the BMP method should be improved to enhance further the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Assessment of the Roche Linear Array HPV Genotyping Test within the VALGENT framework.

    Xu, Lan; Oštrbenk, Anja; Poljak, Mario; Arbyn, Marc

    2018-01-01

    Cervical cancer screening programs are switching from cytology-based screening to high-risk (hr) HPV testing. Only clinically validated tests should be used in clinical practice. To assess the clinical performance of the Roche Linear Array HPV genotyping test (Linear Array) within the VALGENT-3 framework. The VALGENT framework is designed for comprehensive comparison and clinical validation of HPV tests that have limited to extended genotyping capacity. The Linear Array enables type-specific detection of 37 HPV types. For the purpose of this study, Linear Array results were designated as positive only if one of the 13 hrHPV types also included in the Hybrid Capture 2 (HC2) test was detected. The VALGENT-3 framework comprised 1600 samples obtained from Slovenian women (1300 sequential cases from routine cervical cancer screening, enriched with 300 cytologically abnormal samples). Sensitivity for cervical intraepithelial neoplasia of grade 2 or worse (CIN2+) (n=127) and specificity were computed for Linear Array and for HC2, and non-inferiority of Linear Array relative to HC2 was checked. In addition, the prevalence of separate hrHPV types in the screening population, as well as the concordance for the presence of HPV16, HPV18 and other hrHPV types between Linear Array and the Abbott RealTime High Risk HPV test (RealTime), were assessed. The clinical sensitivity and specificity for CIN2+ of the Linear Array in the total study population were 97.6% (95% CI, 93.3-99.5%) and 91.7% (95% CI, 90.0-93.2%), respectively. The relative sensitivity of Linear Array vs HC2 was 1.02 (95% CI, 0.98-1.05). The hrHPV prevalence detected by Linear Array in the screening population was 10.5% (95% CI, 8.9-12.3%), with HPV16 and HPV18 detected in 2.3% and 0.9% of the samples, respectively. Excellent agreement for the presence or absence of HPV16, HPV18 and other hrHPV types between Linear Array and RealTime was observed. Linear Array showed similar sensitivity and higher specificity for detecting CIN2+ compared with HC2.

  1. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher-fidelity data for calibration of material models used in glass-to-metal (GTM) seal analyses. Specifically, a thermo-multi-linear elastic-plastic (thermo-MLEP) material model has been defined for SS304L, and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  2. Parameterized Linear Longitudinal Airship Model

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  3. Computed tomography angiogram. Accuracy in renal surgery

    Rabah, Danny M.; Al-Hathal, Naif; Al-Fuhaid, Turki; Raza, Sayed; Al-Yami, Fahad; Al-Taweel, Waleed; Alomar, Mohamed; Al-Nagshabandi, Nizar

    2009-01-01

    The objective of this study was to determine the sensitivity and specificity of computed tomography angiogram (CTA) in detecting number and location of renal arteries and veins as well as crossing vessels causing uretero-pelvic junction obstruction (UPJO), and to determine if this can be used in decision-making algorithms for treatment of UPJO. A prospective study was carried out in patients undergoing open, laparoscopic and robotic renal surgery from April 2005 until October 2006. All patients were imaged using CTA with 1.25 collimation of arterial and venous phases. Each multi-detector CTA was then read by one radiologist and his results were compared prospectively with the actual intra-operative findings. Overall, 118 patients were included. CTA had 93% sensitivity, 77% specificity and 90% overall accuracy for detecting a single renal artery, and 76% sensitivity, 92% specificity and 90% overall accuracy for detecting two or more renal arteries (Pearson χ2 = 0.001). There was 95% sensitivity, 84% specificity and 85% overall accuracy for detecting the number of renal veins. CTA had 100% overall accuracy in detecting early dividing renal artery (defined as less than 1.5 cm branching from origin), and 83.3% sensitivity, specificity and overall accuracy in detecting crossing vessels at UPJ. The percentage of surgeons stating CTA to be helpful as pre-operative diagnostic tool was 85%. Computed tomography angiogram is simple, quick and can provide an accurate pre-operative renal vascular anatomy in terms of number and location of renal vessels, early dividing renal arteries and crossing vessels at UPJ. (author)
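    The figures above reduce to standard 2×2-table formulas; a minimal sketch with illustrative counts (not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and overall accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Illustrative counts only:
sens, spec, acc = diagnostic_metrics(tp=93, fp=23, fn=7, tn=77)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # -> 0.93 0.77 0.85
```

    Note that overall accuracy mixes the two error types, which is why the abstract reports sensitivity and specificity separately.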

  4. Laser measuring scanners and their accuracy limits

    Jablonski, Ryszard

    1993-09-01

    Scanning methods have gained importance in recent years due to their short measuring time and wide range of application in flexible manufacturing processes. This paper sums up the author's scientific work in the field of measuring scanners. The research conducted allowed the elaboration of optimal configurations of measuring systems based on the scanning method. An important part of the work was the analysis of a measuring scanner as a transducer of angular rotation into linear displacement, which resulted in much higher accuracy and, finally, in a measuring scanner that eliminates the use of an additional reference standard. The work concludes with an attempt to determine the attainable accuracy limit of scanning measurements of both length and angle. Using a high-stability deflector and a corrected scanning lens, the angle can be determined over a range of 30° (or 2 mm) to sub-arcsecond (sub-micrometre) accuracy at a measuring rate of 1000 Hz, or over a range of 60° (4 mm) at a measurement frequency of 6 Hz.

  5. Evaluating measurement accuracy a practical approach

    Rabinovich, Semyon G

    2013-01-01

    The goal of Evaluating Measurement Accuracy: A Practical Approach is to present methods for estimating the accuracy of measurements performed in industry, trade, and scientific research. From developing the theory of indirect measurements to proposing new methods of reduction, transformation, and enumeration, this work encompasses the full range of measurement data processing. It includes many examples that illustrate the application of general theory to typical problems encountered in measurement practice. As a result, the book serves as an inclusive reference work for data processing of all types of measurements: single and multiple, combined and simultaneous, direct (both linear and nonlinear), and indirect (both dependent and independent). It is a working tool for experimental scientists and engineers of all disciplines who work with instrumentation. It is also a good resource for natural science and engineering students and for technicians performing measurements in industry. A key feature of the book is...

  6. Accuracy Assessment of Different Digital Surface Models

    Ugur Alganci

    2018-03-01

    Full Text Available Digital elevation models (DEMs), which can occur in the form of digital surface models (DSMs) or digital terrain models (DTMs), are widely used as important geospatial information sources for various remote sensing applications, including the precise orthorectification of high-resolution satellite images, 3D spatial analyses, multi-criteria decision support systems, and deformation monitoring. The accuracy of DEMs has direct impacts on specific calculations and process chains; therefore, it is important to select the most appropriate DEM by considering the aim, accuracy requirement, and scale of each study. In this research, DSMs obtained from a variety of satellite sensors were compared to analyze their accuracy and performance. For this purpose, freely available Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m, Shuttle Radar Topography Mission (SRTM) 30 m, and Advanced Land Observing Satellite (ALOS) 30 m resolution DSM data were obtained. Additionally, 3 m and 1 m resolution DSMs were produced from tri-stereo images from the SPOT 6 and Pleiades high-resolution (PHR) 1A satellites, respectively. Elevation reference data provided by the General Command of Mapping, the national mapping agency of Turkey—produced from 30 cm spatial resolution stereo aerial photos, with a 5 m grid spacing and ±3 m or better overall vertical accuracy at the 90% confidence interval (CI)—were used to perform accuracy assessments. Gross errors and water surfaces were removed from the reference DSM. The relative accuracies of the different DSMs were tested using a different number of checkpoints determined by different methods. In the first method, 25 checkpoints were selected from bare lands to evaluate the accuracies of the DSMs on terrain surfaces. In the second method, 1000 randomly selected checkpoints were used to evaluate the methods' accuracies for the whole study area. In addition to the control point approach, vertical cross

  7. Direct Linear Transformation Method for Three-Dimensional Cinematography

    Shapiro, Robert

    1978-01-01

    The ability of Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)

  8. Meditation experience predicts introspective accuracy.

    Kieran C R Fox

    Full Text Available The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1-15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a 'body-scanning' meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices.

  9. A linear programming manual

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
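    The manual's core task, solving simultaneous linear equations, can be sketched in a few lines of Python (Gaussian elimination with partial pivoting; an illustration, not the manual's own code):

```python
def solve_linear(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Swap in the row with the largest pivot for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# 2x + y = 3 and x + 3y = 5:
print(solve_linear([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # -> approx. [0.8, 1.4]
```

    Simplex-based linear programming builds on exactly this kind of pivoting arithmetic, applied to an augmented tableau with an objective row.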

  10. Linear shaped charge

    Peterson, David; Stofleth, Jerome H.; Saul, Venner W.

    2017-07-11

    Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.

  11. Classifying Linear Canonical Relations

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  12. Optimization of drift gases for accuracy in pressurized drift tubes

    Kirchner, J J; Dinner, A R; Fidkowski, K J; Wyatt, J H

    2001-01-01

    Modern detectors such as ATLAS use pressurized drift tubes to minimize diffusion and achieve high coordinate accuracy. However, the coordinate accuracy depends on the exact knowledge of converting measured times into coordinates. Linear space-time relationships are best for reconstruction, but difficult to achieve in the $E \\propto \\frac{1}{r}$ field. Previous mixtures, which contained methane or other organic quenchers, are disfavored because of ageing problems. From our studies of nitrogen and carbon dioxide, two mixtures with only small deviations from linearity were determined and measured. Scaling laws for different pressures and magnetic fields are also given.

  13. Optimization of drift gases for accuracy in pressurized drift tubes

    Kirchner, J.J.; Becker, U.J.; Dinner, R.B.; Fidkowski, K.J.; Wyatt, J.H.

    2001-01-01

    Modern detectors such as ATLAS use pressurized drift tubes to minimize diffusion and achieve high coordinate accuracy. However, the coordinate accuracy depends on the exact knowledge of converting measured times into coordinates. Linear space-time relationships are best for reconstruction, but difficult to achieve in the E∝1/r field. Previous mixtures, which contained methane or other organic quenchers, are disfavored because of ageing problems. From our studies of nitrogen and carbon dioxide, two mixtures with only small deviations from linearity were determined and measured. Scaling laws for different pressures and magnetic fields are also given

  14. Linear-Algebra Programs

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
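    The flavor of the BLAS level-1 routines can be sketched with pure-Python stand-ins for two of the FORTRAN originals (names follow the BLAS convention; real BLAS operates in place on strided arrays):

```python
def daxpy(alpha, x, y):
    """y := alpha*x + y, the classic BLAS level-1 'axpy' operation."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product of two vectors, the BLAS level-1 'dot' routine."""
    return sum(xi * yi for xi, yi in zip(x, y))

print(daxpy(2.0, [1.0, 2.0], [10.0, 20.0]))  # -> [12.0, 24.0]
print(ddot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # -> 32.0
```

    Higher-level algorithms (matrix factorizations, solvers) are built by composing such primitives, which is what makes a portable, tuned BLAS so valuable.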

  15. Surgical accuracy of three-dimensional virtual planning

    Stokbro, Kasper; Aagaard, Esben; Torkov, Peter

    2016-01-01

    This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy...... and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome by using three linear and three rotational measurements. The influence of maxillary segmentation...

  16. Photon caliper to achieve submillimeter positioning accuracy

    Gallagher, Kyle J.; Wong, Jennifer; Zhang, Junan

    2017-09-01

    The purpose of this study was to demonstrate the feasibility of using a commercial two-dimensional (2D) detector array with an inherent detector spacing of 5 mm to achieve submillimeter accuracy in localizing the radiation isocenter. This was accomplished by delivering the Vernier ‘dose’ caliper to a 2D detector array where the nominal scale was the 2D detector array and the non-nominal Vernier scale was the radiation dose strips produced by the high-definition (HD) multileaf collimators (MLCs) of the linear accelerator. Because the HD MLC sequence was similar to the picket fence test, we called this procedure the Vernier picket fence (VPF) test. We confirmed the accuracy of the VPF test by offsetting the HD MLC bank by known increments and comparing the known offset with the VPF test result. The VPF test was able to determine the known offset within 0.02 mm. We also cross-validated the accuracy of the VPF test in an evaluation of couch hysteresis. This was done by using both the VPF test and the ExacTrac optical tracking system to evaluate the couch position. We showed that the VPF test was in agreement with the ExacTrac optical tracking system within a root-mean-square value of 0.07 mm for both the lateral and longitudinal directions. In conclusion, we demonstrated the VPF test can determine the offset between a 2D detector array and the radiation isocenter with submillimeter accuracy. Until now, no method to locate the radiation isocenter using a 2D detector array has been able to achieve such accuracy.

  17. Linear programming foundations and extensions

    Vanderbei, Robert J

    2001-01-01

    Linear Programming: Foundations and Extensions is an introduction to the field of optimization. The book emphasizes constrained optimization, beginning with a substantial treatment of linear programming, and proceeding to convex analysis, network flows, integer programming, quadratic programming, and convex optimization. The book is carefully written. Specific examples and concrete algorithms precede more abstract topics. Topics are clearly developed with a large number of numerical examples worked out in detail. Moreover, Linear Programming: Foundations and Extensions underscores the purpose of optimization: to solve practical problems on a computer. Accordingly, the book is coordinated with free efficient C programs that implement the major algorithms studied: -The two-phase simplex method; -The primal-dual simplex method; -The path-following interior-point method; -The homogeneous self-dual methods. In addition, there are online JAVA applets that illustrate various pivot rules and variants of the simplex m...

  18. Accuracy Assessment and Analysis for GPT2

    YAO Yibin

    2015-07-01

    Full Text Available GPT (global pressure and temperature) is a global empirical model usually used to provide temperature and pressure for the determination of tropospheric delay. There are some weaknesses to GPT; these have been addressed by a new empirical model named GPT2, which not only improves the accuracy of temperature and pressure, but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters. No accuracy analysis of GPT2 had been made until now. In this paper, high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of the temperature, pressure and water vapor pressure given by GPT2. Testing results show that the mean bias of temperature is -0.59℃ and the average RMS is 3.82℃; the absolute values of the average biases of pressure and water vapor pressure are less than 1 mb; GPT2 pressure has an average RMS of 7 mb, and water vapor pressure no more than 3 mb. Accuracy differs across latitudes, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
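    The bias and RMS statistics reported for GPT2 follow the usual definitions over the model-minus-reference residuals; a minimal sketch (made-up values, not the paper's data):

```python
def bias_and_rms(predicted, observed):
    """Mean bias and root-mean-square error of model values vs. a reference."""
    residuals = [p - o for p, o in zip(predicted, observed)]
    n = len(residuals)
    bias = sum(residuals) / n
    rms = (sum(r * r for r in residuals) / n) ** 0.5
    return bias, rms

# Hypothetical model temperatures vs. reanalysis reference (degrees C):
bias, rms = bias_and_rms([14.0, 21.0, 9.0, 30.0], [15.0, 20.0, 11.0, 30.0])
print(bias, round(rms, 3))  # -> -0.5 1.225
```

    A negative bias, as reported for GPT2 temperature, means the model runs slightly cold on average even when its RMS scatter is several degrees.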

  19. A comparative study of linear measurements on facial skeleton with frontal and lateral cephalogram.

    Gandikota, Chandra Sekhar; Rayapudi, Naveen; Challa, Padma Latha; Juvvadi, Shubhaker Rao; Yudhister, P V; Rao, Gutti Hariprasad

    2012-04-01

    To compare the accuracy of linear measurements on lateral and frontal cephalograms with gold standard skull measurements. Based on specific criteria, including reliable occlusion and condyles fitting in the glenoid fossa, 15 dry human skulls were selected from a larger collection. Lateral and frontal cephalograms were taken of each skull by standardized methods. Steel ball bearings were used to identify the anatomic landmarks. Linear measurements in the midsagittal plane were made on all three records. Intraclass correlation coefficients, Pearson's correlation coefficient and regression constants were calculated to assess the records simultaneously. The frontal cephalometric measurements showed high correlation to the direct skull measurements (Pearson's coefficient 0.943). The linear measurements of the lateral cephalometric record are greater than the corresponding frontal cephalometric images. The overall findings of the present study showed that the frontal cephalometric measurements are closely related to the direct skull measures.

  20. A comparative study of linear measurements on facial skeleton with frontal and lateral cephalogram

    Chandra Sekhar Gandikota

    2012-01-01

    Full Text Available Objective: To compare the accuracy of linear measurements on lateral and frontal cephalograms with gold standard skull measurements. Materials and Methods: Based on specific criteria, including reliable occlusion and condyles fitting in the glenoid fossa, 15 dry human skulls were selected from a larger collection. Lateral and frontal cephalograms were taken of each skull by standardized methods. Steel ball bearings were used to identify the anatomic landmarks. Linear measurements in the midsagittal plane were made on all three records. Intraclass correlation coefficients, Pearson's correlation coefficient and regression constants were calculated to assess the records simultaneously. Results: The frontal cephalometric measurements showed high correlation to the direct skull measurements (Pearson's coefficient 0.943). The linear measurements of the lateral cephalometric record are greater than the corresponding frontal cephalometric images. The overall findings of the present study showed that the frontal cephalometric measurements are closely related to the direct skull measures.
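    Both records quantify agreement with Pearson's correlation coefficient; as a quick illustration (hypothetical numbers, not the study's data), it can be computed in pure Python:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A constant offset does not reduce r, so two measurement methods can
# correlate perfectly while one systematically over-reads:
print(pearson_r([10.0, 20.0, 30.0], [10.1, 20.1, 30.1]))  # close to 1.0
```

    This is why the abstracts pair Pearson's r with intraclass correlation and regression constants, which do penalize systematic offsets.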

  1. Linear signal noise summer accurately determines and controls S/N ratio

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.

  2. 40 CFR 90.314 - Analyzer accuracy and specifications.

    2010-07-01

    ... zero and calibration or span gases over any 10-second period must not exceed two percent of full-scale chart deflection on all ranges used. (3) Zero drift. The analyzer zero-response drift during a one-hour period must be less than two percent of full-scale chart deflection on the lowest range used. The zero...

  3. 40 CFR 91.314 - Analyzer accuracy and specifications.

    2010-07-01

    .... (3) Zero drift. The analyzer zero-response drift during a one-hour period must be less than two percent of full-scale chart deflection on the lowest range used. The zero-response is defined as the mean... calibration or span gas. (2) Noise. The analyzer peak-to-peak response to zero and calibration or span gases...

  4. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
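    The averaging gain behind spatial diversity is the standard statistical result that the mean of N uncorrelated samples has its noise reduced by √N; a quick simulation sketch (unrelated to the actual sensor):

```python
import random

def std_of_mean(n_profiles, n_trials=2000, seed=1):
    """Empirical std. dev. of the mean of n uncorrelated unit-noise samples."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0, 1) for _ in range(n_profiles)) / n_profiles
             for _ in range(n_trials)]
    mu = sum(means) / n_trials
    return (sum((m - mu) ** 2 for m in means) / n_trials) ** 0.5

# Averaging 16 uncorrelated profiles shrinks the noise by about sqrt(16) = 4:
ratio = std_of_mean(1) / std_of_mean(16)
print(round(ratio, 1))  # close to 4
```

    The gain only holds if the profiles are truly uncorrelated, which is exactly what the lateral repositioning of the imaging system is designed to guarantee.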

  5. Non linear system become linear system

    Petre Bucur

    2007-01-01

    Full Text Available The present paper deals with the theory and practice of non-linear systems and their applications. We aim to integrate these systems in order to derive their response, as well as to highlight some of their outstanding features.

  6. Linear motor coil assembly and linear motor

    2009-01-01

    An ironless linear motor (5) comprising a magnet track (53) and a coil assembly (50) operating in cooperation with said magnet track (53) and having a plurality of concentrated multi-turn coils (31 a-f, 41 a-d, 51 a-k), wherein the end windings (31E) of the coils (31 a-f, 41 a-e) are substantially

  7. Accuracy of prehospital transport time estimation.

    Wallace, David J; Kahn, Jeremy M; Angus, Derek C; Martin-Gill, Christian; Callaway, Clifton W; Rea, Thomas D; Chhatwal, Jagpreet; Kurland, Kristen; Seymour, Christopher W

    2014-01-01

    Estimates of prehospital transport times are an important part of emergency care system research and planning; however, the accuracy of these estimates is unknown. The authors examined the accuracy of three estimation methods against observed transport times in a large cohort of prehospital patient transports. This was a validation study using prehospital records in King County, Washington, and southwestern Pennsylvania from 2002 to 2006 and 2005 to 2011, respectively. Transport time estimates were generated using three methods: linear arc distance, Google Maps, and ArcGIS Network Analyst. Estimation error, defined as the absolute difference between observed and estimated transport time, was assessed, as well as the proportion of estimated times that were within specified error thresholds. Based on the primary results, a regression estimate was used that incorporated population density, time of day, and season to assess improved accuracy. Finally, hospital catchment areas were compared using each method with a fixed drive time. The authors analyzed 29,935 prehospital transports to 44 hospitals. The mean (± standard deviation [±SD]) absolute error was 4.8 (±7.3) minutes using linear arc, 3.5 (±5.4) minutes using Google Maps, and 4.4 (±5.7) minutes using ArcGIS. All pairwise comparisons were statistically significant (p < 0.05). Estimates were within 5 minutes of observed transport time for 79% of linear arc estimates, 86.6% of Google Maps estimates, and 81.3% of ArcGIS estimates. The regression-based approach did not substantially improve estimation. There were large differences in hospital catchment areas estimated by each method. Route-based transport time estimates demonstrate moderate accuracy. These methods can be valuable for informing a host of decisions related to the system organization and patient access to emergency medical care; however, they should be employed with sensitivity to their limitations.
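    The two error metrics used above, mean absolute error and the fraction of estimates within a threshold, are straightforward to compute; a sketch with hypothetical transport times:

```python
def error_summary(observed, estimated, threshold=5.0):
    """Mean absolute error and fraction of estimates within `threshold` minutes."""
    errors = [abs(o - e) for o, e in zip(observed, estimated)]
    mae = sum(errors) / len(errors)
    within = sum(1 for err in errors if err <= threshold) / len(errors)
    return mae, within

# Hypothetical observed vs. estimated transport times in minutes:
obs = [10, 12, 8, 20, 15]
est = [12, 11, 10, 26, 15]
print(error_summary(obs, est))  # -> (2.2, 0.8)
```

    Reporting both metrics matters: a low mean error can hide a tail of badly mis-estimated transports, which the threshold fraction exposes.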

  8. Predicting birth weight with conditionally linear transformation models.

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
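    The contrast between a point prediction and a prediction interval can be illustrated with a minimal empirical-quantile sketch (hypothetical birth weights, not a CLTM):

```python
def empirical_interval(samples, coverage=0.8):
    """Central empirical prediction interval from a sample of values."""
    s = sorted(samples)
    lo_idx = round(len(s) * (1 - coverage) / 2)
    hi_idx = len(s) - 1 - lo_idx
    return s[lo_idx], s[hi_idx]

# 20 hypothetical birth weights in grams, 2500 to 4400 in steps of 100:
weights = list(range(2500, 4500, 100))
print(empirical_interval(weights, coverage=0.8))  # -> (2700, 4200)
```

    A CLTM goes further by conditioning the whole distribution on ultrasound covariates, so each fetus gets its own interval rather than one drawn from the marginal sample.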

  9. Modelling a linear PM motor including magnetic saturation

    Polinder, H.; Slootweg, J.G.; Compter, J.C.; Hoeijmakers, M.J.

    2002-01-01

    The use of linear permanent-magnet (PM) actuators is increasing in a wide variety of applications because of their high force density, robustness and accuracy. The paper describes the modelling of a linear PM motor, including magnetic saturation, as applied in, for example, wafer steppers. This is important

  10. Linear collider: a preview

    Wiedemann, H.

    1981-11-01

    Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.

  11. Basic linear algebra

    Blyth, T S

    2002-01-01

    Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...

  13. Petri Nets as Models of Linear Logic

    Engberg, Uffe Henrik; Winskel, Glynn

    1990-01-01

    The chief purpose of this paper is to appraise the feasibility of Girard's linear logic as a specification language for parallel processes. To this end we propose an interpretation of linear logic in Petri nets, with respect to which we investigate the expressive power of the logic...

  14. Balanced truncation for linear switched systems

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2013-01-01

    In this paper, we present a theoretical analysis of the model reduction algorithm for linear switched systems from Shaker and Wisniewski (2011, 2009). This algorithm is reminiscent of the balanced truncation method for linear parameter varying systems (Wood et al., 1996). Specifically...
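
    For a single stable LTI subsystem, the ingredients of balanced truncation (the two Gramians and the Hankel singular values) can be sketched as follows; the switched-system algorithm in the paper generalizes these, and the matrices below are invented for illustration:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # A small stable, controllable and observable system x' = Ax + Bu, y = Cx
    A = np.array([[-1.0, 0.5], [0.0, -2.0]])
    B = np.array([[1.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    # Controllability Gramian P: A P + P A^T + B B^T = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    # Observability Gramian Q: A^T Q + Q A + C^T C = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)

    # Hankel singular values: square roots of the eigenvalues of P Q;
    # truncation discards states with small values.
    hsv = np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
    ```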

  15. Matrices and linear transformations

    Cullen, Charles G

    1990-01-01

    "Comprehensive ... an excellent introduction to the subject." - Electronic Engineer's Design Magazine. This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  16. Efficient Non Linear Loudspeakers

    Petersen, Bo R.; Agerkvist, Finn T.

    2006-01-01

    Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.

  17. Test Expectancy Affects Metacomprehension Accuracy

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  18. Linear models with R

    Faraway, Julian J

    2014-01-01

    A Hands-On Way to Learn Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz

  19. Linear integrated circuits

    Carr, Joseph

    1996-01-01

    The linear IC market is large and growing, as is the demand for well trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership from FE and first degree students, to hobbyists and professionals.Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa

  20. Fault tolerant linear actuator

    Tesar, Delbert

    2004-09-14

    In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.

  1. Superconducting linear accelerator cryostat

    Ben-Zvi, I.; Elkonin, B.V.; Sokolowski, J.S.

    1984-01-01

    A large vertical cryostat for a superconducting linear accelerator using quarter wave resonators has been developed. The essential technical details, operational experience and performance are described. (author)

  2. Linear regression in astronomy. I

    Isobe, Takashi; Feigelson, Eric D.; Akritas, Michael G.; Babu, Gutti Jogesh

    1990-01-01

    Five methods for obtaining linear regression fits to bivariate data with unknown or insignificant measurement errors are discussed: ordinary least-squares (OLS) regression of Y on X, OLS regression of X on Y, the bisector of the two OLS lines, orthogonal regression, and 'reduced major-axis' regression. These methods have been used by various researchers in observational astronomy, most importantly in cosmic distance scale applications. Formulas for calculating the slope and intercept coefficients and their uncertainties are given for all the methods, including a new general form of the OLS variance estimates. The accuracy of the formulas was confirmed using numerical simulations. The applicability of the procedures is discussed with respect to their mathematical properties, the nature of the astronomical data under consideration, and the scientific purpose of the regression. It is found that, for problems needing symmetrical treatment of the variables, the OLS bisector performs significantly better than orthogonal or reduced major-axis regression.
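
    For illustration, the OLS(Y|X), OLS(X|Y) and bisector slopes can be computed directly. A sketch in which the bisector slope formula follows Isobe et al. (1990) and the data are invented:

    ```python
    import numpy as np

    def ols_bisector(x, y):
        """Slope of the OLS bisector line, for symmetric treatment of two
        variables with no preferred regression direction (Isobe et al. 1990)."""
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        cov = np.cov(x, y, bias=True)
        b1 = cov[0, 1] / cov[0, 0]   # OLS(Y|X) slope
        b2 = cov[1, 1] / cov[0, 1]   # OLS(X|Y) slope, expressed as Y on X
        # Bisector of the two OLS lines
        return (b1 * b2 - 1.0 + np.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)
    ```

    For exactly collinear data both OLS slopes coincide and the bisector reproduces them.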

  3. Stability and complexity of small random linear systems

    Hastings, Harold

    2010-03-01

    We explore the stability of small random linear systems, typically involving 10-20 variables, motivated by dynamics of the world trade network and the US and Canadian power grid.
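
    Stability of such small random linear systems can be probed numerically by sampling the largest real part of the eigenvalues of random matrices. A sketch in the spirit of May-type stability analysis; all parameters here are invented, not taken from the trade-network or power-grid data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def spectral_abscissa(n, interaction_std, self_damping=1.0, trials=200):
        """Mean largest real part of the eigenvalues of random systems
        x' = A x, with n in the 10-20 range as in the abstract."""
        worst = []
        for _ in range(trials):
            A = interaction_std * rng.standard_normal((n, n))
            np.fill_diagonal(A, -self_damping)   # self-damping on the diagonal
            worst.append(np.linalg.eigvals(A).real.max())
        return float(np.mean(worst))
    ```

    Weak interactions leave the mean abscissa negative (stable); strong interactions push it positive (unstable).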

  4. Linear accelerator stereotactic radiosurgery for trigeminal neuralgia.

    Varela-Lema, Leonor; Lopez-Garcia, Marisa; Maceira-Rozas, Maria; Munoz-Garzon, Victor

    2015-01-01

    Stereotactic radiosurgery is accepted as an alternative for patients with refractory trigeminal neuralgia, but existing evidence is fundamentally based on the Gamma Knife, which is a specific device for intracranial neurosurgery, available in few facilities. Over the last decade it has been shown that the use of linear accelerators can achieve similar diagnostic accuracy and equivalent dose distribution. To assess the effectiveness and safety of linear-accelerator stereotactic radiosurgery for the treatment of patients with refractory trigeminal neuralgia. We carried out a systematic search of the literature in the main electronic databases (PubMed, Embase, ISI Web of Knowledge, Cochrane, Biomed Central, IBECS, IME, CRD) and reviewed grey literature. All original studies on the subject published in Spanish, French, English, and Portuguese were eligible for inclusion. The selection and critical assessment were carried out by 2 independent reviewers based on pre-defined criteria. In view of the impossibility of carrying out a pooled analysis, data were analyzed in a qualitative way. Eleven case series were included. In these, satisfactory pain relief (BNI I-IIIb or a reduction in pain ≥ 50%) was achieved in 75% to 95.7% of the patients treated. The mean time to relief from pain ranged from 8.5 days to 3.8 months. The percentage of patients who presented with recurrences after one year of follow-up ranged from 5% to 28.8%. Facial swelling or hypoesthesia, mostly of a mild-moderate grade, appeared in 7.5% - 51.9% of the patients. Complete anaesthesia dolorosa was registered in only one study (5.3%). Isolated cases of hearing loss (2.5%), brainstem edema (5.8%), and neurotrophic keratopathy (3.5%) were also registered. The results suggest that stereotactic radiosurgery with linear accelerators could constitute an effective and safe therapeutic alternative for drug-resistant trigeminal neuralgia. However, existing studies leave important doubts as to optimal treatment doses or the

  5. Two linearization methods for atmospheric remote sensing

    Doicu, A.; Trautmann, T.

    2009-01-01

    We present two linearization methods for a pseudo-spherical atmosphere and general viewing geometries. The first approach is based on an analytical linearization of the discrete ordinate method with matrix exponential and incorporates two models for matrix exponential calculation: the matrix eigenvalue method and the Padé approximation. The second method, referred to as the forward-adjoint approach, is based on the adjoint radiative transfer for a pseudo-spherical atmosphere. We provide a compact description of the proposed methods as well as a numerical analysis of their accuracy and efficiency.
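
    The two matrix-exponential models mentioned above can be compared on a toy matrix: scipy's `expm` uses a Padé-based scaling-and-squaring routine, while the eigenvalue method exponentiates the spectrum directly. The layer matrix below is invented for illustration:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # A small illustrative layer matrix with distinct real eigenvalues
    A = np.array([[-0.7, 0.2], [0.3, -0.9]])

    # Pade-based routine
    E_pade = expm(A)

    # Matrix eigenvalue method: exp(A) = V diag(exp(w)) V^{-1}
    w, V = np.linalg.eig(A)
    E_eig = (V * np.exp(w)) @ np.linalg.inv(V)
    ```

    For well-conditioned eigenvector matrices the two agree to machine precision; the eigenvalue route degrades when eigenvectors are nearly parallel.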

  6. Linearity enigmas in ecology

    Patten, B.C.

    1983-04-01

    Two issues concerning linearity or nonlinearity of natural systems are considered. Each is related to one of the two alternative defining properties of linear systems, superposition and decomposition. Superposition exists when a linear combination of inputs to a system results in the same linear combination of outputs that individually correspond to the original inputs. To demonstrate this property it is necessary that all initial states and inputs of the system which impinge on the output in question be included in the linear combination manipulation. As this is difficult or impossible to do with real systems of any complexity, nature appears nonlinear even though it may be linear. A linear system that displays nonlinear behavior for this reason is termed pseudononlinear. The decomposition property exists when the dynamic response of a system can be partitioned into an input-free portion due to state plus a state-free portion due to input. This is a characteristic of all linear systems, but not of nonlinear systems. Without the decomposition property, it is not possible to distinguish which portions of a system's behavior are due to innate characteristics (self) vs. outside conditions (environment), which is an important class of questions in biology and ecology. Some philosophical aspects of these findings are then considered. It is suggested that those ecologists who hold to the view that organisms and their environments are separate entities are in effect embracing a linear view of nature, even though their belief systems and mathematical models tend to be nonlinear. On the other hand, those who consider that the organism-environment complex forms a single inseparable unit are implicitly involved in non-linear thought, which may be in conflict with the linear modes and models that some of them use. The need to rectify these ambivalences on the part of both groups is indicated.
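
    The superposition property defined above can be checked numerically: the zero-initial-state response to a sum of inputs must equal the sum of the individual responses. A sketch with two toy discrete-time systems, one linear and one with an invented quadratic term:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def respond(system, u):
        """Zero-initial-state response of a discrete-time map y[k+1] = f(y[k], u[k])."""
        y, out = 0.0, []
        for uk in u:
            y = system(y, uk)
            out.append(y)
        return np.array(out)

    linear = lambda y, u: 0.5 * y + u                 # superposition holds
    nonlin = lambda y, u: 0.5 * y + u + 0.1 * y * y   # superposition fails

    u1, u2 = rng.standard_normal(50), rng.standard_normal(50)

    # Gap between response-to-sum and sum-of-responses
    lin_gap = np.max(np.abs(respond(linear, u1 + u2)
                            - (respond(linear, u1) + respond(linear, u2))))
    non_gap = np.max(np.abs(respond(nonlin, u1 + u2)
                            - (respond(nonlin, u1) + respond(nonlin, u2))))
    ```

    Note the check requires the zero initial state; with nonzero initial states even the linear map would appear pseudononlinear, as the abstract points out.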

  7. Diagnostic accuracy of MRCP in choledocholithiasis

    Guarise, Alessandro; Mainardi, Paride; Baltieri, Susanna; Faccioli, Niccolo'

    2005-01-01

    Purpose: To evaluate the accuracy of MRCP in diagnosing choledocholithiasis considering Endoscopic Retrograde Cholangiopancreatography (ERCP) as the gold standard. To compare the results achieved during the first two years of use (1999-2000) of Magnetic Resonance Cholangiopancreatography (MRCP) in patients with suspected choledocholithiasis with those achieved during the following two years (2001-2002) in order to establish the repeatability and objectivity of MRCP results. Materials and methods: One hundred and seventy consecutive patients underwent MRCP followed by ERCP within 72 h. In 22/170 (13%) patients ERCP was unsuccessful for different reasons. MRCP was performed using a 1.5 T magnet with both multi-slice HASTE sequences and thick-slice projection technique. Choledocholithiasis was diagnosed in the presence of signal void images in the dependent portion of the duct surrounded by hyperintense bile and detected in at least two projections. The MRCP results, read independently from the ERCP results, were compared in two different and subsequent periods. Results: ERCP confirmed choledocholithiasis in 87 patients. In these cases the results of MRCP were the following: 78 true positives, 53 true negatives, 7 false positives, and 9 false negatives. The sensitivity, specificity and accuracy were 90%, 88% and 89%, respectively. After the exclusion of stones with diameters smaller than 6 mm, the sensitivity, specificity and accuracy were 100%, 99% and 99%, respectively. MRCP accuracy was related to the size of the stones. There was no significant statistical difference between the results obtained in the first two-year period and those obtained in the second period. Conclusions: MRCP is sufficiently accurate to replace ERCP in patients with suspected choledocholithiasis. The results are related to the size of stones. The use of well-defined radiological signs allows good diagnostic accuracy independent of the learning curve
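
    The reported sensitivity, specificity and accuracy follow directly from the confusion counts given in the abstract (78 TP, 53 TN, 7 FP, 9 FN):

    ```python
    # Recomputing the reported MRCP performance from the confusion counts
    tp, tn, fp, fn = 78, 53, 7, 9

    sensitivity = tp / (tp + fn)                 # 78/87  ~ 0.90
    specificity = tn / (tn + fp)                 # 53/60  ~ 0.88
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # 131/147 ~ 0.89
    ```

    These round to the 90%, 88% and 89% figures stated in the results.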

  8. INFLUENCE OF STRUCTURE COMPONENTS ON MACHINE TOOL ACCURACY

    Constantin SANDU

    2017-11-01

    For machine tools, the parts of the machine tool structure should, after roughing, be subjected to stress relief and natural or artificial aging. The accuracy currently achieved for machine tools, in terms of linearity or flatness, has been no better than about 5 μm/m; below this value great difficulties arise. In the manufacture of structural parts of machine tools with a flatness or linearity accuracy of about 2 μm/m, the form deviations of the semi-finished parts become significant. This article deals with the influence of the form errors of semi-finished parts, and of the parts machined from them, on their final shape, and especially with what happens to the machine tool structure when its components are assembled.

  9. Linear colliders - prospects 1985

    Rees, J.

    1985-06-01

    We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs

  10. The SLAC linear collider

    Richter, B.

    1985-01-01

    A report is given on the goals and progress of the SLAC Linear Collider. The author discusses the status of the machine and the detectors and give an overview of the physics which can be done at this new facility. He also gives some ideas on how (and why) large linear colliders of the future should be built

  11. Linear Programming (LP)

    Rogner, H.H.

    1989-01-01

    The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They offer a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes. 1 fig
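
    The SIMPLEX algorithm described can be exercised through an off-the-shelf solver. A sketch with a toy planning problem (the numbers are invented; `linprog` minimizes, so the maximization objective is negated):

    ```python
    from scipy.optimize import linprog

    # Toy planning LP: maximize 3x + 2y
    # subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0
    res = linprog(c=[-3, -2],
                  A_ub=[[1, 1], [1, 3]],
                  b_ub=[4, 6],
                  bounds=[(0, None)] * 2)

    x, y = res.x
    optimum = -res.fun   # undo the negation of the objective
    ```

    The optimum sits at the vertex (4, 0) of the feasible polygon, with objective value 12, as a hand check of the two constraint corners confirms.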

  12. Racetrack linear accelerators

    Rowe, C.H.; Wilton, M.S. de.

    1979-01-01

    An improved recirculating electron beam linear accelerator of the racetrack type is described. The system comprises a beam path of four straight legs with four Pretzel bending magnets at the end of each leg to direct the beam into the next leg of the beam path. At least one of the beam path legs includes a linear accelerator. (UK)

  13. The relationships between rugby ground pass accuracy and ...

    Kathryn van Boom

    2 Department of Physiotherapy, School of Therapeutic Sciences, Faculty of Health Sciences ... upper body kinematics of the players, specifically the rotations of the torso and ... relationships between the body movements and pass accuracy ...

  14. Linearization of the Bradford protein assay.

    Ernst, Orna; Zor, Tsaffrir

    2010-04-12

    Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
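
    The linearization described above is easy to reproduce: fit the A590/A450 absorbance ratio, rather than A590 alone, against protein concentration. A sketch with synthetic absorbance readings (invented for illustration, not the paper's data):

    ```python
    import numpy as np

    # Hypothetical Bradford readings: A590 saturates with concentration,
    # while the A590/A450 ratio stays linear.
    conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # ug BSA (invented)
    a590 = np.array([0.05, 0.12, 0.18, 0.28, 0.42])  # nonlinear response
    a450 = np.array([0.50, 0.46, 0.43, 0.38, 0.31])

    ratio = a590 / a450
    slope, intercept = np.polyfit(conc, ratio, 1)    # linear calibration
    r2 = np.corrcoef(conc, ratio)[0, 1] ** 2
    ```

    With real data, the calibration line from the ratio can then be inverted to read off unknown protein concentrations.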

  15. Semidefinite linear complementarity problems

    Eckhardt, U.

    1978-04-01

    Semidefinite linear complementarity problems arise by discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and the theory as well as the numerical treatment of them are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.) [de

  16. Linear algebra done right

    Axler, Sheldon

    2015-01-01

    This best-selling textbook for a second course in linear algebra is aimed at undergrad math majors and graduate students. The novel approach taken here banishes determinants to the end of the book. The text focuses on the central goal of linear algebra: understanding the structure of linear operators on finite-dimensional vector spaces. The author has taken unusual care to motivate concepts and to simplify proofs. A variety of interesting exercises in each chapter helps students understand and manipulate the objects of linear algebra. The third edition contains major improvements and revisions throughout the book. More than 300 new exercises have been added since the previous edition. Many new examples have been added to illustrate the key ideas of linear algebra. New topics covered in the book include product spaces, quotient spaces, and dual spaces. Beautiful new formatting creates pages with an unusually pleasant appearance in both print and electronic versions. No prerequisites are assumed other than the ...

  17. Handbook on linear motor application

    1988-10-01

    This book is a guide to the application of linear motors. It covers the classification and special characteristics of linear motors; the terminology and principles of the linear induction motor; single-sided and double-sided linear induction motors; linear DC motors of the moving-coil, moving permanent-magnet and other types; linear pulse motors of the variable-reluctance and permanent-magnet types; linear vibration actuators, including the moving-coil type; linear synchronous motors; linear electromagnetic motors; linear electromagnetic solenoids; technical organization; and magnetic levitation, linear motors and sensors.

  18. FIELD ACCURACY TEST OF RPAS PHOTOGRAMMETRY

    P. Barry

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data, to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS-coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 ha.
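
    The check-point comparison described above reduces to computing error percentiles over the 45 common points. A sketch with synthetic residuals (the real coordinate differences are not given in the abstract, so the noise levels here are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical residuals (metres): UAV photogrammetry minus RTK GPS
    # at 45 check points.
    dx = rng.normal(0, 0.015, 45)
    dy = rng.normal(0, 0.015, 45)
    dz = rng.normal(0, 0.030, 45)

    horiz = np.hypot(dx, dy)                  # horizontal radial error
    h95 = np.percentile(horiz, 95)            # cf. the reported 41 mm
    v95 = np.percentile(np.abs(dz), 95)       # cf. the reported 68 mm
    ```

    With the study's actual residuals in place of the synthetic ones, `h95` and `v95` would reproduce the reported 95%-reliability figures.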

  19. Gauss Elimination: Workhorse of Linear Algebra.

    1995-08-05

    Gaussian elimination (GE) is the central linear algebra computation for solving systems, computing determinants and determining the rank of a matrix. All of these are discussed in varying contexts. These include different arithmetic or algebraic settings, such as integer arithmetic or polynomial rings, as well as conventional real (floating-point) arithmetic. These have effects on both the accuracy and complexity analyses of the algorithm. These, too, are covered here. The impact of modern parallel computer architecture on GE is also
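
    A textbook sketch of the floating-point GE variant discussed, with partial pivoting for accuracy (a generic implementation, not tied to any specific treatment in the record):

    ```python
    import numpy as np

    def gauss_solve(A, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        A = np.array(A, float)
        b = np.array(b, float)
        n = len(b)
        for k in range(n - 1):
            # Partial pivoting: bring the largest remaining pivot to row k
            p = k + np.argmax(np.abs(A[k:, k]))
            if p != k:
                A[[k, p]] = A[[p, k]]
                b[[k, p]] = b[[p, k]]
            # Eliminate entries below the pivot
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Back substitution on the resulting upper-triangular system
        x = np.empty(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x
    ```

    Pivoting is what keeps the floating-point variant accurate; without it, a zero or tiny leading entry breaks the elimination.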

  20. Linear ubiquitination signals in adaptive immune responses.

    Ikeda, Fumiyo

    2015-07-01

    Ubiquitin can form eight different linkage types of chains using the intrinsic Met 1 residue or one of the seven intrinsic Lys residues. Each linkage type of ubiquitin chain has a distinct three-dimensional topology, functioning as a tag to attract specific signaling molecules, which are so-called ubiquitin readers, and regulates various biological functions. Ubiquitin chains linked via Met 1 in a head-to-tail manner are called linear ubiquitin chains. Linear ubiquitination plays an important role in the regulation of cellular signaling, including the best-characterized tumor necrosis factor (TNF)-induced canonical nuclear factor-κB (NF-κB) pathway. Linear ubiquitin chains are specifically generated by an E3 ligase complex called the linear ubiquitin chain assembly complex (LUBAC) and hydrolyzed by a deubiquitinase (DUB) called ovarian tumor (OTU) DUB with linear linkage specificity (OTULIN). LUBAC linearly ubiquitinates critical molecules in the TNF pathway, such as NEMO and RIPK1. The linear ubiquitin chains are then recognized by the ubiquitin readers, including NEMO, which control the TNF pathway. Accumulating evidence indicates an importance of the LUBAC complex in the regulation of apoptosis, development, and inflammation in mice. In this article, I focus on the role of linear ubiquitin chains in adaptive immune responses with an emphasis on the TNF-induced signaling pathways. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Studies on the diagnostic accuracy of lymphography

    Luening, M.; Stargardt, A.; Abet, L.

    1979-01-01

    Contradictory reports in the literature on the reliability of lymphography stimulated the authors to test its diagnostic accuracy, employing methods approximately analogous to practice and using carcinoma of the cervix as the model for the study. Using 21 observers it was found that there was no correlation between their experience and the on-target accuracy of the diagnosis. Good observers obtained an accuracy of 85%, with good proportions between sensitivity in the recognition of detail, specificity, and readiness to arrive at a decision on the basis of discriminatory ability. With the help of the concept of ROC curves, the position taken up by the observers in respect of diagnostic decisions, and a complex manner of assessing the various characteristic factors determining diagnostic accuracy, are demonstrated. This form of test, which permits manipulation of different variants of diagnosis, is recommended, among other things, for performance control at the end of training and continuing education courses in other fields of x-ray diagnosis as well. (orig.)
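
    The observer discriminatory ability assessed via ROC curves above can be summarized by the area under the curve. A minimal sketch using the rank-sum identity, with hypothetical observer confidence scores:

    ```python
    import numpy as np

    def roc_auc(scores_pos, scores_neg):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
        AUC = P(score of a diseased case > score of a healthy case),
        with ties counted as one half."""
        pos = np.asarray(scores_pos, float)[:, None]
        neg = np.asarray(scores_neg, float)[None, :]
        return float(np.mean((pos > neg) + 0.5 * (pos == neg)))
    ```

    An AUC of 0.5 corresponds to guessing and 1.0 to perfect discrimination, which makes observers with different decision thresholds comparable on a common scale.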

  2. JET VELOCITY OF LINEAR SHAPED CHARGES

    Vječislav Bohanek

    2012-12-01

    Shaped explosive charges with one dimension significantly larger than the other are called linear shaped charges. Linear shaped charges are used in various industries and are applied within specific technologies for metal cutting, such as demolition of steel structures, separating spent rocket fuel tanks, demining, cutting holes in barriers for the fire service, etc. According to existing theories and models, the efficiency of linear shaped charges depends on the kinetic energy of the jet, which is proportional to the square of the jet velocity. An original method for measuring the velocity of the linear shaped charge jet is applied in the research. Measurements were carried out for two different liner materials, and the results are graphically presented, analysed and compared. The measurement results show a discrepancy in the measured jet velocity for different materials with the same ratio between liner and explosive mass (M/C) per unit of surface, which is not described by the presented models (the paper is published in Croatian).

  3. Performance test of 100 W linear compressor

    Ko, J.; Ko, D. Y.; Park, S. J.; Kim, H. B.; Hong, Y. J.; Yeom, H. K. [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)]

    2013-09-15

    In this paper, we present test results for a developed 100 W class linear compressor for a Stirling-type pulse tube refrigerator. The fabricated linear compressor has a dual-opposed configuration, a free piston, and a moving-magnet type linear motor. Power transfer, efficiency and the required pressure waveform are predicted from the designed and measured specifications. In the experiments, a room temperature test with flow impedance is conducted to evaluate the performance of the developed linear compressor. The flow impedance is applied to the compressor with a metering valve for flow resistance, an inertance tube for flow inertance and buffer volumes for flow compliance. Several operating parameters, such as input voltage, current, piston displacement and pressure wave, are measured over a range of operating frequencies at a fixed input current level. The dynamic behaviour and performance of the linear compressor under varying flow impedance are discussed with the measured experimental results. The developed linear compressor shows 124 W of input power, 86% motor efficiency and 60% compressor efficiency at its resonant operating condition.

  4. Test expectancy affects metacomprehension accuracy.

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to point students directly to more appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). Then, after reading texts and judging their learning, they completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.

  5. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.
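
    The hyperelastic models compared here are defined by their strain-energy densities. As a minimal sketch (not the paper's implementation; the particular compressible neo-Hookean form and the parameter values below are assumptions for illustration), the energy can be evaluated directly from a deformation gradient:

```python
import numpy as np

def neo_hookean_energy(F, mu=1.0, lam=1.0):
    """One common compressible neo-Hookean strain-energy density for a
    deformation gradient F (illustrative parameters, not fitted values)."""
    J = np.linalg.det(F)           # volume ratio
    I1 = np.trace(F.T @ F)         # first invariant of C = F^T F
    return 0.5 * mu * (I1 - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

# The undeformed state (F = identity) stores zero energy:
print(neo_hookean_energy(np.eye(3)))  # 0.0
```

    Any stretch away from the identity increases the stored energy, which is what the FEA minimizes subject to the surface displacement boundary conditions.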

  6. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    Tehrani, Joubin Nasehi; Wang, Jing; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney–Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney–Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney–Rivlin material model along left-right, anterior–posterior, and superior–inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. (paper)

  7. Forecast Accuracy Uncertainty and Momentum

    Bing Han; Dong Hong; Mitch Warachka

    2009-01-01

    We demonstrate that stock price momentum and earnings momentum can result from uncertainty surrounding the accuracy of cash flow forecasts. Our model has multiple information sources issuing cash flow forecasts for a stock. The investor combines these forecasts into an aggregate cash flow estimate that has minimal mean-squared forecast error. This aggregate estimate weights each cash flow forecast by the estimated accuracy of its issuer, which is obtained from their past forecast errors. Mome...
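
    The aggregation rule described, weighting each forecast by its issuer's estimated accuracy, can be sketched as an inverse-MSE (minimum-variance) combination; all numbers below are hypothetical:

```python
import numpy as np

# Hypothetical cash flow forecasts and each issuer's estimated MSE from
# past forecast errors. Weights are inversely proportional to MSE, so
# historically more accurate issuers receive more weight.
forecasts = np.array([10.0, 12.0, 11.0])
past_mse = np.array([1.0, 4.0, 2.0])

weights = (1.0 / past_mse) / np.sum(1.0 / past_mse)
aggregate = float(np.dot(weights, forecasts))
print(round(aggregate, 3))  # 10.571
```

    The aggregate leans toward the first forecaster, whose past errors were smallest.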

  8. Linearizing W-algebras

    Krivonos, S.O.; Sorin, A.S.

    1994-06-01

    We show that the Zamolodchikov and Polyakov-Bershadsky nonlinear algebras W_3 and W_3^(2) can be embedded as subalgebras into linear algebras with a finite set of currents. Using these linear algebras we find new field realizations of W_3^(2) and W_3 which could be a starting point for constructing new versions of W-string theories. We also reveal a number of hidden relationships between W_3 and W_3^(2). We conjecture that similar linear algebras can exist for other W-algebras as well. (author). 10 refs

  9. Matrices and linear algebra

    Schneider, Hans

    1989-01-01

    Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it.This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t

  10. Linearity in Process Languages

    Nygaard, Mikkel; Winskel, Glynn

    2002-01-01

    The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.

  11. Elements of linear space

    Amir-Moez, A R; Sneddon, I N

    1962-01-01

    Elements of Linear Space is a detailed treatment of the elements of linear spaces, including real spaces with no more than three dimensions and complex n-dimensional spaces. The geometry of conic sections and quadric surfaces is considered, along with algebraic structures, especially vector spaces and transformations. Problems drawn from various branches of geometry are given.Comprised of 12 chapters, this volume begins with an introduction to real Euclidean space, followed by a discussion on linear transformations and matrices. The addition and multiplication of transformations and matrices a

  12. Applied linear regression

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition ""...this is an excellent book which could easily be used as a course text...""-International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  13. Diagnostic accuracy of sonoelastography in different diseases

    Iqra Manzoor

    2018-03-01

    Full Text Available The objective of this study was to evaluate the diagnostic accuracy of sonoelastography in patients in primary and secondary health care settings. Google Scholar, PubMed, Medline, Medscape, Wikipedia and NCBI were searched in October 2017 for all original studies and review articles to identify the relevant material. Two reviewers independently selected articles for evaluation of the diagnostic accuracy of sonoelastography in different diseases based on titles and abstracts retrieved by the literature search. The accuracy of sonoelastography in different diseases was used as the index test, while B-mode sonography, MicroPure imaging, surgery and histological findings were used as reference tests. Superficial lymph nodes, neck nodules, malignancy in thyroid nodules, benign and malignant cervical lymph nodes, thyroid nodules, prostate carcinoma, benign and malignant breast abnormalities, liver diseases, parotid and salivary gland masses, pancreatic masses, musculoskeletal diseases and renal disorders were the target conditions. The data extracted by the two reviewers concerning selected study characteristics and results are presented in tables and figures. In total, 46 studies were found for breast masses, lymph nodes, prostate carcinoma, liver diseases, salivary and parotid gland diseases, pancreatic masses, musculoskeletal diseases and renal diseases; the overall sensitivity of sonoelastography in diagnosing all these diseases was 83.14%, while specificity was 81.41%. This literature review demonstrates that sonoelastography offers high sensitivity and specificity in diagnosing different disorders of the body.
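
    For reference, the pooled sensitivity and specificity reported here follow the standard confusion-matrix definitions; the counts below are illustrative, chosen to roughly match the reported pooled values:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only (the review pools ~83.14% sensitivity,
# ~81.41% specificity across 46 studies):
sens, spec = sensitivity_specificity(tp=83, fn=17, tn=81, fp=19)
print(sens, spec)  # 0.83 0.81
```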

  14. Social Power Increases Interoceptive Accuracy

    Mehrad Moeini-Jazani

    2017-08-01

    Full Text Available Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in the perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy depends on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similarly to, and independently of, their situationally induced feeling of power. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective, a psychophysiological account, of how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research.
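
    The Schandry heartbeat-detection task scores interoceptive accuracy by comparing silently counted heartbeats to actually recorded ones. A common scoring formula (assumed here; the abstract does not state the exact variant used) averages, over counting intervals, one minus the relative counting error:

```python
def schandry_accuracy(recorded, counted):
    """Heartbeat-detection accuracy: mean over intervals of
    1 - |recorded - counted| / recorded. (Common formula; the paper's
    exact scoring details are an assumption.)"""
    scores = [1 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(scores) / len(scores)

# Three counting intervals, recorded vs. silently counted heartbeats:
print(round(schandry_accuracy([40, 60, 80], [36, 57, 76]), 3))  # 0.933
```

    A score of 1 means perfect counting; chronically interoceptive participants tend to land closer to it.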

  15. Linear system theory

    Callier, Frank M.; Desoer, Charles A.

    1991-01-01

    The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.

  16. Trait Perception Accuracy and Acquaintance Within Groups: Tracking Accuracy Development.

    Brown, Jill A; Bernieri, Frank

    2017-05-01

    Previous work on trait perception has evaluated accuracy at discrete stages of relationships (e.g., strangers, best friends). A relatively limited body of literature has investigated changes in accuracy as acquaintance within a dyad or group increases. Small groups of initially unacquainted individuals spent more than 30 hr participating in a wide range of activities designed to represent common interpersonal contexts (e.g., eating, traveling). We calculated how accurately each participant judged others in their group on the big five traits across three distinct points within the acquaintance process: zero acquaintance, after a getting-to-know-you conversation, and after 10 weeks of interaction and activity. Judgments of all five traits exhibited accuracy above chance levels after 10 weeks. An examination of the trait rating stability revealed that much of the revision in judgments occurred not over the course of the 10-week relationship as suspected, but between zero acquaintance and the getting-to-know-you conversation.

  17. High accuracy mantle convection simulation through modern numerical methods

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  18. THE ACCURACY OF Hβ CCD PHOTOMETRY

    C. Kim

    1994-12-01

    Full Text Available We have undertaken CCD observations of field standard stars with the Hβ photometric system to investigate the reliability of Hβ CCD photometry. Flat fielding with dome flats and sky flats for the Hβw and Hβn filters was compared with that of the B filter in the UBV system and, from these, we have not found any difference. It was confirmed that there is a good linear relationship between our Hβ values observed with the 2.3m reflector and the standard values. However, Hβ values observed with the 60cm reflector at Sobaeksan Astronomy Observatory showed a very poor relationship. To investigate the accuracy of Hβ CCD photometry for fainter objects, the open cluster NGC2437 was observed and reduced with DoPHOT, and the results were compared with those for the photoelectric photometry of Stetson (1981).

  19. Gain scheduled linear quadratic control for quadcopter

    Okasha, M.; Shah, J.; Fauzi, W.; Hanouf, Z.

    2017-12-01

    This study exploits the dynamics and control of quadcopters using the Linear Quadratic Regulator (LQR) control approach. The quadcopter's mathematical model is derived using the Newton-Euler method. It is a highly manoeuvrable, nonlinear, coupled, six-degrees-of-freedom (DOF) model, which includes aerodynamics and detailed gyroscopic moments that are often ignored in the literature. The linearized model is obtained and characterized by the heading angle (i.e. yaw angle) of the quadcopter. The adopted control approach utilizes the LQR method to track several reference trajectories, including circle and helix curves with significant variation in the yaw angle. The controller is modified to overcome difficulties related to the continuous changes in the operating points and to eliminate the chattering and discontinuity observed in the control input signal. Numerical non-linear simulations are performed using MATLAB and Simulink to illustrate the accuracy and effectiveness of the proposed controller.
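
    An LQR gain of the kind used here is obtained by solving an algebraic Riccati equation. A minimal sketch for a double integrator, standing in for one linearized quadcopter axis (the paper's full 6-DOF model and weighting matrices are not reproduced), using SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: position/velocity state, acceleration input.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting

P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # optimal gain, u = -K x

# The closed-loop matrix A - B K must have poles in the left half-plane:
poles = np.linalg.eigvals(A - B @ K)
print(np.all(poles.real < 0))  # True
```

    Gain scheduling, as in the paper, amounts to recomputing or interpolating K as the linearization point (here, the yaw angle) changes.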

  20. Accuracy of a computed tomography scanning procedure to manufacture digital models.

    Darroudi, A.M.; Kuijpers-Jagtman, A.M.; Ongkosuwito, E.M.; Suttorp, C.M.; Bronkhorst, E.M.; Breuning, K.H.

    2017-01-01

    INTRODUCTION: Accurate articulation of the digital dental casts is crucial in orthodontic diagnosis and treatment planning. We aimed to determine the accuracy of manufacturing digital dental casts from computed tomography scanning of plaster casts regarding linear dimensions and interarch

  1. SLAC linear collider and a few ideas on future linear colliders

    Loew, G.A.

    1984-04-01

    This paper comes in two parts. The first part is a progress report on the SLAC Linear Collider (SLC) with emphasis on those systems which are of special interest to linear accelerator designers; it sets the stage for a number of contributed papers on specific topics which are also presented at this conference. The second part presents some ideas which are of interest to the design of future linear colliders of higher energies

  2. Improving calibration accuracy in gel dosimetry

    Oldham, M.; McJury, M.; Webb, S.; Baustert, I.B.; Leach, M.O.

    1998-01-01

    A new method of calibrating gel dosimeters (applicable to both Fricke and polyacrylamide gels) is presented which has intrinsically higher accuracy than current methods, and requires less gel. Two test-tubes of gel (inner diameter 2.5 cm, length 20 cm) are irradiated separately with a 10 × 10 cm² field end-on in a water bath, such that the characteristic depth-dose curve is recorded in the gel. The calibration is then determined by fitting the depth-dose measured in water against the measured change in relaxivity with depth in the gel. Increased accuracy is achieved in this simple depth-dose geometry by averaging the relaxivity at each depth. A large number of calibration data points, each with relatively high accuracy, are obtained. Calibration data over the full range of dose (1.6-10 Gy) are obtained by irradiating one test-tube to 10 Gy at dose maximum (D_max), and the other to 4.5 Gy at D_max. The new calibration method is compared with a 'standard method' where five identical test-tubes of gel were irradiated to different known doses between 2 and 10 Gy. The percentage uncertainties in the slope and intercept of the calibration fit are found to be lower with the new method by factors of about 4 and 10 respectively, when compared with the standard method and with published values. The gel was found to respond linearly within the error bars up to doses of 7 Gy, with a slope of 0.233 ± 0.001 s⁻¹ Gy⁻¹ and an intercept of 1.106 ± 0.005 Gy. For higher doses, nonlinear behaviour was observed. (author)
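
    The calibration step amounts to a straight-line fit of measured relaxivity change against known dose. A minimal sketch with synthetic, noise-free data (real gel data would carry per-depth averaging and uncertainty estimates; the values echo the reported slope and intercept only for illustration):

```python
import numpy as np

# Synthetic calibration data on the linear 1.6-7 Gy portion of the response.
dose = np.array([1.6, 3.0, 4.5, 6.0, 7.0])        # Gy
relaxivity = 0.233 * dose + 1.106                 # noise-free for the sketch

# Least-squares line: np.polyfit returns (slope, intercept) for degree 1.
slope, intercept = np.polyfit(dose, relaxivity, 1)
print(round(slope, 3), round(intercept, 3))  # 0.233 1.106
```

    With depth-averaged relaxivities, many such points enter the fit, which is where the quoted reduction in slope and intercept uncertainty comes from.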

  3. ACCURACY DIMENSIONS IN REMOTE SENSING

    Á. Barsi

    2018-04-01

    Full Text Available The technological developments in remote sensing (RS) during the past decade have contributed to a significant increase in the size of the data user community. For this reason, data quality issues in remote sensing face a significant increase in importance, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques and countless software tools assist the processing of RS data and contribute to a major increase in applications and users. In past decades, the scientific and technological community of the spatial data environment focused on the evaluation of data quality elements computed for the point, line and area geometry of vector and raster data. Stakeholders in data production commonly use standardised parameters to characterise the quality of their datasets. Yet their efforts to estimate the quality did not reach the general end-user community running heterogeneous applications, who assume that their spatial data are error-free and best fitted to the specification standards. The non-specialist, general user group has very limited knowledge of how spatial data meet their needs. These parameters forming the external quality dimensions imply that the same data system can be of different quality to different users. The large collection of observed information is uncertain to a degree that can undermine the reliability of the applications. Based on a prior paper of the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy of the dimensions of data quality in GIS and remote sensing domains, this paper aims to focus on measures of uncertainty in the remote sensing data lifecycle, with emphasis on land cover mapping issues. In the paper we introduce how the quality of various combinations of data and procedures can be summarized and how services fit the users' needs.
The present paper gives the theoretical overview of the issue, besides

  4. Accuracy Dimensions in Remote Sensing

    Barsi, Á.; Kugler, Zs.; László, I.; Szabó, Gy.; Abdulmutalib, H. M.

    2018-04-01

    The technological developments in remote sensing (RS) during the past decade have contributed to a significant increase in the size of the data user community. For this reason, data quality issues in remote sensing face a significant increase in importance, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques and countless software tools assist the processing of RS data and contribute to a major increase in applications and users. In past decades, the scientific and technological community of the spatial data environment focused on the evaluation of data quality elements computed for the point, line and area geometry of vector and raster data. Stakeholders in data production commonly use standardised parameters to characterise the quality of their datasets. Yet their efforts to estimate the quality did not reach the general end-user community running heterogeneous applications, who assume that their spatial data are error-free and best fitted to the specification standards. The non-specialist, general user group has very limited knowledge of how spatial data meet their needs. These parameters forming the external quality dimensions imply that the same data system can be of different quality to different users. The large collection of observed information is uncertain to a degree that can undermine the reliability of the applications. Based on a prior paper of the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy of the dimensions of data quality in GIS and remote sensing domains, this paper aims to focus on measures of uncertainty in the remote sensing data lifecycle, with emphasis on land cover mapping issues. In the paper we introduce how the quality of various combinations of data and procedures can be summarized and how services fit the users' needs.
The present paper gives the theoretical overview of the issue, besides selected, practice

  5. Diagnostic accuracy in virtual dermatopathology

    Mooney, E.; Kempf, W.; Jemec, G.B.E.

    2012-01-01

    Background Virtual microscopy is used for teaching medical students and residents and for in-training and certification examinations in the United States. However, no existing studies compare diagnostic accuracy using virtual slides and photomicrographs. The objective of this study was to compare the diagnostic accuracy of dermatopathologists and pathologists using photomicrographs vs. digitized images, through a self-assessment examination, and to elucidate assessment of virtual dermatopathology. Methods Forty-five dermatopathologists and pathologists received a randomized combination of 15 virtual slides and photomicrographs with corresponding clinical photographs and information in a self-assessment examination format. Descriptive data analysis and comparison of groups were performed using a chi-square test. Results Diagnostic accuracy in dermatopathology using virtual dermatopathology...

  6. Further linear algebra

    Blyth, T S

    2002-01-01

    Most of the introductory courses on linear algebra develop the basic theory of finite-dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...

  7. Behavioral modeling of the dominant dynamics in input-output transfer of linear(ized) circuits

    Beelen, T.G.J.; Maten, ter E.J.W.; Sihaloho, H.J.; Eijndhoven, van S.J.L.

    2010-01-01

    We present a powerful procedure for determining both the dominant dynamics of the input-output transfer and the corresponding most influential circuit parameters of a linear(ized) circuit. The procedure consists of several steps in which a specific (sub)problem is solved and its solution is used in

  8. Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.

    Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray

    2017-07-11

    Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used in modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that the single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
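
    The Jacobi-preconditioned CG configuration that performed best can be sketched on the CPU in a few lines (a NumPy stand-in for the cuSPARSE/cuBLAS implementation, assuming a symmetric positive-definite system such as the finite-difference PBE matrix):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner
    for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    Minv = 1.0 / np.diag(A)        # Jacobi preconditioner: M = diag(A)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system: a 1-D Poisson-like tridiagonal matrix,
# structurally similar to the banded finite-difference PBE operator.
n = 5
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(A, b)
print(np.allclose(A @ x, b))  # True
```

    On the GPU, the same algorithm's cost is dominated by the sparse matrix-vector product, which is why the storage format (diagonal, for these banded systems) matters so much.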

  9. Accuracy and repeatability of anthropometric facial measurements using cone beam computed tomography

    Fourie, Zacharias; Damstra, Janalt; Gerrits, Peter O.; Ren, Yijin

    Objective: The purpose of this study was to determine the accuracy and repeatability of linear anthropometric measurements on the soft tissue surface model generated from cone beam computed tomography scans. Materials and Methods: The study sample consisted of seven cadaver heads. The accuracy and

  10. Linear mass reflectron

    Mamyrin, B.A.; Shmikk, D.V.

    1979-01-01

    A description and operating principle of a linear mass reflectron with a V-shaped ion trajectory, a new non-magnetic time-of-flight mass spectrometer with high resolution, are presented. The ion-optical system of the device consists of an ion source with electron-impact ionization, accelerating gaps, reflector gaps, a drift space and an ion detector. Ions move in the linear mass reflectron along trajectories parallel to the axis of the analyzer chamber. The results of investigations into the experimental device are given. With an ion drift length of 0.6 m the device resolution is 1200 with respect to the peak width at half-height. Small-sized mass spectrometric transducers with high resolution and sensitivity may be designed on the basis of the linear mass reflectron principle

  11. Applied linear algebra

    Olver, Peter J

    2018-01-01

    This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...

  12. Theory of linear operations

    Banach, S

    1987-01-01

    This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a₁x₁ + a₂x₂ + ... + aₙxₙ of algebra. The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series. A new fifty-page section ("Some Aspects of the Present Theory of Banach Spaces") complements this important monograph.

  13. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria ... the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  14. Linear Colliders TESLA

    Anon.

    1994-01-01

    The aim of the TESLA (TeV Superconducting Linear Accelerator) collaboration (at present 19 institutions from seven countries) is to establish the technology for a high energy electron-positron linear collider using superconducting radiofrequency cavities to accelerate its beams. Another basic goal is to demonstrate that such a collider can meet its performance goals in a cost effective manner. For this the TESLA collaboration is preparing a 500 MeV superconducting linear test accelerator at the DESY Laboratory in Hamburg. This TTF (TESLA Test Facility) consists of four cryomodules, each approximately 12 m long and containing eight 9-cell solid niobium cavities operating at a frequency of 1.3 GHz

  15. Testing an Automated Accuracy Assessment Method on Bibliographic Data

    Marlies Olensky

    2014-12-01

    This study investigates automated data accuracy assessment as described in the data quality literature for its suitability to assess bibliographic data. The data samples comprise the publications of two Nobel Prize winners in the field of Chemistry over a 10-year publication period, retrieved from two bibliometric data sources, Web of Science and Scopus. The bibliographic records are assessed against the original publication (the gold standard), and an automatic assessment method is compared to a manual one. The results show that the manual assessment method reflects truer accuracy scores. The automated assessment method would need to be extended by additional rules that reflect specific characteristics of bibliographic data. Both data sources had higher accuracy scores per field than accumulated per record. This study contributes to the research on finding a standardized assessment method for bibliographic data accuracy as well as defining the impact of data accuracy on the citation matching process.

  16. Recognizing depression in patients with Parkinson’s disease: accuracy and specificity of two depression rating scales

    Cláudia Débora Silberman

    2006-06-01

    This study aimed to find cut-off scores for the Montgomery-Asberg depression rating scale (MADRS) and the Beck depression inventory (BDI) that can relate to specific clinical diagnoses of depression in Parkinson's disease (PD). Mild and moderate PD patients (n=46) were evaluated for depression according to the DSM-IV criteria. All patients were assessed with the MADRS and the BDI. A receiver operating characteristics (ROC) curve was obtained, and the sensitivity, specificity, and positive and negative predictive values were calculated for different cut-off scores of the MADRS and the BDI. The Kappa statistic was calculated for different cut-off scores to assess the agreement between the clinical judgment and both scales. Depression was present in 18 patients. MADRS cut-off scores of 6 and 10 showed Kappa 0.5 and 0.56, respectively. Specificity of the cut-off score of 6 was 78.6% and of the cut-off score of 10 was 96.4%. Kappa agreement of BDI cut-off scores of 10 and 18 was 0.36 and 0.62, respectively. Specificity was 60.7% for 10 and 92.9% for 18. Both rating scales show similar accuracy within the ROC curves (84.3% for MADRS and 79.7% for BDI). The MADRS and the BDI show good accuracy and correlation to the clinical diagnosis when a cut-off score of 10 is used for the MADRS and a cut-off score of 18 is used for the BDI to recognize depression in mild to moderate PD patients. This may help clinicians to recognize depression in PD.
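
The cut-off evaluation described in the record above computes sensitivity and specificity at each candidate threshold against the clinical diagnosis. A minimal sketch of that step follows; the scores and diagnoses are made-up toy data, not the study's:

```python
def sensitivity_specificity(scores, diagnoses, cutoff):
    """Classify as depressed when score >= cutoff; compare to clinical diagnosis."""
    tp = sum(1 for s, d in zip(scores, diagnoses) if s >= cutoff and d)
    fn = sum(1 for s, d in zip(scores, diagnoses) if s < cutoff and d)
    tn = sum(1 for s, d in zip(scores, diagnoses) if s < cutoff and not d)
    fp = sum(1 for s, d in zip(scores, diagnoses) if s >= cutoff and not d)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical rating-scale scores and clinical diagnoses (True = depressed)
scores    = [2, 4, 5, 7, 8, 11, 12, 15, 18, 22]
diagnoses = [False, False, False, False, True, True, False, True, True, True]

for cutoff in (6, 10):
    sens, spec = sensitivity_specificity(scores, diagnoses, cutoff)
    print(f"cutoff {cutoff}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Scanning such cut-offs and picking the point balancing sensitivity against specificity is what the ROC analysis in the study formalizes.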

  17. Linearly Adjustable International Portfolios

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.


  19. Linear induction motor

    Barkman, W.E.; Adams, W.Q.; Berrier, B.R.

    1978-01-01

    A linear induction motor has been operated on a test bed with a feedback pulse resolution of 5 nm (0.2 μin). Slewing tests with this slide drive have shown positioning errors less than or equal to 33 nm (1.3 μin) at feedrates between 0 and 25.4 mm/min (0-1 ipm). A 0.86-m (34-in)-stroke linear motor is being investigated, using the SPACO machine as a test bed. Initial results were encouraging, and work is continuing to optimize the servosystem compensation

  20. Handbook of linear algebra

    Hogben, Leslie

    2013-01-01

    With a substantial amount of new material, the Handbook of Linear Algebra, Second Edition provides comprehensive coverage of linear algebra concepts, applications, and computational software packages in an easy-to-use format. It guides you from the very elementary aspects of the subject to the frontiers of current research. Along with revisions and updates throughout, the second edition of this bestseller includes 20 new chapters.New to the Second EditionSeparate chapters on Schur complements, additional types of canonical forms, tensors, matrix polynomials, matrix equations, special types of

  1. Linear Algebra Thoroughly Explained

    Vujičić, Milan

    2008-01-01

    Linear Algebra Thoroughly Explained provides a comprehensive introduction to the subject suitable for adoption as a self-contained text for courses at undergraduate and postgraduate level. The clear and comprehensive presentation of the basic theory is illustrated throughout with an abundance of worked examples. The book is written for teachers and students of linear algebra at all levels and across mathematics and the applied sciences, particularly physics and engineering. It will also be an invaluable addition to research libraries as a comprehensive resource book for the subject.

  2. Alzheimer's Disease Detection by Pseudo Zernike Moment and Linear Regression Classification.

    Wang, Shui-Hua; Du, Sidan; Zhang, Yin; Phillips, Preetha; Wu, Le-Nan; Chen, Xian-Qing; Zhang, Yu-Dong

    2017-01-01

    This study presents an improved method based on "Gorji et al., Neuroscience, 2015" by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs the pseudo Zernike moment with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Copyright© Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
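
Linear regression classification, the classifier named above, assigns a test vector to the class whose training samples best reconstruct it by least squares. A minimal sketch with toy two-class data follows (the data and dimensions are illustrative, not the paper's 256-feature setup):

```python
import numpy as np

def lrc_predict(x, class_samples):
    """Linear regression classification: pick the class whose training
    matrix X_c reconstructs x with the smallest residual ||x - X_c w||."""
    best_class, best_residual = None, float("inf")
    for label, samples in class_samples.items():
        X = np.asarray(samples, dtype=float).T     # columns = training samples
        w, *_ = np.linalg.lstsq(X, x, rcond=None)  # least-squares coefficients
        residual = np.linalg.norm(x - X @ w)
        if residual < best_residual:
            best_class, best_residual = label, residual
    return best_class

# Toy data: two classes living near different subspaces of R^3
class_samples = {
    "A": [[1.0, 0.0, 0.0], [2.0, 0.1, 0.0]],
    "B": [[0.0, 1.0, 0.0], [0.0, 2.0, 0.1]],
}
print(lrc_predict(np.array([1.5, 0.05, 0.0]), class_samples))  # lies in class A's span
```

The per-class residual comparison is the whole decision rule; the feature extraction (pseudo Zernike moments in the paper) happens upstream of it.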

  3. Health physics problems encountered in the Saclay linear accelerator

    Delsaut, R.

    1979-01-01

    The safety and health physics problems specific to the Saclay linear accelerator are presented: activation (of gases, dust, water, structural materials, targets); individual dosimetry; and safety engineering.

  4. Classification Accuracy Is Not Enough

    Sturm, Bob L.

    2013-01-01

    A recent review of the research literature evaluating music genre recognition (MGR) systems over the past two decades shows that most works (81%) measure the capacity of a system to recognize genre by its classification accuracy. We show here, by implementing and testing three categorically...

  5. The hidden KPI registration accuracy.

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually.

  6. An alternative test for verifying electronic balance linearity

    Thomas, I.R.

    1998-02-01

    This paper presents an alternative method for verifying electronic balance linearity and accuracy. This method is being developed for safeguards weighings (weighings for the control and accountability of nuclear material) at the Idaho National Engineering and Environmental Laboratory (INEEL). With regard to balance linearity and accuracy, DOE Order 5633.3B, Control and Accountability of Nuclear Materials, Paragraph 2, 4, e, (1), (a), Scales and Balances Program, states: "All scales and balances used for accountability purposes shall be maintained in good working condition, recalibrated according to an established schedule, and checked for accuracy and linearity on each day that the scale or balance is used for accountability purposes." Various tests have been proposed for testing accuracy and linearity. At the 1991 Measurement Science Conference, Dr. Walter E. Kupper presented a paper entitled "Validation of High Accuracy Weighing Equipment." Dr. Kupper emphasized that tolerance checks for calibrated, state-of-the-art electronic equipment need not be complicated, and he presented four easy steps for verifying that a calibrated balance is operating correctly. These tests evaluate the standard deviation of successive weighings (of the same load), the off-center error, the calibration error, and the error due to nonlinearity. This method of balance validation is undoubtedly an authoritative means of ensuring balance operability, yet it could have two drawbacks: one, the test for linearity is not intuitively obvious, especially from a statistical viewpoint; and two, there is an absence of definitively defined testing limits. Hence, this paper describes an alternative means of verifying electronic balance linearity and accuracy that is being developed for safeguards measurements at the INEEL.
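
Two of the checks mentioned above (the standard deviation of successive weighings of one load, and the deviation of readings from calibrated standards) can be sketched as follows; the readings and tolerances are illustrative values, not INEEL figures or the paper's procedure:

```python
import statistics

def repeatability_ok(readings, tolerance):
    """Check that the spread of successive weighings of one load is within tolerance."""
    return statistics.stdev(readings) <= tolerance

def linearity_error(standards, readings):
    """Largest deviation between balance readings and calibrated standard masses."""
    return max(abs(r - s) for s, r in zip(standards, readings))

# Hypothetical data: ten weighings of one load (g), then a span of standards
same_load = [100.0002, 100.0001, 100.0003, 100.0002, 100.0001,
             100.0002, 100.0003, 100.0002, 100.0001, 100.0002]
standards = [0.0, 50.0, 100.0, 150.0, 200.0]
readings  = [0.0001, 50.0002, 100.0001, 149.9999, 200.0003]

print(repeatability_ok(same_load, tolerance=0.0005))
print(linearity_error(standards, readings))
```

In a real safeguards program the tolerances would come from the measurement control plan, not be chosen ad hoc as here.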

  7. Correlated Levy Noise in Linear Dynamical Systems

    Srokowski, T.

    2011-01-01

    Linear dynamical systems, driven by a non-white noise which has the Levy distribution, are analysed. Noise is modelled by a specific stochastic process which is defined by the Langevin equation with a linear force and the Levy distributed symmetric white noise. Correlation properties of the process are discussed. The Fokker-Planck equation driven by that noise is solved. Distributions have the Levy shape and their width, for a given time, is smaller than for processes in the white noise limit. Applicability of the adiabatic approximation in the case of the linear force is discussed. (author)

  8. Accuracy of magnetic resonance in identifying traumatic intraarticular knee lesions

    Vaz, Carlos Eduardo Sanches; Camargo, Olavo Pires de; Santana, Paulo Jose de; Valezi, Antonio Carlos

    2005-01-01

    Purpose: To evaluate the diagnostic accuracy of magnetic resonance imaging of the knee in identifying traumatic intraarticular knee lesions. Method: 300 patients with a clinical diagnosis of traumatic intraarticular knee lesions underwent prearthoscopic magnetic resonance imaging. The sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive test, likelihood ratio for a negative test, and accuracy of magnetic resonance imaging were calculated relative to the findings during arthroscopy in the studied structures of the knee (medial meniscus, lateral meniscus, anterior cruciate ligament, posterior cruciate ligament, and articular cartilage). Results: Magnetic resonance imaging produced the following results regarding detection of lesions: medial meniscus: sensitivity 97.5%, specificity 92.9%, positive predictive value 93.9%, negative predictive value 97%, positive likelihood ratio 13.7, negative likelihood ratio 0.02, and accuracy 95.3%; lateral meniscus: sensitivity 91.9%, specificity 93.6%, positive predictive value 92.7%, negative predictive value 92.9%, positive likelihood ratio 14.3, negative likelihood ratio 0.08, and accuracy 93.6%; anterior cruciate ligament: sensitivity 99.0%, specificity 95.9%, positive predictive value 91.9%, negative predictive value 99.5%, positive likelihood ratio 21.5, negative likelihood ratio 0.01, and accuracy 96.6%; posterior cruciate ligament: sensitivity 100%, specificity 99%, positive predictive value 80.0%, negative predictive value 100%, positive likelihood ratio 100, negative likelihood ratio 0.01, and accuracy 99.6%; articular cartilage: sensitivity 76.1%, specificity 94.9%, positive predictive value 94.7%, negative predictive value 76.9%, positive likelihood ratio 14.9, negative likelihood ratio 0.25, and accuracy 84.6%. 
Conclusion: Magnetic resonance imaging is a satisfactory diagnostic tool for evaluating meniscal and ligamentous lesions of the knee, but it is unable to clearly
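
All of the metrics reported above derive from a 2x2 table of MRI findings against the arthroscopic reference. A minimal sketch of the calculations follows; the counts are hypothetical, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),             # positive predictive value
        "npv": tn / (tn + fn),             # negative predictive value
        "lr_positive": sens / (1 - spec),  # likelihood ratio, positive test
        "lr_negative": (1 - sens) / spec,  # likelihood ratio, negative test
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for one knee structure
m = diagnostic_metrics(tp=78, fp=5, fn=2, tn=65)
print({k: round(v, 3) for k, v in m.items()})
```

Note that the likelihood ratios, unlike the predictive values, do not depend on lesion prevalence, which is why studies report both families of metrics.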


  10. America, Linearly Cyclical

    2013-05-10

    C2C Jessica Adams, Dr. Brissett. ...his desires, his failings, and his aspirations follow the same general trend throughout history and throughout cultures. The founding fathers sought

  11. Stanford's linear collider

    Southworth, B.

    1985-01-01

    The peak of the construction phase of the Stanford Linear Collider, SLC, to achieve 50 GeV electron-positron collisions has now been passed. The work remains on schedule to attempt colliding beams, initially at comparatively low luminosity, early in 1987. (orig./HSI).

  12. Dosimetry of linear sources

    Mafra Neto, F.

    1992-01-01

    The dose of gamma radiation from a linear source of cesium-137 is obtained, presenting two difficulties: oblique filtration of the radiation as it crosses the platinum wall in different directions, and dose correction due to scattering by the propagation medium. (C.G.C.)

  13. Resistors Improve Ramp Linearity

    Kleinberg, L. L.

    1982-01-01

    Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.

  14. LINEAR COLLIDERS: 1992 workshop

    Settles, Ron; Coignet, Guy

    1992-01-01

    As work on designs for future electron-positron linear colliders pushes ahead at major Laboratories throughout the world in a major international collaboration framework, the LC92 workshop held in Garmisch Partenkirchen this summer, attended by 200 machine and particle physicists, provided a timely focus

  15. Linear genetic programming

    Brameier, Markus

    2007-01-01

    Presents a variant of Genetic Programming that evolves imperative computer programs as linear sequences of instructions, in contrast to the more traditional functional expressions or syntax trees. This book serves as a reference for researchers, but also contains sufficient introduction for students and those who are new to the field

  16. On Solving Linear Recurrences

    Dobbs, David E.

    2013-01-01

    A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
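
A first-order linear recurrence with constant coefficients, x_{n+1} = a·x_n + b, has the closed form x_n = aⁿ·x₀ + b(1 − aⁿ)/(1 − a) for a ≠ 1, and its limit as n → ∞ is b/(1 − a) when |a| < 1. The recurrence below is a generic example, not one from the classroom note:

```python
def solve_recurrence(a, b, x0, n):
    """Closed form of x_{n+1} = a*x_n + b:
    x_n = a**n * x0 + b*(1 - a**n)/(1 - a) for a != 1, else x0 + n*b."""
    if a == 1:
        return x0 + n * b
    return a**n * x0 + b * (1 - a**n) / (1 - a)

# Iterating the recurrence directly agrees with the closed form:
x = 5.0
for _ in range(10):
    x = 0.5 * x + 3.0
print(x, solve_recurrence(0.5, 3.0, 5.0, 10))
# With |a| < 1, x_n approaches the fixed point b/(1-a) = 6 here.
```

The limit is simply the fixed point of the map x ↦ ax + b, which the iteration contracts toward when |a| < 1.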

  17. Review of linear colliders

    Takeda, Seishi

    1992-01-01

    The status of R and D of future e⁺e⁻ linear colliders proposed by institutions throughout the world is described, including the JLC, NLC, VLEPP, CLIC, DESY/THD and TESLA projects. The parameters and RF sources are discussed. (G.P.) 36 refs.; 1 tab

  18. Factors affecting GEBV accuracy with single-step Bayesian models.

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in terms of single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP with the scenarios of 5 and 50 QTL. SS-BayesB model obtained the lowest accuracy with the 500 QTL in the simulation. SS-BayesA model was the most efficient and robust considering all QTL scenarios. Generally, both the relationships between training and validation populations and LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.

  19. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Yi-hua Zhong

    2013-01-01

    Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper: a revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its end condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. Algorithm analysis and an example study show that proper choices of the safety factor parameter, accuracy parameter, and initial interior point may reduce iterations, and these can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
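
A linear ranking function of the kind used above maps a trapezoidal fuzzy number to a crisp score so fuzzy quantities can be compared inside the algorithm. One common linear choice, shown here, averages the four defining points; the paper's exact ranking function may differ:

```python
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzyNumber:
    """Trapezoidal fuzzy number (a, b, c, d) with support [a, d] and core [b, c]."""
    a: float
    b: float
    c: float
    d: float

def linear_rank(t: TrapezoidalFuzzyNumber) -> float:
    """One common linear ranking function: the average of the four points.
    (An illustrative choice; other linear ranking functions exist.)"""
    return (t.a + t.b + t.c + t.d) / 4.0

x = TrapezoidalFuzzyNumber(1.0, 2.0, 3.0, 4.0)
y = TrapezoidalFuzzyNumber(0.0, 1.0, 2.0, 3.0)
print(linear_rank(x), linear_rank(x) > linear_rank(y))  # crisp ranks decide the comparison
```

Because the ranking function is linear, comparisons of fuzzy costs and the algorithm's end condition reduce to ordinary real-number comparisons.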

  20. Accuracy of computer-assisted cervical pedicle screw installation

    Zhang Honglei; Zhou Dongsheng; Jang Zhensong

    2009-01-01

    Objective: To investigate the accuracy of computer-assisted cervical pedicle screw installation and the reasons for screw malposition. Methods: A total of 172 cervical pedicle screws were installed by computer-assisted navigation in 30 patients with lower cervical spinal diseases. All the patients were examined by X-ray and CT after the operation. Screw position and direction were measured on the sagittal and transectional images of intraoperative navigation and post-operative CT. Linear regression analysis was then performed between the navigational and post-operative CT images. Results: Two screws perforated the upper pedicle wall, and 3 perforated the lateral pedicle wall. There was a positive linear correlation between the navigational and post-operative CT images. Conclusion: Computer-assisted navigation can provide high accuracy of cervical pedicle screw installation; the excursion phenomenon is the reason for screw malposition. (authors)

  1. Regularized Label Relaxation Linear Regression.

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.

  2. Accuracy of abdominal auscultation for bowel obstruction.

    Breum, Birger Michael; Rud, Bo; Kirkegaard, Thomas; Nordentoft, Tyge

    2015-09-14

    To investigate the accuracy and inter-observer variation of bowel sound assessment in patients with clinically suspected bowel obstruction. Bowel sounds were recorded in patients with suspected bowel obstruction using a Littmann® Electronic Stethoscope. The recordings were processed to yield 25-s sound sequences presented in random order on PCs. Observers, recruited from doctors within the department, classified the sound sequences as either normal or pathological. The reference tests for bowel obstruction were intraoperative and endoscopic findings and clinical follow-up. Sensitivity and specificity were calculated for each observer and compared between junior and senior doctors. Inter-observer variation was measured using the Kappa statistic. Bowel sound sequences from 98 patients were assessed by 53 (33 junior and 20 senior) doctors. Laparotomy was performed in 47 patients, 35 of whom had bowel obstruction. Two patients underwent colorectal stenting due to large bowel obstruction. The median sensitivity and specificity were 0.42 (range: 0.19-0.64) and 0.78 (range: 0.35-0.98), respectively. There was no significant difference in accuracy between junior and senior doctors. The median frequency with which doctors classified bowel sounds as abnormal did not differ significantly between patients with and without bowel obstruction (26% vs 23%, P = 0.08). The 53 doctors made up 1378 unique pairs and the median Kappa value was 0.29 (range: -0.15 to 0.66). Accuracy and inter-observer agreement were generally low. Clinical decisions in patients with possible bowel obstruction should not be based on auscultatory assessment of bowel sounds.
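
The pairwise agreement statistic reported above is Cohen's kappa, computed for each pair of observers from their normal/pathological calls on the same sound sequences. A stdlib sketch on made-up ratings:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters making binary calls on the same items."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(1 for a, b in zip(rater1, rater2) if a == b) / n
    # Expected agreement by chance, from each rater's marginal positive rates
    p1 = sum(rater1) / n
    p2 = sum(rater2) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Hypothetical calls on ten sound sequences (1 = pathological, 0 = normal)
doctor_a = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
doctor_b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(doctor_a, doctor_b), 2))
```

Kappa corrects raw agreement for the agreement two raters would reach by chance, which is why the study's median of 0.29 counts as low despite moderate raw agreement.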

  3. Matter power spectrum and the challenge of percent accuracy

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2015-01-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying the main potential error sources from the set-up of initial conditions to...

  4. Finite-dimensional linear algebra

    Gockenbach, Mark S

    2010-01-01

    Some Problems Posed on Vector Spaces; Linear equations; Best approximation; Diagonalization; Summary; Fields and Vector Spaces; Fields; Vector spaces; Subspaces; Linear combinations and spanning sets; Linear independence; Basis and dimension; Properties of bases; Polynomial interpolation and the Lagrange basis; Continuous piecewise polynomial functions; Linear Operators; Linear operators; More properties of linear operators; Isomorphic vector spaces; Linear operator equations; Existence and uniqueness of solutions; The fundamental theorem; inverse operators; Gaussian elimination; Newton's method; Linear ordinary differential eq...

  5. Accuracy of genomic selection in European maize elite breeding populations.

    Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C

    2012-03-01

    Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data of six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping at unreplicated field trials in 3-4 locations. As for maize up to three generations are feasible per year, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
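
The fivefold cross validation used above partitions the progenies into five folds, trains on four and predicts the held-out fifth. The fold-splitting step can be sketched as follows (a generic approach, not the study's code):

```python
import random

def five_fold_indices(n, seed=0):
    """Randomly partition indices 0..n-1 into five folds for cross validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[k::5] for k in range(5)]

# 788 testcross progenies, as in the study; each progeny lands in exactly one fold
folds = five_fold_indices(788)
held_out = folds[0]
training = [i for f in folds[1:] for i in f]
# Train the prediction model on `training`, predict `held_out`, rotate through folds;
# prediction accuracy is then the correlation between predicted and observed values.
print(len(folds), len(held_out), len(training))
```

Rotating the held-out fold through all five positions uses every progeny once for validation, which is what makes the accuracy estimate unbiased by a particular split.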

  6. Analysis of spatial distribution of land cover maps accuracy

    Khatami, R.; Mountrakis, G.; Stehman, S. V.

    2017-12-01

    Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain
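
The AUC used above to score the per-pixel accuracy predictions can be computed directly from the Mann-Whitney rank statistic, without tracing the full ROC curve. A stdlib sketch on toy scores (the data are illustrative):

```python
def auc(scores_positive, scores_negative):
    """AUC as the probability a positive outranks a negative (ties count half)."""
    wins = 0.0
    for p in scores_positive:
        for q in scores_negative:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))

# Toy predicted-accuracy scores for correctly vs incorrectly classified pixels
correct   = [0.9, 0.8, 0.75, 0.6]
incorrect = [0.7, 0.5, 0.4]
print(auc(correct, incorrect))
```

An AUC of 0.5 means the predicted accuracy map carries no information about where errors fall, so improvements of 0.15 or more over existing methods, as reported above, are substantial.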

  7. Noiseless Vlasov–Poisson simulations with linearly transformed particles

    Campos Pinto, Martin, E-mail: campos@ann.jussieu.fr [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris (France); Sonnendrücker, Eric, E-mail: sonnen@math.unistra.fr [IRMA, UMR 7501, Université de Strasbourg and CNRS, 7 rue René Descartes, F-67084 Strasbourg Cedex (France); Project-team CALVI, INRIA Nancy Grand Est, 7 rue René Descartes, F-67084 Strasbourg Cedex (France); Friedman, Alex, E-mail: af@llnl.gov [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Grote, David P., E-mail: grote1@llnl.gov [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lund, Steve M., E-mail: smlund@llnl.gov [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2014-10-15

    We introduce a deterministic discrete-particle simulation approach, the Linearly-Transformed Particle-In-Cell (LTPIC) method, that employs linear deformations of the particles to reduce the noise traditionally associated with particle schemes. Formally, transforming the particles is justified by local first order expansions of the characteristic flow in phase space. In practice the method amounts to using deformation matrices within the particle shape functions; these matrices are updated via local evaluations of the forward numerical flow. Because it is necessary to periodically remap the particles on a regular grid to avoid excessively deforming their shapes, the method can be seen as a development of Denavit's Forward Semi-Lagrangian (FSL) scheme (Denavit, 1972 [8]). However, it has recently been established (Campos Pinto, 2012 [20]) that the underlying Linearly-Transformed Particle scheme converges for abstract transport problems, with no need to remap the particles; deforming the particles can thus be seen as a way to significantly lower the remapping frequency needed in the FSL schemes, and hence the associated numerical diffusion. To couple the method with electrostatic field solvers, two specific charge deposition schemes are examined, and their performance compared with that of the standard deposition method. Finally, numerical 1d1v simulations involving benchmark test cases and halo formation in an initially mismatched thermal sheet beam demonstrate some advantages of our LTPIC scheme over the classical PIC and FSL methods. Benchmarked test cases also indicate that, for numerical choices involving similar computational effort, the LTPIC method is capable of accuracy comparable to or exceeding that of state-of-the-art, high-resolution Vlasov schemes.
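
    The idea of carrying a deformation with each particle can be sketched in one dimension, where the deformation matrix reduces to a per-particle scale factor. The deposition below is a minimal illustration (names and the hat shape function are assumptions, not the LTPIC deposition schemes examined in the paper):

```python
import numpy as np

def hat(u):
    """Order-1 B-spline shape function; integrates to 1 over u."""
    return np.maximum(1.0 - np.abs(u), 0.0)

def deposit_density(grid, centers, weights, scales):
    """Deposit particle weights on a 1D grid with linearly transformed shapes.

    Each particle p contributes w_p * hat((x - x_p) / h_p) / h_p, where the
    per-particle scale h_p plays the role of the deformation matrix in 1D
    and would be updated from local evaluations of the numerical flow.
    """
    u = (grid[:, None] - centers[None, :]) / scales[None, :]
    return (weights / scales * hat(u)).sum(axis=1)
```

    Dividing by the scale keeps each particle's deposited mass independent of how stretched its shape is, which is what lets the shapes deform without losing charge conservation.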

  8. Identification of Influential Points in a Linear Regression Model

    Jan Grosz

    2011-03-01

    The article deals with the detection and identification of influential points in the linear regression model. Three methods for the detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. The paper also briefly describes theoretical aspects of several robust methods. Robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of the simple linear regression is presented.
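
    Two of the standard diagnostics for such points, leverage (diagonal of the hat matrix) and Cook's distance, can be computed directly. This is a generic sketch of those textbook measures, not necessarily the three methods compared in the article:

```python
import numpy as np

def influence_measures(x, y):
    """Leverage and Cook's distance for a simple linear regression y = b0 + b1*x.

    High leverage flags unusual x-values; high Cook's distance flags
    observations whose removal would substantially change the fit.
    """
    X = np.column_stack([np.ones_like(x), x])
    H = X @ np.linalg.solve(X.T @ X, X.T)        # hat (projection) matrix
    h = np.diag(H)                               # leverages
    resid = y - H @ y                            # ordinary residuals
    n, p = X.shape
    s2 = resid @ resid / (n - p)                 # residual variance estimate
    cooks_d = resid ** 2 / (p * s2) * h / (1.0 - h) ** 2
    return h, cooks_d
```

    A common rule of thumb flags points with leverage above 2p/n or Cook's distance above 4/n for closer inspection.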

  9. Non Linear Beam Dynamics Studies at SPEAR

    Terebilo, A.; Pellegrini, C.; Cornacchia, M.; Corbett, J.; Martin, D.

    2011-01-01

    The frequency map analysis of a Hamiltonian system, recently introduced to accelerator physics, in combination with turn-by-turn phase space measurements opens new experimental opportunities for studying non-linear dynamics in storage rings. In this paper we report on the experimental program at SPEAR with the goal of measuring the frequency map of the machine. We discuss the accuracy of instantaneous tune extraction from experimental data and demonstrate the feasibility of the frequency map measurement. The instantaneous tune extraction technique can be applied to experimental tracking data with reasonable accuracy, and the frequency map can be experimentally determined using the existing turn-by-turn phase space measurement techniques and NAFF instantaneous tune extraction.
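
    The tune extraction step can be illustrated with a much simpler FFT-based stand-in for NAFF: locate the spectral peak of mean-subtracted, windowed turn-by-turn data and refine it by parabolic interpolation. This is a hedged sketch of the general idea, not the NAFF algorithm used in the paper:

```python
import numpy as np

def extract_tune(x):
    """Estimate the fractional tune in (0, 0.5) from turn-by-turn position data."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    w = np.hanning(len(x))                       # window reduces spectral leakage
    spec = np.abs(np.fft.rfft(x * w))
    k = int(spec[1:-1].argmax()) + 1             # peak bin, skipping DC and Nyquist
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)    # parabolic peak refinement
    return (k + delta) / len(x)
```

    NAFF improves on this by iteratively fitting and subtracting complex exponentials, which is what makes instantaneous (short-window) tune extraction accurate enough for frequency map measurements.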

  10. Linearity and Non-linearity of Photorefractive effect in Materials ...

    In this paper we have studied the linearity and non-linearity of the photorefractive effect in materials using the band transport model. For low light beam intensities the change in the refractive index is proportional to the electric field for linear optics, while for non-linear optics the change in refractive index is directly proportional ...

  11. Linear Water Waves

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'

  12. The International Linear Collider

    List, Benno

    2014-04-01

    The International Linear Collider (ILC) is a proposed e+e− linear collider with a centre-of-mass energy of 200–500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of the LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  14. Dimension of linear models

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. Many examples from practice show that the criteria derived from the H-principle function better than the known and popular criteria for the number of components. We briefly review the basic problems in determining the dimension of linear models. Then each of the eight measures is treated. The results are illustrated by examples.

  15. Reciprocating linear motor

    Goldowsky, Michael P. (Inventor)

    1987-01-01

    A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.

  16. Duality in linearized gravity

    Henneaux, Marc; Teitelboim, Claudio

    2005-01-01

    We show that duality transformations of linearized gravity in four dimensions, i.e., rotations of the linearized Riemann tensor and its dual into each other, can be extended to the dynamical fields of the theory so as to be symmetries of the action and not just symmetries of the equations of motion. Our approach relies on the introduction of two superpotentials, one for the spatial components of the spin-2 field and the other for their canonically conjugate momenta. These superpotentials are two-index, symmetric tensors. They can be taken to be the basic dynamical fields and appear locally in the action. They are simply rotated into each other under duality. In terms of the superpotentials, the canonical generator of duality rotations is found to have a Chern-Simons-like structure, as in the Maxwell case

  17. The SLAC linear collider

    Phinney, N.

    1992-01-01

    The SLAC Linear Collider has begun a new era of operation with the SLD detector. During 1991 there was a first engineering run for the SLD in parallel with machine improvements to increase luminosity and reliability. For the 1992 run, a polarized electron source was added and more than 10,000 Zs with an average of 23% polarization have been logged by the SLD. This paper discusses the performance of the SLC in 1991 and 1992 and the technical advances that have produced higher luminosity. Emphasis will be placed on issues relevant to future linear colliders such as producing and maintaining high current, low emittance beams and focusing the beams to the micron scale for collisions. (Author) tab., 2 figs., 18 refs

  18. Linear waves and instabilities

    Bers, A.

    1975-01-01

    The electrodynamic equations for small-amplitude waves and their dispersion relation in a homogeneous plasma are outlined. For such waves, energy and momentum, and their flow and transformation, are described. Perturbation theory of waves is treated and applied to linear coupling of waves, and the resulting instabilities from such interactions between active and passive waves. Linear stability analysis in time and space is described, where the time-asymptotic, time-space Green's function for an arbitrary dispersion relation is developed. The perturbation theory of waves is applied to nonlinear coupling, with particular emphasis on pump-driven interactions of waves. Details of the time-space evolution of instabilities due to coupling are given. (U.S.)

  19. Extended linear chain compounds

    Linear chain substances span a large cross section of contemporary chemistry ranging from covalent polymers, to organic charge transfer complexes to nonstoichiometric transition metal coordination complexes. Their commonality, which coalesced intense interest in the theoretical and experimental solid state physics/chemistry communities, was based on the observation that these inorganic and organic polymeric substrates exhibit striking metal-like electrical and optical properties. Exploitation and extension of these systems has led to the systematic study of both the chemistry and physics of highly and poorly conducting linear chain substances. To gain a salient understanding of these complex materials rich in anomalous anisotropic electrical, optical, magnetic, and mechanical properties, the convergence of diverse skills and talents was required. The constructive blending of traditionally segregated disciplines such as synthetic and physical organic, inorganic, and polymer chemistry, crystallog...

  20. Dosimetric accuracy at low monitor unit setting in electron beams at different dose rates

    Ravikumar, M.; Ravichandran, R.; Supe, Sanjay S.; Sharma, Anil K.

    1999-01-01

    As electron beams with low monitor unit (LMU) settings are used in some dosimetric studies, a better understanding of the accuracy of dose delivery at LMU settings is necessary. The dose measurements were carried out with a 0.6 cm^3 Farmer-type ion chamber at d_max in a polystyrene phantom. Measurements at different MUs show that the dose linearity ratio (DLR) increases as the MU setting decreases below 20 MU, and DLRs are found to increase when the incident electron beams have higher energies. The increase in DLR is minimum for the low dose rate setting for all five electron beam energies (6, 9, 12, 16 and 20 MeV). As the variation in dose delivery is machine-specific, a detailed study should be conducted before a low MU setting is implemented. Since errors in dose delivery are high at higher dose rates, a low dose rate may be better at low MU settings. (author)
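
    The dose linearity ratio compares the dose per monitor unit at a test setting with the dose per monitor unit at a reference setting, so that DLR = 1 means perfectly linear delivery. A hypothetical helper (the function name, argument layout, and the 100 MU reference are assumptions):

```python
def dose_linearity_ratio(dose, mu, dose_ref, mu_ref=100.0):
    """Dose linearity ratio: (dose per MU at the test MU setting) divided by
    (dose per MU at the reference MU setting)."""
    return (dose / mu) / (dose_ref / mu_ref)
```

    For example, a measured 0.052 Gy from a 5 MU delivery against 1.0 Gy at the 100 MU reference gives a DLR of 1.04, i.e. a 4 % over-delivery at the low setting.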

  1. Non-linear osmosis

    Diamond, Jared M.

    1966-01-01

    1. The relation between osmotic gradient and rate of osmotic water flow has been measured in rabbit gall-bladder by a gravimetric procedure and by a rapid method based on streaming potentials. Streaming potentials were directly proportional to gravimetrically measured water fluxes. 2. As in many other tissues, water flow was found to vary with gradient in a markedly non-linear fashion. There was no consistent relation between the water permeability and either the direction or the rate of water flow. 3. Water flow in response to a given gradient decreased at higher osmolarities. The resistance to water flow increased linearly with osmolarity over the range 186-825 m-osM. 4. The resistance to water flow was the same when the gall-bladder separated any two bathing solutions with the same average osmolarity, regardless of the magnitude of the gradient. In other words, the rate of water flow is given by the expression (Om − Os)/[Ro′ + ½k′(Om + Os)], where Ro′ and k′ are constants and Om and Os are the bathing solution osmolarities. 5. Of the theories advanced to explain non-linear osmosis in other tissues, flow-induced membrane deformations, unstirred layers, asymmetrical series-membrane effects, and non-osmotic effects of solutes could not explain the results. However, experimental measurements of water permeability as a function of osmolarity permitted quantitative reconstruction of the observed water flow-osmotic gradient curves. Hence non-linear osmosis in rabbit gall-bladder is due to a decrease in water permeability with increasing osmolarity. 6. The results suggest that aqueous channels in the cell membrane behave as osmometers, shrinking in concentrated solutions of impermeant molecules and thereby increasing membrane resistance to water flow. A mathematical formulation of such a membrane structure is offered. PMID:5945254
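
    The flow expression in point 4 is simple enough to evaluate directly. The symbols follow the abstract (Om, Os are the bathing-solution osmolarities, Ro′ and k′ fitted constants); the numerical constants in the example below are invented for illustration, not the paper's fitted values:

```python
def water_flow(o_m, o_s, r0, k):
    """Osmotic water flow (Om - Os) / (Ro' + 0.5*k'*(Om + Os)).

    The denominator is the osmolarity-dependent resistance: it grows
    linearly with the mean osmolarity of the two bathing solutions, which
    is what produces the non-linear flow versus gradient behaviour.
    """
    return (o_m - o_s) / (r0 + 0.5 * k * (o_m + o_s))
```

    Note that the same 100 m-osM gradient drives less flow when both solutions are more concentrated, and reversing the gradient simply reverses the sign of the flow because the resistance depends only on the average osmolarity.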

  2. Fundamentals of linear algebra

    Dash, Rajani Ballav

    2008-01-01

    FUNDAMENTALS OF LINEAR ALGEBRA is a comprehensive textbook that can be used by students and teachers of all Indian universities. The text is presented in an easy, understandable form and covers all topics of the UGC curriculum. There are many worked-out examples that help students solve the problems on their own. The problem sets have been designed keeping in view the questions asked in different examinations.

  3. Linear network theory

    Sander, K F

    1964-01-01

    Linear Network Theory covers the significant algebraic aspect of network theory, with minimal reference to practical circuits. The book begins the presentation of network analysis with the exposition of networks containing resistances only, and follows it up with a discussion of networks involving inductance and capacity by way of the differential equations. Classification and description of certain networks, equivalent networks, filter circuits, and network functions are also covered. Electrical engineers, technicians, electronics engineers, electricians, and students learning the intricacies

  4. Non linear viscoelastic models

    Agerkvist, Finn T.

    2011-01-01

    Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time dependent small signal stiffness while the Kelvin-Voigt version does not.

  5. Relativistic Linear Restoring Force

    Clark, D.; Franklin, J.; Mann, N.

    2012-01-01

    We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…

  6. Superconducting linear colliders

    Anon.

    1990-01-01

    The advantages of superconducting radiofrequency (SRF) for particle accelerators have been demonstrated by successful operation of systems in the TRISTAN and LEP electron-positron collider rings respectively at the Japanese KEK Laboratory and at CERN. If performance continues to improve and costs can be lowered, this would open an attractive option for a high luminosity TeV (1000 GeV) linear collider

  7. Perturbed asymptotically linear problems

    Bartolo, R.; Candela, A. M.; Salvatore, A.

    2012-01-01

    The aim of this paper is to investigate the existence of solutions of some semilinear elliptic problems on open bounded domains when the nonlinearity is subcritical and asymptotically linear at infinity and there is a perturbation term which is just continuous. Also in the case when the problem does not have a variational structure, suitable procedures and estimates allow us to prove that the number of distinct critical levels of the functional associated to the unperturbed problem is "stable" unde...

  8. Miniature linear cooler development

    Pruitt, G.R.

    1993-01-01

    An overview is presented of the status of a family of miniature linear coolers currently under development by Hughes Aircraft Co. for use in hand held, volume limited or power limited infrared applications. These coolers, representing the latest additions to the Hughes family of TOP trademark [twin-opposed piston] linear coolers, have been fabricated and tested in three different configurations. Each configuration is designed to utilize a common compressor assembly resulting in reduced manufacturing costs. The baseline compressor has been integrated with two different expander configurations and has been operated with two different levels of input power. These various configuration combinations offer a wide range of performance and interface characteristics which may be tailored to applications requiring limited power and size without significantly compromising cooler capacity or cooldown characteristics. Key cooler characteristics and test data are summarized for three combinations of cooler configurations which are representative of the versatility of this linear cooler design. Configurations reviewed include the shortened coldfinger [1.50 to 1.75 inches long], limited input power [less than 17 Watts] for low power availability applications; the shortened coldfinger with higher input power for lightweight, higher performance applications; and coldfingers compatible with DoD 0.4 Watt Common Module coolers for wider range retrofit capability. Typical weight of these miniature linear coolers is less than 500 grams for the compressor, expander and interconnecting transfer line. Cooling capacity at 80K at room ambient conditions ranges from 400 mW to greater than 550 mW. Steady state power requirements for maintaining a heat load of 150 mW at 80K has been shown to be less than 8 Watts. Ongoing reliability growth testing is summarized including a review of the latest test article results

  9. Linear pneumatic actuator

    Avram Mihai; Niţu Constantin; Bucşan Constantin; Grămescu Bogdan

    2017-01-01

    The paper presents a linear pneumatic actuator with a short working stroke. It consists of a pneumatic motor (a simple stroke cylinder or a membrane chamber), two 2/2 pneumatic distributors "all or nothing" electrically commanded for controlling the intake/outtake flow to/from the active chamber of the motor, a position transducer and a microcontroller. The theoretical analysis accomplished (mathematical modelling and numerical simulation) is also presented.

  11. Linear MHD equilibria

    Scheffel, J.

    1984-03-01

    The linear Grad-Shafranov equation for a toroidal, axisymmetric plasma is solved analytically. Exact solutions are given in terms of confluent hypergeometric functions. As an alternative, simple and accurate WKBJ solutions are presented. With parabolic pressure profiles, both hollow and peaked toroidal current density profiles are obtained. As an example the equilibrium of a z-pinch with a square-shaped cross section is derived. (author)
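
    For reference, the underlying equation can be written out. The following is a sketch of the standard Grad-Shafranov equation together with one common profile parametrization that makes it linear in the flux ψ; the specific profile choice here is an assumption for illustration, not necessarily the one used in this report:

```latex
\Delta^{*}\psi \;\equiv\; R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right)
  + \frac{\partial^{2}\psi}{\partial z^{2}}
  \;=\; -\mu_{0} R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi} .
```

    With quadratic profiles p(ψ) = p₀ + ½p₁ψ² and F²(ψ) = F₀² + f₁ψ², the right-hand side becomes −(μ₀p₁R² + f₁)ψ, which is linear in ψ and consistent with the parabolic pressure profiles mentioned in the abstract.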

  12. Linear induction accelerator

    Buttram, M.T.; Ginn, J.W.

    1988-06-21

    A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.

  13. Geometric modeling in the problem of ball bearing accuracy

    Glukhov, V. I.; Pushkarev, V. V.; Khomchenko, V. G.

    2017-06-01

    The manufacturing quality of ball bearings is an urgent problem for the machine-building industry. The aim of the research is to improve the accuracy of the geometric specifications of bearings based on an evidence-based systematic approach and on a method of adequately modeling the size, location and form deviations of the rings and assembled ball bearings. The present work addressed the problem of identifying the geometric specifications of bearings and the study of these specifications. Deviations from the plane of symmetry of the rings and the assembly and mounting widths of the bearings are among these specifications. A systematic approach to normalizing the values and tolerances of the geometric specifications of ball bearings in coordinate systems will improve the quality of bearings by optimizing and minimizing the number of specifications. The introduction of this systematic approach into the international standards on rolling bearings would guarantee a significant increase in the accuracy of bearings and the quality of the products where they are applied.

  14. Integration of Attributes from Non-Linear Characterization of Cardiovascular Time-Series for Prediction of Defibrillation Outcomes.

    Sharad Shandilya

    The timing of defibrillation is mostly at arbitrary intervals during cardio-pulmonary resuscitation (CPR), rather than during intervals when the out-of-hospital cardiac arrest (OOH-CA) patient is physiologically primed for successful countershock. Interruptions to CPR may negatively impact defibrillation success. Multiple defibrillations can be associated with decreased post-resuscitation myocardial function. We hypothesize that a more complete picture of the cardiovascular system can be gained through non-linear dynamics and integration of multiple physiologic measures from biomedical signals. Retrospective analysis of 153 anonymized OOH-CA patients who received at least one defibrillation for ventricular fibrillation (VF) was undertaken. A machine learning model, termed the Multiple Domain Integrative (MDI) model, was developed to predict defibrillation success. We explore the rationale for non-linear dynamics and statistically validate heuristics involved in feature extraction for model development. Performance of MDI is then compared to the amplitude spectrum area (AMSA) technique. 358 defibrillations were evaluated (218 unsuccessful and 140 successful). Non-linear properties (Lyapunov exponent > 0) of the ECG signals indicate a chaotic nature and validate the use of novel non-linear dynamic methods for feature extraction. Classification using MDI yielded an ROC-AUC of 83.2% and accuracy of 78.8% for the model built with ECG data only. Utilizing 10-fold cross-validation, at the 80% specificity level, MDI (74% sensitivity) outperformed AMSA (53.6% sensitivity). At the 90% specificity level, MDI had 68.4% sensitivity while AMSA had 43.3% sensitivity. Integrating available end-tidal carbon dioxide features into MDI, for the available 48 defibrillations, boosted the ROC-AUC to 93.8% and accuracy to 83.3% at 80% sensitivity. At clinically relevant sensitivity thresholds, the MDI provides improved performance as compared to AMSA, yielding fewer unsuccessful defibrillations
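
    The operating points quoted above (sensitivity at a fixed specificity level) can be read off a classifier's scores by thresholding on the negatives. A generic sketch of that computation (function name and quantile-based thresholding are assumptions, not the paper's evaluation code):

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, specificity=0.80):
    """Sensitivity of a score-based classifier at a fixed specificity level.

    The decision threshold is chosen so that the requested fraction of
    negatives (labels == 0) falls below it; sensitivity is then the
    fraction of positives (labels == 1) scoring above that threshold.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    thr = np.quantile(scores[labels == 0], specificity)
    return float(np.mean(scores[labels == 1] > thr))
```

    Sweeping the specificity from 0 to 1 and plotting sensitivity against (1 − specificity) traces out the ROC curve whose area is the reported AUC.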

  15. Linear algebraic groups

    Springer, T A

    1998-01-01

    "[The first] ten chapters...are an efficient, accessible, and self-contained introduction to affine algebraic groups over an algebraically closed field. The author includes exercises and the book is certainly usable by graduate students as a text or for self-study...the author [has a] student-friendly style… [The following] seven chapters... would also be a good introduction to rationality issues for algebraic groups. A number of results from the literature…appear for the first time in a text." –Mathematical Reviews (Review of the Second Edition) "This book is a completely new version of the first edition. The aim of the old book was to present the theory of linear algebraic groups over an algebraically closed field. Reading that book, many people entered the research field of linear algebraic groups. The present book has a wider scope. Its aim is to treat the theory of linear algebraic groups over arbitrary fields. Again, the author keeps the treatment of prerequisites self-contained. The material of t...

  16. Quantum linear Boltzmann equation

    Vacchini, Bassano; Hornberger, Klaus

    2009-01-01

    We review the quantum version of the linear Boltzmann equation, which describes in a non-perturbative fashion, by means of scattering theory, how the quantum motion of a single test particle is affected by collisions with an ideal background gas. A heuristic derivation of this Lindblad master equation is presented, based on the requirement of translation-covariance and on the relation to the classical linear Boltzmann equation. After analyzing its general symmetry properties and the associated relaxation dynamics, we discuss a quantum Monte Carlo method for its numerical solution. We then review important limiting forms of the quantum linear Boltzmann equation, such as the case of quantum Brownian motion and pure collisional decoherence, as well as the application to matter wave optics. Finally, we point to the incorporation of quantum degeneracies and self-interactions in the gas by relating the equation to the dynamic structure factor of the ambient medium, and we provide an extension of the equation to include internal degrees of freedom.

  17. The Stanford Linear Collider

    Emma, P.

    1995-01-01

    The Stanford Linear Collider (SLC) is the first and only high-energy e+e− linear collider in the world. Its most remarkable features are high intensity, submicron sized, polarized (e−) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z^0 boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year with peaks approaching 10^30 cm^-2 s^-1 and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed

  18. Optical linear algebra processors - Architectures and algorithms

    Casasent, David

    1986-01-01

    Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.

  19. Remote handling dynamical modelling: assessment on new approach to enhance positioning accuracy with heavy load manipulation

    Gagarina-Sasia, T.; David, O.; Dubus, G.; Perrot, Y.; Riwain, A.

    2007-01-01

    In-vessel maintenance work in a Fusion Tokamak will be carried out with the help of several sets of robotic devices. Heavy load handling in constrained space is identified by all players of the RH community as a key issue in the latest Fusion Tokamak facilities. To deal with high-level dexterity tasks, high payload to mass ratio and limited operating space, RH equipment designers can only propose systems whose mechanical flexibility is no longer negligible and needs to be taken into account in the control scheme. Traditional approaches, in which the control system includes only a linear model of the deformation of the structure, lead to poor positioning accuracy. Uncontrolled or underestimated errors could be damaging for in-vessel components during maintenance operations in the Tokamak facility. To address the control of complex flexible systems, we will investigate the use of specific mechanical software that combines both finite element and kinematical joint analyses, with a strongly coupled formulation, to perform system dynamics simulations. This procedure will be applied to a single axis mock-up robotic joint with a highly flexible structure. A comparison of experimental results with the traditional linear approach and the specified software model will be carried out. Benefits introduced by this new approach will finally be assessed in view of RH design or specification in the field of RH at Fusion Tokamak scale, such as ITER. (orig.)

  20. Radiation load of workers on linear accelerators

    Kralik, G.; Kantova, Z.; Fribertova, M.; Kontrisova, K.

    2014-01-01

    This paper concerns the radiation burden of health care personnel working on linear accelerators. New examination and treatment methods make it possible to reduce the number of health care personnel even as the number of examined patients increases. However, the question of determining the effective dose delivered to health care personnel remains open. Employing several methods of evaluating the received dose at one workplace makes it possible to compare the accuracy and reliability of the respective types of measuring devices, as well as to point out the pitfalls of their use. At the St. Elizabeth Cancer Institute we compared the results of measurements with TL dosimeters and OSL dosimeters at workplaces with linear accelerators. (authors)

  1. Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing

    Jensen, Tobias Lindstrøm; Giacobello, Daniele; van Waterschoot, Toon

    2016-01-01

    In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitation of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction...... problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high accuracy solution both objectively, in terms of prediction gain, as well as with perceptual relevant measures, when...... evaluated in a speech reconstruction application....

  2. Picosecond, single pulse electron linear accelerator

    Kikuchi, Riichi; Kawanishi, Masaharu

    1979-01-01

    The picosecond, single-pulse electron linear accelerators installed in the Nuclear Engineering Laboratory of the University of Tokyo and in the Nuclear Radiation Laboratory of Osaka University are described. The purpose of these accelerators is to investigate very fast reactions in substances into which gamma rays or electron beams enter. When the electrons in a substance receive radiation energy, they gain high kinetic energy, and the energy and electric charge relax, finally, to a quasi-stable state. This transient state can be studied very accurately with these special accelerators, on a picosecond time scale, by raising the accuracy of both the time of incidence of radiation and the observation time. The outline of the picosecond, single-pulse electron linear accelerators of the University of Tokyo and Osaka University, including their history, systems, components and output beam characteristics, is explained. For example, a maximum energy of 30--35 MeV, a peak current of 1--8 nC, a pulse width of 18--40 ps, a pulse repetition rate of 200--720 pps, an energy spectrum of 1--1.8% and an output beam diameter of 2--5 mm are given as the output beam characteristics of the accelerators at both universities. The investigations utilizing the picosecond, single-pulse electron linear accelerators, such as the study of short-lived excitation states produced by pulsed radiation, the dosimetry of pulsed radiation, and the investigation of mechanisms and development of technology for converting the picosecond, single-pulse electron beam into X-rays, vacuum ultraviolet and visible light, are described. (Nakai, Y.)

  3. Accuracy of computer-assisted orthognathic surgery.

    De Riu, Giacomo; Virdis, Paola Ilaria; Meloni, Silvio Mario; Lumbau, Aurea; Vaira, Luigi Angelo

    2018-02-01

    The purpose of this study was to retrospectively evaluate the difference between the planned and the actual movements of the jaws, using three-dimensional (3D) software for PC-assisted orthognathic surgery, to establish the accuracy of the procedure. A retrospective study was performed with 49 patients who had undergone PC-guided bimaxillary surgery. The accuracy of the protocol was determined by comparing planned movements of the jaws with the actual surgical movements, analysing frontal and lateral cephalometries. The overall results were deemed accurate, and differences among 12 of the 15 parameters were considered nonsignificant. Significant differences were reported for SNA (p = 0.008), SNB (p = 0.006), and anterior facial height (p = 0.033). The latter was significantly different in patients who had undergone genioplasty when compared with patients who had not. Virtual surgical planning presented a good degree of accuracy for most of the parameters assessed, with an average error of 1.98 mm for linear measures and 1.19° for angular measures. In general, a tendency towards under-projection in jaws was detected, probably due to imperfect condylar seating. A slight overcorrection of SNA and SNB during virtual planning (approximately 2°) could be beneficial. Further progress is required in the development of 3D simulation of the soft tissue, which currently does not allow an accurate management of the facial height and the chin position. Virtual planning cannot replace the need for constant intraoperative monitoring of the jaws' movements and real-time comparisons between planned and actual outcomes. It is therefore appropriate to leave some margin for correction of inaccuracies in the virtual planning. In this sense, it may be appropriate to use only the intermediate splint, and then use the planned occlusion and clinical measurements to guide repositioning of the second jaw and chin, respectively.

  4. Linear and non-linear autoregressive models for short-term wind speed forecasting

    Lydia, M.; Suresh Kumar, S.; Immanuel Selvakumar, A.; Edwin Prem Kumar, G.

    2016-01-01

    Highlights: • Models for wind speed prediction at 10-min intervals up to 1 h built on time-series wind speed data. • Four different multivariate models for wind speed built based on exogenous variables. • Non-linear models built using three data mining algorithms outperform the linear models. • Autoregressive models based on wind direction perform better than other models. - Abstract: Wind speed forecasting aids in estimating the energy produced from wind farms. The soaring energy demands of the world and minimal availability of conventional energy sources have significantly increased the role of non-conventional sources of energy like solar, wind, etc. Development of models for wind speed forecasting with higher reliability and greater accuracy is the need of the hour. In this paper, models for predicting wind speed at 10-min intervals up to 1 h have been built based on linear and non-linear autoregressive moving average models with and without external variables. The autoregressive moving average models based on wind direction and annual trends have been built using data obtained from Sotavento Galicia Plc. and autoregressive moving average models based on wind direction, wind shear and temperature have been built on data obtained from Centre for Wind Energy Technology, Chennai, India. While the parameters of the linear models are obtained using the Gauss–Newton algorithm, the non-linear autoregressive models are developed using three different data mining algorithms. The accuracy of the models has been measured using three performance metrics namely, the Mean Absolute Error, Root Mean Squared Error and Mean Absolute Percentage Error.
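    The three error metrics named in the abstract are standard; as a minimal sketch (with made-up wind-speed values, not data from the paper), they can be computed as:

    ```python
    # Sketch of the three accuracy metrics named in the abstract (MAE, RMSE, MAPE),
    # applied to a toy wind-speed forecast; the data are illustrative, not from the paper.
    import math

    def mae(actual, predicted):
        return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

    def rmse(actual, predicted):
        return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

    def mape(actual, predicted):
        # undefined when an actual value is zero
        return 100.0 / len(actual) * sum(abs((a - p) / a) for a, p in zip(actual, predicted))

    actual = [5.2, 6.1, 5.8, 7.0, 6.4, 5.9]       # measured 10-min wind speeds (m/s)
    predicted = [5.0, 6.3, 5.5, 7.4, 6.1, 6.2]    # model forecasts (m/s)

    print(mae(actual, predicted), rmse(actual, predicted), mape(actual, predicted))
    ```

    Lower values on all three metrics indicate a better forecast; MAPE expresses the error relative to the measured speed, which is why it breaks down near calm (zero-speed) periods.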

  5. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background Little overlap between independently developed gene signatures and poor inter-study applicability of gene signatures are two of the major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform, and, using this data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction likewise increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  6. Diagnostic Accuracy of a Self-Report Measure of Patellar Tendinopathy in Youth Basketball.

    Owoeye, Oluwatoyosi B A; Wiley, J Preston; Walker, Richard E A; Palacios-Derflingher, Luz; Emery, Carolyn A

    2018-04-27

    Study Design Prospective diagnostic accuracy validation study. Background Engaging clinicians for diagnosis of patellar tendinopathy in large surveillance studies is often impracticable. A self-report measure, the Oslo Sports Research Trauma Centre patellar tendinopathy (OSTRC-P) Questionnaire, an adaptation of the OSTRC Questionnaire, may provide a viable alternative. Objectives To evaluate the diagnostic accuracy of the OSTRC-P Questionnaire in detecting patellar tendinopathy in youth basketball players when compared to clinical evaluation. Methods Following the Standards for Reporting of Diagnostic Accuracy Studies guidelines, 208 youth basketball players (aged 13-18 years) were recruited. Participants completed the OSTRC-P Questionnaire (index test) prior to a clinical evaluation (reference standard) by a physiotherapist blinded to OSTRC-P Questionnaire results. Sensitivity, specificity, predictive values (PVs), likelihood ratios (LRs) and posttest probabilities were calculated. Linear regression was used to examine the association between OSTRC-P Questionnaire severity score and patellar tendinopathy severity rating during single leg decline squat (SLDS). Results The final analysis included 169 players. The OSTRC-P Questionnaire had a sensitivity of 79% (95%CI: 65%, 90%), specificity of 98% (95%CI: 94%, 100%), positive PV of 95%, negative PV of 92%, positive LR of 48 and negative LR of 0.21. The posttest probabilities were 95% and 8% given positive and negative results, respectively. A positive association was found between OSTRC-P Questionnaire and SLDS rating (β = .08; 95%CI: .03, .12; p = .001). Conclusions The OSTRC-P Questionnaire is an acceptable alternative to clinical evaluation for self-reporting patellar tendinopathy and grading its severity in settings involving youth basketball players. Level of Evidence Diagnosis, level 1b. J Orthop Sports Phys Ther, Epub 27 Apr 2018. doi:10.2519/jospt.2018.8088.
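    The accuracy measures quoted above all derive from a standard 2x2 contingency table; a minimal sketch, using illustrative counts rather than the study's raw data, is:

    ```python
    # Sketch of the diagnostic-accuracy quantities reported in the abstract,
    # computed from a generic 2x2 table; the counts below are illustrative,
    # not the study's data.
    tp, fn = 37, 10    # clinically positive: index test positive / negative
    fp, tn = 2, 120    # clinically negative: index test positive / negative

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                        # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity    # negative likelihood ratio

    # post-test probability of disease given a positive result, via pre-test odds
    prevalence = (tp + fn) / (tp + fn + fp + tn)
    pretest_odds = prevalence / (1 - prevalence)
    posttest_prob_pos = (pretest_odds * lr_pos) / (1 + pretest_odds * lr_pos)

    print(sensitivity, specificity, ppv, npv, lr_pos, lr_neg, posttest_prob_pos)
    ```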

  7. Non linear microtearing modes

    Garbet, X.; Mourgues, F.; Samain, A.

    1987-01-01

    Among the various instabilities which could explain the anomalous electron heat transport observed in tokamaks during additional heating, microtearing turbulence is a reasonable candidate since it directly affects the magnetic topology. This turbulence may be described, in a proper frame rotating around the major axis, by a static vector potential. In strong non-linear regimes, the flow of electrons along the stochastic field lines induces a current. The point is to know whether this current can sustain the turbulence. The mechanisms of this self-consistency, involving the combined effects of thermal diamagnetism and of the electric drift, are presented here

  8. RF linear accelerators

    Wangler, Thomas P

    2008-01-01

    Thomas P. Wangler received his B.S. degree in physics from Michigan State University, and his Ph.D. degree in physics and astronomy from the University of Wisconsin. After postdoctoral appointments at the University of Wisconsin and Brookhaven National Laboratory, he joined the staff of Argonne National Laboratory in 1966, working in the fields of experimental high-energy physics and accelerator physics. He joined the Accelerator Technology Division at Los Alamos National Laboratory in 1979, where he specialized in high-current beam physics and linear accelerator design and technology. In 2007

  9. SLAC linear collider

    Richter, B.; Bell, R.A.; Brown, K.L.

    1980-06-01

    The SLAC LINEAR COLLIDER is designed to achieve an energy of 100 GeV in the electron-positron center-of-mass system by accelerating intense bunches of particles in the SLAC linac and transporting the electron and positron bunches in a special magnet system to a point where they are focused to a radius of about 2 microns and made to collide head on. The rationale for this new type of colliding beam system is discussed, the project is described, some of the novel accelerator physics issues involved are discussed, and some of the critical technical components are described

  10. Matlab linear algebra

    Lopez, Cesar

    2014-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Linear Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. In addition to giving an introduction to

  11. Cavity characterization for general use in linear electron accelerators

    Souza Neto, M.V. de.

    1985-01-01

    The main objective of this work is to develop measurement techniques for the characterization of microwave cavities used in linear electron accelerators. Methods are developed for the measurement of parameters that are essential to the design of an accelerator structure, using conventional low-power resonant-cavity techniques. Disk-loaded cavities were designed and built, similar to those in most existing linear electron accelerators. The methods developed and their estimated accuracy were compared with those of other investigators. The results of this work are relevant for the design of cavities aimed at the development of linear electron accelerators. (author) [pt

  12. Direct linear driving systems; Les entrainements lineaires directs

    Favre, E.; Brunner, C.; Piaget, D. [ETEL SA (France)

    1999-11-01

    The linear motor is one of the most important developments in electrical drive technology. However, it only began to be adopted on a large scale at the beginning of the 1990s and will not be considered a mature technology until well into the next millennium. Actuators based on linear motor technology have a number of technical advantages, including high speed, high positional accuracy and fine resolution. They also require fewer component parts. Some precautions are necessary when using linear motors: care must be taken to avoid overheating and excessive vibration, and the magnetic components must be protected.

  13. Multicollinearity in hierarchical linear models.

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
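    The abstract does not spell out its diagnostic procedure; as a generic illustration of multicollinearity diagnosis, the standard variance inflation factor (VIF) can be computed as follows. VIF is a common textbook diagnostic, not necessarily the approach proposed in the paper, and the data are made up:

    ```python
    # Variance inflation factors (VIF), a standard multicollinearity diagnostic;
    # a generic sketch, not the diagnostic procedure proposed in the paper.
    # Pure-stdlib least squares via normal equations for a small design matrix.

    def solve(A, b):
        # Gauss-Jordan elimination with partial pivoting for a small system Ax = b.
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(n):
                if r != c:
                    f = M[r][c] / M[c][c]
                    M[r] = [x - f * y for x, y in zip(M[r], M[c])]
        return [M[i][n] / M[i][i] for i in range(n)]

    def r_squared(y, X):
        # R^2 of regressing y on the columns of X (with intercept), via XtX b = Xty.
        Xi = [[1.0] + row for row in X]
        n, k = len(Xi), len(Xi[0])
        A = [[sum(Xi[i][a] * Xi[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
        b = [sum(Xi[i][a] * y[i] for i in range(n)) for a in range(k)]
        beta = solve(A, b)
        yhat = [sum(c * v for c, v in zip(beta, row)) for row in Xi]
        ybar = sum(y) / n
        ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
        ss_tot = sum((yi - ybar) ** 2 for yi in y)
        return 1.0 - ss_res / ss_tot

    def vifs(columns):
        # VIF_j = 1 / (1 - R^2 of predictor j regressed on the other predictors).
        out = []
        for j in range(len(columns)):
            y = columns[j]
            X = [[columns[m][i] for m in range(len(columns)) if m != j] for i in range(len(y))]
            out.append(1.0 / (1.0 - r_squared(y, X)))
        return out

    x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    x2 = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]   # nearly 2*x1: strongly collinear with x1
    x3 = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]     # roughly unrelated
    print(vifs([x1, x2, x3]))
    ```

    Predictors involved in near-linear dependence (here x1 and x2) get large VIFs, while a nearly independent predictor stays close to 1.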

  14. Statistical monitoring of linear antenna arrays

    Harrou, Fouzi

    2016-11-03

    The paper concerns the problem of monitoring linear antenna arrays using the generalized likelihood ratio (GLR) test. When an abnormal event (fault) affects an array of antenna elements, the radiation pattern changes and significant deviation from the desired design performance specifications can result. In this paper, the detection of faults is addressed from a statistical point of view as a fault detection problem. Specifically, a statistical method based on the GLR principle is used to detect potential faults in linear arrays. To assess the strength of the GLR-based monitoring scheme, three case studies involving different types of faults were performed. Simulation results clearly show the effectiveness of the GLR-based fault-detection method for monitoring the performance of linear antenna arrays.
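    As a generic illustration of the GLR principle invoked in the abstract (not the paper's array-specific statistic), a mean-shift GLR test in Gaussian noise with known variance looks like:

    ```python
    # Textbook generalized likelihood ratio (GLR) test for a mean shift in
    # Gaussian noise with known variance; a generic sketch of the GLR principle,
    # not the paper's array-monitoring statistic.
    import random

    def glr_mean_shift(residuals, sigma):
        # H0: residuals ~ N(0, sigma^2); H1: residuals ~ N(mu, sigma^2), mu unknown.
        # Maximizing the likelihood ratio over mu gives n * mean^2 / (2 sigma^2).
        n = len(residuals)
        mean = sum(residuals) / n
        return n * mean * mean / (2.0 * sigma * sigma)

    random.seed(0)
    sigma = 1.0
    healthy = [random.gauss(0.0, sigma) for _ in range(200)]   # nominal behaviour
    faulty = [random.gauss(1.5, sigma) for _ in range(200)]    # fault as a mean shift

    # In practice the decision threshold is set from a target false-alarm probability.
    print(glr_mean_shift(healthy, sigma), glr_mean_shift(faulty, sigma))
    ```

    The statistic stays small under nominal conditions and grows sharply under a shift, which is the mechanism the monitoring scheme exploits.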

  15. Special set linear algebra and special set fuzzy linear algebra

    Kandasamy, W. B. Vasantha; Smarandache, Florentin; Ilanthenral, K.

    2009-01-01

    The authors in this book introduce the notion of special set linear algebra and special set fuzzy Linear algebra, which is an extension of the notion set linear algebra and set fuzzy linear algebra. These concepts are best suited in the application of multi expert models and cryptology. This book has five chapters. In chapter one the basic concepts about set linear algebra is given in order to make this book a self contained one. The notion of special set linear algebra and their fuzzy analog...

  16. Investigating circular patterns in linear polarization observations of Venus

    Mahapatra, G.; Stam, D.M.; Rossi, L.C.G.; Rodenhuis, M.; Snik, Frans; Keller, C.U.

    2017-01-01

    In this work, we analyse linear polarization data of the planet at a distance, obtained with the Extreme Polarimeter (ExPo) on the William Herschel Telescope on La Palma. These spatially resolved, high-accuracy polarization observations of Venus show faint circular patterns centered on the

  17. Linear wind generator

    Kozarov, A.; Petrov, O.; Antonov, J.; Sotirova, S.; Petrova, B.

    2006-01-01

    The purpose of the linear wind-power generator described in this article is to mitigate the following disadvantages of the conventional wind turbine: 1) large bending and twisting moments on the blades and the shaft, especially in strong winds and turbulence; 2) significant values of the natural oscillation period of the construction, resulting in the possibility of destructive resonance oscillations; 3) high velocity of the peripheral parts of the rotor, creating a danger for birds; 4) difficulties connected with installation and operation on mountain ridges and passes, where the wind energy potential is largest. The working surfaces of the generator in question, driven by the wind, are not connected to a common shaft; instead, each moves along a railway track with few oscillations. The dimensions of each component are therefore small and their number can be rather large. The mechanical trajectory is not a circle but a closed contour in a vertical plane, consisting of two rectilinear sectors, one above the other, connected at their ends by semicircles. The mechanical energy of each component is converted into electrical energy on the principle of the linear electric generator. Regulation is provided for when the direction of the wind is perpendicular to the route. A possibility of increased effectiveness is shown through directing additional quantities of air onto the movable components by static barriers

  18. Accuracy requirements in radiotherapy treatment planning

    Buzdar, S. A.; Afzal, M.; Nazir, A.; Gadhi, M. A.

    2013-01-01

    Radiation therapy attempts to deliver ionizing radiation to the tumour and can improve the survival chances and/or quality of life of patients. There are chances of errors and uncertainties in the entire process of radiotherapy that may affect the accuracy and precision of treatment management and decrease the degree of conformity. All expected inaccuracies, such as in radiation dose determination, volume calculation, complete evaluation of the full extent of the tumour, the biological behaviour of specific tumour types, organ motion during radiotherapy, imaging, biological/molecular uncertainties, sub-clinical disease, microscopic spread of the disease, uncertainty in normal tissue responses and radiation morbidity, need sound appreciation. Conformity can be increased by reducing such inaccuracies. With the yearly increase in computing speed and advances in other technologies, the future will provide the opportunity to optimize a greater number of variables and reduce errors in the treatment planning process. In the multi-disciplinary task of radiotherapy, efforts are needed to overcome errors and uncertainty not only by physicists but also by radiologists, pathologists and oncologists, in order to reduce molecular and biological uncertainties. Radiation therapy physics is advancing towards an optimal goal: to improve accuracy where necessary and to reduce uncertainty where possible. (author)

  19. Generalised Assignment Matrix Methodology in Linear Programming

    Jerome, Lawrence

    2012-01-01

    Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
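    The paper solves such problems with Excel Solver; purely as an illustration of the underlying assignment optimization, a brute-force solver (fine for small matrices, whereas LP or the Hungarian algorithm scales better) might look like:

    ```python
    # Brute-force solution of a small assignment problem: choose a one-to-one
    # assignment of workers to tasks minimizing total cost. The cost matrix is
    # made up for illustration; the paper itself uses Excel Solver / LP instead.
    from itertools import permutations

    def solve_assignment(cost):
        # cost[i][j] = cost of assigning worker i to task j;
        # returns (best_total_cost, assignment tuple: worker i -> task perm[i]).
        n = len(cost)
        best = None
        for perm in permutations(range(n)):
            total = sum(cost[i][perm[i]] for i in range(n))
            if best is None or total < best[0]:
                best = (total, perm)
        return best

    cost = [
        [4, 2, 8],
        [4, 3, 7],
        [3, 1, 6],
    ]
    print(solve_assignment(cost))
    ```

    Enumerating all n! permutations is only viable for tiny instances, which is exactly why the paper's LP formulation (or the Hungarian algorithm) is used for realistic problem sizes.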

  20. Progress report on the SLAC Linear Collider

    Rees, J.

    1986-06-01

    The SLAC Linear Collider project (SLC) is reported as being near completion. The performance specifications are tabulated both for the initial form and for eventual goals. Various parts of the SLC are described and the status of their construction is reported, including the front end electron gun and booster, the linac, damping ring, positron source, SLC arcs, and conventional facilities. 5 refs., 12 figs

  1. Linearization of the Lorenz system

    Li, Chunbiao; Sprott, Julien Clinton; Thio, Wesley

    2015-01-01

    A partial and complete piecewise linearized version of the Lorenz system is proposed. The linearized versions have an independent total amplitude control parameter. Additional further linearization leads naturally to a piecewise linear version of the diffusionless Lorenz system. A chaotic circuit with a single amplitude controller is then implemented using a new switch element, producing a chaotic oscillation that agrees with the numerical calculation for the piecewise linear diffusionless Lorenz system. - Highlights: • A partial and complete piecewise linearized version of the Lorenz system are addressed. • The linearized versions have an independent total amplitude control parameter. • A piecewise linear version of the diffusionless Lorenz system is derived by further linearization. • A corresponding chaotic circuit without any multiplier is implemented for the chaotic oscillation

  2. Topics in computational linear optimization

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques avilable for solving linear optimization problems...... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric...... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...

  4. Method of Increasing Identification Accuracy under Experimental Tests of Dynamic Objects

    Y. N. Pavlov

    2015-01-01

    Full Text Available The work concerns the problem of increasing the identification accuracy of linear dynamic systems on the basis of experimental data obtained by applying test signals to the system. The aim is to consider the possibility of using the experimentally obtained hodograph points to determine the parameters of the system, in the specific context of a second-order linear dynamic system. It is proposed to use the method of harmonic linearization and the previously described cut method. The type of the frequency transfer function of the identified system is assumed to be known. It is supposed that, when obtaining the frequency characteristics of a real system, there are disturbances interfering with the experiment, as a result of which the points of the experimentally obtained hodograph are randomly displaced. The identification problem is solved in the class of hodographs given by the system model, which has the same type of frequency transfer function as the identified system. The unknown coefficients of the frequency transfer function of the system model are found by minimizing a proximity criterion (a measure of closeness between the experimentally obtained hodograph of the system and the hodograph of the system model) over the entire set of points; one of the authors described this criterion in an earlier publication. The solution of this nonlinear identification problem by the frequency hodograph is reduced to solving a system of equations for the unknown linear parameters of the frequency transfer function of the system model. A program was developed for a second-order dynamic system to simulate pseudo-experimental data containing random errors and to determine the parameters of the system. A computing experiment was conducted to estimate the error with which the proposed algorithm determines the values of the system parameters.
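    The authors' proximity criterion is not reproduced here; as a simpler illustration of fitting a second-order frequency transfer function to hodograph points, the problem can be linearized. The model form G(jw) = 1/(b0 + b1·(jw) + b2·(jw)^2) is an assumption for this sketch, not taken from the paper:

    ```python
    # Sketch: estimating the parameters of a second-order frequency transfer
    # function G(jw) = 1 / (b0 + b1*(jw) + b2*(jw)^2) from hodograph (Nyquist)
    # points. NOT the authors' criterion: a simpler linearized least-squares
    # scheme using (b0 + b1*jw + b2*(jw)^2) * G(jw) = 1, which is linear in b_k.

    def solve3(A, b):
        # Gauss-Jordan with partial pivoting for a 3x3 real system.
        n = 3
        M = [A[i][:] + [b[i]] for i in range(n)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(n):
                if r != c:
                    f = M[r][c] / M[c][c]
                    M[r] = [x - f * y for x, y in zip(M[r], M[c])]
        return [M[i][3] / M[i][i] for i in range(n)]

    def identify(freqs, G_values):
        # For each w, split Re/Im of sum_k b_k (jw)^k G(jw) = 1 into real rows,
        # then solve the normal equations (R^T R) b = R^T y.
        rows, rhs = [], []
        for w, G in zip(freqs, G_values):
            basis = [G, 1j * w * G, (1j * w) ** 2 * G]
            rows.append([z.real for z in basis]); rhs.append(1.0)
            rows.append([z.imag for z in basis]); rhs.append(0.0)
        A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
        b = [sum(r[i] * y for r, y in zip(rows, rhs)) for i in range(3)]
        return solve3(A, b)

    # Simulated "experimental" hodograph of G(jw) = 1 / (2 + 0.5 jw + 0.1 (jw)^2)
    true = [2.0, 0.5, 0.1]
    freqs = [0.5 * k for k in range(1, 21)]
    G_values = [1.0 / (true[0] + true[1] * (1j * w) + true[2] * (1j * w) ** 2) for w in freqs]
    print(identify(freqs, G_values))   # recovers approximately [2.0, 0.5, 0.1]
    ```

    With noise-free data the fit is exact; with randomly displaced hodograph points, the same least-squares machinery averages the disturbances over all points, which is the spirit of the criterion described in the abstract.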

  5. On the linear programming bound for linear Lee codes.

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows one to efficiently compute the bounds for large parameter values of the linear codes.

  6. Recovery coefficients as a test of system linearity of response in PET

    Geworski, L.; Munz, D.L.; Knoop, B.; Hofmann, M.; Knapp, W.H.

    2002-01-01

    Aim: New imaging protocols have created an increasing demand for quantitation in dedicated PET. Besides attenuation and scatter correction, recovery correction, which accounts for the instrument's limited spatial resolution, has gained importance. For clinical practicability these corrections should work independently of the object, i.e. of the actual distribution of emitter and absorber. The aim of the study was to test this object independence, i.e. system linearity of response, by comparing recovery coefficients (RC) determined for different object geometries. In fact, this comparison may serve as a final test of system linearity of response, as measured by the quantitative accuracy with which the activity concentration in small lesions can be recovered. Method: For hot and cold spot imaging situations the spatial distribution of activity is different, so the scatter correction algorithm has to deal with different scatter distributions. If all factors disturbing system linearity, specifically scatter and attenuation, are corrected to a sufficient degree of accuracy, the system behaves linearly, resulting in the theoretical relationship CSRC = (1 - HSRC) (eq. 1). Thus this equation, applied to hot and cold spot measurements, serves as a test of the effectiveness of the corrections and, hence, as a test of system linearity of response. Following IEC standard procedures (IEC 61675-1), measurements were done with and without interplane septa (2D/3D) on an ECAT EXACT 922 using a cylindrical phantom containing six spheres of different diameters (10 mm - 40 mm). All data were corrected for attenuation (transmission scan) and scatter (2D: deconvolution, 3D: scatter model), as implemented in the scanner's standard software. Recovery coefficients were determined for cold (CSRC) and hot (HSRC) lesions using both 2D and 3D acquisition modes. Results: CSRC directly measured versus CSRC calculated from HSRC according to eq. (1) showed excellent agreement for both 2D and 3D data
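    The linearity test reduces to comparing the directly measured cold-spot recovery with the value predicted from the hot-spot recovery; a trivial sketch with illustrative values (not the study's measurements) is:

    ```python
    # Sketch of the linearity check described in the abstract: if corrections are
    # adequate, the cold-spot recovery coefficient predicted as 1 - HSRC should
    # match the directly measured CSRC for each sphere. The values below are
    # illustrative, not the study's data.
    hsrc_measured = {10: 0.30, 17: 0.55, 22: 0.70, 28: 0.82, 34: 0.90, 40: 0.95}
    csrc_measured = {10: 0.68, 17: 0.44, 22: 0.31, 28: 0.19, 34: 0.11, 40: 0.06}

    for d in sorted(hsrc_measured):              # sphere diameter in mm
        csrc_predicted = 1.0 - hsrc_measured[d]  # eq. (1): CSRC = 1 - HSRC
        deviation = csrc_measured[d] - csrc_predicted
        print(d, round(csrc_predicted, 2), round(deviation, 2))
    ```

    A systematic deviation between the two columns would indicate residual scatter or attenuation errors, i.e. a departure from linearity of response.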

  7. The linear sizes tolerances and fits system modernization

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

    The study addresses an urgent topic in ensuring the quality of technical products during the tolerancing of component parts. The aim of the paper is to develop alternatives for improving the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, firstly, to classify as linear sizes the linear coordinating sizes that determine the location of detail elements and, secondly, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real detail elements and analytical and experimental methods are used in the research. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the average zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system remain for normalizing the accuracy of the elements' linear sizes, with a fundamental change in the basic deviation of all tolerance intervals: the maximum deviation corresponding to the limit of the element material, i.e. EI, the lower deviation, for the sizes of internal elements (holes), and es, the upper deviation, for the sizes of external elements (shafts). It is the maximum-material sizes that take part in the mating of shafts and holes and determine the type of fit.
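    The proposed symmetric Js/js interval for coordinating sizes can be sketched as follows; the tolerance value is supplied by the caller (look it up in the ISO 286-1 tables for a real application), and the example numbers are illustrative:

```python
# Sketch of the symmetric Js/js tolerance interval proposed for coordinating
# sizes: deviations of +/- IT/2 about the nominal size, so the mean deviation
# is zero and coincides with the nominal value.

def js_limits(nominal, tolerance):
    """Return (lower, upper) limit sizes for a symmetric Js interval (mm)."""
    half = tolerance / 2.0
    return nominal - half, nominal + half

lo, hi = js_limits(50.0, 0.025)   # e.g. a 25 um tolerance on a 50 mm size
print(f"limits: {lo:.4f} mm .. {hi:.4f} mm")
```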

  8. Accuracy of a reformulated fast-set vinyl polysiloxane impression material using dual-arch trays.

    Kang, Alex H; Johnson, Glen H; Lepe, Xavier; Wataha, John C

    2009-05-01

    A common technique used for making crown impressions involves use of a vinyl polysiloxane impression material in combination with a dual-arch tray. A leading dental manufacturer has reformulated its vinyl polysiloxane (VPS) impression line, but the accuracy of the new material has not been verified. The purpose of this study was to assess the accuracy of reformulated VPS impression materials using the single-step dual-arch impression technique. Dual-arch impressions were made on a typodont containing a master stainless steel standard crown preparation die, from which gypsum working dies were formed, recovered, and measured. The impression materials evaluated were Imprint 3 Penta Putty with Quick Step Regular Body (IP-0); Imprint 3 Penta Quick Step Heavy Body with Quick Step Light Body (IP-1); Aquasil Ultra Rigid Fast Set with LV Fast Set (AQ-1); and Aquasil Ultra Heavy Fast Set with XLV Fast Set (AQ-2) (n=10). All impressions were disinfected with CaviCide spray for 10 minutes prior to pouring with type IV gypsum. Buccolingual (BL), mesiodistal (MD), and occlusogingival (OG) dimensions were measured and compared to the master die using an optical measuring microscope. Linear dimensional change was also assessed for IP-0 and AQ-1 at 1 and 24 hours based on ANSI/ADA Specification No. 19. Single-factor ANOVA with Dunnett's T3 multiple comparisons was used to compare BL, MD, and OG changes, with hypothesis testing at alpha=.05. A repeated-measures ANOVA was used to compare linear dimensional changes. There were statistical differences among the 4 impression systems for 3 of 4 dimensions of the master die. IP-0 working dies were significantly larger in MD and OG-L dimensions but significantly smaller in the BL dimension. IP-1 working dies were significantly smaller in the BL dimension compared to the master die. With the exception of IP-0, the differences detected were small and clinically insignificant. No significant differences were observed for linear dimensional change.

  9. Introduction to linear elasticity

    Gould, Phillip L

    2013-01-01

    Introduction to Linear Elasticity, 3rd Edition, provides an applications-oriented grounding in the tensor-based theory of elasticity for students in mechanical, civil, aeronautical, and biomedical engineering, as well as materials and earth science. The book is distinct from the traditional text aimed at graduate students in solid mechanics by introducing the subject at a level appropriate for advanced undergraduate and beginning graduate students. The author's presentation allows students to apply the basic notions of stress analysis and move on to advanced work in continuum mechanics, plasticity, plate and shell theory, composite materials, viscoelasticity and finite method analysis. This book also:  Emphasizes tensor-based approach while still distilling down to explicit notation Provides introduction to theory of plates, theory of shells, wave propagation, viscoelasticity and plasticity accessible to advanced undergraduate students Appropriate for courses following emerging trend of teaching solid mechan...

  10. Linear step drive

    Haniger, L.; Elger, R.; Kocandrle, L.; Zdebor, J.

    1986-01-01

    A linear step drive is described developed in Czechoslovak-Soviet cooperation and intended for driving WWER-1000 control rods. The functional principle is explained of the motor and the mechanical and electrical parts of the drive, power control, and the indicator of position are described. The motor has latches situated in the reactor at a distance of 3 m from magnetic armatures, it has a low structural height above the reactor cover, which suggests its suitability for seismic localities. Its magnetic circuits use counterpoles; the mechanical shocks at the completion of each step are damped using special design features. The position indicator is of a special design and evaluates motor position within ±1% of total travel. A drive diagram and the flow chart of both the control electronics and the position indicator are presented. (author) 4 figs

  11. Linear pulse amplifier

    Tjutju, R.L.

    1977-01-01

    The pulse amplifier is a standard and significant part of a spectrometer. Unlike other types of amplifier, it combines amplification with pulse shaping. Because of its special purpose, the device should fulfil the following requirements: high resolution, to obtain a yield comparable to the actual state of the source; a high signal-to-noise ratio, to enhance resolution; high linearity, to facilitate calibration; and good overload recovery, so that the device is capable of analyzing low-energy radiation appearing jointly with high-energy events. Further expectations of the device are economical and practical use and extensive application. For these reasons it is built on the standard NIM principle, taking the above considerations into account. High-quality component parts are used throughout, and their availability on the domestic market is secured. (author)

  12. Linear Accelerator Laboratory

    1976-01-01

    This report covers the activity of the Linear Accelerator Laboratory during the period June 1974-June 1976. The activity of the Laboratory is essentially centered on high energy physics. The main activities were: experiments performed with the colliding rings (ACO), construction of the new colliding rings and beginning of the work at higher energy (DCI), bubble chamber experiments with the CERN PS neutrino beam, counter experiments with CERN's PS, and setting-up of equipment for new experiments with CERN's SPS. During this period a project was also prepared for an experiment with the new PETRA colliding ring at Hamburg. In addition, intense collaboration with the LURE Laboratory, using the electron synchrotron radiation emitted by ACO and DCI, has been developed.

  13. HEAVY ION LINEAR ACCELERATOR

    Van Atta, C.M.; Beringer, R.; Smith, L.

    1959-01-01

    A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one MeV per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten MeV per nucleon.
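    A back-of-envelope sketch of why the stripping step helps: energy gain per accelerating gap scales with the ion's charge, so a higher charge-to-mass ratio means faster acceleration. The numbers below are generic illustrations, not values from the patent:

```python
# Illustrative charge-to-mass arithmetic for a stripped heavy ion.

def charge_to_mass(charge_state, mass_number):
    """Charge-to-mass ratio in units of e per nucleon."""
    return charge_state / mass_number

def total_energy(mass_number, mev_per_nucleon):
    """Total kinetic energy in MeV for a given specific energy."""
    return mass_number * mev_per_nucleon

# e.g. an A = 40 ion entering the vapor stripper at 1 MeV/nucleon with
# charge +5 and leaving with charge +12 (hypothetical charge states)
gain = charge_to_mass(12, 40) / charge_to_mass(5, 40)
print(f"charge-to-mass ratio increased {gain:.1f}x")
print(f"final energy target: {total_energy(40, 10)} MeV")
```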

  14. Linear absorptive dielectrics

    Tip, A.

    1998-06-01

    Starting from Maxwell's equations for a linear, nonconducting, absorptive, and dispersive medium, characterized by the constitutive equations D(x,t) = ε1(x)E(x,t) + ∫_{-∞}^{t} χ(x,t-s)E(x,s) ds and H(x,t) = B(x,t), a unitary time evolution and canonical formalism is obtained. Given the complex, coordinate- and frequency-dependent electric permittivity ε(x,ω), no further assumptions are made. The procedure leads to a proper definition of band gaps in the periodic case and a new continuity equation for energy flow. An S-matrix formalism for scattering from lossy objects is presented in full detail. A quantized version of the formalism is derived and applied to the generation of Čerenkov and transition radiation as well as atomic decay. The last case suggests a useful generalization of the density of states to the absorptive situation.
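    For readability, the constitutive relations quoted in the abstract can be typeset as:

```latex
\mathbf{D}(\mathbf{x},t) = \varepsilon_1(\mathbf{x})\,\mathbf{E}(\mathbf{x},t)
  + \int_{-\infty}^{t} \chi(\mathbf{x},t-s)\,\mathbf{E}(\mathbf{x},s)\,\mathrm{d}s,
\qquad
\mathbf{H}(\mathbf{x},t) = \mathbf{B}(\mathbf{x},t)
```

    The convolution term encodes dispersion and absorption: the displacement field at time t depends on the electric field at all earlier times through the susceptibility kernel χ.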

  15. Improvement of the thermal behavior of linear motors through insulation layer

    Eun, I. U.; Lee, C. M.; Chung, W. J.; Choi, Y. H.

    2001-01-01

    Linear motors drive a linear motion directly, without intermediate gears, screws or crank shafts. They can successfully replace ball lead screws in machine tools because they offer high velocity and acceleration and good positioning accuracy. On the other hand, linear motors emit large amounts of heat and have low efficiency. In this paper, the heat sources of a synchronous linear motor with high velocity and force are measured and analyzed. To improve the thermal behavior of the linear motor, an insulation layer with low thermal conductivity is inserted between the cooler and the machine table. Some effects of the insulation layer are presented
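    The benefit of a low-conductivity layer can be illustrated with one-dimensional steady-state conduction. The material values below are typical order-of-magnitude assumptions, not the paper's measurements:

```python
# Illustrative 1-D conduction sketch: a low-conductivity layer between the
# linear-motor cooler and the machine table cuts the heat flux into the table.

def heat_flux(k, thickness, dT):
    """Fourier's law for a plane layer: q = k * dT / thickness, in W/m^2."""
    return k * dT / thickness

dT = 10.0                               # assumed temperature drop across layer, K
q_steel = heat_flux(50.0, 0.005, dT)    # 5 mm steel spacer, k ~ 50 W/(m K)
q_insul = heat_flux(0.2, 0.005, dT)     # 5 mm polymer insulation, k ~ 0.2 W/(m K)
print(f"steel: {q_steel:.0f} W/m^2, insulation: {q_insul:.0f} W/m^2")
```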

  16. Airborne LiDAR reflective linear feature extraction for strip adjustment and horizontal accuracy determination.

    2009-02-01

    ODOT's Office of Aerial Engineering (OAE) has been using an Optech ALTM 30/70 airborne LiDAR system for about four years. The introduction of LiDAR technology was a major development towards improving the mapping operations. The overall experiences a...

  17. Need for Linear Revitalization - Gdynia Case

    Sas-Bojarska, Aleksandra

    2017-10-01

    The aim of the article is to discuss the need to define and implement linear revitalization, a new approach to revitalization processes. The results of the preliminary investigations indicate that this kind of revitalization can be an important tool supporting city management and planning, especially where cities are fragmented and lack physical, social, economic and ecological cohesion. The problems which occur in such situations could, in the author's opinion, be solved with the use of linear revitalization. Linear revitalization relates to various linear city structures which need renewal. The article presents the idea of the new approach, the character of specific actions related to degraded linear structures, and a draft classification, as well as the potential benefits to the city structure which could be achieved through implementing linear revitalization. The theoretical deliberations are supplemented by the description and assessment of a case study from Gdynia, Poland. The Kwiatkowskiego Route in Gdynia, which plays an important role in the city traffic as an external connection, creates a barrier in the city structure and causes many negative effects. The author presents specific problems related to this example, together with ways to solve them and to reconnect the city structure. The main conclusion of the study is that the presented approach may be the beginning of a discussion on linear revitalization, which may become an important and effective tool of sustainable city development. It may help overcome physical barriers and minimise the functional, economic, social, mental and environmental conflicts caused by city fragmentation.

  18. Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.

    de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo

    2018-03-01

    Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge, four with hands on the highest horizontal and four on the vertical handgrip. Swimmers were videotaped using a dual media camera set-up, with the starts being performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time from kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model when moving from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model in the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction due to the quite small differences among the elite level performances.
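    The comparison metric used here, mean absolute percentage error (MAPE), is simple to compute. A minimal sketch with invented start times for illustration:

```python
# Mean absolute percentage error, the metric used to compare the neural
# network and linear predictions. The times below are hypothetical.

def mape(actual, predicted):
    """MAPE in percent: mean of |actual - predicted| / actual * 100."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual      = [2.10, 2.25, 2.05, 2.40]   # hypothetical 5 m start times, s
ann_pred    = [2.12, 2.24, 2.06, 2.38]
linear_pred = [2.20, 2.15, 2.12, 2.30]

print(f"ANN MAPE:    {mape(actual, ann_pred):.2f}%")
print(f"Linear MAPE: {mape(actual, linear_pred):.2f}%")
```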

  19. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extensively high-strength materials which are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This research work presents the effect of the process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels selected for each process parameter. In all experiments, the clockwise cut direction was followed. The results obtained through measurement were further analyzed: analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA analysis reveals the effect of each input process parameter upon the linear dimension along the X axis, and the results show the optimal settings of the process parameter values for this dimension. The investigation clearly shows that a specific range of the input process parameters achieves improved machinability.
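    The ANOM step amounts to averaging the measured response at each level of a factor to expose its main effect. A minimal sketch with an invented design column and responses (not the paper's L16 data):

```python
# Analysis-of-means (ANOM) main-effect sketch: average the response at each
# level of one factor. Design column and responses are hypothetical.

def level_means(levels, response):
    """Mean response for each distinct factor level."""
    sums, counts = {}, {}
    for lv, y in zip(levels, response):
        sums[lv] = sums.get(lv, 0.0) + y
        counts[lv] = counts.get(lv, 0) + 1
    return {lv: sums[lv] / counts[lv] for lv in sums}

# hypothetical arc-voltage level per run and measured X-axis dimension (mm)
voltage_level = [1, 1, 2, 2, 3, 3, 1, 2]
dimension     = [9.95, 9.97, 10.02, 10.04, 10.10, 10.08, 9.96, 10.03]
print(level_means(voltage_level, dimension))
```

    The level whose mean lands closest to the nominal dimension is the candidate optimal setting for that factor.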

  20. Computer Program For Linear Algebra

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

    Collection of routines provided for basic vector operations. Basic Linear Algebra Subprograms (BLAS) library is collection of FORTRAN-callable routines employing standard techniques to perform basic operations of numerical linear algebra.
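    A familiar example of the vector operations BLAS standardizes is AXPY, which computes y := a*x + y. A pure-Python sketch of its contract (the real routines are FORTRAN-callable and operate in place):

```python
# Pure-Python illustration of the BLAS xAXPY operation: a*x + y elementwise.

def axpy(a, x, y):
    """Return the elementwise result a*x + y for vectors x and y."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(axpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # → [12.0, 14.0, 16.0]
```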

  1. Quaternion Linear Canonical Transform Application

    Bahri, Mawardi

    2015-01-01

    The quaternion linear canonical transform (QLCT) is a generalization of the classical linear canonical transform (LCT) using quaternion algebra. The focus of this paper is to introduce an application of the QLCT to the study of a generalized swept-frequency filter.

  2. Recursive Algorithm For Linear Regression

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
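    The brief describes recursive coefficient updates as the model order grows. The sketch below shows only the order-selection idea, using closed-form least-squares fits for orders 0 and 1 rather than the report's recursions:

```python
# Order-selection sketch: raise the polynomial order until the fit error
# drops below a tolerance. Closed-form fits stand in for recursive updates.

def fit_order0(x, y):
    """Best constant model (order 0)."""
    return [sum(y) / len(y)]

def fit_order1(x, y):
    """Best straight line (order 1), returned as [intercept, slope]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return [my - slope * mx, slope]

def sse(x, y, coef):
    """Sum of squared residuals for a polynomial with coefficients coef."""
    return sum((sum(c * xi ** k for k, c in enumerate(coef)) - yi) ** 2
               for xi, yi in zip(x, y))

def min_order(x, y, tol=1e-9):
    """Smallest order (0 or 1 here) whose fit error is below tol."""
    for order, fit in enumerate((fit_order0, fit_order1)):
        coef = fit(x, y)
        if sse(x, y, coef) < tol:
            break
    return order, coef

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]          # generated by y = 1 + 2x
print(min_order(x, y))            # order 1 suffices: (1, [1.0, 2.0])
```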

  3. Dynamical systems and linear algebra

    Colonius, Fritz (Prof.)

    2007-01-01

    Dynamical systems and linear algebra / F. Colonius, W. Kliemann. - In: Handbook of linear algebra / ed. by Leslie Hogben. - Boca Raton : Chapman & Hall/CRC, 2007. - S. 56,1-56,22. - (Discrete mathematics and its applications)

  4. Linear spaces: history and theory

    Albrecht Beutelspracher

    1990-01-01

    Linear spaces belong to the most fundamental geometric and combinatorial structures. In this paper I would like to give an overview of the theory of embedding finite linear spaces in finite projective planes.

  5. Diagnostic accuracy of fine needle aspiration cytology in patients ...

    Objective: To estimate the sensitivity and specificity of FNAC in detecting malignancy for thyroid disease using histopathology ... thyroid carcinoma being the most predominant type and colloid goiter was the most predominant benign thyroid disease. The sensitivity ..... tic accuracy of fine needle aspiration cytology in thyroid.

  6. The diagnostic accuracy of integrated positron emission tomography ...

    The PET-CT findings, including maximum standardised uptake value (SUVmax), were compared with the gold standard (tissue or microbiological diagnosis). The sensitivity, specificity, positive and negative predictive values and diagnostic accuracy for malignant disease were calculated according to the SUVmax cut-off of ...

  7. Increasing Free Throw Accuracy through Behavior Modeling and Goal Setting.

    Erffmeyer, Elizabeth S.

    A two-year behavior-modeling training program focusing on attention processes, retention processes, motor reproduction, and motivation processes was implemented to increase the accuracy of free throw shooting for a varsity intercollegiate women's basketball team. The training included specific learning keys, progressive relaxation, mental…

  8. Predictive Accuracy of Exercise Stress Testing the Healthy Adult.

    Lamont, Linda S.

    1981-01-01

    Exercise stress testing provides information on the aerobic capacity, heart rate, and blood pressure responses to graded exercises of a healthy adult. The reliability of exercise tests as a diagnostic procedure is discussed in relation to sensitivity and specificity and predictive accuracy. (JN)

  9. Linear Malignant Melanoma In Situ: Reports and Review of Cutaneous Malignancies Presenting as Linear Skin Cancer.

    Cohen, Philip R

    2017-09-18

    Melanomas usually present as oval lesions in which the borders may be irregular. Other morphological features of melanoma include clinical asymmetry, variable color, diameter greater than 6 mm and evolving lesions. Two males whose melanoma in situ presented as linear skin lesions are described, and cutaneous malignancies that may appear linear in morphology are summarized in this report. A medical literature search engine, PubMed, was used to search the following terms: cancer, cutaneous, in situ, linear, malignant, malignant melanoma, melanoma in situ, neoplasm, and skin. The 25 papers generated by the search, and their references, were reviewed; 10 papers were selected for inclusion. Cancers of the skin typically present as round lesions. However, basal cell carcinoma and squamous cell carcinoma may arise from primary skin conditions or benign skin neoplasms such as linear epidermal nevus and linear porokeratosis, and linear tumors such as basal cell carcinoma can also occur. The development of linear cutaneous neoplasms may occur secondary to skin tension lines or embryonal growth patterns (as reflected by the lines of Langer and the lines of Blaschko) or exogenous factors such as prior radiation therapy. Cutaneous neoplasms, and specifically melanoma in situ, can be added to the list of linear skin lesions.

  10. Linear versus non-linear supersymmetry, in general

    Ferrara, Sergio [Theoretical Physics Department, CERN, CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati, Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, U.C.L.A., Los Angeles, CA 90095-1547 (United States)]; Kallosh, Renata [SITP and Department of Physics, Stanford University, Stanford, California 94305 (United States)]; Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven, Celestijnenlaan 200D, B-3001 Leuven (Belgium)]; Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)]

    2016-04-12

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM's: chiral superfields, linear superfields, and general complex superfields, some of them multiplets with a spin.

  11. Linear versus non-linear supersymmetry, in general

    Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm

    2016-01-01

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models, with all superfields unconstrained and supersymmetry linearly realized, is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, and its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM's: chiral superfields, linear superfields, and general complex superfields, some of them multiplets with a spin.

  12. Simulation of a medical linear accelerator for teaching purposes.

    Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco

    2015-05-08

    Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.

  13. Linear Temporal Logic-based Mission Planning

    Anil Kumar

    2016-06-01

    In this paper, we describe Linear Temporal Logic-based reactive motion planning. We address the problem of motion planning for mobile robots in complex environments, where the desired task specification may consist of complex behaviors of the robot, including environment constraints, task optimality, obstacle avoidance, rescue, surveillance, and safety specifications. We use Linear Temporal Logic to represent such complex task specifications and constraints. The specifications are used by a verification engine to judge the feasibility and suitability of plans, and the planner gives a motion strategy as output. Finally, a controller is used to generate the desired trajectory to achieve the goal. The approach is tested in simulation using the LTLMoP mission planning tool, operating over the Robot Operating System. In the simulations, high-level planners and low-level controllers work simultaneously for mission planning and for controlling the physical behavior of the robot.
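    The flavor of such specifications can be illustrated over a finite robot trace: "always avoid obstacles" and "eventually reach the goal". This toy sketch checks the two properties on a hand-made trace; real LTL semantics over infinite runs (and the LTLMoP tool) are far richer:

```python
# Toy finite-trace check of two temporal properties, in the spirit of the
# LTL specifications described above. Trace and goal are invented.

def always(trace, prop):
    """G prop: prop holds in every state of the trace."""
    return all(prop(s) for s in trace)

def eventually(trace, prop):
    """F prop: prop holds in at least one state of the trace."""
    return any(prop(s) for s in trace)

# each state: (position, hit_obstacle)
trace = [((0, 0), False), ((1, 0), False), ((1, 1), False), ((2, 1), False)]
goal = (2, 1)

safe = always(trace, lambda s: not s[1])              # never hit an obstacle
reaches_goal = eventually(trace, lambda s: s[0] == goal)
print(safe, reaches_goal)   # True True
```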

  14. A primer on linear models

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  15. Linear Logical Voting Protocols

    DeYoung, Henry; Schürmann, Carsten

    2012-01-01

    Current approaches to electronic implementations of voting protocols involve translating legal text to source code of an imperative programming language. Because the gap between legal text and source code is very large, it is difficult to trust that the program meets its legal specification. In r...

  16. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
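    The trade-off at the heart of this study can be shown with a one-line reward-rate calculation: a faster, less accurate criterion can still earn more per unit time. The accuracy/decision-time pairs and inter-trial interval below are invented for illustration:

```python
# Illustrative reward-rate arithmetic behind the speed-accuracy trade-off.

def reward_rate(accuracy, decision_time, iti=2.0):
    """Expected correct responses per second, with a fixed inter-trial interval."""
    return accuracy / (decision_time + iti)

slow_careful = reward_rate(0.99, 2.0)   # high accuracy, long decisions
fast_looser  = reward_rate(0.90, 0.5)   # lower accuracy, quick decisions
print(fast_looser > slow_careful)       # True: speed wins on reward rate here
```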

  17. Data accuracy assessment using enterprise architecture

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  18. Linear control theory for gene network modeling.

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
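    The time-domain, state-space view the authors use can be sketched with a one-state system under negative feedback, integrated by forward Euler. The parameter values are illustrative, not taken from the paper:

```python
# Minimal linear state-space sketch: x' = -a*x + u, a one-state system with
# negative feedback, integrated with forward Euler.

def simulate(a, u, x0=0.0, dt=0.01, steps=1000):
    """Integrate x' = -a*x + u from x0; return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (-a * x + u)
    return x

# with constant input u, the steady state of x' = -a*x + u is u/a
final = simulate(a=2.0, u=1.0)
print(round(final, 3))   # approaches 1/2
```

    Stronger feedback (larger a) lowers the steady state and shortens the transient, the kind of qualitative prediction linear analysis makes cheap.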

  19. Cut elimination in multifocused linear logic

    Guenot, Nicolas; Brock-Nannestad, Taus

    2015-01-01

    We study cut elimination for a multifocused variant of full linear logic in the sequent calculus. The multifocused normal form of proofs yields problems that do not appear in a standard focused system, related to the constraints on grouping rule instances in focusing phases. We show that cut elimination can be performed in a sensible way even though the proof requires some specific lemmas to deal with multifocusing phases, and we discuss the difficulties arising with cut elimination when considering normal forms of proofs in linear logic.

  20. Templates for Linear Algebra Problems

    Bai, Z.; Day, D.; Demmel, J.; Dongarra, J.; Gu, M.; Ruhe, A.; Vorst, H.A. van der

    1995-01-01

    The increasing availability of advanced-architecture computers is having a very significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra - in particular, the solution of linear systems of equations and

  1. Linearization of CIF through SOS

    Nadales Agut, D.E.; Reniers, M.A.; Luttik, B.; Valencia, F.

    2011-01-01

    Linearization is the procedure of rewriting a process term into a linear form, which consists only of basic operators of the process language. This procedure is interesting from both a theoretical and a practical point of view. In particular, a linearization algorithm is needed for the Compositional

  2. Linear Logic on Petri Nets

    Engberg, Uffe Henrik; Winskel, Glynn

    This article shows how individual Petri nets form models of Girard's intuitionistic linear logic. It explores questions of expressiveness and completeness of linear logic with respect to this interpretation. An aim is to use Petri nets to give an understanding of linear logic and to give some appraisal…

  3. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the rigor of linear algebra in the matrix-based analysis. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and operation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which have enabled a considerable reduction in the amplitude of relative motion, the exclusion of some resonance zones within the spindle speed range of 0-6000 min-1, and improved machining accuracy.

  4. Accuracy and Consistency of Respiratory Gating in Abdominal Cancer Patients

    Ge, Jiajia; Santanam, Lakshmi; Yang, Deshan; Parikh, Parag J.

    2013-01-01

    Purpose: To evaluate respiratory gating accuracy and intrafractional consistency for abdominal cancer patients treated with respiratory-gated treatment on a regular linear accelerator system. Methods and Materials: Twelve abdominal patients implanted with fiducials were treated with amplitude-based respiratory-gated radiation therapy. On the basis of daily orthogonal fluoroscopy, the operator readjusted the couch position and gating window such that the fiducial was within a setup margin (fiducial-planning target volume [f-PTV]) when RPM indicated “beam-ON.” Fifty-five pre- and post-treatment fluoroscopic movie pairs with synchronized respiratory gating signal were recorded. Fiducial motion traces were extracted from the fluoroscopic movies using a template matching algorithm and correlated with f-PTV by registering the digitally reconstructed radiographs with the fluoroscopic movies. Treatment was determined to be “accurate” if 50% of the fiducial area stayed within f-PTV while the beam was on. For movie pairs that lost gating accuracy, a MATLAB program was used to assess whether the gating window was optimized, the external-internal correlation (EIC) changed, or the patient moved between movies. A series of safety margins from 0.5 mm to 3 mm was added to f-PTV for reassessing gating accuracy. Results: A decrease in gating accuracy was observed in 44% of movie pairs from daily fluoroscopic movies of the 12 abdominal patients. Three main causes of inaccurate gating were identified: change of global EIC over time (∼43%), suboptimal gating setup (∼37%), and imperfect EIC within a movie (∼13%). Conclusions: Inconsistent respiratory gating accuracy may occur within one treatment session even with a daily adjusted gating window. To improve or maintain gating accuracy during treatment, we suggest using at least a 2.5-mm safety margin to account for gating and setup uncertainties.

  5. Age-related differences in the accuracy of web query-based predictions of influenza-like illness.

    Alexander Domnich

    Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-squares estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0-4 and 25-44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15-24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. The accuracy of web query-based models in predicting ILI morbidity rates could differ among ages. Greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI.

  6. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
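    The two recommended metrics are simple to compute from paired predictions and observations. A minimal sketch (not the report's own code), taking the accuracy ratio as Q = predicted/observed:

    ```python
    import math

    def median(xs):
        s = sorted(xs)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

    def median_log_accuracy_ratio(predicted, observed):
        """Bias metric: median of ln(Q) with Q = predicted/observed.
        Zero means no bias; positive means systematic over-prediction."""
        log_q = [math.log(p / o) for p, o in zip(predicted, observed)]
        return median(log_q)

    def median_symmetric_accuracy(predicted, observed):
        """Accuracy metric: 100 * (exp(median(|ln Q|)) - 1), in percent.
        Symmetric: swapping predicted and observed leaves it unchanged."""
        abs_log_q = [abs(math.log(p / o)) for p, o in zip(predicted, observed)]
        return 100.0 * (math.exp(median(abs_log_q)) - 1.0)

    pred = [2.0, 1.0, 4.0]
    obs = [1.0, 1.0, 2.0]
    print(median_log_accuracy_ratio(pred, obs))   # positive: over-prediction
    print(median_symmetric_accuracy(pred, obs))   # median symmetric error, %
    ```

    Because both metrics act on ln Q, over- and under-prediction by the same factor are penalized equally, which addresses the asymmetry that makes MAPE problematic.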

  7. Linear particle accelerator

    Richards, J.A.

    1977-01-01

    A linear particle accelerator which provides a pulsed beam of charged particles of uniform energy is described. The accelerator is in the form of an evacuated dielectric tube, with a particle source located at one end and a target or window located at the other end. Along the length of the tube are externally located pairs of metal plates, each insulated from the other in an insulated housing. The plates of each pair are connected to electrical voltage sources of opposed polarity, oriented so that the plate of a pair nearer to the particle source has polarity opposed to the charge of the particles emitted by the source. Thus the first plate about the tube, located nearest the particle source, attracts a particle; as the particle passes the first plate, it is then repelled by the reverse polarity of the second plate of the pair and continues moving toward the target.

  8. Generalized Linear Covariance Analysis

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  9. Equipartitioning in linear accelerators

    Jameson, R.A.

    1982-01-01

    Emittance growth has long been a concern in linear accelerators, as has the idea that some kind of energy balance, or equipartitioning, between the degrees of freedom, would ameliorate the growth. M. Prome observed that the average transverse and longitudinal velocity spreads tend to equalize as current in the channel is increased, while the sum of the energy in the system stays nearly constant. However, only recently have we shown that an equipartitioning requirement on a bunched injected beam can indeed produce remarkably small emittance growth. The simple set of equations leading to this condition are outlined. At the same time, Hofmann has investigated collective instabilities in transported beams and has identified thresholds and regions in parameter space where instabilities occur. Evidence is presented that shows transport system boundaries to be quite accurate in computer simulations of accelerating systems. Discussed are preliminary results of efforts to design accelerators that avoid parameter regions where emittance is affected by the instabilities identified by Hofmann. These efforts suggest that other mechanisms are present. The complicated behavior of the RFQ linac in this framework also is shown

  10. Equipartitioning in linear accelerators

    Jameson, R.A.

    1981-01-01

    Emittance growth has long been a concern in linear accelerators, as has the idea that some kind of energy balance, or equipartitioning, between the degrees of freedom, would ameliorate the growth. M. Prome observed that the average transverse and longitudinal velocity spreads tend to equalize as current in the channel is increased, while the sum of the energy in the system stays nearly constant. However, only recently have we shown that an equipartitioning requirement on a bunched injected beam can indeed produce remarkably small emittance growth. The simple set of equations leading to this condition are outlined below. At the same time, Hofmann, using powerful analytical and computational methods, has investigated collective instabilities in transported beams and has identified thresholds and regions in parameter space where instabilities occur. This is an important generalization. Work that he will present at this conference shows that the results are essentially the same in r-z coordinates for transport systems, and evidence is presented that shows transport system boundaries to be quite accurate in computer simulations of accelerating systems also. Discussed are preliminary results of efforts to design accelerators that avoid parameter regions where emittance is affected by the instabilities identified by Hofmann. These efforts suggest that other mechanisms are present. The complicated behavior of the RFQ linac in this framework also is shown

  11. Linear induction accelerators

    Briggs, R.J.

    1986-06-01

    The development of linear induction accelerators has been motivated by applications requiring high-pulsed currents of charged particles at voltages exceeding the capability of single-stage, diode-type accelerators and at currents too high for rf accelerators. In principle, one can accelerate charged particles to arbitrarily high voltages using a multi-stage induction machine, but the 50-MeV, 10-kA Advanced Test Accelerator (ATA) at LLNL is the highest voltage machine in existence at this time. The advent of magnetic pulse power systems makes sustained operation at high-repetition rates practical, and this capability for high-average power is very likely to open up many new applications of induction machines in the future. This paper surveys the US induction linac technology with primary emphasis on electron machines. A simplified description of how induction machines couple energy to the electron beam is given, to illustrate many of the general issues that bound the design space of induction linacs

  12. Berkeley Proton Linear Accelerator

    Alvarez, L. W.; Bradner, H.; Franck, J.; Gordon, H.; Gow, J. D.; Marshall, L. C.; Oppenheimer, F. F.; Panofsky, W. K. H.; Richman, C.; Woodyard, J. R.

    1953-10-13

    A linear accelerator, which increases the energy of protons from a 4 Mev Van de Graaff injector, to a final energy of 31.5 Mev, has been constructed. The accelerator consists of a cavity 40 feet long and 39 inches in diameter, excited at resonance in a longitudinal electric mode with a radio-frequency power of about 2.2 x 10{sup 6} watts peak at 202.5 mc. Acceleration is made possible by the introduction of 46 axial "drift tubes" into the cavity, which is designed such that the particles traverse the distance between the centers of successive tubes in one cycle of the r.f. power. The protons are longitudinally stable as in the synchrotron, and are stabilized transversely by the action of converging fields produced by focusing grids. The electrical cavity is constructed like an inverted airplane fuselage and is supported in a vacuum tank. Power is supplied by 9 high powered oscillators fed from a pulse generator of the artificial transmission line type.
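    The synchronism condition described above (one RF cycle per drift-tube-to-drift-tube transit) fixes the center-to-center spacing at L = βλ = βc/f. A rough check of the scale, computed here from the proton rest energy rather than taken from the paper:

    ```python
    import math

    C = 2.99792458e8   # speed of light, m/s
    M_P = 938.272      # proton rest energy, MeV
    F = 202.5e6        # RF frequency quoted for the Berkeley linac, Hz

    def cell_length(kinetic_energy_mev):
        """Drift-tube center-to-center spacing L = beta * c / f for synchronism."""
        gamma = 1.0 + kinetic_energy_mev / M_P
        beta = math.sqrt(1.0 - 1.0 / gamma**2)
        return beta * C / F

    print(f"{cell_length(4.0) * 100:.1f} cm")    # near injection (4 MeV)
    print(f"{cell_length(31.5) * 100:.1f} cm")   # near output (31.5 MeV)
    ```

    The spacing grows from roughly 14 cm at injection to roughly 37 cm at the output, consistent with fitting 46 drift tubes into a 40-foot (about 12 m) cavity.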

  13. Effects of gamma radiation from 60Co on dilute aqueous solutions of Linear Alkyl Sulfonate Surfactants and other organic pollutants

    Rohrer, D.M.

    1975-01-01

    This study is the result of research findings and operational experience gained by the author in over four years of work associated with the use of 60Co for the treatment of waste-water. The effects of 60Co are discussed with regard to the radiochemical destruction of specific organic pollutant species. The study deals specifically with the effects of gamma radiation from a 30,000 Ci 60Co source upon aqueous solutions of Linear Alkyl Sulfonate surfactants. Linear Alkyl Sulfonate (LAS), the major surfactant produced in the United States of America since June 1965, was developed to replace the older Alkyl Benzene Sulfonate (ABS) surfactants. The reason for the removal of Alkyl Benzene Sulfonate surfactants was their extreme environmental stability and the associated appearance of foam in waste-water treatment plants and receiving streams. Although Linear Alkyl Sulfonate surfactants are considered 'bio-degradable', the time required for bio-degradation is impractical within the present environmental guidelines. This led to research into alternative treatment techniques for the destruction of Linear Alkyl Sulfonate surfactants. Consideration is also given to similar effects of gamma radiation upon pesticides and to the practical aspects of the use of gamma radiation for the treatment of waste-water. Included are discussions of the general experimental procedures used, the sources and their calibration, and the sampling techniques employed to ensure the accuracy of the data. (author)

  14. Random linear codes in steganography

    Kamil Kaczyński

    2016-12-01

    Syndrome coding using linear codes is a technique that allows improvement of the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code; in parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as the basis for the modification. The proposed algorithm was implemented, and its parameters were evaluated in practice on test images. Keywords: steganography, random linear codes, RLC, LSB
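    Syndrome coding can be made concrete: a message m is embedded in cover bits x by flipping a minimum-weight pattern e such that H(x XOR e) = m, and the receiver recovers m simply as the syndrome of the stego bits. The sketch below uses an [8, 2] code as in the paper, but with a systematic parity-check matrix H = [I | A] so that every 6-bit message is guaranteed to be embeddable; the paper draws H fully at random, so this is an illustrative simplification, not the paper's implementation.

    ```python
    import itertools
    import random

    random.seed(1)
    N, K = 8, 2        # an [8, 2] linear code, as in the paper
    R = N - K          # 6 syndrome (message) bits per 8 cover bits

    # Systematic parity-check matrix H = [I | A] over GF(2); the identity
    # block guarantees full rank, so every message has a solution.
    H = [[1 if j == i else 0 for j in range(R)]
         + [random.randint(0, 1) for _ in range(K)]
         for i in range(R)]

    def syndrome(bits):
        """H @ bits over GF(2), returned as a tuple of R bits."""
        return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

    def embed(cover, message):
        """Flip a minimum-weight pattern e so syndrome(cover ^ e) == message."""
        target = tuple(message)
        for weight in range(N + 1):
            for flips in itertools.combinations(range(N), weight):
                stego = list(cover)
                for i in flips:
                    stego[i] ^= 1
                if syndrome(stego) == target:
                    return stego
        raise ValueError("unreachable for a full-rank H")

    cover = [0, 1, 1, 0, 0, 1, 0, 1]   # LSBs of 8 cover pixels
    message = [1, 0, 1, 1, 0, 0]       # 6 hidden bits
    stego = embed(cover, message)
    assert syndrome(stego) == tuple(message)   # receiver extracts m = H @ stego
    ```

    Because the embedder searches flips in order of increasing weight, it changes as few cover bits as possible for the given H, which is exactly the parameter improvement syndrome coding offers over plain LSB replacement.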

  15. Diagnostic accuracy of the Eurotest for dementia: a naturalistic, multicenter phase II study

    Frank Ana

    2006-04-01

    Background: Available screening tests for dementia are of limited usefulness because they are influenced by the patient's culture and educational level. The Eurotest, an instrument based on the knowledge and handling of money, was designed to overcome these limitations. The objective of this study was to evaluate the diagnostic accuracy of the Eurotest in identifying dementia in customary clinical practice. Methods: A cross-sectional, multi-center, naturalistic phase II study was conducted. The Eurotest was administered to consecutive patients, older than 60 years, in general neurology clinics. The patients' condition was classified as dementia or no dementia according to DSM-IV diagnostic criteria. We calculated sensitivity (Sn), specificity (Sp) and area under the ROC curve (aROC) with 95% confidence intervals. The influence of social and educational factors on scores was evaluated with multiple linear regression analysis, and the influence of these factors on diagnostic accuracy was evaluated with logistic regression. Results: Sixteen neurologists recruited a total of 516 participants: 101 with dementia, 380 without dementia, and 35 who were excluded. Of the 481 participants who took the Eurotest, 38.7% were totally or functionally illiterate and 45.5% had received no formal education. The mean time needed to administer the test was 8.2 ± 2.0 minutes. The best cut-off point was 20/21, with Sn = 0.91 (0.84-0.96), Sp = 0.82 (0.77-0.85), and aROC = 0.93 (0.91-0.95). Neither the scores on the Eurotest nor its diagnostic accuracy were influenced by social or educational factors. Conclusion: This naturalistic and pragmatic study shows that the Eurotest is a rapid, simple and useful screening instrument, which is free from educational influences and has appropriate internal and external validity.
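    Sensitivity and specificity figures like those reported above come from a standard 2x2 table. A small sketch using a simple Wald interval (the study likely used an exact binomial interval, and the counts below are hypothetical, chosen only to reproduce the reported point estimates 92/101 ≈ 0.91 and 312/380 ≈ 0.82):

    ```python
    import math

    def proportion_ci(successes, total, z=1.96):
        """Point estimate and Wald 95% CI for a proportion."""
        p = successes / total
        half = z * math.sqrt(p * (1 - p) / total)
        return p, max(0.0, p - half), min(1.0, p + half)

    # Hypothetical 2x2 counts at the 20/21 cut-off (not the study's raw data):
    tp, fn = 92, 9     # dementia cases: test positive / test negative
    tn, fp = 312, 68   # non-cases:      test negative / test positive

    sens, s_lo, s_hi = proportion_ci(tp, tp + fn)
    spec, p_lo, p_hi = proportion_ci(tn, tn + fp)
    print(f"Sn = {sens:.2f} ({s_lo:.2f}-{s_hi:.2f})")
    print(f"Sp = {spec:.2f} ({p_lo:.2f}-{p_hi:.2f})")
    ```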

  16. Diagnostic accuracy of sonography for pleural effusion: systematic review

    Alexandre Grimberg

    CONTEXT AND OBJECTIVE: Chest radiography was the initial method for evaluating the presence of pleural effusion. Isolated studies have shown that sonography has greater accuracy than radiography for this diagnosis; however, no systematic reviews on this matter are available in the literature. Thus, the aim of this study was to evaluate the accuracy of sonography in detecting pleural effusion, by means of a systematic review of the literature. DESIGN AND SETTING: This was a systematic review with meta-analysis of accuracy studies, conducted in the Department of Diagnostic Imaging and in the Brazilian Cochrane Center, Discipline of Emergency Medicine and Evidence-Based Medicine, Department of Medicine, Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil. METHOD: The following databases were searched: Cochrane Library, Medline, Web of Science, Embase and Literatura Latino-Americana e do Caribe em Ciências da Saúde (Lilacs). The references of relevant studies were also screened for additional citations of interest. Studies in which the accuracy of sonography for detecting pleural effusion was tested against an acceptable reference standard (computed tomography or thoracic drainage) were included. RESULTS: Four studies were included. All of them showed that sonography had high sensitivity, specificity and accuracy for detecting pleural effusions. The mean sensitivity was 93% (95% confidence interval, CI: 89% to 96%) and specificity was 96% (95% CI: 95% to 98%). CONCLUSIONS: In different populations and clinical settings, sonography showed consistently high sensitivity, specificity and accuracy for detecting fluid in the pleural space.

  17. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    Elena Bergamini

    2014-10-01

    Magnetic and inertial measurement units are an emerging technology for obtaining the 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering approaches, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angle) and heading (yaw angle) errors, using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and had a duration greater than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
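    The complementary-filtering idea this record relies on (trust the integrated gyroscope at high frequency, and the accelerometer-derived tilt at low frequency) can be sketched for a single pitch axis. This is a generic linear complementary filter, not the paper's non-linear observer, and the blending constant alpha is an assumed value:

    ```python
    import math

    def complementary_pitch(gyro_rates, accels, dt, alpha=0.98):
        """1-axis complementary filter: blend the integrated gyro rate
        (trusted at high frequency) with the accelerometer tilt angle
        (trusted at low frequency). gyro_rates in rad/s; accels are
        (ax, az) gravity components."""
        pitch = math.atan2(accels[0][0], accels[0][1])  # initialise from gravity
        history = [pitch]
        for rate, (ax, az) in zip(gyro_rates[1:], accels[1:]):
            gyro_pitch = pitch + rate * dt              # propagate with gyro
            accel_pitch = math.atan2(ax, az)            # gravity-based tilt
            pitch = alpha * gyro_pitch + (1 - alpha) * accel_pitch
            history.append(pitch)
        return history

    # Static case: no rotation, gravity along z. A biased gyro alone would
    # drift without bound; the accelerometer term bounds the estimate.
    n, dt, bias = 500, 0.01, 0.05
    est = complementary_pitch([bias] * n, [(0.0, 9.81)] * n, dt)
    ```

    With pure integration the biased gyro would accumulate roughly bias × dt × n ≈ 0.25 rad of error here; the filter instead settles near alpha × bias × dt / (1 − alpha) ≈ 0.025 rad, illustrating why the fused estimate outperforms gyroscope integration alone.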

  18. Linear systems a measurement based approach

    Bhattacharyya, S P; Mohsenizadeh, D N

    2014-01-01

    This brief presents recent results obtained on the analysis, synthesis and design of systems described by linear equations. It is well known that linear equations arise in most branches of science and engineering as well as social, biological and economic systems. The novelty of this approach is that no models of the system are assumed to be available, nor are they required. Instead, a few measurements made on the system can be processed strategically to directly extract design values that meet specifications without constructing a model of the system, implicitly or explicitly. These new concepts are illustrated by applying them to linear DC and AC circuits, mechanical, civil and hydraulic systems, signal flow block diagrams and control systems. These applications are preliminary and suggest many open problems. The results presented in this brief are the latest effort in this direction and the authors hope these will lead to attractive alternatives to model-based design of engineering and other systems.

  19. Linear mixed models for longitudinal data

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and the residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations on the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commercial…

  20. Technical accuracy in historical writing

    Taylor, L.S.

    1981-01-01

    A guest editorial is presented on the question of accuracy in the writing of radiation protection history. The author has written several books and articles dealing with various aspects of the development of radiation protection standards and philosophy; some of his own minor errors which have been picked up and frequently repeated are confessed. The author also outlines some of the general faults he has encountered in other articles on the subject. A common complaint is that many writers give source references without checking back to the original sources which leads to much carelessness and misunderstanding in technical writing. In addition, some writers all too frequently refer mainly to review articles which can be especially troublesome if the review is of the interpretative type. The limited outlook of some writers is also deplored in that the scope of the literature referred to is often limited to the author's country. A few glaring examples of factual errors encountered in various radiation protection articles are highlighted; these errors have since been repeated in subsequent review articles. (U.K.)

  1. Mammography: Technique and diagnostic accuracy

    Kim, Chung Ja; Bahk, Yong Whee; Lee, Don Young

    1974-01-01

    Mammography is now in worldwide use, but it has received rather scanty attention in Korea. The purposes of the present communication are twofold: (1) to detail the technical and photographic aspects of mammography, and (2) to assess its diagnostic accuracy as experienced by us. The clinical material consisted of 88 cases of mammography performed at the Department of Radiology, St. Mary's Hospital, Catholic Medical College, during the two-year period from April 1972. We used nonscreen-type mammographic or industrial fine-grain films, and a special mammographic device that can be attached to any ordinary radiographic machine. Technical factors are shown in Table II. Of the 88 cases, 19 were operated on or biopsied. There were 7 cases of carcinoma, 8 cases of inflammatory disease, and 4 cases of benign tumor. The mammographic diagnosis was correct in 85.7% of carcinomas and 87.5% of inflammatory diseases. The one misdiagnosed case among the 7 carcinomas turned out to be cystosarcoma phylloides. Of the 4 cases of benign tumors, 2 were correctly diagnosed, and the other 2 were mistaken for either inflammatory disease or a simple lactating breast. However, none of the benign conditions were diagnosed as a malignant process. We found that nonscreen-type mammographic or industrial fine-grain films and hand-processing were necessary to obtain mammograms of desirable quality.

  2. Accuracy of recumbent height measurement.

    Gray, D S; Crider, J B; Kelley, C; Dickinson, L C

    1985-01-01

    Since many patients requiring specialized nutritional support are bedridden, measurement of height for purposes of nutritional assessment or prescription must often be done with the patient in bed. This study examined the accuracy of measuring body height in bed in the supine position. Two measurements were performed on 108 ambulatory inpatients: (1) standing height using a standard height-weight scale, and (2) bed height using a flexible tape. Patients were divided into four groups based on which of two researchers performed each of the two measurements. Each patient was also weighed and self-reported height, weight, sex, and age were recorded. Bed height was significantly longer than standing height by 3.68 cm, but the two measurements were equally precise. It was believed, however, that this 2% difference was probably not clinically significant in most circumstances. Bed height correlated highly with standing height (r = 0.95), and the regression equation was standing height = 13.82 +/- 0.09 bed height. Patients overestimated their heights. Heights recorded by nurses were more accurate when patients were measured than when asked about their heights, but the patients were more often asked than measured.

  3. Acceptance test for the linear motion actuator for the scanning slit of the HIE-ISOLDE short diagnostic boxes

    Cantero, E D; Bravin, E; Sosa, A

    2014-01-01

    We performed experimental tests to characterize the mechanical accuracy of a linear actuator designed by the company AVS for the movement of the scanning slit of the HIE-ISOLDE short diagnostic boxes. The mechanism consists of a linear actuator composed of two guiding rods and a lead screw, with a full stroke of 135 mm. A specially designed blade was mounted on the actuator and the transverse positioning of the blade was monitored with a camera-based optical system while moving the actuator at speeds of up to 10 mm/s. The repeatability of the positioning of the blade after several cycles around predefined positions was also measured. The results of the measurements and a general inspection of the device show that the proposed solution fulfils the specifications. A full prototype of short diagnostic box for the HIE-ISOLDE project can now be built for testing.

  4. Serum albumin: accuracy and clinical use.

    Infusino, Ilenia; Panteghini, Mauro

    2013-04-18

    Albumin is the major plasma protein and its determination is used for the prognostic assessment of several diseases. Clinical guidelines call for monitoring of serum albumin with specific target cut-offs that are independent of the assay used. This requires accurate and equivalent results among different commercially available methods (i.e., result standardization) through a consistent definition and application of a reference measurement system. This should be associated with the definition of measurement uncertainty goals based on medical relevance of serum albumin to make results reliable for patient management. In this paper, we show that, in the current situation, if one applies analytical goals for serum albumin measurement derived from its biologic variation, the uncertainty budget derived from each step of the albumin traceability chain is probably too high to fulfil established quality levels for albumin measurement and to guarantee the accuracy needed for clinical usefulness of the test.

  5. The percentage of prostate-specific antigen (PSA) isoform [-2]proPSA and the Prostate Health Index improve the diagnostic accuracy for clinically relevant prostate cancer at initial and repeat biopsy compared with total PSA and percentage free PSA in men aged ≤65 years.

    Boegemann, Martin; Stephan, Carsten; Cammann, Henning; Vincendeau, Sébastien; Houlgatte, Alain; Jung, Klaus; Blanchet, Jean-Sebastien; Semjonow, Axel

    2016-01-01

    To prospectively test the diagnostic accuracy of the percentage of prostate-specific antigen (PSA) isoform [-2]proPSA (%p2PSA) and the Prostate Health Index (PHI), and to determine their role in discriminating between significant and insignificant prostate cancer at initial and repeat prostate biopsy in men aged ≤65 years. The diagnostic performance of %p2PSA and PHI was evaluated in a multicentre study. In all, 769 men aged ≤65 years scheduled for initial or repeat prostate biopsy were recruited at four sites based on a total PSA (t-PSA) level of 1.6-8.0 ng/mL WHO-calibrated (2-10 ng/mL Hybritech-calibrated). Serum samples were measured for the concentration of t-PSA, free PSA (f-PSA) and p2PSA with Beckman Coulter immunoassays on Access-2 or DxI800 instruments. PHI was calculated as (p2PSA/f-PSA × √t-PSA). Uni- and multivariable logistic regression models and an artificial neural network (ANN) were complemented by decision curve analysis (DCA). In univariate analysis, %p2PSA and PHI were the best predictors of prostate cancer detection in all patients (area under the curve [AUC] 0.72 and 0.73, respectively), at initial (AUC 0.67 and 0.69) and repeat biopsy (AUC 0.74 and 0.74). t-PSA and %f-PSA performed less accurately for all patients (AUC 0.54 and 0.62). For detection of significant prostate cancer (based on Prostate Cancer Research International Active Surveillance [PRIAS] criteria), %p2PSA and PHI equally demonstrated the best performance (AUC 0.70 and 0.73) compared with t-PSA and %f-PSA (AUC 0.54 and 0.59). In multivariate analysis, PHI was added to a base model of age, prostate volume, digital rectal examination, t-PSA and %f-PSA. PHI was strongest in predicting prostate cancer in all patients, at initial and repeat biopsy and for significant prostate cancer (AUC 0.73, 0.68, 0.78 and 0.72, respectively). In DCA for all patients, the ANN showed the broadest threshold probability and best net benefit. PHI as a single parameter…
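    The PHI formula quoted in the abstract is direct to compute. In the sketch below, the unit convention (p2PSA in pg/mL, f-PSA and t-PSA in ng/mL, hence the factor of 1000 in %p2PSA) follows common laboratory reporting, and the patient values are hypothetical, not taken from the study:

    ```python
    import math

    def percent_p2psa(p2psa_pg_ml, f_psa_ng_ml):
        """%p2PSA: p2PSA as a percentage of free PSA (note the pg/ng unit gap)."""
        return (p2psa_pg_ml / (f_psa_ng_ml * 1000.0)) * 100.0

    def prostate_health_index(p2psa_pg_ml, f_psa_ng_ml, t_psa_ng_ml):
        """PHI = (p2PSA / f-PSA) * sqrt(t-PSA), per the abstract's formula."""
        return (p2psa_pg_ml / f_psa_ng_ml) * math.sqrt(t_psa_ng_ml)

    # Hypothetical patient within the study's 1.6-8.0 ng/mL t-PSA window:
    print(percent_p2psa(15.0, 0.8))               # %p2PSA
    print(prostate_health_index(15.0, 0.8, 5.0))  # PHI
    ```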

  6. Quad-copter UAV BLDC Motor Control: Linear v/s non-linear control maps

    Deep Parikh

    2015-08-01

    This paper presents investigations comparing linear and non-linear static motor-control maps for speed control of the BLDC (Brushless Direct Current) motors used in quad-copter UAVs (Unmanned Aerial Vehicles). The motor-control map considered here is the inverse of the static map relating motor-speed output to motor-voltage input for a typical out-runner type Brushless DC Motor (BLDCM). Traditionally, quad-copter BLDC motor speed control uses a simple linear motor-control map defined by the motor-constant specification. However, practical BLDC motors show non-linear characteristics, particularly when operated across the wide speed range commonly required in quad-copter UAV flight operations. The investigations cover simulation-based and experimental study of BLDC motor speed control for an available quad-copter vehicle. First, the non-linear map relating rotor RPM to motor voltage for the quad-copter BLDC motor is obtained experimentally using an optical speed encoder. The performance of the linear and non-linear motor-control maps for speed control is then studied. The investigations also cover time-responses for various standard test input-signals, e.g. step, ramp and pulse inputs, applied as reference speed-commands. A simple 2-degree-of-freedom test-bed was developed in our laboratory to support the open-loop and closed-loop experimental investigations. The non-linear motor-control map is found to give better BLDC motor speed-tracking performance, thereby helping achieve better quad-copter roll-angle attitude control.
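
A non-linear motor-control map of the kind described can be built by inverting an experimentally measured voltage-to-RPM curve. The sketch below uses hypothetical calibration points (a real curve would come from the optical speed encoder) and piecewise-linear interpolation; the paper's own map construction may differ.

```python
from bisect import bisect_left

# Hypothetical calibration data: motor voltage (V) -> steady-state rotor speed (RPM).
# Note the curve flattens at high voltage, i.e. it is non-linear.
VOLTS = [4.0, 6.0, 8.0, 10.0, 12.0]
RPM   = [1500.0, 3200.0, 5400.0, 7100.0, 8200.0]

def volts_for_rpm(target_rpm):
    """Inverse (non-linear) control map: RPM command -> voltage, by
    piecewise-linear interpolation of the measured curve.

    This replaces the single motor-constant linear map V = rpm / Kv.
    """
    if target_rpm <= RPM[0]:
        return VOLTS[0]
    if target_rpm >= RPM[-1]:
        return VOLTS[-1]
    i = bisect_left(RPM, target_rpm)
    frac = (target_rpm - RPM[i - 1]) / (RPM[i] - RPM[i - 1])
    return VOLTS[i - 1] + frac * (VOLTS[i] - VOLTS[i - 1])
```

Feeding the speed controller's RPM command through `volts_for_rpm` linearizes the plant as seen by the controller, which is the mechanism behind the improved tracking reported above.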

  7. Key Performance Indicators and Analysts' Earnings Forecast Accuracy: An Application of Content Analysis

    Alireza Dorestani; Zabihollah Rezaee

    2011-01-01

    We examine the association between the extent of change in key performance indicator (KPI) disclosures and the accuracy of forecasts made by analysts. KPIs are regarded as improving both the transparency and relevancy of public financial information. The results of using linear regression models show that, contrary to our prediction and the hypothesis of this paper, there is no significant association between the change in non-financial KPI disclosures and the accuracy of analysts' forecasts....

  8. Accuracy of recommended sampling and assay methods for the determination of plasma-free and urinary fractionated metanephrines in the diagnosis of pheochromocytoma and paraganglioma: a systematic review.

    Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme

    2017-06-01

    To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Studies Accuracy Group criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling (95 vs. 89%) and for supine sampling compared with 24-h urine (95 vs. 90%). Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provide the highest accuracy at all prevalence rates.
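
The linear dependence of accuracy on prevalence reported above follows from the identity accuracy = sensitivity × prevalence + specificity × (1 − prevalence). A minimal sketch; the 95%/95% sensitivity/specificity pair used to illustrate the constant supine accuracy is an assumption for illustration, since the abstract reports only the 95% figure.

```python
def accuracy(sensitivity, specificity, prevalence):
    """Overall diagnostic accuracy as a prevalence-weighted mix of
    sensitivity (correct positives) and specificity (correct negatives)."""
    return sensitivity * prevalence + specificity * (1.0 - prevalence)

# With fixed sensitivity and specificity, accuracy moves linearly from the
# specificity (at prevalence 0) to the sensitivity (at prevalence 1);
# equal sensitivity and specificity make it constant, as for supine sampling.
```

For example, `accuracy(0.94, 0.93, p)` drifts linearly from 0.93 to 0.94 as the prevalence `p` goes from 0 to 1.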

  9. Estimating the accuracy of geographical imputation

    Boscoe Francis P

    2008-01-01

    Background: To reduce the number of non-geocoded cases, researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods: Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic-based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results: Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion: Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids.
For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate
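
Demographic geo-imputation of the kind evaluated can be sketched as picking, for each case, the census tract holding the largest share of the postal code's population in the case's race/ethnicity group. This is a simplified deterministic variant of the study's strategies; the tract names and population counts below are hypothetical.

```python
# Hypothetical population counts by tract and race/ethnicity group
# within a single postal code.
POSTAL_CODE_TRACTS = {
    "tract_A": {"group1": 800, "group2": 100},
    "tract_B": {"group1": 150, "group2": 900},
}

def impute_tract(group, tracts=POSTAL_CODE_TRACTS):
    """Assign the tract where the case's demographic group is most prevalent.

    A probabilistic variant would instead sample tracts with probability
    proportional to these counts; adding age strata refines the counts further.
    """
    return max(tracts, key=lambda t: tracts[t].get(group, 0))
```

A postal-centroid assignment, by contrast, would send every case in the postal code to the same tract regardless of demographics, which is why the demographic method matches the true street-address tract more often.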

  10. Multi-Accuracy-Level Burning Plasma Simulations

    Artaud, J. F.; Basiuk, V.; Garcia, J.; Giruzzi, G.; Huynh, P.; Huysmans, G.; Imbeaux, F.; Johner, J.; Scheider, M.

    2007-01-01

    The design of a reactor-grade tokamak is based on a hierarchy of tools. We present here three codes that are presently used for the simulation of burning plasmas. At the first level there is a 0-dimensional code that allows one to choose a reasonable range of global parameters; in our case the HELIOS code was used for this task. For the second level we have developed a mixed 0-D/1-D code called METIS that allows one to study the main properties of a burning plasma, including profiles and all heat and current sources, but always under the constraint of energy and other empirical scaling laws. METIS is a fast code that makes it possible to perform a large number of runs (a run takes about one minute), design the main features of a scenario, or validate the results of the 0-D code on a full time evolution. At the top level, we used the full 1-1/2-D suite of codes CRONOS, which gives access to a detailed study of the evolution of the plasma profiles. CRONOS can use a variety of modules for computing source terms and transport coefficients, with different levels of complexity and accuracy: from simple estimators to highly sophisticated physics calculations. Thus it is possible to vary the accuracy of burning plasma simulations as a trade-off with computation time. A wide range of scenario studies can thus be made with CRONOS and then validated with post-processing tools such as MHD stability analysis. We present in this paper results of this multi-level analysis applied to the ITER hybrid scenario. This specific example illustrates the importance of having several tools for the study of burning plasma scenarios, especially in a domain that present devices cannot access experimentally. (Author)

  11. Improvement of Diagnostic Accuracy by Standardization in Diuretic Renal Scan

    Hyun, In Young; Lee, Dong Soo; Lee, Kyung Han; Chung, June Key; Lee, Myung Chul; Koh, Chang Soon; Kim, Kwang Myung; Choi, Hwang; Choi, Yong

    1995-01-01

    We evaluated the diagnostic accuracy of diuretic renal scans, with and without standardization, in 45 children (107 hydronephrotic kidneys) with 91 diuretic assessments. With standardization, sensitivity was 100%, specificity was 78%, and accuracy was 84% in 49 hydronephrotic kidneys. Without standardization, sensitivity was 100%, specificity was 38%, and accuracy was 57% in 58 hydronephrotic kidneys. False-positive results were observed in 25 cases without standardization and in 8 cases with standardization. In diuretic renal scans without standardization, the causes of false-positive results were early injection of lasix before mixing of radioactivity (10), extrarenal pelvis (6), and immature kidneys of neonates (3). With standardization, the causes of false-positive results were markedly dilated systems post-pyeloplasty (2), extrarenal pelvis (2), immature kidney of a neonate (1), severe renal dysfunction (2), and vesicoureteral reflux (1). Without standardization, false-positive results due to inadequate studies were common, but after standardization they were no longer found. False-positive results due to dilated pelvo-calyceal systems post-pyeloplasty, extrarenal pelvis, and immature kidneys of neonates were not resolved by standardization. In conclusion, standardization of the diuretic renal scan was useful in children with renal outflow tract obstruction by significantly improving specificity.

  12. Genomic prediction based on data from three layer lines using non-linear regression models.

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
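
The linear-versus-RBF contrast can be illustrated on a toy non-linear phenotype. The sketch below (plain least squares and a hand-rolled Gaussian-kernel ridge regression, not the paper's GBLUP or kernel-learning implementations) shows the RBF model capturing a signal that a linear model misses entirely; the data are synthetic.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy data: a purely non-linear "genotype score" -> phenotype relationship.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x * x for x in xs]

# Linear least squares, y ~ a + b*x, via the normal equations.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx, sxy = sum(x * x for x in xs), sum(x * y for x, y in zip(xs, ys))
a_lin, b_lin = solve([[n, sx], [sx, sxx]], [sy, sxy])

# RBF kernel ridge: y ~ sum_j alpha_j * exp(-gamma * (x - x_j)^2).
lam, gamma = 1e-8, 0.5
K = [[math.exp(-gamma * (xi - xj) ** 2) + (lam if i == j else 0.0)
      for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
alpha = solve(K, ys)

def rbf_predict(x):
    return sum(a * math.exp(-gamma * (x - xj) ** 2) for a, xj in zip(alpha, xs))
```

On this symmetric quadratic signal the fitted line is flat (slope ~0), so the linear model predicts the mean everywhere, while the RBF model reproduces the curve; whether such gains materialize on real multi-line genomic data is exactly what the abstract reports on.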

  13. Linear Algebraic Method for Non-Linear Map Analysis

    Yu, L.; Nash, B.

    2009-01-01

    We present a newly developed method to analyze some non-linear dynamics problems, such as the Henon map, using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition, which is widely used in conventional linear algebra.
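
The simplest entry point to this linear-algebraic viewpoint is to linearize the Henon map at its fixed point and read off the eigenvalues of the Jacobian. The sketch below assumes the classical parameters a = 1.4, b = 0.3 (the abstract does not specify them) and uses the trace/determinant form of the characteristic polynomial.

```python
import math

a, b = 1.4, 0.3          # classical Henon parameters (an assumption here)

def henon(x, y):
    """One iteration of the Henon map."""
    return 1.0 - a * x * x + y, b * x

# Fixed point: x = 1 - a x^2 + b x  =>  a x^2 + (1 - b) x - 1 = 0.
x_star = ((b - 1.0) + math.sqrt((1.0 - b) ** 2 + 4.0 * a)) / (2.0 * a)
y_star = b * x_star

# Jacobian at the fixed point is [[-2 a x*, 1], [b, 0]]; its eigenvalues
# solve lambda^2 - tr * lambda + det = 0.
tr = -2.0 * a * x_star
det = -b
disc = math.sqrt(tr * tr - 4.0 * det)
eig1, eig2 = (tr + disc) / 2.0, (tr - disc) / 2.0
```

One eigenvalue has magnitude above 1 and one below, so the fixed point is a saddle; the full Jordan-decomposition machinery in the paper generalizes this local spectral picture to the tune-amplitude analysis.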

  14. Quasi-Linear Circuit

    Bradley, William; Bird, Ross; Eldred, Dennis; Zook, Jon; Knowles, Gareth

    2013-01-01

    This work involved developing space-qualifiable switch-mode DC/DC power supplies that improve performance with fewer components, eliminating digital components and reducing magnetics. This design is for missions where systems may be operating under extreme conditions, especially at temperatures from 200 to 300 °C. Prior art for radiation-tolerant DC/DC converters has used classical magnetic-based switch-mode converter topologies; however, this requires specific shielding and component de-rating to meet high-reliability specifications. It requires complex measurement and feedback components, and will not enable automatic re-optimization for larger changes in supply voltage or electrical loading condition. The innovation is a switch-mode DC/DC power supply that eliminates the need for processors and most magnetics. It can provide a well-regulated voltage supply with a gain of 1:100 step-up to 8:1 step-down, tolerating up to a 30% fluctuation of the supply-voltage parameters. The circuit incorporates a ceramic-core transformer in a manner that enables it to provide a well-regulated voltage output without the use of any processor components or magnetic transformers. The circuit adjusts its internal parameters to re-optimize its performance for changes in supply voltage, environmental conditions, or electrical loading at the output.

  15. Extending the accuracy of the SNAP interatomic potential form

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-06-01

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similar to artificial neural network potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting. The quality of this new potential form is measured through a robust cross-validation analysis.
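
The quadratic extension amounts to augmenting a linear-in-descriptors energy model with products of descriptor pairs. A toy sketch with two hypothetical "bispectrum" features per atom; the coefficients are made up for illustration, not fitted SNAP parameters.

```python
# Each atom's environment is summarized by descriptors (B1, B2).
# Linear SNAP:     E = sum_i beta_i * B_i
# Quadratic SNAP:  E = linear part + sum_{i<=j} alpha_ij * B_i * B_j
beta = [0.7, -0.2]                                     # linear coefficients
alpha = {(0, 0): 0.05, (0, 1): -0.01, (1, 1): 0.02}    # quadratic coefficients

def energy_linear(B):
    """Per-atom energy, linear in the bispectrum components."""
    return sum(b * x for b, x in zip(beta, B))

def energy_quadratic(B):
    """Per-atom energy with the quadratic cross terms added."""
    e = energy_linear(B)
    for (i, j), a in alpha.items():
        e += a * B[i] * B[j]
    return e
```

Because the model stays linear in its (now larger) coefficient vector, fitting remains a least-squares problem; the quadratic terms are what grow the parameter count and, as the abstract notes, the training-data requirement.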

  16. Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. The resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.
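
The bias/scale-factor part of such a calibration reduces to a weighted least-squares straight-line fit of the package's readings against the reference (here, IMU-derived) values. A minimal sketch with synthetic data; the actual batch algorithm also estimates misalignment and g-sensitive terms, which are omitted here.

```python
def wls_scale_bias(reference, measured, weights):
    """Weighted least-squares fit of measured ~ scale * reference + bias."""
    sw = sum(weights)
    mx = sum(w * r for w, r in zip(weights, reference)) / sw
    my = sum(w * m for w, m in zip(weights, measured)) / sw
    sxx = sum(w * (r - mx) ** 2 for w, r in zip(weights, reference))
    sxy = sum(w * (r - mx) * (m - my)
              for w, r, m in zip(weights, reference, measured))
    scale = sxy / sxx
    bias = my - scale * mx
    return scale, bias

# Synthetic gyro channel: readings corrupted by scale factor 1.02
# and bias 0.04 deg/sec (hypothetical values).
ref = [-2.0, -1.0, 0.0, 1.0, 2.0]
meas = [1.02 * r + 0.04 for r in ref]
scale, bias = wls_scale_bias(ref, meas, [1.0] * len(ref))
corrected = [(m - bias) / scale for m in meas]
```

Inverting the fitted model removes the global errors from the measured channel, which is the "detected and removed" step of the abstract.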

  17. High accuracy ion optics computing

    Amos, R.J.; Evans, G.A.; Smith, R.

    1986-01-01

    Computer simulation of focused ion beams for surface analysis of materials by SIMS, or for microfabrication by ion beam lithography, plays an important role in the design of low-energy ion beam transport and optical systems. Many computer packages currently available are limited in their applications, being inaccurate or inappropriate for a number of practical purposes. This work describes an efficient and accurate computer programme which has been developed and tested for use on medium-sized machines. The programme is written in Algol 68 and models the behaviour of a beam of charged particles through an electrostatic system. A variable-grid finite difference method is used, with a unique data structure, to calculate the electric potential in an axially symmetric region for arbitrarily shaped boundaries. Emphasis has been placed upon finding an economic method of solving the resulting set of sparse linear equations in the calculation of the electric field, and several such methods are described. Applications include individual ion lenses, extraction optics for ions in surface analytical instruments, and the design of columns for ion beam lithography. Computational results have been compared with analytical calculations and with data obtained from individual einzel lenses. (author)
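
The core numerical task described, solving a sparse linear system for the electrostatic potential on a finite-difference grid, can be sketched in one dimension on a uniform grid. Gauss-Seidel iteration stands in here for the economic sparse-solver methods the paper discusses; the geometry and boundary voltages are hypothetical.

```python
# Laplace's equation phi'' = 0 on [0, 1] with phi(0) = 0 V, phi(1) = 100 V.
# Each interior equation phi[i] = (phi[i-1] + phi[i+1]) / 2 is one row of a
# sparse tridiagonal system; Gauss-Seidel sweeps solve it iteratively.
N = 11
phi = [0.0] * N
phi[-1] = 100.0          # Dirichlet boundary conditions

for _ in range(2000):    # iterate to convergence
    for i in range(1, N - 1):
        phi[i] = 0.5 * (phi[i - 1] + phi[i + 1])
```

In 1-D the exact solution is the linear ramp between the electrode voltages, which makes the solver easy to verify; the paper's axially symmetric, variable-grid version leads to the same kind of sparse system with more couplings per row.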

  18. An effective description of dark matter and dark energy in the mildly non-linear regime

    Lewandowski, Matthew; Senatore, Leonardo [Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94306 (United States); Maleknejad, Azadeh, E-mail: matthew.lewandowski@cea.fr, E-mail: azade@ipm.ir, E-mail: senatore@stanford.edu [School of Physics, Institute for Research in Fundamental Sciences (IPM), P. Code. 19538-33511, Tehran (Iran, Islamic Republic of)

    2017-05-01

    In the next few years, we are going to probe the low-redshift universe with unprecedented accuracy. Among the various fruits that this will bear, it will greatly improve our knowledge of the dynamics of dark energy, though there is a strong theoretical preference for a cosmological constant. We assume that dark energy is described by the so-called Effective Field Theory of Dark Energy, which treats dark energy as the Goldstone boson of time translations. Such a formalism makes it easy to ensure that our signatures are consistent with well-established principles of physics. Since most of the information resides at high wavenumbers, it is important to be able to make predictions at the highest wavenumber possible. The Effective Field Theory of Large-Scale Structure (EFTofLSS) is a theoretical framework that has allowed us to make accurate predictions in the mildly non-linear regime. In this paper, we derive the non-linear equations that extend the EFTofLSS to include the effect of dark energy both on the matter fields and on the biased tracers. For the specific case of clustering quintessence, we then perturbatively solve the resulting non-linear equations to cubic order and construct the one-loop power spectrum of the total density contrast.

  19. Constrained non-linear waves for offshore wind turbine design

    Rainey, P J; Camp, T R

    2007-01-01

    Advancements have been made in the modelling of extreme wave loading in the offshore environment. We give an overview of wave models used at present, and their relative merits. We describe a method for embedding existing non-linear solutions for large, regular wave kinematics into linear, irregular seas. Although similar methods have been used before, the new technique is shown to offer advances in computational practicality, repeatability, and accuracy. NewWave theory has been used to constrain the linear simulation, allowing best possible fit with the large non-linear wave. GH Bladed was used to compare the effect of these models on a generic 5 MW turbine mounted on a tripod support structure

  20. Accuracy Constraint Determination in Fixed-Point System Design

    Serizel R

    2008-01-01

    Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution-time and accuracy constraints. This accuracy constraint is linked to the application performance, and the determination of this constraint is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite-precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally, the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
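
The single-noise-source model above rests on the standard result that rounding to a quantization step q injects noise of power q²/12 at the output. A sketch checking that on a deterministic ramp input; the 8 fractional bits are an arbitrary choice for illustration.

```python
# Fixed-point quantization: rounding to step q = 2**-f adds an error in
# [-q/2, q/2] whose power (variance) is q*q/12 under the usual uniform model.
f = 8                        # fractional bits (hypothetical word-length choice)
q = 2.0 ** -f

def quantize(x):
    """Round x to the nearest multiple of the quantization step."""
    return round(x / q) * q

# A deterministic ramp that exercises many quantization cells.
xs = [i * 0.001 for i in range(10000)]
errs = [quantize(x) - x for x in xs]
noise_power = sum(e * e for e in errs) / len(errs)
theory = q * q / 12.0
```

Measuring this noise power against the application's tolerable output noise is, in essence, how the accuracy constraint translates into a minimum fixed-point word length.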

  1. Sensitivity analysis of linear programming problem through a recurrent neural network

    Das, Raja

    2017-11-01

    In this paper we study a recurrent neural network for solving linear programming problems. An algorithm is presented that achieves both high accuracy and low computational effort. We investigate the sensitivity analysis of linear programming problems through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.
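
Independently of the neural-network machinery, the quantity at the heart of LP sensitivity analysis is the change in optimal value per unit change in a constraint's right-hand side (the shadow price). A brute-force sketch on a tiny two-variable LP with hypothetical problem data, solved by enumerating the vertices of the small feasible polygon.

```python
def solve_lp(c_total):
    """Maximize 3x + 2y subject to x + y <= c_total, x <= 2, x >= 0, y >= 0,
    by evaluating the objective at the polygon's candidate vertices."""
    verts = [(0.0, 0.0), (2.0, 0.0), (0.0, c_total),
             (2.0, max(c_total - 2.0, 0.0))]
    feasible = [(x, y) for x, y in verts
                if x + y <= c_total + 1e-9 and x <= 2.0 + 1e-9]
    return max(3.0 * x + 2.0 * y for x, y in feasible)

base = solve_lp(4.0)             # optimum 10 at (2, 2)
perturbed = solve_lp(5.0)        # optimum 12 at (2, 3)
shadow_price = perturbed - base  # sensitivity of the optimum to the RHS
```

A recurrent network that converges to the LP optimum can recover the same quantity by re-solving under perturbed constraints, which is the kind of sensitivity analysis the paper investigates.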

  2. Force Characteristics of the H-module Linear Actuator with Varying Tooth-shift-distance

    Liu, Xiao; Chen, Zhe; Lu, Kaiyuan

    2013-01-01

    The large normal force of a single-sided linear actuator may cause vibration and noise and reduce positioning accuracy. To overcome these disadvantages, a new H-module linear actuator (HMLA) is proposed to effectively reduce the normal force without using an expensive air suspension system...

  3. Genomic prediction based on data from three layer lines: a comparison between linear methods

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  4. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  5. What is the Best Model Specification and Earth Observation Product for Predicting Regional Grain Yields in Food Insecure Countries?

    Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.

    2017-12-01

    We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate the availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify - for a given region, grain, and time of year - what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatial and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for prediction. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.

  6. Online correction of scanning probe microscopes with pixel accuracy

    Dirscherl, Kai

    2000-01-01

    of 10 nm and an opening angle of 30.0°. Even atomic resolution can be achieved. The scan movement of the tip is not linear, however. This is caused by the propelling device of the SPM for the scan motion - a piezoelectric ceramic. The two major non-linear responses of the piezo to the applied control... The algorithm typically contains 5-7 parameters which have to be calibrated manually. Still, non-linear errors remain on the order of 1-2%. One pixel in a 512x512 image corresponds to 0.2% per direction. This goal of measurement accuracy is reached with the algorithm developed in this thesis. Three different SPM are analyzed for their non-linearity. Two commercial tube scanners are applied, with maximum scan ranges in x and y of 40.0 µm and 160.0 µm, as well as one specially designed stack scanner with a maximum range of 5.0 µm. For the tube scanners, a 1-dimensional line pattern with a reference...

  7. Double generalized linear compound poisson models to insurance claims data

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed of a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurance....
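
The response distribution described (a point mass at zero plus a continuous positive part) is the compound Poisson: a Poisson number of gamma-distributed claim amounts, with zero claims producing the degenerate mass at zero. A simulation sketch with hypothetical frequency/severity parameters, not the paper's estimation procedure.

```python
import math
import random

random.seed(42)

lam, shape, scale = 0.8, 2.0, 500.0   # hypothetical claim frequency / severity

def compound_poisson_total():
    """One policy's total claim amount: N ~ Poisson(lam) claims,
    each Gamma(shape, scale); N = 0 yields the point mass at zero."""
    # Poisson draw via Knuth's multiplication method (fine for small lam).
    n, p, threshold = 0, random.random(), math.exp(-lam)
    while p > threshold:
        n += 1
        p *= random.random()
    return sum(random.gammavariate(shape, scale) for _ in range(n))

totals = [compound_poisson_total() for _ in range(20000)]
zero_fraction = sum(t == 0.0 for t in totals) / len(totals)
mean_total = sum(totals) / len(totals)
# Theory: P(total = 0) = exp(-lam) ~ 0.449; E[total] = lam * shape * scale = 800.
```

The "double" part of the paper's models then lets both the mean and the dispersion of this distribution depend on covariates through their own linear predictors.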

  8. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A.; Caloba, L.P.; Mery, D.

    2004-01-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks; as large a number of radiographic patterns as possible was used, together with statistical inference techniques of random selection of samples with and without repositioning (bootstrap), in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  9. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept., of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept., of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Unversidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks; as large a number of radiographic patterns as possible was used, together with statistical inference techniques of random selection of samples with and without repositioning (bootstrap), in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  10. Linear Programming and Network Flows

    Bazaraa, Mokhtar S; Sherali, Hanif D

    2011-01-01

    The authoritative guide to modeling and solving complex problems with linear programming-extensively revised, expanded, and updated The only book to treat both linear programming techniques and network flows under one cover, Linear Programming and Network Flows, Fourth Edition has been completely updated with the latest developments on the topic. This new edition continues to successfully emphasize modeling concepts, the design and analysis of algorithms, and implementation strategies for problems in a variety of fields, including industrial engineering, management science, operations research

  11. Evaluation of radiographers’ mammography screen-reading accuracy in Australia

    Debono, Josephine C; Poulos, Ann E; Houssami, Nehmat; Turner, Robin M; Boyages, John

    2015-01-01

    This study aimed to evaluate the accuracy of radiographers' screen-reading of mammograms. Currently, radiologist workforce shortages may be compromising the BreastScreen Australia screening program goal to detect early breast cancer. The solution to a similar problem in the United Kingdom has successfully encouraged radiographers to take on the role as one of two screen-readers. Prior to consideration of this strategy in Australia, educational and experiential differences between radiographers in the United Kingdom and Australia emphasise the need for an investigation of Australian radiographers' screen-reading accuracy. Ten radiographers employed by the Westmead Breast Cancer Institute with a range of radiographic (median = 28 years), mammographic (median = 13 years) and BreastScreen (median = 8 years) experience were recruited to blindly and independently screen-read an image test set of 500 mammograms, without formal training. The radiographers indicated the presence of an abnormality using BI-RADS®. Accuracy was determined by comparison with the gold standard of known outcomes of pathology results, interval matching and client 6-year follow-up. Individual sensitivity and specificity levels ranged between 76.0% and 92.0%, and 74.8% and 96.2%, respectively. Pooled screen-reader accuracy across the radiographers estimated sensitivity as 82.2% and specificity as 89.5%. Areas under the receiver operating characteristic curve ranged between 0.842 and 0.923. This sample of radiographers in an Australian setting has adequate accuracy levels when screen-reading mammograms. It is expected that with formal screen-reading training, accuracy levels will improve, and with support, radiographers have the potential to be one of the two screen-readers in the BreastScreen Australia program, contributing to timeliness and improved program outcomes
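
Pooled accuracy measures of the kind reported are computed from the summed confusion counts across readers. A sketch with hypothetical per-reader counts (the study's underlying counts are not given in the abstract):

```python
# Hypothetical confusion counts per reader: (TP, FN, TN, FP).
readers = [
    (40, 8, 420, 32),
    (42, 6, 400, 52),
    (38, 10, 430, 22),
]

# Sum the counts over readers, then compute pooled rates.
tp = sum(r[0] for r in readers)
fn = sum(r[1] for r in readers)
tn = sum(r[2] for r in readers)
fp = sum(r[3] for r in readers)

pooled_sensitivity = tp / (tp + fn)   # detected cancers / all cancers
pooled_specificity = tn / (tn + fp)   # correctly cleared normals / all normals
```

Pooling the counts weights each reader by the number of reads they contributed, which differs from simply averaging the per-reader sensitivities and specificities.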

  13. Accuracy of a heart failure diagnosis in administrative registers

    Kümler, Thomas; Gislason, Gunnar Hilmar; Kirk, Vibeke

    2008-01-01

    BACKGROUND: The incidence of heart failure is frequently reported using hospital discharge diagnoses. The specificity of a diagnosis has been shown to be high, but the sensitivity of a reported diagnosis is unknown. PURPOSE: To study the accuracy of a heart failure diagnosis reported to the Danish… a specificity of 99% and a sensitivity of 29% for all patients. The positive predictive value was 81%, the negative predictive value 90%. CONCLUSION: The diagnosis of heart failure in the Danish national registers is underreported, but very specific.
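The reported predictive values follow from sensitivity, specificity and disease prevalence via Bayes' rule. A short sketch (the 13% prevalence is an assumption, not stated in the abstract; it happens to roughly reproduce the reported 81%/90%):

```python
def predictive_values(sens, spec, prevalence):
    """Predictive values implied by sensitivity, specificity and the
    prevalence of the condition in the study population (Bayes' rule)."""
    p = prevalence
    ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
    npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
    return ppv, npv

# Reported sensitivity 29% and specificity 99%; an assumed prevalence
# near 13% roughly reproduces the reported predictive values:
ppv, npv = predictive_values(0.29, 0.99, 0.13)
print(round(ppv, 2), round(npv, 2))  # 0.81 0.9
```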

  14. Elementary linear programming with applications

    Kolman, Bernard

    1995-01-01

    Linear programming finds the least expensive way to meet given needs with available resources. Its results are used in every area of engineering and commerce: agriculture, oil refining, banking, and air transport. Authors Kolman and Beck present the basic notions of linear programming and illustrate how they are used to solve important common problems. The software on the included disk leads students step-by-step through the calculations. The Second Edition is completely revised and provides additional review material on linear algebra as well as complete coverage of elementary linear programming.
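The "least expensive way to meet given needs" can be illustrated with a toy two-variable problem. Because an optimum of a linear program lies at a vertex of the feasible region, a tiny instance can be solved by enumerating the intersections of the constraint lines (an illustrative sketch, not the book's software; the constraints and costs are made up):

```python
from itertools import combinations

# Minimize cost = 3x + 2y subject to 2x + y >= 8, x + 2y >= 8, x >= 0, y >= 0.
# Each constraint is written as a*x + b*y >= c:
constraints = [(2, 1, 8), (1, 2, 8), (1, 0, 0), (0, 1, 0)]

def cost(x, y):
    return 3 * x + 2 * y

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y >= c - eps for a, b, c in constraints)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel constraint lines: no vertex
    # Cramer's rule for the intersection of the two constraint lines
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y) and (best is None or cost(x, y) < best[0]):
        best = (cost(x, y), x, y)

print(best)  # minimum cost 40/3 ≈ 13.33 at x = y = 8/3
```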

  15. The art of linear electronics

    Hood, John Linsley

    2013-01-01

    The Art of Linear Electronics presents the principal aspects of linear electronics and techniques in linear electronic circuit design. The book provides a wide range of information on the elucidation of the methods and techniques in the design of linear electronic circuits. The text discusses such topics as electronic component symbols and circuit drawing; passive and active semiconductor components; DC and low frequency amplifiers; and the basic effects of feedback. Further subjects include frequency response modifying circuits and filters; audio amplifiers; and low frequency oscillators and waveform generators.

  16. Linearity and Non-linearity of Photorefractive effect in Materials ...

    Linearity and non-linearity of the photorefractive effect in materials using the band transport … For low light beam intensities the change in the refractive index is … field is spatially phase shifted by π/2 relative to the interference fringe pattern, which …

  17. The linear programming bound for binary linear codes

    Brouwer, A.E.

    1993-01-01

    Combining Delsarte's (1973) linear programming bound with the information that certain weights cannot occur, new upper bounds for d_min(n, k), the maximum possible minimum distance of a binary linear code with given word length n and dimension k, are derived.
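Delsarte's bound rests on the feasibility conditions that the distance distribution B_i of any binary code of length n satisfies sum_i B_i K_j(i) >= 0 for every binary Krawtchouk polynomial K_j. A self-contained check of those conditions (illustrative, not from the paper) for the [4,3,2] even-weight code:

```python
from math import comb

def krawtchouk(n, j, i):
    """Binary Krawtchouk polynomial K_j(i) = sum_s (-1)^s C(i,s) C(n-i, j-s)."""
    return sum((-1) ** s * comb(i, s) * comb(n - i, j - s) for s in range(j + 1))

# Distance distribution of the [4,3,2] even-weight code (8 codewords):
n = 4
B = {0: 1, 2: 6, 4: 1}

# Delsarte's conditions: sum_i B_i * K_j(i) >= 0 for every j = 0..n.
checks = [sum(Bi * krawtchouk(n, j, i) for i, Bi in B.items())
          for j in range(n + 1)]
print(checks)  # [8, 0, 0, 0, 8] -- all nonnegative, as Delsarte requires
```

The LP bound maximizes 1 + sum_{i>=d} B_i over nonnegative B_i subject to exactly these inequalities, giving an upper bound on the code size.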

  18. Linear operator inequalities for strongly stable weakly regular linear systems

    Curtain, RF

    2001-01-01

    We consider the question of the existence of solutions to certain linear operator inequalities (Lur'e equations) for strongly stable, weakly regular linear systems with generating operators A, B, C, 0. These operator inequalities are related to the spectral factorization of an associated Popov function.

  19. Analytical Ballistic Trajectories with Approximately Linear Drag

    Giliam J. P. de Carpentier

    2014-01-01

    This paper introduces a practical analytical approximation of projectile trajectories in 2D and 3D roughly based on a linear drag model and explores a variety of different planning algorithms for these trajectories. Although the trajectories are only approximate, they still capture many of the characteristics of a real projectile in free fall under the influence of an invariant wind, gravitational pull, and terminal velocity, while the required math for these trajectories and planners is still simple enough to efficiently run on almost all modern hardware devices. Together, these properties make the proposed approach particularly useful for real-time applications where accuracy and performance need to be carefully balanced, such as in computer games.
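Under a linear drag model dv/dt = g + k(w − v), velocity relaxes exponentially toward the terminal velocity v_t = w + g/k, and position has a closed form obtained by integrating once more. A hedged sketch of that closed form (the paper's exact parameterisation may differ):

```python
import math

def linear_drag_state(p0, v0, g, wind, k, t):
    """Closed-form position and velocity at time t under linear drag
    dv/dt = g + k*(wind - v), applied per axis.
    Terminal velocity on each axis is v_t = wind + g/k."""
    e = math.exp(-k * t)
    pos, vel = [], []
    for p, v, gi, w in zip(p0, v0, g, wind):
        vt = w + gi / k                         # terminal velocity, this axis
        vel.append(vt + (v - vt) * e)           # exponential relaxation
        pos.append(p + vt * t + (v - vt) * (1.0 - e) / k)
    return pos, vel

# 2D example: launch at 45 degrees into a light headwind
pos, vel = linear_drag_state(p0=(0, 0), v0=(10, 10),
                             g=(0, -9.81), wind=(-2, 0), k=0.5, t=1.0)
```

Because the formula is exact for this model, no per-frame numerical integration is needed, which is what makes the approach cheap enough for real-time planning.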

  20. Sfermion Precision Measurements at a Linear Collider

    Freitas, A.; Ananthanarayan, B.; Bartl, A.; Blair, G.A.; Blochinger, C.; Boos, E.; Brandenburg, A.; Datta, A.; Djouadi, A.; Fraas, H.; Guasch, J.; Hesselbach, S.; Hidaka, K.; Hollik, W.; Kernreiter, T.; Maniatis, M.; von Manteuffel, A.; Martyn, H.U.; Miller, D.J.; Moortgat-Pick, Gudrid A.; Muhlleitner, M.; Nauenberg, U.; Kluge, Hannelies; Porod, W.; Sola, J.; Sopczak, A.; Stahl, A.; Weber, M.M.; Zerwas, P.M.

    2002-01-01

    At future e+e− linear colliders, the event rates and clean signals of scalar fermion production - in particular for the scalar leptons - allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected for the third generation sfermions opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan(beta) from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses.